WorldWideScience

Sample records for computer run time

  1. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components at sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify the key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time in light of advances in computer hardware and improvements to the EnergyPlus code, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective, including the choice of adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  2. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; 2) geophysical inversion routines which can be used to characterize physical systems; and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that...

  3. How Many Times Should One Run a Computational Simulation?

    DEFF Research Database (Denmark)

    Seri, Raffaello; Secchi, Davide

    2017-01-01

    This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces statisti...

  4. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    Science.gov (United States)

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications, as well as infrastructure, as services over the Internet. A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive, service-based software that can monitor system status changes, analyze the monitored information, and adapt its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, and the underlying hardware through performance counters, optimizing the computing configuration based on the analyzed data.

  5. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junghoon Lee

    2011-03-01

    Full Text Available Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications, as well as infrastructure, as services over the Internet. A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive, service-based software that can monitor system status changes, analyze the monitored information, and adapt its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, and the underlying hardware through performance counters, optimizing the computing configuration based on the analyzed data.
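
    The RTM described above combines library instrumentation with hardware performance counters. The instrumentation half of that idea can be mimicked in a few lines; the following Python sketch (all names invented, no relation to the RTM's actual implementation) wraps functions to record per-call run times, which a monitor could then analyze:

      import functools
      import threading
      import time
      from collections import defaultdict

      _stats = defaultdict(list)      # function name -> list of run times
      _lock = threading.Lock()

      def instrument(func):
          """Record the wall-clock run time of every call, standing in
          for the kind of library instrumentation the RTM performs."""
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              start = time.perf_counter()
              try:
                  return func(*args, **kwargs)
              finally:
                  with _lock:
                      _stats[func.__name__].append(time.perf_counter() - start)
          return wrapper

      @instrument
      def workload(n):
          return sum(i * i for i in range(n))

      workload(100_000)
      print({name: sum(ts) for name, ts in _stats.items()})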

  6. CMS Software and Computing Ready for Run 2

    CERN Document Server

    Bloom, Kenneth

    2015-01-01

    In Run 1 of the Large Hadron Collider, software and computing was a strategic strength of the Compact Muon Solenoid experiment. The timely processing of data and simulation samples and the excellent performance of the reconstruction algorithms played an important role in the preparation of the full suite of searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC will run at higher intensities and CMS will record data at a higher trigger rate. These new running conditions will provide new challenges for the software and computing systems. Over the two years of Long Shutdown 1, CMS has built upon the successes of Run 1 to improve the software and computing to meet these challenges. In this presentation we will describe the new features in software and computing that will once again put CMS in a position of physics leadership.

  7. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    Energy Technology Data Exchange (ETDEWEB)

    Neswold, Richard [Fermilab]

    2018-01-01

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.

  8. Run-Time and Compiler Support for Programming in Adaptive Parallel Environments

    Directory of Open Access Journals (Sweden)

    Guy Edjlali

    1997-01-01

    Full Text Available For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of high-performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
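
    A minimal sketch of the bookkeeping such a run-time library must perform when the processor count changes: recomputing block-distribution bounds (which also give the new loop bounds) and the index ranges that have to move between processors. The block formula and function names below are illustrative assumptions, not the library's API:

      def block_bounds(n, nprocs, rank):
          """Half-open index range [lo, hi) owned by `rank` when n
          elements are block-distributed over nprocs processors."""
          base, extra = divmod(n, nprocs)
          lo = rank * base + min(rank, extra)
          hi = lo + base + (1 if rank < extra else 0)
          return lo, hi

      def redistribution_plan(n, old_p, new_p):
          """Index ranges that must move when the processor count
          changes from old_p to new_p."""
          moves = []
          for old_r in range(old_p):
              olo, ohi = block_bounds(n, old_p, old_r)
              for new_r in range(new_p):
                  nlo, nhi = block_bounds(n, new_p, new_r)
                  lo, hi = max(olo, nlo), min(ohi, nhi)
                  if lo < hi and old_r != new_r:
                      moves.append((old_r, new_r, lo, hi))
          return moves

      # e.g. an adaptive environment shrinking from 4 to 3 processors
      print(redistribution_plan(10, 4, 3))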

  9. 12 CFR 1102.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1102.27 Section 1102.27 Banks... for Proceedings § 1102.27 Computing time. (a) General rule. In computing any period of time prescribed... time begins to run is not included. The last day so computed is included, unless it is a Saturday...

  10. LHCb computing in Run II and its evolution towards Run III

    CERN Document Server

    Falabella, Antonio

    2016-01-01

    This contribution reports on the experience of the LHCb computing team during LHC Run 2 and its preparation for Run 3. Furthermore, a brief introduction to LHCbDIRAC, the tool used to interface to the experiment's distributed computing resources for its data processing and data management operations, is given. Run 2, which started in 2015, has already seen several changes in the data processing workflows of the experiment, most notably the ability to align and calibrate the detector between two different stages of the data processing in the high level trigger farm, eliminating the need for a second-pass processing of the data offline. In addition, a fraction of the data is immediately reconstructed to its final physics format in the high level trigger and only this format is exported from the experiment site for physics analysis. This concept has been successfully tested and will continue to be used for the rest of Run 2. Furthermore the distributed data processing has been improved with new concepts and techn...

  11. 12 CFR 622.21 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Computing time. 622.21 Section 622.21 Banks and... Formal Hearings § 622.21 Computing time. (a) General rule. In computing any period of time prescribed or... run is not to be included. The last day so computed shall be included, unless it is a Saturday, Sunday...

  12. CMS software and computing for LHC Run 2

    CERN Document Server

    INSPIRE-00067576

    2016-11-09

    The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of the increased trigger output rate, the increased rate of pileup interactions, and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities wherever possible, to increase our operational efficiency, and to decrease the computing resources needed to accomplish the primary offline computing workflows. These changes have resulted in a new approach to distributed computing in CMS for Run 2 and for the future, as the LHC luminosity should continue to increase. We will discuss changes and plans to our data federation, which was one of the key changes towards a more flexible computing model for Run 2. Our software framework and algorithms also underwent significant changes. We will summarize our experience with a new multi-threaded framework as deployed on ou...

  13. LHCb's Real-Time Alignment in Run II

    CERN Multimedia

    Batozskaya, Varvara

    2015-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure will improve the quality of the online alignment. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configur...

  14. LHCb's Real-Time Alignment in Run2

    CERN Multimedia

    Batozskaya, Varvara

    2015-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performances. During Run2, LHCb will have a new real-time detector alignment and calibration to reach equivalent performances in the online and offline reconstruction. This offers the opportunity to optimise the event selection by applying stronger constraints as well as hadronic particle identification at the trigger level. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger.

  15. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures.

    Science.gov (United States)

    Long, Leroy L; Srinivasan, Manoj

    2013-04-06

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients-a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
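
    A toy version of the optimization described above: choose a mixture of two speeds (including resting at speed zero) that covers distance D in exactly time T while minimizing metabolic energy under a non-convex power curve. The power curve and every number below are invented for illustration and are not the study's energy model:

      import itertools

      D, T = 1000.0, 360.0        # cover 1 km in 6 min (illustrative)
      speeds = [0.5 * k for k in range(13)]      # 0 (rest) .. 6 m/s

      def power(v):
          """Toy non-convex metabolic power (W/kg): a walking branch,
          a running branch, and a resting cost. Invented numbers."""
          if v == 0.0:
              return 1.5                         # standing still
          walk = 2.0 + 3.0 * v ** 2              # cheap at low speed
          run = 6.0 + 1.5 * v                    # cheaper at high speed
          return min(walk, run)

      best = None
      for v1, v2 in itertools.combinations(speeds, 2):   # v1 < v2
          if v2 * T < D or v1 * T > D:
              continue                           # mixture cannot hit D
          t1 = (v2 * T - D) / (v2 - v1)          # time spent at v1
          energy = power(v1) * t1 + power(v2) * (T - t1)
          if best is None or energy < best[0]:
              best = (energy, v1, v2, t1)

      e, v1, v2, t1 = best
      print(f"{e:.0f} J/kg: {v1:.1f} m/s for {t1:.0f} s, then {v2:.1f} m/s")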

  16. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    Science.gov (United States)

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.

  17. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2013-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  18. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2014-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  19. Injecting Artificial Memory Errors Into a Running Computer Program

    Science.gov (United States)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
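
    The basic operation BITFLIPS performs, flipping a bit in a running program's memory at a rate given per byte per second, can be mimicked on a single value in pure Python. This sketch only illustrates the idea and bears no relation to the Valgrind plug-in's implementation:

      import random
      import struct

      def inject_seu(value, bit=None):
          """Flip one bit of a 64-bit float, as a single-event upset
          would; a random bit position is chosen if none is given."""
          (bits,) = struct.unpack("<Q", struct.pack("<d", value))
          if bit is None:
              bit = random.randrange(64)
          (hit,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
          return hit, bit

      def expected_seus(memory_bytes, seconds, rate):
          """Expected SEU count for a memory size and exposure time,
          with `rate` in SEUs per byte per second as in the abstract."""
          return memory_bytes * seconds * rate

      hit, bit = inject_seu(3.141592653589793)
      print(f"bit {bit} flipped: pi became {hit!r}")
      print(expected_seus(64 * 2**20, 3600.0, 1e-12))  # hypothetical rate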

  20. Novel Real-time Calibration and Alignment Procedure for LHCb Run II

    CERN Multimedia

    Prouve, Claire

    2016-01-01

    In order to achieve optimal detector performance the LHCb experiment has introduced a novel real-time detector alignment and calibration strategy for Run II of the LHC. For the alignment tasks, data is collected and processed at the beginning of each fill while the calibrations are performed for each run. This real time alignment and calibration allows the same constants being used in both the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. Additionally the newly computed alignment and calibration constants can be instantly used in the trigger, making it more efficient. The online alignment and calibration of the RICH detectors also enable the use of hadronic particle identification in the trigger. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the LHCb trigger. An overview of all alignment and calibration tasks is presented and their performance is shown.

  1. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  2. ATLAS Distributed Computing experience and performance during the LHC Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2017-01-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the...

  3. ATLAS Distributed Computing experience and performance during the LHC Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of the Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of...

  4. 7 CFR 1.603 - How are time periods computed?

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false How are time periods computed? 1.603 Section 1.603... Licenses General Provisions § 1.603 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2...

  5. 50 CFR 221.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false How are time periods computed? 221.3... Provisions § 221.3 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2) The last day of the...

  6. Implementing Run-Time Evaluation of Distributed Timing Constraints in a Real-Time Environment

    DEFF Research Database (Denmark)

    Kristensen, C. H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing run-time evaluation of timing constraints in distributed real-time environments...

  7. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. A flexible computing utilization exploring opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  8. General purpose computers in real time

    International Nuclear Information System (INIS)

    Biel, J.R.

    1989-01-01

    I see three main trends in the use of general purpose computers in real time. The first is more processing power. The second is the use of higher speed interconnects between computers (allowing more data to be delivered to the processors). The third is the use of larger programs running in the computers. Although there is still work that needs to be done, I believe that all indications are that general purpose computers able to meet the online needs of the SSC and LHC machines should be available. 2 figs

  9. CMS Computing Operations During Run1

    CERN Document Server

    Gutsche, Oliver

    2013-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this presentation we will discuss the operational experience from the first run. We will present the workflows and data flows that were executed, we will discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. In this presentation we will also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  10. CMS computing operations during run 1

    CERN Document Server

    Adelman, J; Artieda, J; Bagliese, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzago, F; Fisk, I; Flix, J; Georges, A; Giffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahiff, A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; Mccartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; Sfiligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M

    2014-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  11. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    Kearney David

    2007-01-01

    Full Text Available We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.

  12. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    Science.gov (United States)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visualization bits across the different tools. A rich family of various filtering and searching options enhancing available user interfaces comes naturally with the data and visualization layer separation. With a variety of reliable monitoring data accessible through standardized interfaces, the possibility of automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.

  13. 16 CFR 803.10 - Running of time.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Running of time. 803.10 Section 803.10 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976 TRANSMITTAL RULES § 803.10 Running of time. (a...

  14. 43 CFR 45.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false How are time periods computed? 45.3... IN FERC HYDROPOWER LICENSES General Provisions § 45.3 How are time periods computed? (a) General... run is not included. (2) The last day of the period is included. (i) If that day is a Saturday, Sunday...

  15. Run 2 analysis computing for CDF and D0

    International Nuclear Information System (INIS)

    Fuess, S.

    1995-11-01

    Two large experiments at the Fermilab Tevatron collider will use upgraded detectors for the next period of running. The associated analysis software is also expected to change, both to account for higher data rates and to embrace new computing paradigms. A discussion is given of the problems facing current and future High Energy Physics (HEP) analysis computing, and several issues are explored in detail.

  16. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-05-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment, are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II.

  17. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-01-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II

  18. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Full Text Available Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have been analyzed, and a detailed analysis report is given. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs.
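
    The abstract does not spell the TPR method out. One plausible reading, sketched below purely as an assumption, is a piecewise-linear fit of task progress against time with separate slopes for two execution phases, extrapolating the second phase to completion:

      import numpy as np

      def two_phase_fit(t, progress, split):
          """Fit progress = a + b*t separately before and after `split`,
          then extrapolate the late phase to progress = 1.0. This
          piecewise reading of 'two-phase regression' is an assumption,
          not the paper's definition of TPR."""
          early, late = t < split, t >= split
          b1, a1 = np.polyfit(t[early], progress[early], 1)
          b2, a2 = np.polyfit(t[late], progress[late], 1)
          return (a1, b1), (a2, b2), (1.0 - a2) / b2

      # Synthetic task: a fast first phase, then a slower second phase.
      t = np.linspace(0.0, 60.0, 61)
      progress = np.where(t < 30.0, 0.02 * t, 0.6 + 0.008 * (t - 30.0))
      _, _, eta = two_phase_fit(t, progress, split=30.0)
      print(f"estimated finishing time: {eta:.1f} s")   # ~80 s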

  19. ATLAS Distributed Computing in LHC Run2

    International Nuclear Information System (INIS)

    Campana, Simone

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. A flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defined to better manage the lifecycle of the data. In this note, an overview of the operational experience of the new system and its evolution is presented. (paper)

  20. Novel real-time alignment and calibration of LHCb detector for Run II and tracking for the upgrade.

    CERN Document Server

    AUTHOR|(CDS)2091576

    2016-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run II. Data collected at the start of the fill is processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. The procedure aims to improve the quality of the online selection and performance stability. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. A similar scheme is planned to be used for Run III, foreseen to start in 2020. At that time LHCb will run at an instantaneous luminosity of $2 \times 10^{33}$ cm$^{-2}$ s$^{-1}$ and a fully software based trigger strategy will be used. The new running conditions and the tighter timing constraints in the software trigger (only 13 ms per event are available) represent a big challenge for track reconstruction. The new software based trigger strategy implies a full detector read-out at the collision rate of 40 MHz. High performance ...

  1. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time...
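
    The detour is defined above as the ratio of in-graph distance to Euclidean distance |pq|. A brute-force baseline, restricted to vertex pairs for simplicity (the problem as stated ranges over all points of G, and the paper's subquadratic algorithm is beyond this sketch):

      import heapq
      import math

      def dijkstra(adj, src):
          """Shortest-path lengths from src in a weighted graph."""
          dist = {src: 0.0}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, math.inf):
                  continue
              for v, w in adj[u]:
                  if d + w < dist.get(v, math.inf):
                      dist[v] = d + w
                      heapq.heappush(heap, (d + w, v))
          return dist

      def max_detour_vertices(points, edges):
          """Maximum detour over vertex pairs of a plane graph whose
          edges are line segments (weights = segment lengths)."""
          adj = {i: [] for i in range(len(points))}
          for u, v in edges:
              w = math.dist(points[u], points[v])
              adj[u].append((v, w))
              adj[v].append((u, w))
          best = 1.0
          for u in range(len(points)):
              dist = dijkstra(adj, u)
              for v in range(u + 1, len(points)):
                  best = max(best, dist[v] / math.dist(points[u], points[v]))
          return best

      # Unit square (cycle): opposite corners give detour 2/sqrt(2).
      pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
      print(max_detour_vertices(pts, [(0, 1), (1, 2), (2, 3), (3, 0)]))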

  2. Run-time middleware to support real-time system scenarios

    NARCIS (Netherlands)

    Goossens, K.; Koedam, M.; Sinha, S.; Nelson, A.; Geilen, M.

    2015-01-01

    Systems on Chip (SOC) are powerful multiprocessor systems capable of running multiple independent applications, often with both real-time and non-real-time requirements. Scenarios exist at two levels: first, combinations of independent applications, and second, different states of a single...

  3. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    Science.gov (United States)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run-time. We show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic pre-emptions compared with run-time methods like deadline-monotonic scheduling.

  4. Fast algorithms for computing phylogenetic divergence time.

    Science.gov (United States)

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but their performance does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time inference process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood, and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.

  5. Leisure-time running reduces all-cause and cardiovascular mortality risk.

    Science.gov (United States)

    Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N

    2014-08-05

    Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. In analyses of change in running behaviors over time, persistent runners had 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds, provides marked mortality benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  6. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this, the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable...

  7. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    Science.gov (United States)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach: carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
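
    The scaling behaviour reported above (a more than 50% reduction in wall-clock time from 16 to 64 cores, then a plateau) is conveniently summarized as parallel speedup and efficiency. The wall-clock numbers below are hypothetical, shaped only to mirror that description:

      # Hypothetical wall-clock hours per simulated year on the virtual
      # cluster; only the 16 -> 64 core trend mimics the reported result.
      wallclock = {16: 10.0, 32: 6.0, 64: 4.5, 128: 4.4}

      base = 16
      for cores in sorted(wallclock):
          speedup = wallclock[base] / wallclock[cores]
          efficiency = speedup / (cores / base)
          print(f"{cores:4d} cores: {wallclock[cores]:4.1f} h, "
                f"speedup x{speedup:.2f}, efficiency {efficiency:.0%}")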

  8. Novel real-time alignment and calibration of the LHCb detector in Run2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00144085

    2017-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructur...

  9. Running Batch Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    Users typically create or edit job scripts using a text editor such as vi, then submit them to build and run applications. A resource feature can be used to request different node types: Peregrine has several types of compute nodes, which differ in the amount of memory and number of processor cores. The majority of the nodes have 24...

  10. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

    This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes...

  11. Accuracy versus run time in an adiabatic quantum search

    International Nuclear Information System (INIS)

    Rezakhani, A. T.; Pimachev, A. K.; Lidar, D. A.

    2010-01-01

    Adiabatic quantum algorithms are characterized by their run time and accuracy. The relation between the two is essential for quantifying adiabatic algorithmic performance, yet is often poorly understood. We study the dynamics of a continuous-time adiabatic quantum search algorithm and find rigorous results relating the accuracy and the run time. Proceeding with estimates, we show that under fairly general circumstances the adiabatic algorithmic error exhibits a behavior with two discernible regimes: the error decreases exponentially for short times and then decreases polynomially for longer times. We show that the well-known quadratic speedup over classical search is associated only with the exponential error regime. We illustrate the results through examples of evolution paths derived by minimization of the adiabatic error. We also discuss specific strategies for controlling the adiabatic error and run time.
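
    The two regimes described above can be written schematically; the functional forms and constants below paraphrase the abstract's qualitative statement and are not formulas taken from the paper:

      \delta(T) \approx
      \begin{cases}
        A\, e^{-cT},   & T \lesssim T_{*} \quad \text{(exponential regime)}\\
        B\, T^{-\eta}, & T \gtrsim T_{*} \quad \text{(polynomial regime)}
      \end{cases}

    where \delta(T) denotes the adiabatic error after run time T, and the quadratic speedup over classical search is tied to the exponential regime only.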

  12. Run-time verification of behavioural conformance for conversational web services

    OpenAIRE

    Dranidis, Dimitris; Ramollari, Ervin; Kourtesis, Dimitrios

    2009-01-01

    Web services exposing run-time behaviour that deviates from their behavioural specifications represent a major threat to the sustainability of a service-oriented ecosystem. It is therefore critical to verify the behavioural conformance of services during run-time. This paper discusses a novel approach for run-time verification of Web services. It proposes the utilisation of Stream X-machines for constructing formal behavioural specifications of Web services which can be exploited for verifyin...

  13. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk–run–rest mixtures

    Science.gov (United States)

    Long, Leroy L.; Srinivasan, Manoj

    2013-01-01

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192

  14. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution. Revision 3

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1994-06-01

The purpose is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs (Ref. 7) and Quiet Time Runs Program (described in Section 3.6). The Filter/Stripper Test Runs and Quiet Time Runs program involves a 12,000 gallon feed tank containing an agitator, a 4,000 gallon flush tank, a variable speed pump, associated piping and controls, and equipment within both the Filter and the Stripper Building.

  15. An enhanced Ada run-time system for real-time embedded processors

    Science.gov (United States)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  16. Real-time alignment and calibration of the LHCb Detector in Run II

    CERN Multimedia

    Dujany, Giulio

    2016-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run2, LHCb has a new real-time detector alignment and calibration to allow equivalent performance in the online and offline reconstruction to be reached. This offers the opportunity to optimise the event selection by applying stronger constraints, and to use hadronic particle identification at the trigger level. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operative and physics performance point of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.

  17. Real-time alignment and calibration of the LHCb Detector in Run II

    CERN Multimedia

    Dujany, Giulio

    2015-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run2, LHCb will have a new real-time detector alignment and calibration to allow equivalent performance in the online and offline reconstruction to be reached. This offers the opportunity to optimise the event selection by applying stronger constraints, and to use hadronic particle identification at the trigger level. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operative and physics performance point of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.

  18. Computing Refined Buneman Trees in Cubic Time

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Östlin, A.

    2003-01-01

Reconstructing the evolutionary tree for a set of n species based on pairwise distances between the species is a fundamental problem in bioinformatics. Neighbor joining is a popular distance based tree reconstruction method. It always proposes fully resolved binary trees despite missing evidence...... in the underlying distance data. Distance based methods based on the theory of Buneman trees and refined Buneman trees avoid this problem by only proposing evolutionary trees whose edges satisfy a number of constraints. These trees might not be fully resolved but there is strong combinatorial evidence for each...... proposed edge. The currently best algorithm for computing the refined Buneman tree from a given distance measure has a running time of O(n^5) and a space consumption of O(n^4). In this paper, we present an algorithm with running time O(n^3) and space consumption O(n^2). The improved complexity of our...

  19. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1993-10-01

In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter-stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include hazards, both personnel and environmental, associated with handling the chemical simulants; the presence of flammable materials; and the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.

  20. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M.K.

    1993-10-01

In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter-stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include hazards, both personnel and environmental, associated with handling the chemical simulants; the presence of flammable materials; and the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.

  1. Novel Real-time Alignment and Calibration of the LHCb detector in Run2

    Science.gov (United States)

    Martinelli, Maurizio; LHCb Collaboration

    2017-10-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  2. Design-time application mapping and platform exploration for MP-SoC customised run-time management

    NARCIS (Netherlands)

    Ykman-Couvreur, Ch.; Nollet, V.; Marescaux, T.M.; Brockmeyer, E.; Catthoor, F.; Corporaal, H.

    2007-01-01

In a Multi-Processor System-on-Chip (MP-SoC) environment, a customized run-time management layer should be incorporated on top of the basic Operating System services to alleviate the run-time decision-making and to globally optimise costs (e.g. energy consumption) across all active

  3. The ATLAS Distributed Computing project for LHC Run-2 and beyond.

    CERN Document Server

    Di Girolamo, Alessandro; The ATLAS collaboration

    2015-01-01

The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. An increased data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible computing utilization, exploring opportunistic resources such as HPC, cloud, and volunteer computing, is embedded in the new computing model; data access mechanisms have been enhanced with remote access; and the network topology and performance are deeply integrated into the core of the system. Moreover a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  4. Combining monitoring with run-time assertion checking

    NARCIS (Netherlands)

    Gouw, Stijn de

    2013-01-01

    We develop a new technique for Run-time Checking for two object-oriented languages: Java and the Abstract Behavioral Specification language ABS. In object-oriented languages, objects communicate by sending each other messages. Assuming encapsulation, the behavior of objects is completely

  5. Changes in running kinematics, kinetics, and spring-mass behavior over a 24-h run.

    Science.gov (United States)

    Morin, Jean-Benoît; Samozino, Pierre; Millet, Guillaume Y

    2011-05-01

This study investigated the changes in running mechanics and spring-mass behavior over a 24-h treadmill run (24TR). Kinematics, kinetics, and spring-mass characteristics of the running step were assessed in 10 experienced ultralong-distance runners before, every 2 h, and after a 24TR using an instrumented treadmill dynamometer. These measurements were performed at 10 km·h⁻¹, and mechanical parameters were sampled at 1000 Hz for 10 consecutive steps. Contact and aerial times were determined from ground reaction force (GRF) signals and used to compute step frequency. Maximal GRF, loading rate, downward displacement of the center of mass, and leg length change during the support phase were determined and used to compute both vertical and leg stiffness. Subjects' running pattern and spring-mass behavior significantly changed over the 24TR, with a 4.9% higher step frequency on average (because of a significant 4.5% reduction in contact time), a lower maximal GRF (by 4.4% on average), a 13.0% lower leg length change during contact, and an increase in both leg and vertical stiffness (+9.9% and +8.6% on average, respectively). Most of these changes were significant from the early phase of the 24TR (fourth to sixth hour of running) and could be speculated to contribute to an overall limitation of the potentially harmful consequences of such a long-duration run on the subjects' musculoskeletal system. During a 24TR, the changes in running mechanics and spring-mass behavior show a clear shift toward a higher oscillating frequency and stiffness, along with lower GRF and leg length change (hence a reduced overall eccentric load) during the support phase of running.

  6. Strong normalization by type-directed partial evaluation and run-time code generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1998-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  7. Strong Normalization by Type-Directed Partial Evaluation and Run-Time Code Generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1997-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  8. Optimal Infinite Runs in One-Clock Priced Timed Automata

    DEFF Research Database (Denmark)

    David, Alexandre; Ejsing-Duun, Daniel; Fontani, Lisa

We address the problem of finding an infinite run with the optimal cost-time ratio in a one-clock priced timed automaton and provide an algorithmic solution. Through refinements of the quotient graph obtained by strong time-abstracting bisimulation partitioning, we construct a graph with time...
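
    The record is truncated, so the following sketches only one standard route to an optimal cost-time ratio cycle on a finite weighted graph (an abstraction of the quotient graph mentioned above, not necessarily the paper's construction): binary search on the ratio r, using Bellman-Ford negative-cycle detection on edge weights cost - r*time. The example graph is invented.

```python
def has_negative_cycle(n, edges, r):
    """True iff some cycle has total cost - r * total time < 0."""
    dist = [0.0] * n                       # distances from a virtual source
    for _ in range(n):
        changed = False
        for u, v, cost, time in edges:
            w = cost - r * time
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False                   # converged: no negative cycle
    return True                            # still relaxing after n passes

def optimal_ratio(n, edges, lo=0.0, hi=100.0, iters=60):
    for _ in range(iters):                 # bisection on the ratio
        mid = (lo + hi) / 2
        if has_negative_cycle(n, edges, mid):
            hi = mid                       # a cheaper-than-mid cycle exists
        else:
            lo = mid
    return hi

# Edges: (from, to, cost, time). Cycle 0->1->0 has ratio (2+1)/(1+2) = 1.0;
# cycle 1->2->1 has ratio (5+1)/(1+1) = 3.0, so the optimum is 1.0.
edges = [(0, 1, 2, 1), (1, 0, 1, 2), (1, 2, 5, 1), (2, 1, 1, 1)]
print(f"optimal cost-time ratio ~ {optimal_ratio(3, edges):.4f}")
```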

  9. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting the continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely Java 2 Platform Enterprise edition (J2EE) by Sun Microsystems, and the Microsoft Dot-Net (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  10. The Weekly Fab Five: Things You Should Do Every Week To Keep Your Computer Running in Tip-Top Shape.

    Science.gov (United States)

    Crispen, Patrick

    2001-01-01

    Describes five steps that school librarians should follow every week to keep their computers running at top efficiency. Explains how to update virus definitions; run Windows update; run ScanDisk to repair errors on the hard drive; run a disk defragmenter; and backup all data. (LRW)

  11. The CMS trigger in Run 2

    CERN Document Server

    Tosi, Mia

    2018-01-01

During its second period of operation (Run 2) which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately 2×10³⁴ cm⁻²s⁻¹ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new ap...

  12. Design Flow Instantiation for Run-Time Reconfigurable Systems: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Qu

    2007-12-01

A reconfigurable system is a promising alternative for delivering both flexibility and performance at the same time. New reconfigurable technologies and technology-dependent tools have been developed, but a complete overview of the whole design flow for run-time reconfigurable systems has been missing. In this work, we present a design flow instantiation for such systems using a real-life application. The design flow is roughly divided into two parts: system level and implementation. At the system level, our supports for hardware resource estimation and performance evaluation are applied. At the implementation level, technology-dependent tools are used to realize the run-time reconfiguration. The design case is part of a WCDMA decoder on a commercially available reconfigurable platform. The results show that using run-time reconfiguration can save over 40% area when compared to a functionally equivalent fixed system and achieve a 30 times speedup in processing time when compared to a functionally equivalent pure software design.

  13. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  14. Investigations of timing during the schedule and reinforcement intervals with wheel-running reinforcement.

    Science.gov (United States)

    Belke, Terry W; Christie-Fougere, Melissa M

    2006-11-01

    Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement and the duration of the opportunity to run was varied across values of 15, 30, and 60s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset of the wheel-running reinforcement period. Further research is required to assess if timing occurs during a wheel-running reinforcement period.

  15. Adaptive Embedded Systems – Challenges of Run-Time Resource Management

    DEFF Research Database (Denmark)

Understanding and efficiently controlling the dynamic behavior of adaptive embedded systems is a challenging endeavor. The challenges come from the often very complicated interplay between the application, the application mapping, and the underlying hardware architecture. With MPSoC, we have...... the technology to design and fabricate dynamically reconfigurable hardware platforms. However, such platforms will pose new challenges to tools and methods to efficiently explore these platforms at run-time. This talk will address some of the challenges of run-time resource management in adaptive embedded...... systems....

  16. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, algorithm, and model of software running on computer equipment hardware included in the Grid network, which will allow a cloud computing environment to be implemented using Grid technologies.

  17. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS

  18. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high-quality compile-time analysis with low-cost run-time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler's automatic parallelization system. We present results of measurements on programs from two benchmark suites - SPECFP95 and NAS sample benchmarks - which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run-time testing, analysis of control flow, or some combination of the two. We present a new compile-time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed to not only improve the results of compile-time parallelization, but also to produce low-cost, directed run-time tests that allow the system to defer binding of parallelization until run-time when safety cannot be proven statically. We call this approach predicated array data-flow analysis. We augment array data-flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data-flow values. Predicated array data-flow analysis allows the compiler to derive "optimistic" data-flow values guarded by predicates; these predicates can be used to derive a run-time test guaranteeing the safety of parallelization.
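
    An illustrative inspector-executor style sketch of the kind of low-cost run-time test the paper argues for (this is not SUIF's predicated array data-flow analysis itself): the loop is dispatched to parallel execution only when an inexpensive inspector shows at run time that the subscript array contains no duplicate write indices. The thread pool here only illustrates the dispatch decision.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def loop_body(a, b, idx, i):
    a[idx[i]] = b[i] * 2.0               # write through a subscript array

def run_loop(a, b, idx):
    # Inspector: writes are independent iff the write indices are distinct.
    if len(np.unique(idx)) == len(idx):  # safety proven at run time
        with ThreadPoolExecutor() as pool:
            list(pool.map(lambda i: loop_body(a, b, idx, i), range(len(idx))))
    else:                                # fall back to the serial loop
        for i in range(len(idx)):
            loop_body(a, b, idx, i)

a = np.zeros(8)
b = np.arange(4.0)
run_loop(a, b, np.array([3, 1, 7, 5]))   # distinct indices -> parallel path
print(a)
```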

  19. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology (traditionally used for computer video games) to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  20. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    Science.gov (United States)

    Gonzales, Michael G.

    1984-01-01

Suggests a moving pictorial tool to help teach principles in the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
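
    A small empirical companion in the spirit of this record: timing bubble sort on growing random inputs and reporting time/n² makes the quadratic run time visible (timings are machine dependent; the input sizes are illustrative).

```python
import random
import time

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # early exit once a pass makes no swaps
            break

for n in [250, 500, 1000, 2000]:
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    bubble_sort(data)
    elapsed = time.perf_counter() - t0
    # time/n^2 should be roughly constant for a quadratic algorithm
    print(f"n = {n:5d}  time = {elapsed:.4f} s  time/n^2 = {elapsed / n**2:.2e}")
```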

  1. On the Use of Running Trends as Summary Statistics for Univariate Time Series and Time Series Association

    OpenAIRE

    Trottini, Mario; Vigo, Isabel; Belda, Santiago

    2015-01-01

    Given a time series, running trends analysis (RTA) involves evaluating least squares trends over overlapping time windows of L consecutive time points, with overlap by all but one observation. This produces a new series called the “running trends series,” which is used as summary statistics of the original series for further analysis. In recent years, RTA has been widely used in climate applied research as summary statistics for time series and time series association. There is no doubt that ...
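
    A direct sketch of the definition given above: least squares slopes over windows of L consecutive points, shifted one observation at a time, form the running trends series (the toy random-walk series is illustrative).

```python
import numpy as np

def running_trends(y, L):
    """Least squares trend of each length-L window, overlapping by L-1 points."""
    t = np.arange(L)
    trends = []
    for start in range(len(y) - L + 1):
        window = y[start:start + L]
        slope = np.polyfit(t, window, 1)[0]   # least squares slope
        trends.append(slope)
    return np.array(trends)

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=120))      # a toy time series
print(running_trends(series, L=30)[:5])
```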

  2. CASY: a dynamic simulation of the gas-cooled fast breeder reactor core auxiliary cooling system. Volume II. Example computer run

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    A listing of a CASY computer run is presented. It was initiated from a demand terminal and, therefore, contains the identification ST0952. This run also contains an INDEX listing of the subroutine UPDATE. The run includes a simulated scram transient at 30 seconds.

  3. CASY: a dynamic simulation of the gas-cooled fast breeder reactor core auxiliary cooling system. Volume II. Example computer run

    International Nuclear Information System (INIS)

    1979-09-01

    A listing of a CASY computer run is presented. It was initiated from a demand terminal and, therefore, contains the identification ST0952. This run also contains an INDEX listing of the subroutine UPDATE. The run includes a simulated scram transient at 30 seconds

  4. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

Several tests in quasi real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models. The W-phase FFM Inversion, the Wavelet Domain FFM, and the Body Wave FFM have been implemented in real time at CSN; all these algorithms run automatically, triggered by the W-phase Point Source Inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake, and the 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each one of those we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community as well as with run-up observations in the field.

  5. An Evaluation of Windows-Based Computer Forensics Application Software Running on a Macintosh

    Directory of Open Access Journals (Sweden)

    Gregory H. Carlton

    2008-09-01

The two most common computer forensics applications perform exclusively on Microsoft Windows Operating Systems, yet contemporary computer forensics examinations frequently encounter one or more of the three most common operating system environments, namely Windows, OS-X, or some form of UNIX or Linux. Additionally, government and private computer forensics laboratories frequently encounter budget constraints that limit their access to computer hardware. Currently, Macintosh computer systems are marketed with the ability to accommodate these three common operating system environments, including Windows XP in native and virtual environments. We performed a series of experiments to measure the functionality and performance of the two most commonly used Windows-based computer forensics applications on a Macintosh running Windows XP in native mode and in two virtual environments relative to a similarly configured Dell personal computer. The research results are directly beneficial to practitioners, and the process illustrates affective pedagogy whereby students were engaged in applied research.

  6. Lower bounds on the run time of the univariate marginal distribution algorithm on OneMax

    DEFF Research Database (Denmark)

    Krejca, Martin S.; Witt, Carsten

    2017-01-01

The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of Ω(μ√n + n log n), where μ is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.
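
    For readers unfamiliar with the algorithm being analysed, a compact UMDA on OneMax (this sketches the algorithm only, not the paper's lower-bound proof; the parameters are illustrative).

```python
import numpy as np

def umda_onemax(n=50, mu=50, lam=200, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                     # marginal frequency of each bit
    evaluations = 0
    while True:
        pop = rng.random((lam, n)) < p      # sample lam individuals
        fit = pop.sum(axis=1)               # OneMax: count the one-bits
        evaluations += lam
        if fit.max() == n:
            return evaluations
        best = pop[np.argsort(fit)[-mu:]]   # select the mu fittest
        p = best.mean(axis=0)               # re-estimate the frequencies...
        p = np.clip(p, 1 / n, 1 - 1 / n)    # ...with the usual borders

print("evaluations until optimum:", umda_onemax())
```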

  7. A strategy for reducing turnaround time in design optimization using a distributed computer system

    Science.gov (United States)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
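
    A minimal sketch of the strategy in today's terms: independent analysis runs are farmed out to a pool of workers, so turnaround shrinks roughly with the worker count for decomposable problems. The analysis function is a stand-in for validated engineering analysis code.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(design):
    """Placeholder for an expensive, independent engineering analysis."""
    return sum(x * x for x in design)       # e.g., a structural objective

if __name__ == "__main__":
    designs = [[i, i + 1, i + 2] for i in range(16)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(analyze, designs))  # analyses run in parallel
    print(results)
```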

  8. SAMGrid experiences with the Condor technology in Run II computing

    International Nuclear Information System (INIS)

    Baranovski, A.; Loebel-Carpenter, L.; Garzoglio, G.; Herber, R.; Illingworth, R.; Kennedy, R.; Kreymer, A.; Kumar, A.; Lueking, L.; Lyon, A.; Merritt, W.; Terekhov, I.; Trumbo, J.; Veseli, S.; White, S.; St. Denis, R.; Jain, S.; Nishandar, A.

    2004-01-01

    SAMGrid is a globally distributed system for data handling and job management, developed at Fermilab for the D0 and CDF experiments in Run II. The Condor system is being developed at the University of Wisconsin for management of distributed resources, computational and otherwise. We briefly review the SAMGrid architecture and its interaction with Condor, which was presented earlier. We then present our experiences using the system in production, which have two distinct aspects. At the global level, we deployed Condor-G, the Grid-extended Condor, for the resource brokering and global scheduling of our jobs. At the heart of the system is Condor's Matchmaking Service. As a more recent work at the computing element level, we have been benefiting from the large computing cluster at the University of Wisconsin campus. The architecture of the computing facility and the philosophy of Condor's resource management have prompted us to improve the application infrastructure for D0 and CDF, in aspects such as parting with the shared file system or reliance on resources being dedicated. As a result, we have increased productivity and made our applications more portable and Grid-ready. Our fruitful collaboration with the Condor team has been made possible by the Particle Physics Data Grid

  9. Optimisation of the ATLAS Track Reconstruction Software for Run-2

    CERN Document Server

    Salzburger, Andreas; The ATLAS collaboration

    2015-01-01

The reconstruction of particle trajectories in the tracking detectors of experiments at the Large Hadron Collider (LHC) is one of the most complex parts of analysing the collected data from beam-beam collisions. To achieve the desired integrated luminosity during Run-1 of the LHC data taking period, the number of simultaneous proton-proton interactions per beam crossing (pile-up) was steadily increased. The track reconstruction is the most time consuming reconstruction component and scales non-linearly in high luminosity environments. Flat budget projections (at best) for computing resources during the upcoming Run-2 of the LHC, together with the demands of reconstructing higher pile-up collision data at rates more than double compared to Run-1, have put pressure on the track reconstruction software to stay within the available computing resources. The ATLAS experiment has thus performed a two-year-long software campaign which led to a reduction of the reconstruction time for Run-2 conditions by a factor of four:...

  10. Soft Real-Time PID Control on a VME Computer

    Science.gov (United States)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating in a real-time priority queue using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
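
    A minimal sketch of a discrete PID step like the embedded routine described above (the gains, the toy plant model, and the one-second horizon are invented; only the 8 kHz sampling period is taken from the record).

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One PID update: returns the new actuator command."""
    state["integral"] += error * dt
    derivative = (error - state["prev"]) / dt
    state["prev"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev": 0.0}
position, dt = 0.0, 0.000125                  # 8 kHz sampling period
for _ in range(8000):                         # one second of control
    error = 1.0 - position                    # track a unit step profile
    command = pid_step(error, state, kp=4.0, ki=2.0, kd=0.001, dt=dt)
    position += (command - position) * dt * 50.0   # toy first-order stage
print(f"position after 1 s: {position:.4f}")
```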

  11. Visualization of synchronization of the uterine contraction signals: running cross-correlation and wavelet running cross-correlation methods.

    Science.gov (United States)

    Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr

    2006-01-01

    In physiological research, we often study multivariate data sets, containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and the wavelet cross-correlation methods to assess synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions. As a result, the running cross-correlation function was obtained. The propagation% parameter assessed from this function allows quantitative description of synchronization in bivariate time series. In general, the uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of the time series at various frequencies (scales). To show the changes of the propagation% parameter along scales, a wavelet running cross-correlation was used. At first, the continuous wavelet transforms as the uterine contraction signals were received and afterwards, a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
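
    A minimal sketch of the moving-window idea (the propagation% parameter itself is specific to the paper and not reproduced here): the mean-removed cross-correlation is evaluated per window, and the lag of its maximum estimates the inter-signal delay in that window. The signals and window settings are synthetic.

```python
import numpy as np

def running_xcorr_delay(x, y, win, max_lag):
    """Per-window delay estimate (in samples) from the cross-correlation peak."""
    delays = []
    for start in range(0, len(x) - win + 1, win // 2):
        xs = x[start:start + win] - np.mean(x[start:start + win])
        ys = y[start:start + win] - np.mean(y[start:start + win])
        lags = np.arange(-max_lag, max_lag + 1)
        cc = [np.sum(xs[max(0, -l):win - max(0, l)] *
                     ys[max(0, l):win - max(0, -l)]) for l in lags]
        delays.append(int(lags[np.argmax(cc)]))
    return delays

t = np.linspace(0, 20, 2000)
x = np.sin(2 * np.pi * 0.5 * t)        # a toy "contraction" signal
y = np.roll(x, 15)                     # the same signal delayed 15 samples
print(running_xcorr_delay(x, y, win=400, max_lag=40))
```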

  12. Hard real-time quick EXAFS data acquisition with all open source software on a commodity personal computer

    International Nuclear Information System (INIS)

    So, I.; Siddons, D.P.; Caliebe, W.A.; Khalid, S.

    2007-01-01

    We describe here the data acquisition subsystem of the Quick EXAFS (QEXAFS) experiment at the National Synchrotron Light Source of Brookhaven National Laboratory. For ease of future growth and flexibility, almost all software components are open source with very active maintainers. Among them, Linux running on x86 desktop computer, RTAI for real-time response, COMEDI driver for the data acquisition hardware, Qt and PyQt for graphical user interface, PyQwt for plotting, and Python for scripting. The signal (A/D) and energy-reading (IK220 encoder) devices in the PCI computer are also EPICS enabled. The control system scans the monochromator energy through a networked EPICS motor. With the real-time kernel, the system is capable of deterministic data-sampling period of tens of micro-seconds with typical timing-jitter of several micro-seconds. At the same time, Linux is running in other non-real-time processes handling the user-interface. A modern Qt-based controls-frontend enhances productivity. The fast plotting and zooming of data in time or energy coordinates let the experimenters verify the quality of the data before detailed analysis. Python scripting is built-in for automation. The typical data-rate for continuous runs are around 10 M bytes/min

  13. Estimating Stair Running Performance Using Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Lauro V. Ojeda

    2017-11-01

Stair running, both ascending and descending, is a challenging aerobic exercise that many athletes, recreational runners, and soldiers perform during training. Studying the biomechanics of stair running over multiple steps has been limited by the practical challenges presented while using optical-based motion tracking systems. We propose using foot-mounted inertial measurement units (IMUs) as a solution, as they enable unrestricted motion capture in any environment and without need for external references. In particular, this paper presents methods for estimating foot velocity and trajectory during stair running using foot-mounted IMUs. Computational methods leverage the stationary periods occurring during the stance phase and known stair geometry to estimate foot orientation and trajectory, ultimately used to calculate stride metrics. These calculations, applied to human participant stair running data, reveal performance trends through timing, trajectory, energy, and force stride metrics. We present the results of our analysis of experimental data collected on eleven subjects. Overall, we determine that for either ascending or descending, the stance time is the strongest predictor of speed as shown by its high correlation with stride time.
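
    The stance-phase trick in its simplest one-axis form (the paper's pipeline additionally estimates orientation and exploits stair geometry; the signals below are synthetic): velocity is obtained by dead-reckoning integration of acceleration and reset to zero whenever the foot is detected stationary, so integration drift cannot accumulate across strides.

```python
import numpy as np

def zupt_velocity(accel, stationary, dt):
    """accel: one-axis world-frame acceleration (gravity removed);
    stationary: boolean array marking stance-phase samples."""
    v = np.zeros_like(accel)
    for k in range(1, len(accel)):
        if stationary[k]:
            v[k] = 0.0                       # zero-velocity update
        else:
            v[k] = v[k - 1] + accel[k] * dt  # dead-reckoning integration
    return v

dt = 0.01                                    # 100 Hz IMU (illustrative)
accel = np.tile(np.r_[np.full(30, 2.0), np.full(30, -2.0), np.zeros(40)], 3)
stationary = np.tile(np.r_[np.zeros(60, bool), np.ones(40, bool)], 3)
v = zupt_velocity(accel, stationary, dt)
print("peak swing velocity ~", v.max(), "m/s; stride length ~",
      v[:100].sum() * dt, "m")
```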

  14. Relationship between running kinematic changes and time limit at vVO2max

    Directory of Open Access Journals (Sweden)

    Leonardo De Lucca

    2012-06-01

Exhaustive running at maximal oxygen uptake velocity (vVO2max) can alter running kinematic parameters and increase energy cost over time. The aims of the present study were to compare characteristics of ankle and knee kinematics during running at vVO2max and to verify the relationship between changes in kinematic variables and time limit (Tlim). Eleven male volunteers, recreational players of team sports, performed an incremental running test until volitional exhaustion to determine vVO2max and a constant velocity test at vVO2max. Subjects were filmed continuously from the left sagittal plane at 210 Hz for further kinematic analysis. The maximal plantar flexion during swing (p<0.01) was the only variable that increased significantly from beginning to end of the run. Increase in ankle angle at contact was the only variable related to Tlim (r=0.64; p=0.035) and explained 34% of the performance in the test. These findings suggest that the individuals under study maintained a stable running style at vVO2max and that the increase in plantar flexion explained performance in this test when it was applied to non-runners.

  15. Effect of treadmill versus overground running on the structure of variability of stride timing.

    Science.gov (United States)

    Lindsay, Timothy R; Noakes, Timothy D; McGregor, Stephen J

    2014-04-01

Gait timing dynamics of treadmill and overground running were compared. Nine trained runners ran treadmill and track trials at 80, 100, and 120% of preferred pace for 8 min each. Stride time series were generated for each trial. To each series, detrended fluctuation analysis (DFA), power spectral density (PSD), and multiscale entropy (MSE) analysis were applied to infer the regime of control along the randomness-regularity axis. Compared to overground running, treadmill running exhibited a higher DFA and PSD scaling exponent, as well as lower entropy at non-preferred speeds. This indicates a more ordered control for treadmill running, especially at non-preferred speeds. The results suggest that the treadmill itself brings about greater constraints and requires increased voluntary control. Thus, the quantification of treadmill running gait dynamics does not necessarily reflect movement in overground settings.
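
    A compact sketch of DFA-1 as named above: the scaling exponent alpha is the slope of log fluctuation versus log box size for the integrated, mean-removed series (the box sizes and the white-noise test signal are illustrative; white noise should give alpha near 0.5).

```python
import numpy as np

def dfa_alpha(x, box_sizes):
    y = np.cumsum(x - np.mean(x))            # integrated profile
    flucts = []
    for n in box_sizes:
        boxes = len(y) // n
        f2 = []
        for b in range(boxes):
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))  # detrended variance
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(box_sizes), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.normal(size=1024)
print(f"alpha = {dfa_alpha(white, [8, 16, 32, 64, 128]):.2f}")
```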

  16. Short-Run and Long-Run Elasticities of Diesel Demand in Korea

    Directory of Open Access Journals (Sweden)

    Seung-Hoon Yoo

    2012-11-01

This paper investigates the demand function for diesel in Korea covering the period 1986–2011. The short-run and long-run elasticities of diesel demand with respect to price and income are empirically examined using a co-integration and error-correction model. The short-run and long-run price elasticities are estimated to be −0.357 and −0.547, respectively. The short-run and long-run income elasticities are computed to be 1.589 and 1.478, respectively. Thus, diesel demand is relatively inelastic to price change and elastic to income change in both the short-run and long-run. Therefore, a demand-side management through raising the price of diesel will be ineffective and tightening the regulation of using diesel more efficiently appears to be more effective in Korea. The demand for diesel is expected to continuously increase as the economy grows.
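
    A worked first-order application of the reported point estimates: the approximate percentage change in diesel demand implied by a 10% price rise combined with 3% income growth (the scenario numbers are invented).

```python
def demand_change(dp, dy, e_price, e_income):
    """First-order approximation: %dQ = e_price * %dP + e_income * %dY."""
    return e_price * dp + e_income * dy

for horizon, ep, ey in [("short run", -0.357, 1.589),
                        ("long run", -0.547, 1.478)]:
    dq = demand_change(10.0, 3.0, ep, ey)
    print(f"{horizon}: demand changes by about {dq:+.2f}%")
```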

  17. Automated selection of brain regions for real-time fMRI brain-computer interfaces

    Science.gov (United States)

    Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio

    2017-02-01

    Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach best performances, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real-time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerating the translation of rt-fMRI BCIs from research to clinical applications.

  18. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...

  19. AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.

    Science.gov (United States)

    Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld

    2016-08-01

There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code under GPL license is available at https://github.com/algorun

  20. Running speed during training and percent body fat predict race time in recreational male marathoners.

    Science.gov (United States)

    Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52) and percent body fat remained predictive of marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/hour). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. The present results suggest that low body fat and running speed during training close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathon runners.
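
    The reported regression equation wrapped as a function (valid only to the extent of the study's recreational-runner sample, r² = 0.44; the example inputs are invented).

```python
def predicted_marathon_minutes(body_fat_percent, training_speed_kmh):
    """Race time estimate from the paper's equation (recreational males)."""
    return 326.3 + 2.394 * body_fat_percent - 12.06 * training_speed_kmh

# e.g., 18% body fat and an 11 km/h training pace:
print(f"{predicted_marathon_minutes(18.0, 11.0):.0f} min")
```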

  1. Shorter Ground Contact Time and Better Running Economy: Evidence From Female Kenyan Runners.

    Science.gov (United States)

    Mooses, Martin; Haile, Diresibachew W; Ojiambo, Robert; Sang, Meshack; Mooses, Kerli; Lane, Amy R; Hackney, Anthony C

    2018-06-25

Mooses, M, Haile, DW, Ojiambo, R, Sang, M, Mooses, K, Lane, AR, and Hackney, AC. Shorter ground contact time and better running economy: evidence from female Kenyan runners. J Strength Cond Res XX(X): 000-000, 2018. Previously, it has been concluded that the improvement in running economy (RE) might be considered as a key to the continued improvement in performance when no further increase in V̇O2max is observed. To date, RE has been extensively studied among male East African distance runners. By contrast, there is a paucity of data on the RE of female East African runners. A total of 10 female Kenyan runners performed 3 × 1,600-m steady-state run trials on a flat outdoor clay track (400-m lap) at the intensities that corresponded to their everyday training intensities for easy, moderate, and fast running. Running economy together with gait characteristics was determined. Participants showed moderate to very good RE at the first (202 ± 26 ml·kg⁻¹·km⁻¹) and second (188 ± 12 ml·kg⁻¹·km⁻¹) run trials, respectively. Correlation analysis revealed a significant relationship between ground contact time (GCT) and RE at the second run (r = 0.782; p = 0.022), which represented the intensity of anaerobic threshold. This study is the first to report the RE and gait characteristics of East African female athletes measured under everyday training settings. We provided the evidence that GCT is associated with the superior RE of the female Kenyan runners.

  2. Time limit and time at VO2max' during a continuous and an intermittent run.

    Science.gov (United States)

    Demarie, S; Koralsztein, J P; Billat, V

    2000-06-01

The purpose of this study was to verify, by track field tests, whether sub-elite runners (n=15) could (i) reach their VO2max while running at v50%delta, i.e. midway between the speed associated with lactate threshold (vLAT) and that associated with maximal aerobic power (vVO2max), and (ii) whether an intermittent exercise sustains a maximal and/or supra-maximal oxygen consumption longer than a continuous one. Within three days, subjects underwent a multistage incremental test during which their vVO2max and vLAT were determined; they then performed two additional testing sessions, where continuous and intermittent running exercises at v50%delta were performed up to exhaustion. Subjects' gas exchange and heart rate were continuously recorded by means of a telemetric apparatus. Blood samples were taken from the fingertip and analysed for blood lactate concentration. In both the continuous and the intermittent tests, peak VO2 exceeded the VO2max values determined during the incremental test. However, in the intermittent exercise, peak VO2, time to exhaustion, and time at VO2max reached significantly higher values, while blood lactate accumulation showed significantly lower values, than in the continuous one. The v50%delta is sufficient to stimulate VO2max in both intermittent and continuous running. Intermittent exercise proves better than continuous exercise at increasing maximal aerobic power, allowing a longer time at VO2max and a higher peak VO2 with lower lactate accumulation.
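
    The test speed used above, computed from its definition as the speed midway between vLAT and vVO2max (the example speeds are invented).

```python
def v50_delta(v_lat, v_vo2max):
    """Speed midway between lactate threshold and maximal aerobic power."""
    return v_lat + 0.5 * (v_vo2max - v_lat)

print(v50_delta(v_lat=14.0, v_vo2max=18.0), "km/h")  # -> 16.0 km/h
```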

  3. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  4. ALICE HLT Run 2 performance overview.

    Science.gov (United States)

    Krzewicki, Mikolaj; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using the online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline, and the HLT framework was extended to support that. The performance of this scheme is important for Run 3-related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph-oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously, while the production cluster contributes resources opportunistically during periods of LHC inactivity.

  5. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

    This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate ... because the real-time aspects of distributed process control systems are considered to be among the hardest and most interesting to handle.

  6. Change in skeletal muscle stiffness after running competition is dependent on both running distance and recovery time: a pilot study.

    Science.gov (United States)

    Sadeghi, Seyedali; Newman, Cassidy; Cortes, Daniel H

    2018-01-01

    Long-distance running competitions impose a large amount of mechanical loading and strain leading to muscle edema and delayed onset muscle soreness (DOMS). Damage to various muscle fibers, metabolic impairments and fatigue have been linked to explain how DOMS impairs muscle function. Disruptions of muscle fiber during DOMS exacerbated by exercise have been shown to change muscle mechanical properties. The objective of this study is to quantify changes in mechanical properties of different muscles in the thigh and lower leg as a function of running distance and time after competition. A custom implementation of the Focused Comb-Push Ultrasound Shear Elastography (F-CUSE) method was used to evaluate shear modulus in runners before and after a race. Twenty-two healthy individuals (age: 23 ± 5 years) were recruited using convenience sampling and split into three race categories: short distance (nine subjects, 3-5 miles), middle distance (10 subjects, 10-13 miles), and long distance (three subjects, 26+ miles). Shear Wave Elastography (SWE) measurements were taken on both legs of each subject on the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), soleus, lateral gastrocnemius (LG), medial gastrocnemius (MG), biceps femoris (BF) and semitendinosus (ST) muscles. For statistical analyses, a linear mixed model was used, with recovery time and running distance as fixed variables, while shear modulus was used as the dependent variable. Recovery time had a significant effect on the soleus (p = 0.05), while running distance had considerable effect on the biceps femoris (p = 0.02), vastus lateralis (p < 0.01) and semitendinosus (p = 0.02) muscles. Sixty-seven percent of muscles exhibited a decreasing stiffness trend from before competition to immediately after competition. The preliminary results suggest that SWE could potentially be used to quantify changes of muscle mechanical properties as a way for measuring recovery procedures for runners.
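
    For readers who want to reproduce this kind of analysis, the following is a minimal sketch of a linear mixed model with recovery time and running distance as fixed effects and subject as the grouping (random-intercept) factor, using Python's statsmodels; the data frame and column names are invented for illustration.

    ```python
    # Minimal sketch of the reported analysis: shear modulus as the
    # dependent variable, recovery time and distance as fixed effects,
    # random intercept per subject. All data below are made up.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "subject":           [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
        "distance_miles":    [4, 4, 12, 12, 26, 26, 4, 4, 12, 12, 26, 26],
        "recovery_hours":    [0, 48] * 6,
        "shear_modulus_kpa": [11.2, 12.5, 10.1, 12.9, 9.4, 11.8,
                              11.0, 12.2, 10.4, 12.6, 9.7, 11.5],
    })

    model = smf.mixedlm("shear_modulus_kpa ~ recovery_hours + distance_miles",
                        df, groups=df["subject"])
    print(model.fit().summary())
    ```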

  7. ANALYSIS OF THE POSSIBILITY TO AVOID A RUNNING-DOWN ACCIDENT BY TIMELY BRAKING

    Directory of Open Access Journals (Sweden)

    Sarayev, A.

    2013-06-01

    The circumstances under which the driver can stop the vehicle by applying timely braking before reaching a pedestrian crossing, or can decrease its speed to a safe limit, in order to avoid a running-down accident are considered.

  8. Differences in ground contact time explain the less efficient running economy in north african runners.

    Science.gov (United States)

    Santos-Concejero, J; Granados, C; Irazusta, J; Bidaurrazaga-Letona, I; Zabala-Lili, J; Tam, N; Gil, S M

    2013-09-01

    The purpose of this study was to investigate the relationship between biomechanical variables and running economy in North African and European runners. Eight North African and 13 European male runners of the same athletic level ran 4-minute stages on a treadmill at varying set velocities. During the test, biomechanical variables such as ground contact time, swing time, stride length, stride frequency, stride angle and the different sub-phases of ground contact were recorded using an optical measurement system. Additionally, oxygen uptake was measured to calculate running economy. The European runners were more economical than the North African runners at 19.5 km · h(-1), presented lower ground contact time at 18 km · h(-1) and 19.5 km · h(-1), and experienced a later propulsion sub-phase at 10.5 km · h(-1), 12 km · h(-1), 15 km · h(-1), 16.5 km · h(-1) and 19.5 km · h(-1) than the North African runners (P < 0.05). Running economy at 19.5 km · h(-1) was negatively correlated with swing time (r = -0.53) and stride angle (r = -0.52), whereas it was positively correlated with ground contact time (r = 0.53). Within the constraints of extrapolating these findings, the less efficient running economy in North African runners may imply that their outstanding performance at international athletic events appears not to be linked to running efficiency. Further, the differences in metabolic demand seem to be associated with differing biomechanical characteristics during ground contact, including longer contact times.

  9. Biomechanical characteristics of skeletal muscles and associations between running speed and contraction time in 8- to 13-year-old children.

    Science.gov (United States)

    Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena

    2017-02-01

    Objective: To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method: This longitudinal study analyzed tensiomyographic measurements of the vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results: A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that the running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion: Joint category plots showed that contraction times of the biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.

  10. Real time analysis with the upgraded LHCb trigger in Run III

    Science.gov (United States)

    Szumlak, Tomasz

    2017-10-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1.1 MHz, a rate at which the entire detector is read out. In a second level, implemented in a farm of around 20k parallel-processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software-based trigger system is being developed, and it will have to process the full 30 MHz rate of bunch crossings with inelastic collisions. LHCb will also receive a factor of 5 increase in the instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1.1 MHz readout bottleneck, combined with the higher instantaneous luminosity. Many charm hadron signals can be recorded at up to 50 times higher rates. LHCb is implementing a new paradigm in the form of real-time data analysis, in which abundant signals are recorded in a reduced event format that can be fed directly to the physics analyses. These data do not need any further offline event reconstruction, which allows a larger fraction of the grid computing resources to be devoted to Monte Carlo productions. We discuss how this real-time analysis model is critical to the LHCb upgrade, and how it will evolve during Run II.

  11. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.
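
    The steering loop described here can be illustrated with a deliberately minimal sketch. The code below uses generic Python sockets, not the RealityGrid library or any Blue Gene-specific relay: the simulation polls a listening socket once per step and applies "name=value" updates sent by a steering client to its control parameters.

    ```python
    # Minimal computational-steering sketch (generic sockets; parameter
    # names and the wire format are invented for illustration).
    import select
    import socket

    params = {"dt": 0.01, "viscosity": 1.0}   # application-defined control parameters

    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("localhost", 5555))
    server.listen()
    clients = []

    for step in range(100_000):
        # ... advance the simulation one step using `params` ...
        readable, _, _ = select.select([server] + clients, [], [], 0)
        for sock in readable:
            if sock is server:
                conn, _ = server.accept()     # a steering client connected
                clients.append(conn)
            else:
                data = sock.recv(1024).decode()
                if not data:                  # client disconnected
                    clients.remove(sock)
                    sock.close()
                    continue
                name, _, value = data.strip().partition("=")
                if name in params:            # steer at run time
                    params[name] = float(value)
                    print(f"step {step}: {name} -> {params[name]}")
    ```

    A steering client is then just a few lines that open a TCP connection and send, e.g., "viscosity=0.5".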

  12. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    Science.gov (United States)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent, and the time taken for the consensus to occur are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run-time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.
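
    The attainable-set picture has a particularly clean special case. For decoupled planar single integrators with a common speed bound v (simpler than the coupled dynamics treated in the paper), each agent can reach any point within v·T of its start by time T, so the earliest consensus point is the centre of the minimum enclosing circle of the initial positions and the minimum time is its radius divided by v. A small numerical sketch:

    ```python
    # Time-optimal consensus for decoupled planar single integrators with
    # speed bound v: minimise the worst-case distance to the agents'
    # initial positions (the minimum enclosing circle centre).
    import numpy as np
    from scipy.optimize import minimize

    positions = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [1.0, -2.0]])
    v_max = 1.0  # common speed bound

    def worst_distance(x):
        return np.max(np.linalg.norm(positions - x, axis=1))

    res = minimize(worst_distance, positions.mean(axis=0), method="Nelder-Mead")
    target, t_min = res.x, res.fun / v_max
    print(f"consensus point {target}, minimum time {t_min:.3f}")
    ```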

  13. Development of a fast running accident analysis computer program for use in a simulator

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1985-01-01

    This paper describes how a reactor safety nuclear computer program can be modified and improved with the aim of producing a very fast-running tool to be used as a physical model in a plant simulator, without penalizing the accuracy of results. It also discusses some ideas on how the physical theoretical model can be combined with a driving statistical tool for the build-up of the entire software package to be implemented in the simulator for risk and reliability analysis. The approach to the problem, although applied to a specific computer program, can be considered quite general if an already existing and well-tested code is being used for the purpose. The computer program considered is ALMOD, originally developed for the analysis of the thermohydraulic and neutronic behaviour of the reactor core, primary circuit and steam generator during operational and special transients. (author)

  14. Non-exchangeability of running vs. other exercise in their association with adiposity, and its implications for public health recommendations.

    Directory of Open Access Journals (Sweden)

    Paul T Williams

    Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (the exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: (a) time and intensity, and (b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10(-15)) for running than non-running exercise for BMI (slopes ± SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m(2) per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m(2) per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are...
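
    The two dose definitions being contrasted are simple arithmetic, sketched below; the 1.02 MET-hours-per-km figure is quoted from the abstract, while the example inputs are invented. With self-reported values like these, the time-and-intensity dose can easily exceed the distance-based dose, consistent with the 38-43% gap reported.

    ```python
    # The two MET-hours/day dose calculations contrasted in the study.
    def methr_per_day_from_time(hours_per_week, met_intensity):
        return hours_per_week * met_intensity / 7.0

    def methr_per_day_from_distance(km_per_week, met_hours_per_km=1.02):
        return km_per_week * met_hours_per_km / 7.0

    # A runner reporting 40 km/week, but also 5.7 h/week at ~10 METs:
    by_time = methr_per_day_from_time(5.7, 10.0)       # ~8.14 METhr/d
    by_distance = methr_per_day_from_distance(40.0)    # ~5.83 METhr/d
    print(by_time / by_distance)  # ~1.40, i.e. ~40% higher, as in the study
    ```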

  15. Changes in Running Mechanics During a 6-Hour Running Race.

    Science.gov (United States)

    Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano

    2017-05-01

    To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (tc), and aerial time (ta) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (Fmax), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (kvert), and leg stiffness (kleg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%, P < .05), while tc increased over the course of the race, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, kvert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); ta and Fmax decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively, P < .05). Several gait parameters thus changed significantly during prolonged running, suggesting a possible time threshold that could affect performance regardless of absolute running speed.
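
    The spring-mass quantities listed here can be estimated from temporal variables alone. The abstract does not state which estimation method the authors used; one common possibility is the sine-wave spring-mass model of Morin and colleagues (2005), sketched below with invented inputs (body mass, height, speed, tc, ta).

    ```python
    # Sketch of spring-mass estimates from contact/aerial times using the
    # sine-wave model (one plausible method, not necessarily the authors').
    import math

    def spring_mass_estimates(mass, height, speed, t_c, t_a):
        g = 9.81
        # Peak vertical GRF, assuming F(t) = Fmax * sin(pi * t / tc):
        f_max = mass * g * (math.pi / 2) * (t_a / t_c + 1)
        # Vertical CoM lowering during contact:
        dz = f_max * t_c**2 / (mass * math.pi**2) - g * t_c**2 / 8
        l0 = 0.53 * height  # standing leg length, a common anthropometric estimate
        # Leg compression combines forward travel during contact and dz:
        dl = l0 - math.sqrt(max(l0**2 - (speed * t_c / 2)**2, 0.0)) + dz
        return {"Fmax_N": f_max,
                "kvert_N_per_m": f_max / dz,
                "kleg_N_per_m": f_max / dl}

    print(spring_mass_estimates(mass=68.3, height=1.72, speed=3.3,
                                t_c=0.28, t_a=0.10))
    ```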

  16. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.

  17. LHCb detector and trigger performance in Run II

    Science.gov (United States)

    Dordei, Francesca

    2017-12-01

    The LHCb detector is a forward spectrometer at the LHC, designed to perform high-precision studies of b- and c-hadrons. In Run II of the LHC, a new scheme for the software trigger at LHCb allows splitting the triggering of events into two stages, giving room to perform the alignment and calibration in real time. In the novel detector alignment and calibration strategy for Run II, data collected at the start of the fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. This allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The larger timing budget available in the trigger allows the same track reconstruction to be run online and offline. This enables LHCb to achieve the best reconstruction performance already in the trigger, and allows physics analyses to be performed directly on the data produced by the trigger reconstruction. The novel real-time processing strategy at LHCb is discussed from both the technical and operational points of view. The overall performance of the LHCb detector on the data of Run II is presented as well.

  18. Towards the development of run times leveraging virtualization for high performance computing

    International Nuclear Information System (INIS)

    Diakhate, F.

    2010-12-01

    In recent years, there has been a growing interest in using virtualization to improve the efficiency of data centers. This success is rooted in virtualization's excellent fault tolerance and isolation properties, in the overall flexibility it brings, and in its ability to exploit multi-core architectures efficiently. These characteristics also make virtualization an ideal candidate to tackle issues found in new compute cluster architectures. However, in spite of recent improvements in virtualization technology, overheads in the execution of parallel applications remain, which prevent its use in the field of high performance computing. In this thesis, we propose a virtual device dedicated to message passing between virtual machines, so as to improve the performance of parallel applications executed in a cluster of virtual machines. We also introduce a set of techniques facilitating the deployment of virtualized parallel applications. These functionalities have been implemented as part of a runtime system which allows to benefit from virtualization's properties in a way that is as transparent as possible to the user while minimizing performance overheads. (author)

  19. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    Science.gov (United States)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation essentially is an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  20. Change in skeletal muscle stiffness after running competition is dependent on both running distance and recovery time: a pilot study

    Directory of Open Access Journals (Sweden)

    Seyedali Sadeghi

    2018-03-01

    Long-distance running competitions impose a large amount of mechanical loading and strain leading to muscle edema and delayed onset muscle soreness (DOMS). Damage to various muscle fibers, metabolic impairments and fatigue have been linked to explain how DOMS impairs muscle function. Disruptions of muscle fiber during DOMS exacerbated by exercise have been shown to change muscle mechanical properties. The objective of this study is to quantify changes in mechanical properties of different muscles in the thigh and lower leg as a function of running distance and time after competition. A custom implementation of the Focused Comb-Push Ultrasound Shear Elastography (F-CUSE) method was used to evaluate shear modulus in runners before and after a race. Twenty-two healthy individuals (age: 23 ± 5 years) were recruited using convenience sampling and split into three race categories: short distance (nine subjects, 3–5 miles), middle distance (10 subjects, 10–13 miles), and long distance (three subjects, 26+ miles). Shear Wave Elastography (SWE) measurements were taken on both legs of each subject on the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), soleus, lateral gastrocnemius (LG), medial gastrocnemius (MG), biceps femoris (BF) and semitendinosus (ST) muscles. For statistical analyses, a linear mixed model was used, with recovery time and running distance as fixed variables, while shear modulus was used as the dependent variable. Recovery time had a significant effect on the soleus (p = 0.05), while running distance had considerable effect on the biceps femoris (p = 0.02), vastus lateralis (p < 0.01) and semitendinosus muscles (p = 0.02). Sixty-seven percent of muscles exhibited a decreasing stiffness trend from before competition to immediately after competition. The preliminary results suggest that SWE could potentially be used to quantify changes of muscle mechanical properties as a way for measuring recovery procedures for runners.

  1. Wave Run-Up on Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez

    This study has investigated the interaction of water waves with a circular structure, known as the wave run-up phenomenon. This run-up phenomenon has been simulated by the use of computational fluid dynamic models. The numerical model (NS3) used in this study has been verified rigorously against a number of cases. Regular and freak waves have been generated in a numerical wave tank with a gentle slope in order to address the study of the wave run-up on a circular cylinder. From the computational side it can be said that it is inexpensive. Furthermore, the comparison of the current numerical model ... to the cylinder. Based on appropriate analysis the collected data has been analysed with the stream function theory to obtain the relevant parameters for the use of the predicted wave run-up formula. An analytical approach has been pursued and solved for individual waves. Maximum run-up and 2% run-up were studied ...

  3. Contributing to the design of run-time systems dedicated to high performance computing

    International Nuclear Information System (INIS)

    Perache, M.

    2006-10-01

    In the field of intensive scientific computing, the quest for performance has to face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multi-processors. It features a programming model centered around collective communications and synchronizations, and provides load balancing facilities. The programming interface, named MPC, provides high-level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)

  4. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.; Yan, Lie

    2014-01-01

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  5. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.

    2014-08-29

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  6. The CERN Data Centre readies for Run 2

    CERN Multimedia

    Katarina Anthony

    2015-01-01

    While the world waits for Run 2 data with growing anticipation, the CERN Data Centre is battening down the hatches. Run 2 is set to see a significant increase in the amount of data produced by the LHC experiments, with more than one hundred additional petabytes expected over the next three years. How will CERN manage this flood of results? The Bulletin checks in with the IT Department to find out...   The CERN Data Centre: the heart of CERN's entire scientific, administrative, and computing infrastructure. With every second of run-time, gigabytes of data will come pouring into the CERN Data Centre to be stored, sorted and shared with physicists worldwide. To cope with this massive influx of Run 2 data, the CERN Data and Storage Services group focused on three areas: speed, capacity and reliability. First on the list, the group set out to increase the rate at which they could store data. "During Run 1, we were storing 1 gigabyte-per-second, with the occasional peak of 6 giga...

  7. Short- and long-run time-of-use price elasticities in Swiss residential electricity demand

    International Nuclear Information System (INIS)

    Filippini, Massimo

    2011-01-01

    This paper presents an empirical analysis of the residential demand for electricity by time of day. The analysis was performed using aggregate data at the city level for 22 Swiss cities for the period 2000-2006. For this purpose, we estimated two log-log demand equations for peak and off-peak electricity consumption using static and dynamic partial adjustment approaches. These demand functions were estimated using several econometric approaches for panel data, for example LSDV and RE for static models, and LSDV and corrected LSDV estimators for dynamic models. The aim of this empirical analysis was to highlight some of the characteristics of Swiss residential electricity demand. The estimated short-run own-price elasticities are lower than 1, whereas in the long run these values are higher than 1. The estimated short-run and long-run cross-price elasticities are positive. This result shows that peak and off-peak electricity are substitutes. In this context, time-differentiated prices should provide an economic incentive to customers so that they can modify consumption patterns by reducing peak demand and shifting electricity consumption from peak to off-peak periods. - Highlights: → Empirical analysis on the residential demand for electricity by time-of-day. → Estimators for dynamic panel data. → Peak and off-peak residential electricity are substitutes.
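
    The dynamic partial-adjustment specification mentioned above has a compact form: regressing log consumption on log prices, log income, and lagged log consumption gives short-run elasticities directly, and dividing a price coefficient by one minus the lag coefficient gives the long-run elasticity. The sketch below uses synthetic data and plain OLS, not the paper's data or its panel estimators (LSDV, corrected LSDV).

    ```python
    # Illustrative log-log partial-adjustment demand equation on fake data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 150
    ln_p_peak = rng.normal(0.0, 0.2, n)   # log own (peak) price
    ln_p_off  = rng.normal(0.0, 0.2, n)   # log cross (off-peak) price
    ln_income = rng.normal(10.0, 0.1, n)
    ln_q_lag  = rng.normal(5.0, 0.3, n)   # lagged log consumption
    ln_q = (1.0 - 0.40 * ln_p_peak + 0.15 * ln_p_off + 0.30 * ln_income
            + 0.65 * ln_q_lag + rng.normal(0.0, 0.05, n))

    X = sm.add_constant(np.column_stack([ln_p_peak, ln_p_off, ln_income, ln_q_lag]))
    fit = sm.OLS(ln_q, X).fit()
    beta_own, lam = fit.params[1], fit.params[4]
    print("short-run own-price elasticity:", round(beta_own, 2))           # ~ -0.40
    print("long-run own-price elasticity :", round(beta_own / (1 - lam), 2))  # ~ -1.14
    ```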

  8. The investigation and implementation of real-time face pose and direction estimation on mobile computing devices

    Science.gov (United States)

    Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae

    2012-04-01

    Mobile computing devices have many limitations, such as a relatively small user interface and slow computing speed. Augmented reality often requires face pose estimation, which can be used as a human-computer interaction (HCI) and entertainment tool. For a real-time implementation of head pose estimation on relatively resource-limited mobile platforms, different constraints must be met while retaining sufficient face pose estimation accuracy. The proposed face pose estimation method meets this objective. Experimental results on a test Android mobile device delivered satisfactory performance in both speed and accuracy.

  9. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    Science.gov (United States)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  11. Personal best marathon time and longest training run, not anthropometry, predict performance in recreational 24-hour ultrarunners.

    Science.gov (United States)

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald

    2011-08-01

    In recent studies, relationships between running performance and both low body fat and low thicknesses of selected skinfolds have been demonstrated for distances from 100 m to the marathon, but not for the ultramarathon. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bivariate and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper-body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometers run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r2 = 0.46) by the following equation: (performance in a 24-hour run, km) = 234.7 + 0.481 × (longest training session before the 24-hour run, km) - 0.594 × (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance but not anthropometric variables. To achieve maximum kilometers in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
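
    Transcribed directly from the abstract, the prediction equation is straightforward to apply (a convenience wrapper only; the coefficients are the published ones).

    ```python
    # Published regression (r^2 = 0.46) for predicted 24-hour race distance.
    def predicted_24h_distance_km(longest_training_run_km, marathon_pb_min):
        return 234.7 + 0.481 * longest_training_run_km - 0.594 * marathon_pb_min

    # Example: a 60-km longest training run and a 3 h 20 min (200 min) marathon PB.
    print(predicted_24h_distance_km(60, 200))  # ~144.8 km, close to the cohort mean
    ```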

  12. Computational comparison of the effect of mixing grids of 'swirler' and 'run-through' types on flow parameters and the behavior of steam phase in WWER fuel assemblies

    International Nuclear Information System (INIS)

    Shcherbakov, S.; Sergeev, V.

    2011-01-01

    The results obtained using the TURBOFLOW computer code are presented for the numerical calculations of space distributions of coolant flow, heating and boiling characteristics in WWER fuel assemblies with regard to the effect of mixing grids of 'Swirler' and 'Run-through' types installed in the FA on the above processes. The nature of the effect of these grids on coolant flow was demonstrated to be different. Thus, the relaxation length of cross flows after passing a 'Run-through' grid is five times that of a 'Swirler'-type grid, which correlates well with the experimental data. At the same time, accelerations occurring in the flow downstream of a 'Swirler'-type grid are an order of magnitude greater than those after a 'Run-through' grid. As a result, the efficiency of one-phase coolant mixing is much higher for the grids of 'Run-through' type, while the efficiency of steam removal from the fuel surface is much higher for 'Swirler'-type grids. To achieve optimal removal of steam from the fuel surface it has been proposed to install in fuel assemblies two 'Swirler'-type grids in tandem at a distance of about 10 cm from each other, with flow swirling in opposite directions. 'Run-through' grids would be appropriate for use for mixing in fuel assemblies with a high non-uniformity of fuel-by-fuel power generation. (authors)

  13. GPU-accelerated micromagnetic simulations using cloud computing

    Energy Technology Data Exchange (ETDEWEB)

    Jermain, C.L., E-mail: clj72@cornell.edu [Cornell University, Ithaca, NY 14853 (United States); Rowlands, G.E.; Buhrman, R.A. [Cornell University, Ithaca, NY 14853 (United States); Ralph, D.C. [Cornell University, Ithaca, NY 14853 (United States); Kavli Institute at Cornell, Ithaca, NY 14853 (United States)

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. - Highlights: • The benefits of cloud computing for GPU-accelerated micromagnetics are examined. • We present the MuCloud software for running simulations on cloud computing. • Simulation run times are measured to benchmark cloud computing performance. • Comparison benchmarks are analyzed between CPU and GPU based solvers.

  14. GPU-accelerated micromagnetic simulations using cloud computing

    International Nuclear Information System (INIS)

    Jermain, C.L.; Rowlands, G.E.; Buhrman, R.A.; Ralph, D.C.

    2016-01-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. - Highlights: • The benefits of cloud computing for GPU-accelerated micromagnetics are examined. • We present the MuCloud software for running simulations on cloud computing. • Simulation run times are measured to benchmark cloud computing performance. • Comparison benchmarks are analyzed between CPU and GPU based solvers.

  15. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    The ATLAS collaboration; Barberis, Dario; Crepe-Renaudin, Sabine Chrystel; De, Kaushik; Fassi, Farida; Stradling, Alden; Svatos, Michal; Vartapetian, Armen; Wolters, Helmut

    2017-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMODs former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing...

  16. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Adam Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  17. CDF run II run control and online monitor

    International Nuclear Information System (INIS)

    Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.

    2001-01-01

    The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs

  18.  Running speed during training and percent body fat predict race time in recreational male marathoners

    Directory of Open Access Journals (Sweden)

    Barandun U

    2012-07-01

    Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results: After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r2 = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. Conclusion: The present results suggest that low body fat and a running speed during training close to race pace (about 11 km/h) are two key factors for a fast marathon race time in recreational male marathon runners. Keywords: body fat, skinfold thickness, anthropometry, endurance, athlete
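
    As with the 24-hour record above, the published regression can be applied directly (a convenience wrapper around the abstract's equation; coefficients as published).

    ```python
    # Published regression (r^2 = 0.44) for predicted marathon race time.
    def predicted_marathon_minutes(body_fat_percent, training_speed_kmh):
        return 326.3 + 2.394 * body_fat_percent - 12.06 * training_speed_kmh

    # Example: 15% body fat, training at 11 km/h.
    print(predicted_marathon_minutes(15.0, 11.0))  # ~229.6 min (~3 h 50 min)
    ```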

  19. Success Run Waiting Times and Fuss-Catalan Numbers

    Directory of Open Access Journals (Sweden)

    S. J. Dilworth

    2015-01-01

    We present power series expressions for all the roots of the auxiliary equation of the recurrence relation for the distribution of the waiting time for the first run of k consecutive successes in a sequence of independent Bernoulli trials, that is, the geometric distribution of order k. We show that the series coefficients are Fuss-Catalan numbers and write the roots in terms of the generating function of the Fuss-Catalan numbers. Our main result is a new exact expression for the distribution, which is more concise than previously published formulas. Our work extends the analysis by Feller, who gave asymptotic results. We obtain quantitative improvements of the error estimates obtained by Feller.
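
    While the paper's contribution is a closed-form expression, the distribution itself is easy to generate numerically for cross-checking. The sketch below computes the geometric distribution of order k by dynamic programming over the length of the current success run (a standard construction, not the paper's formula).

    ```python
    # P(first run of k consecutive successes ends exactly at trial n).
    def first_run_pmf(k, p, n_max):
        q = 1.0 - p
        # probs[j] = P(current success run has length j and no run of k yet)
        probs = [1.0] + [0.0] * (k - 1)
        pmf = []
        for _ in range(n_max):
            pmf.append(probs[k - 1] * p)      # k-th consecutive success occurs now
            # A failure (prob q) resets any surviving state to run length 0;
            # a success extends runs of length j < k-1 to j+1.
            probs = [q * sum(probs)] + [p * probs[j] for j in range(k - 1)]
        return pmf

    pmf = first_run_pmf(k=3, p=0.5, n_max=30)
    print(pmf[:5])   # first nonzero mass at n = 3: 0.5**3 = 0.125
    print(sum(pmf))  # approaches 1 as n_max grows
    ```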

  20. SASD and the CERN/SPS run-time coordinator

    International Nuclear Information System (INIS)

    Morpurgo, G.

    1990-01-01

    Structured Analysis and Structured Design (SASD) provides us with a handy way of specifying the flow of data between the different modules (functional units) of a system. But the formalism loses its immediacy when the control flow has to be taken into account as well. Moreover, due to the lack of appropriate software infrastructure, very often the actual implementation of the system does not reflect the module decoupling and independence so much emphasized at the design stage. In this paper the run-time coordinator, a complete software infrastructure to support a real decoupling of the functional units, is described. Special attention is given to the complementarity of our approach and the SASD methodology. (orig.)

  1. Support system for ATLAS distributed computing operations

    CERN Document Server

    Kishimoto, Tomoe; The ATLAS collaboration

    2018-01-01

    The ATLAS distributed computing system has allowed the experiment to successfully meet the challenges of LHC Run 2. In order for distributed computing to operate smoothly and efficiently, several support teams are organized in the ATLAS experiment. The ADCoS (ATLAS Distributed Computing Operation Shifts) is a dedicated group of shifters who follow and report failing jobs, failing data transfers between sites, degradation of ATLAS central computing services, and more. The DAST (Distributed Analysis Support Team) provides user support to resolve issues related to running distributed analysis on the grid. The CRC (Computing Run Coordinator) maintains a global view of the day-to-day operations. In this presentation, the status and operational experience of the support system for ATLAS distributed computing in LHC Run 2 will be reported. This report also includes operations experience from the grid site point of view, and an analysis of the errors that create the biggest waste of wallclock time. The report of oper...

  2. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL). Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  3. Wave run-up on sandbag slopes

    Directory of Open Access Journals (Sweden)

    Thamnoon Rasmeemasmuang

    2014-03-01

    On occasions, sandbag revetments are temporarily applied to armour sandy beaches against erosion. Nevertheless, an empirical formula to determine the wave run-up height on sandbag slopes has not been available heretofore. In this study a wave run-up formula which considers the roughness of slope surfaces is proposed for the case of sandbag slopes. A series of laboratory experiments on the wave run-up on smooth slopes and sandbag slopes were conducted in a regular-wave flume, leading to the finding of empirical parameters for the formula. The proposed empirical formula is applicable to wave steepness ranging from 0.01 to 0.14 and to the thickness of placed sandbags relative to the wave height ranging from 0.17 to 3.0. The study shows that the wave run-up height computed by the formula for the sandbag slopes is 26-40% lower than that computed by the formula for the smooth slopes.

  4. Computing with Windows 7 For the Older and Wiser Get Up and Running on Your Home PC

    CERN Document Server

    Arnold, Adrian

    2010-01-01

    Computing with Windows® 7 for the Older & Wiser is a user-friendly guide that takes you step-by-step through the basics of using a computer. Written in easy-to-understand, jargon-free language, it is aimed at complete beginners using PCs running Microsoft Windows® 7. Inside, you will find step-by-step guidance on: Using the keyboard & the mouse; Navigating files and folders; Customising your desktop; Using Email and the Internet; Word processing; Organising your digital photos; Safely downloading files from the Internet; Finding useful websites and much more

  5. The Reliability and Validity of a Four-Minute Running Time-Trial in Assessing VO2max and Performance

    Directory of Open Access Journals (Sweden)

    Kerry McGawley

    2017-05-01

    Introduction: Traditional graded-exercise tests to volitional exhaustion (GXTs) are limited by the need to establish starting workloads, stage durations, and step increments. Short-duration time-trials (TTs) may be easier to implement and more ecologically valid in terms of real-world athletic events. The purpose of the current study was to assess the reliability and validity of maximal oxygen uptake (VO2max) and performance measured during a traditional GXT (STEP) and a four-minute running time-trial (RunTT). Methods: Ten recreational runners (age: 32 ± 7 years; body mass: 69 ± 10 kg) completed five STEP tests with a verification phase (VER) and five self-paced RunTTs on a treadmill. The order of the STEP/VER and RunTT trials was alternated and counter-balanced. Performance was measured as time to exhaustion (TTE) for STEP and VER and distance covered for RunTT. Results: The coefficient of variation (CV) for VO2max was similar between STEP, VER, and RunTT (1.9 ± 1.0, 2.2 ± 1.1, and 1.8 ± 0.8%, respectively), but varied for performance between the three types of test (4.5 ± 1.9, 9.7 ± 3.5, and 1.8 ± 0.7% for STEP, VER, and RunTT, respectively). Bland-Altman limits of agreement (bias ± 95%) showed VO2max to be 1.6 ± 3.6 mL·kg(-1)·min(-1) higher for STEP vs. RunTT. Peak HR was also significantly higher during STEP compared with RunTT (P = 0.019). Conclusion: A four-minute running time-trial appears to provide more reliable performance data in comparison to an incremental test to exhaustion, but may underestimate VO2max.
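
    The two reliability statistics used here are standard and easy to reproduce. The sketch below computes a within-subject coefficient of variation across repeated trials and Bland-Altman bias with 95% limits of agreement (bias ± 1.96 SD of the differences) on invented numbers.

    ```python
    # Within-subject CV and Bland-Altman limits of agreement (made-up data).
    import numpy as np

    vo2_step  = np.array([55.1, 54.3, 56.0, 53.8, 55.5])  # one runner's 5 STEP tests
    vo2_runtt = np.array([53.9, 53.2, 54.8, 52.9, 54.1])  # the same runner's 5 RunTTs

    cv_step = vo2_step.std(ddof=1) / vo2_step.mean() * 100
    print(f"STEP CV: {cv_step:.1f}%")

    diff = vo2_step - vo2_runtt
    bias, sd = diff.mean(), diff.std(ddof=1)
    print(f"bias {bias:.2f}, limits of agreement {bias - 1.96 * sd:.2f} "
          f"to {bias + 1.96 * sd:.2f} mL/kg/min")
    ```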

  6. Precise fixpoint computation through strategy iteration

    DEFF Research Database (Denmark)

    Gawlitza, Thomas; Seidl, Helmut

    2007-01-01

    We present a practical algorithm for computing least solutions of systems of equations over the integers with addition, multiplication with positive constants, maximum and minimum. The algorithm is based on strategy iteration. Its run-time (w.r.t. the uniform cost measure) is independent of the s...
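
    For contrast, the naive alternative to strategy iteration, plain Kleene fixpoint iteration over such a system, is easy to sketch in Python (a baseline illustration only, not the paper's algorithm):

        NEG_INF = float("-inf")

        # Example system: x = max(0, y + 1), y = min(x, 10)
        equations = {
            "x": lambda env: max(0, env["y"] + 1),
            "y": lambda env: min(env["x"], 10),
        }

        env = {v: NEG_INF for v in equations}  # start from the least element
        changed = True
        while changed:  # round-robin updates until the least fixpoint is reached
            changed = False
            for var, rhs in equations.items():
                new = rhs(env)
                if new > env[var]:
                    env[var] = new
                    changed = True
        print(env)  # {'x': 11, 'y': 10}

    Note that the number of rounds this naive iteration needs grows with the magnitudes of the values themselves, which is precisely the kind of dependence the strategy-iteration algorithm is designed to avoid.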

  7. A Computed River Flow-Based Turbine Controller on a Programmable Logic Controller for Run-Off River Hydroelectric Systems

    Directory of Open Access Journals (Sweden)

    Razali Jidin

    2017-10-01

    The main feature of a run-off river hydroelectric system is a small intake pond that overspills when river flow exceeds the turbines' intake. As river flow fluctuates, a large proportion of the potential energy is wasted through spillage, which can occur when turbines are operated manually. Manual operation is often adopted because water-level-based controllers are unreliable at many remote and unmanned run-off river hydropower plants. In order to overcome these issues, this paper proposes a novel method: a controller that derives turbine output set points from the computed mass flow rate of the rivers that feed the hydroelectric system. The computed flow is derived by summing the pond volume difference with numerical integration of both turbine discharge flows and spillages. This approach to estimating river flow allows the use of existing sensors rather than requiring the installation of new ones. All computations, including the numerical integration, have been realized as ladder logic on a programmable logic controller. The implemented controller manages dynamic changes in the flow rate of the river better than the old point-level-based controller, with the aid of a newly installed water level sensor. The computed mass flow rate of the river also allows the controller to straightforwardly determine the number of turbines to be in service, with consideration of turbine efficiencies and auxiliary power conservation.
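
    The mass-balance idea is compact: river inflow equals the pond volume change plus the integrated turbine discharge and spillage. A minimal Python sketch (the sample data and the trapezoidal rule are assumptions; the actual implementation is ladder logic on a PLC):

        def river_flow(pond_volumes, turbine_flows, spill_flows, dt):
            """Estimate mean river inflow (m^3/s) over a window of samples:
            Q_river = dV_pond/dt + Q_turbines + Q_spill, with the flow terms
            integrated numerically (trapezoidal rule) and averaged."""
            t_total = dt * (len(turbine_flows) - 1)
            dv = pond_volumes[-1] - pond_volumes[0]

            def trapz(samples):
                return dt * (sum(samples) - 0.5 * (samples[0] + samples[-1]))

            return (dv + trapz(turbine_flows) + trapz(spill_flows)) / t_total

        # 4 samples taken 60 s apart (volumes in m^3, flows in m^3/s)
        print(river_flow([5000, 5030, 5070, 5100],
                         [12.0, 12.5, 12.5, 12.0],
                         [0.0, 0.3, 0.5, 0.2], dt=60))  # ~13.2 m^3/s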

  8. Review of quantum computation

    International Nuclear Information System (INIS)

    Lloyd, S.

    1992-01-01

    Digital computers are machines that can be programmed to perform logical and arithmetical operations. Contemporary digital computers are "universal," in the sense that a program that runs on one computer can, if properly compiled, run on any other computer that has access to enough memory space and time. Any one universal computer can simulate the operation of any other; and the set of tasks that any such machine can perform is common to all universal machines. Since Bennett's discovery that computation can be carried out in a non-dissipative fashion, a number of Hamiltonian quantum-mechanical systems have been proposed whose time-evolutions over discrete intervals are equivalent to those of specific universal computers. The first quantum-mechanical treatment of computers was given by Benioff, who exhibited a Hamiltonian system with a basis whose members corresponded to the logical states of a Turing machine. In order to make the Hamiltonian local, in the sense that its structure depended only on the part of the computation being performed at that time, Benioff found it necessary to make the Hamiltonian time-dependent. Feynman discovered a way to make the computational Hamiltonian both local and time-independent by incorporating the direction of computation in the initial condition. In Feynman's quantum computer, the program is a carefully prepared wave packet that propagates through different computational states. Deutsch presented a quantum computer that exploits the possibility of existing in a superposition of computational states to perform tasks that a classical computer cannot, such as generating purely random numbers, and carrying out superpositions of computations as a method of parallel processing. In this paper, we show that such computers, by virtue of their common function, possess a common form for their quantum dynamics.

  9. Integrating software testing and run-time checking in an assertion verification framework

    OpenAIRE

    Mera, E.; López García, Pedro; Hermenegildo, Manuel V.

    2009-01-01

    We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling, via program transformation, run-time checks for (parts of) assertions which cannot be verified at compile-time. This transformation allows checking preconditions and postconditions, including conditional...
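
    The paper's setting is a logic-programming assertion language, so the following Python sketch is only a loose analogy: a transformation that wraps a function so that the parts of its assertions not verified at compile time are checked at run time:

        import functools

        def check(pre, post):
            """Wrap f so its pre-/postcondition assertions run at call time."""
            def transform(f):
                @functools.wraps(f)
                def wrapper(*args):
                    assert pre(*args), "precondition violated"
                    result = f(*args)
                    assert post(result, *args), "postcondition violated"
                    return result
                return wrapper
            return transform

        @check(pre=lambda n: n >= 0, post=lambda r, n: r * r <= n < (r + 1) ** 2)
        def isqrt(n):
            """Integer square root by simple search."""
            r = 0
            while (r + 1) * (r + 1) <= n:
                r += 1
            return r

        print(isqrt(10))  # 3; isqrt(-1) would raise the precondition AssertionError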

  10. Running vacuum in the Universe and the time variation of the fundamental constants of Nature

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsch, Harald [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitaet Muenchen, Physik-Department, Munich (Germany); Sola, Joan [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitat de Barcelona, Departament de Fisica Quantica i Astrofisica, Barcelona, Catalonia (Spain); Universitat de Barcelona (ICCUB), Institute of Cosmos Sciences, Barcelona, Catalonia (Spain); Nunes, Rafael C. [Universidade Federal de Juiz de Fora, Dept. de Fisica, Juiz de Fora, MG (Brazil)

    2017-03-15

    We compute the time variation of the fundamental constants (such as the ratio of the proton mass to the electron mass, the strong coupling constant, the fine-structure constant and Newton's constant) within the context of the so-called running vacuum models (RVMs) of the cosmic evolution. Recently, compelling evidence has been provided that these models are able to fit the main cosmological data (SNIa+BAO+H(z)+LSS+BBN+CMB) significantly better than the concordance ΛCDM model. Specifically, the vacuum parameters of the RVM (i.e. those responsible for the dynamics of the vacuum energy) prove to be nonzero at a confidence level ≳ 3σ. Here we use such remarkable status of the RVMs to make definite predictions on the cosmic time variation of the fundamental constants. It turns out that the predicted variations are close to the present observational limits. Furthermore, we find that the time evolution of the dark matter particle masses should be crucially involved in the total mass variation of our Universe. A positive measurement of this kind of effects could be interpreted as strong support to the "micro-macro connection" (viz. the dynamical feedback between the evolution of the cosmological parameters and the time variation of the fundamental constants of the microscopic world), previously proposed by two of us (HF and JS). (orig.)

  11. Spanish ATLAS Tier-2: facing up to LHC Run 2

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Sánchez, Javier; Sanchez Martinez, Victoria; Salt, José; Villaplana Perez, Miguel

    2015-01-01

    The goal of this work is to describe the way of addressing the main challenges of Run-2 by the Spanish ATLAS Tier-2. The considerable increase of energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that it is a distributed Tier-2 composed of three sites whose members are involved in ATLAS computing tasks, with a hub of research, innovation and education.

  12. 12 CFR 908.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    § 908.27 Computing time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event...
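
    The quoted rule (the date of the triggering act or event is not counted), together with the usual companion rule that the last day of the period is counted, assumed here because the snippet is truncated, reduces to one line of date arithmetic:

        from datetime import date, timedelta

        def deadline(act_date, period_days):
            """Last day of a period computed per the quoted rule: the date of
            the act is not included, so day 1 is the following day."""
            return act_date + timedelta(days=period_days)

        # A 30-day period triggered on 2010-01-15 runs through 2010-02-14.
        print(deadline(date(2010, 1, 15), 30))  # 2010-02-14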

  13. Icelandic Public Pensions: Why time is running out

    Directory of Open Access Journals (Sweden)

    Ólafur Ísleifsson

    2011-12-01

    The aim of this paper is to analyse the Icelandic public sector pension system enjoying a third party guarantee. Defined benefit funds fundamentally differ from defined contribution pension funds without a third party guarantee as is the case with the Icelandic general labour market pension funds. We probe the special nature of the public sector pension funds and make a comparison to the defined contribution pension funds of the general labour market. We explore the financial and economic effects of the third party guarantee of the funds, their investment performance and other relevant factors. We seek an answer to the question why time is running out for the country’s largest pension fund that currently faces the prospect of becoming empty by the year 2022.

  14. The immediate effect of long-distance running on T2 and T2* relaxation times of articular cartilage of the knee in young healthy adults at 3.0 T MR imaging.

    Science.gov (United States)

    Behzadi, Cyrus; Welsch, Goetz H; Laqmani, Azien; Henes, Frank O; Kaul, Michael G; Schoen, Gerhard; Adam, Gerhard; Regier, Marc

    2016-08-01

    To quantitatively assess the immediate effect of long-distance running on T2 and T2* relaxation times of the articular cartilage of the knee at 3.0 T in young healthy adults. 30 healthy male adults (18-31 years) who perform sports at an amateur level underwent an initial MRI at 3.0 T with T2-weighted [16 echo times (TEs): 9.7-154.6 ms] and T2*-weighted (24 TEs: 4.6-53.6 ms) relaxation measurements. Thereafter, all participants performed a 45-min run. After the run, all individuals were immediately re-examined. Data sets were post-processed using dedicated software (ImageJ; National Institutes of Health, Bethesda, MD). 22 regions of interest were manually drawn in segmented areas of the femoral, tibial and patellar cartilage. For statistical evaluation, Pearson product-moment correlation coefficients and confidence intervals were computed. Mean initial values were 35.7 ms for T2 and 25.1 ms for T2*. After the run, a significant decrease in the mean T2 and T2* relaxation times was observed for all segments in all participants: a mean decrease of 4.6 ms (±3.6 ms) for T2 and of 3.6 ms (±5.1 ms) for T2*. A significant decrease could be observed in all cartilage segments for both biomarkers. Both quantitative techniques, T2 and T2*, seem to be valuable parameters in the evaluation of immediate changes in the cartilage ultrastructure after running. This is the first direct comparison of immediate changes in T2 and T2* relaxation times after running in healthy adults.
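
    The post-processing described, per-segment pre/post differences and Pearson product-moment coefficients, is standard; a minimal Python sketch with made-up relaxation times standing in for the study's measurements:

        import math
        import statistics

        def mean_change(pre, post):
            """Mean and SD of the pre-to-post change in relaxation time (ms)."""
            diffs = [b - a for a, b in zip(pre, post)]
            return statistics.mean(diffs), statistics.stdev(diffs)

        def pearson_r(x, y):
            """Pearson product-moment correlation coefficient."""
            mx, my = statistics.mean(x), statistics.mean(y)
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            var_x = sum((a - mx) ** 2 for a in x)
            var_y = sum((b - my) ** 2 for b in y)
            return cov / math.sqrt(var_x * var_y)

        t2_pre = [36.1, 35.2, 34.9, 36.5]   # made-up per-segment values (ms)
        t2_post = [31.0, 30.9, 30.5, 32.2]
        print(mean_change(t2_pre, t2_post))  # negative mean = decrease
        print(pearson_r(t2_pre, t2_post))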

  15. Quantum Computation--The Ultimate Frontier

    OpenAIRE

    Adami, Chris; Dowling, Jonathan P.

    2002-01-01

    The discovery of an algorithm for factoring which runs in polynomial time on a quantum computer has given rise to a concerted effort to understand the principles, advantages, and limitations of quantum computing. At the same time, many different quantum systems are being explored for their suitability to serve as a physical substrate for the quantum computer of the future. I discuss some of the theoretical foundations of quantum computer science, including algorithms and error correction, and...

  16. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
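
    Randomized quicksort is the canonical instance of that worst-case versus average-case gap; a minimal Python sketch:

        import random

        def quicksort(a):
            """Randomized quicksort: expected O(n log n) comparisons; random
            pivots make the O(n^2) worst case vanishingly unlikely on any input."""
            if len(a) <= 1:
                return a
            pivot = random.choice(a)
            less = [x for x in a if x < pivot]
            equal = [x for x in a if x == pivot]
            greater = [x for x in a if x > pivot]
            return quicksort(less) + equal + quicksort(greater)

        print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]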

  17. The running pattern and its importance in long-distance running

    Directory of Open Access Journals (Sweden)

    Jarosław Hoffman

    2017-07-01

    The running pattern is individual for each runner, regardless of distance. We can characterise it as the sum of the runner's data (age, height, training time, etc.) and the parameters of his run. Building proper technique should focus first and foremost on the runner's movement coordination and strength. In training the correct running step we can use tools similar to those used when working on deep (proprioceptive) sensation. The aim of this paper was to define what we can call a running pattern, what its influence is in long-distance running, and the relationship between technique training and the running pattern. The importance of a running pattern in long-distance racing is immense: the more it deviates from the norm, the greater the harm its repetition over a long run will cause the body. Scheduling training exercises that shape technique is therefore very important and affects the running pattern significantly.

  18. System and Component Software Specification, Run-time Verification and Automatic Test Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  19. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently, it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems... ...sensitive assets at run-time, which we denote split-enforcement, and provide an implementation for ARM-powered devices using ARM TrustZone security extensions. We design, build, and evaluate a prototype Trusted Cell that provides trusted services. We also present the first generic TrustZone driver...

  20. Triathlon: running injuries.

    Science.gov (United States)

    Spiker, Andrea M; Dixit, Sameer; Cosgarea, Andrew J

    2012-12-01

    The running portion of the triathlon represents the final leg of the competition and, by some reports, the most important part in determining a triathlete's overall success. Although most triathletes spend most of their training time on cycling, running injuries are the most common injuries encountered. Common causes of running injuries include overuse, lack of rest, and activities that aggravate biomechanical predisposers of specific injuries. We discuss the running-associated injuries in the hip, knee, lower leg, ankle, and foot of the triathlete, and the causes, presentation, evaluation, and treatment of each.

  1. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter, Hubble-sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle, leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increased the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times and to meet project schedule deadlines.
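
    The first technique, recalculating environmental fluxes only when the slew between observing orientations is large enough, amounts to an angular-distance test; a minimal Python sketch (the pointing-vector representation and the 5-degree threshold are assumptions, not values from the paper):

        import math

        def angle_between(u, v):
            """Angle in degrees between two pointing vectors."""
            dot = sum(a * b for a, b in zip(u, v))
            norm = (math.sqrt(sum(a * a for a in u))
                    * math.sqrt(sum(b * b for b in v)))
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

        def needs_flux_recalc(prev_orientation, new_orientation, threshold_deg=5.0):
            """Recompute environmental fluxes only for sufficiently large slews."""
            return angle_between(prev_orientation, new_orientation) > threshold_deg

        print(needs_flux_recalc((1, 0, 0), (0.999, 0.04, 0)))  # small slew -> False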

  2. Run Clever - No difference in risk of injury when comparing progression in running volume and running intensity in recreational runners

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik

    2018-01-01

    Background/aim: The Run Clever trial investigated if there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed... participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning a total of 80 runners sustained an RRI (Sch-I n=36/Sch-V n=44). The cumulative incidence proportion (CIP) in Sch-V (reference...

  3. A new view of responses to first-time barefoot running.

    OpenAIRE

    Wilkinson, Mick; Caplan, Nick; Akenhead, Richard; Hayes, Phil

    2015-01-01

    We examined acute alterations in gait and oxygen cost from shod-to-barefoot running in habitually-shod well-trained runners with no prior experience of running barefoot. Thirteen runners completed six-minute treadmill runs shod and barefoot on separate days at a mean speed of 12.5 km·h-1. Steady-state oxygen cost in the final minute was recorded. Kinematic data were captured from 30-consecutive strides. Mean differences between conditions were estimated with 90% confidence intervals. When bar...

  4. 49 CFR 511.15 - Time.

    Science.gov (United States)

    2010-10-01

    § 511.15 Time. (a) Computation. In computing any period of time prescribed or allowed by the rules in this part, the day of the act, event, or default from which the designated period of time begins to run shall not...

  5. Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code

    Directory of Open Access Journals (Sweden)

    GHULAM MUSTAFA

    2017-01-01

    Compute-intensive programs generally consume a significant fraction of their execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated in the front-end of a JIT compiler to parallelize sequential code just before native translation. However, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system.

  6. Just-in-time compilation-inspired methodology for parallelization of compute intensive java code

    International Nuclear Information System (INIS)

    Mustafa, G.; Ghani, M.U.

    2017-01-01

    Compute-intensive programs generally consume a significant fraction of their execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated in the front-end of a JIT compiler to parallelize sequential code just before native translation. However, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system. (author)
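
    The core transformation, fanning the independent iterations of a DOALL hotspot loop out across cores, can be mimicked outside a JIT; a minimal Python sketch (the papers target Java bytecode, so this only illustrates the loop-level idea):

        from multiprocessing import Pool

        def body(i):
            """One independent (DOALL) iteration of a compute-intensive hotspot."""
            acc = 0.0
            for k in range(1, 10_000):
                acc += (i % k) / k
            return acc

        if __name__ == "__main__":
            n = 1_000
            # Sequential hotspot loop: results = [body(i) for i in range(n)]
            with Pool(processes=8) as pool:          # 8-core system, as in the study
                results = pool.map(body, range(n))   # iterations run in parallel
            print(sum(results))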

  7. 12 CFR 1780.11 - Computing time.

    Science.gov (United States)

    2010-01-01

    § 1780.11 Computing time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event that commences the designated period of time is not included. The last day so...

  8. Optimisation of the ATLAS Track Reconstruction Software for Run-2

    CERN Document Server

    Salzburger, Andreas; The ATLAS collaboration

    2015-01-01

    Track reconstruction is one of the most complex elements of the reconstruction of events recorded by ATLAS from collisions delivered by the LHC, and the most time-consuming reconstruction component in high-luminosity environments. The flat budget projections for computing resources for Run-2 of the LHC, together with the demands of reconstructing higher pile-up collision data at rates more than double those in Run-1 (an increase from 400 Hz to 1 kHz in trigger output), have put stringent requirements on the track reconstruction software. The ATLAS experiment has performed a two-year-long software campaign which aimed to reduce the reconstruction time by a factor of three to meet the resource limitations for Run-2; the majority of the changes to achieve this were improvements to the track reconstruction software. The CPU processing time of ATLAS track reconstruction was reduced by more than a factor of three during this campaign without any loss of output information of the track reconstruction. We present the ...

  9. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    Science.gov (United States)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
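
    The running diffusion coefficients such test-particle codes compute are conventionally defined as d(t) = <(Δx)²>/(2t); a minimal Python sketch estimating one from synthetic random-walk displacements standing in for the simulated particles:

        import random

        def running_diffusion(displacements, t):
            """Running diffusion coefficient d(t) = <dx^2> / (2 t)."""
            mean_sq = sum(dx * dx for dx in displacements) / len(displacements)
            return mean_sq / (2.0 * t)

        # Synthetic ensemble: unbiased +/-1 random walks of n unit-time steps
        n, particles = 1000, 2000
        walks = [sum(random.choice((-1, 1)) for _ in range(n))
                 for _ in range(particles)]
        print(running_diffusion(walks, t=n))  # ~0.5 for this walk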

  10. On run-time exploitation of concurrency

    NARCIS (Netherlands)

    Holzenspies, P.K.F.

    2010-01-01

    The 'free' speed-up stemming from ever increasing processor speed is over. Performance increase in computer systems can now only be achieved through parallelism. One of the biggest challenges in computer science is how to map applications onto parallel computers. Concurrency, seen as the set of

  11. Simulating three dimensional wave run-up over breakwaters covered by antifer units

    Science.gov (United States)

    Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader

    2014-06-01

    The paper presents the numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of Navier-Stokes equations within armour blocks is used to provide a more reliable approach to simulate wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble mound breakwaters. The results showed that the placement pattern of antifer units had a great impact on values of wave run-up, so that changing the placement pattern from regular to double pyramid can reduce the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer and reduced wave run-up due to inflow into the armour and stone layer.

  12. Simulating three dimensional wave run-up over breakwaters covered by antifer units

    Directory of Open Access Journals (Sweden)

    A. Najafi-Jilani

    2014-06-01

    The paper presents the numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of Navier-Stokes equations within armour blocks is used to provide a more reliable approach to simulate wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble mound breakwaters. The results showed that the placement pattern of antifer units had a great impact on values of wave run-up, so that changing the placement pattern from regular to double pyramid can reduce the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer and reduced wave run-up due to inflow into the armour and stone layer.

  13. Similar Running Economy With Different Running Patterns Along the Aerial-Terrestrial Continuum.

    Science.gov (United States)

    Lussiana, Thibault; Gindre, Cyrille; Hébert-Losier, Kim; Sagawa, Yoshimasa; Gimenez, Philippe; Mourot, Laurent

    2017-04-01

    No unique or ideal running pattern is the most economical for all runners. Classifying the global running patterns of individuals into 2 categories (aerial and terrestrial) using the Volodalen method could permit a better understanding of the relationship between running economy (RE) and biomechanics. The main purpose was to compare the RE of aerial and terrestrial runners. Two coaches classified 58 runners into aerial (n = 29) or terrestrial (n = 29) running patterns on the basis of visual observations. RE, muscle activity, kinematics, and spatiotemporal parameters of both groups were measured during a 5-min run at 12 km/h on a treadmill. Maximal oxygen uptake (V̇O2max) and peak treadmill speed (PTS) were assessed during an incremental running test. No differences were observed between aerial and terrestrial patterns for RE, V̇O2max, and PTS. However, at 12 km/h, aerial runners exhibited earlier gastrocnemius lateralis activation in preparation for contact, less dorsiflexion at ground contact, higher coactivation indexes, and greater leg stiffness during stance phase than terrestrial runners. Terrestrial runners had more pronounced semitendinosus activation at the start and end of the running cycle, shorter flight time, greater leg compression, and a more rear-foot strike. Different running patterns were associated with similar RE. Aerial runners appear to rely more on elastic energy utilization with a rapid eccentric-concentric coupling time, whereas terrestrial runners appear to propel the body more forward rather than upward to limit work against gravity. Excluding runners with a mixed running pattern from analyses did not affect study interpretation.

  14. Effect of Light/Dark Cycle on Wheel Running and Responding Reinforced by the Opportunity to Run Depends on Postsession Feeding Time

    Science.gov (United States)

    Belke, T. W.; Mondona, A. R.; Conrad, K. M.; Poirier, K. F.; Pickering, K. L.

    2008-01-01

    Do rats run and respond at a higher rate to run during the dark phase when they are typically more active? To answer this question, Long Evans rats were exposed to a response-initiated variable interval 30-s schedule of wheel-running reinforcement during light and dark cycles. Wheel-running and local lever-pressing rates increased modestly during…

  15. Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?

    Directory of Open Access Journals (Sweden)

    Tara M. Madhyastha

    2017-11-01

    The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows “in the cloud.” Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster.
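
    To first order, the cost estimate the paper discusses is simple arithmetic once a workflow has been benchmarked; a minimal Python sketch (the hourly price and job counts are placeholders, not current AWS rates or the paper's figures):

        def estimate_cost(n_jobs, minutes_per_job, price_per_instance_hour=0.40,
                          n_instances=10):
            """First-order cloud cost: total compute hours times hourly price."""
            total_hours = n_jobs * minutes_per_job / 60.0
            wall_clock_hours = total_hours / n_instances
            return total_hours * price_per_instance_hour, wall_clock_hours

        cost, wall = estimate_cost(n_jobs=500, minutes_per_job=90)
        print(f"~${cost:.0f} total, ~{wall:.0f} h wall-clock on 10 instances")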

  16. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design... of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice.
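
    A minimal Python sketch of what a monitor for error class a) might check on a message exchanged between application tasks; the field names and constraints are invented for illustration:

        def monitor_message(msg):
            """Run-time semantic checks on data communicated between tasks
            (error class a); field names and ranges are illustrative only."""
            errors = []
            if not 0.0 <= msg.get("valve_position", -1.0) <= 1.0:
                errors.append("valve_position out of [0, 1]")
            if msg.get("sequence") is None:
                errors.append("missing sequence number")
            return errors

        print(monitor_message({"valve_position": 1.7}))  # two violations detected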

  17. Multitasking the code ARC3D. [for computational fluid dynamics

    Science.gov (United States)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
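
    A speedup of "over three times" on four processors implies that most, but not all, of the code ran in parallel; Amdahl's law (a generic formula, not taken from the paper) makes the relation explicit:

        def amdahl_speedup(parallel_fraction, n_processors):
            """Amdahl's law: speedup = 1 / ((1 - p) + p / N)."""
            p = parallel_fraction
            return 1.0 / ((1.0 - p) + p / n_processors)

        # ~95% parallel code on a 4-CPU machine gives roughly the reported 3x+
        print(amdahl_speedup(0.95, 4))  # ~3.48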

  18. 6 CFR 13.27 - Computation of time.

    Science.gov (United States)

    2010-01-01

    § 13.27 Computation of time. (a) In computing any period of time under this part or in an order issued...

  19. Elastic Spatial Query Processing in OpenStack Cloud Computing Environment for Time-Constraint Data Analysis

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2017-03-01

    Geospatial big data analysis (GBDA) is extremely significant for time-constraint applications such as disaster response. However, time-constraint analysis is not yet a trivial task in the cloud computing environment. Spatial query processing (SQP) is typically computation-intensive and indispensable for GBDA, and the spatial range query, join query, and nearest neighbor query algorithms are not scalable without using MapReduce-like frameworks. Parallel SQP algorithms (PSQPAs) are trapped in skew-processing, a known issue in geoscience. To satisfy time-constrained GBDA, we propose an elastic SQP approach in this paper. First, Spark is used to implement PSQPAs. Second, Kubernetes-managed CoreOS clusters provide self-healing Docker containers for running Spark clusters in the cloud. Spark-based PSQPAs are submitted to Docker containers, where Spark master instances reside. Finally, the horizontal pod auto-scaler (HPA) scales Docker containers out and in to provide on-demand computing resources. Combined with an auto-scaling group of virtual instances, the HPA helps to find each of the five nearest neighbors for 46,139,532 query objects from 834,158 spatial data objects in less than 300 s. The experiments conducted on an OpenStack cloud demonstrate that auto-scaling containers can satisfy time-constraint GBDA in clouds.
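
    Of the query types named, the range query is the simplest to sketch in parallel; a minimal Python sketch with multiprocessing standing in for Spark and synthetic points (all names and numbers are illustrative):

        from multiprocessing import Pool
        import random

        def in_window(args):
            """Filter one partition of points against a rectangular query window."""
            points, (xmin, ymin, xmax, ymax) = args
            return [p for p in points
                    if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax]

        if __name__ == "__main__":
            pts = [(random.random(), random.random()) for _ in range(1_000_000)]
            window = (0.25, 0.25, 0.30, 0.30)
            parts = [pts[i::8] for i in range(8)]  # 8 data partitions
            with Pool(8) as pool:
                partial = pool.map(in_window, [(part, window) for part in parts])
            print(sum(len(hits) for hits in partial))  # points inside the window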

  20. The Effect of Training in Minimalist Running Shoes on Running Economy.

    Science.gov (United States)

    Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain

    2015-09-01

    The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3-minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. Key points: Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear

  1. ALICE HLT Cluster operation during ALICE Run 2

    Science.gov (United States)

    Lehrbach, J.; Krzewicki, M.; Rohr, D.; Engel, H.; Gomez Ramirez, A.; Lindenstruth, V.; Berzano, D.; ALICE Collaboration

    2017-10-01

    ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs the events and compresses the data in real-time. The data compression by the HLT is a vital part of data taking, especially during the heavy-ion runs, in order to be able to store the data, which implies that reliability of the whole cluster is an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster we have automated the operation as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important parameters like temperatures, network traffic, CPU load etc. of the nodes are monitored with Zabbix. During periods without beam the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs in order to maximize the usage of our cluster. To prevent interference with normal HLT operations we separate the virtual machines running the Grid jobs from the normal HLT operation via virtual networks (VLANs). In this paper we give an overview of the ALICE HLT operation in 2016.

  2. The NLstart2run study: Training-related factors associated with running-related injuries in novice runners.

    Science.gov (United States)

    Kluitenberg, Bas; van der Worp, Henk; Huisstede, Bionka M A; Hartgens, Fred; Diercks, Ron; Verhagen, Evert; van Middelkoop, Marienke

    2016-08-01

    The incidence of running-related injuries is high. Some risk factors for injury were identified in novice runners; however, not much is known about the effect of training factors on injury risk. Therefore, the purpose of this study was to examine the associations between training factors and running-related injuries in novice runners, taking the time-varying nature of these training-related factors into account. Prospective cohort study. 1696 participants completed weekly diaries on running exposure and injuries during a 6-week running program for novice runners. Total running volume (min), frequency and mean intensity (Rate of Perceived Exertion) were calculated for the seven days prior to each training session. The association of these time-varying variables with injury was determined in an extended Cox regression analysis. The results of the multivariable analysis showed that running with a higher intensity in the previous week was associated with a higher injury risk. Running frequency was not significantly associated with injury; however, a trend towards running three times per week being more hazardous than two times could be observed. Finally, lower running volume was associated with a higher risk of sustaining an injury. These results suggest that running more than 60 min at a lower intensity is least injurious. This finding is contrary to our expectations and is presumably the result of other factors. Therefore, the findings should not be used plainly as a guideline for novices. More research is needed to establish the person-specific training patterns that are associated with injury.

  3. Comparing internal and external run-time coupling of CFD and building energy simulation software

    NARCIS (Netherlands)

    Djunaedy, E.; Hensen, J.L.M.; Loomans, M.G.L.C.

    2004-01-01

    This paper describes a comparison between internal and external run-time coupling of CFD and building energy simulation software. Internal coupling can be seen as the "traditional" way of developing software, i.e. the capabilities of existing software are expanded by merging codes. With external

  4. High Resolution Nature Runs and the Big Data Challenge

    Science.gov (United States)

    Webster, W. Phillip; Duffy, Daniel Q.

    2015-01-01

    NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The nature runs use GEOS-5 as an Atmospheric General Circulation Model (AGCM), while the reanalysis uses GEOS-5 in data assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM runs and one a downscaled reanalysis using the full DAS. The nature runs will be completed at two surface grid resolutions, 7 and 3 kilometers, and 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data, while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-II, Modern-Era Retrospective analysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business-process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high-speed InfiniBand network, high-performance file systems and object storage, and virtual system environments specific to data-intensive science applications. These technologies are providing a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility

  5. Spanish ATLAS Tier-2 facing up to Run-2 period of LHC

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Salt, José; Villaplana Perez, Miguel; Sanchez Martinez, Victoria; Sánchez, Javier

    2015-01-01

    The goal of this work is to describe the way of addressing the main challenges of Run-2 by the Spanish ATLAS Tier-2. The considerable increase of energy and luminosity for the upcoming Run-2 w.r.t. Run-1 has led to a revision of the ATLAS computing model as well as some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarities that it is a distributed Tier-2 composed of three sites and its members are involved on ATLAS computing tasks with a hub of research, innovation and education.

  6. Running Speed Can Be Predicted from Foot Contact Time during Outdoor over Ground Running

    NARCIS (Netherlands)

    de Ruiter, C.J.; van Oeveren, B.; Francke, A.; Zijlstra, P.; van Dieen, J.H.

    2016-01-01

    The number of validation studies of commercially available foot pods that provide estimates of running speed is limited and these studies have been conducted under laboratory conditions. Moreover, internal data handling and algorithms used to derive speed from these pods are proprietary and thereby

  7. Results of the deepest all-sky survey for continuous gravitational waves on LIGO S6 data running on the Einstein@Home volunteer distributed computing project

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acemese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwa, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Arker, Bd.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Be, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitoss, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Boutfanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, O.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, C.; Cahillane, C.; Bustillo, J. Calderon; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, Laura; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Costa, C. F. Da Silva; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.A.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M. Di; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Dreyer, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Egizenstein, H. -B.; Ehrens, P.; Eichholel, J.; Eikenberry, S. 
S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, O.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Far, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.M.; Fournier, J. -D.; Frasca, J. -D; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritsche, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garuti, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gi, K.; Glaetke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Granta, A.; Gras, S.; Cray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, S.; Hennig, J.; Henry, J.A.; Heptonsta, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howel, E. J.; Hu, Y. M.; Huang, O.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Isogai, T.; Lyer, B. R.; Fzumi, K.; Jaccimin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jones, R.; Jonker, R. J. G.; Ju, L.; Wads, k; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kefelian, F.; Keh, M. S.; Keite, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, Namjun; Kim, W.; Kimbre, S. J.; King, E. J.; King, P. J.; Kisse, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringe, V.; Krishnan, B.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Liick, H.; Lundgren, A. P.; Lynch, R.; Ivia, Y.; Machenschalk, B.; Maclnnis, M.; Macleod, D. M.; Magafia-Sandoval, F.; Zertuche, L. Magafia; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Manse, G. 
L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matiehard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mende, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Miche, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecehia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Gutierrez-Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Hang, S.; Ohme, F.; Oliver, M.; Oppermann, P.; Ram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, . J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powel, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. G.; Puncken, .; Punturo, M.; Purrer, PuppoM.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rowan, RosiliskaS.; Ruggi, RiidigerP.; Ryan, K.; Sachdev, Perminder S; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Saulson, P. R.; Sauter, E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabe, R.; Schofield, R. M. S.; Schonbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. 
S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Sielleez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, António Dias da; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazus, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sunil, Suns; Sutton, P. J.; Swinkels, B. L.; Szczepariczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tomasi, Z.; Torres, C. V.; Tome, C.; Tot, D.; Travasso, F.; Traylor, G.; Trifire, D.; Tringali, M. C.; Trozz, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Valente, G.; Valdes, G.; van Bake, N.; Van Beuzekom, Martin; Van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; Van Heilningen, J. V.; Van Vegge, A. A.; Vardaro, M.; Vass, S.; Vaslith, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Vvang, G.; Wang, O.; Wang, X.; Wiang, Y.; Ward, R. L.; Wiarner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weliels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; WilIke, B.; Wimmer, M. H.; Whinkler, W.; Wipf, C. C.; De Witte, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J.L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S.J.; Zhu, X.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-01-01

    We report results of a deep all-sky search for periodic gravitational waves from isolated neutron stars in data from the S6 LIGO science run. The search was possible thanks to the computing power provided by the volunteers of the Einstein@Home distributed computing project. We find no significant

  8. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet, and to use them for running large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily allow their visitors to volunteer their computer resources to help run advanced hydrological models and simulations. Because the system is web based, volunteers can start sharing their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system manages the data connections and the task queue for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
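
    The abstract does not include code; as a sketch of the queue-management pattern it describes (a relational store handing small work units to volunteer browsers), something like the following server-side fragment could sit behind the framework's HTTP endpoints. All table and function names here are hypothetical.

```python
import sqlite3, json, time

# Minimal sketch of a relational task queue for volunteer nodes (illustrative).
db = sqlite3.connect("volunteer_queue.db")
db.execute("""CREATE TABLE IF NOT EXISTS tasks (
    id INTEGER PRIMARY KEY,
    params TEXT NOT NULL,          -- JSON blob: sub-basin cells, time window
    state TEXT DEFAULT 'pending',  -- pending -> claimed -> done
    claimed_at REAL)""")

def enqueue(cells, t0, t1):
    """Add one small spatial/temporal chunk of the simulation to the queue."""
    db.execute("INSERT INTO tasks (params) VALUES (?)",
               (json.dumps({"cells": cells, "t0": t0, "t1": t1}),))
    db.commit()

def claim_task():
    """Hand one pending task to a volunteer browser (called per HTTP poll)."""
    row = db.execute("SELECT id, params FROM tasks "
                     "WHERE state = 'pending' LIMIT 1").fetchone()
    if row is None:
        return None
    db.execute("UPDATE tasks SET state = 'claimed', claimed_at = ? WHERE id = ?",
               (time.time(), row[0]))
    db.commit()
    return {"task_id": row[0], "params": json.loads(row[1])}

def complete_task(task_id):
    """Mark a chunk finished once the browser posts its results back."""
    db.execute("UPDATE tasks SET state = 'done' WHERE id = ?", (task_id,))
    db.commit()
```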

  9. A user's manual for a computer program which calculates time-optimal geocentric transfers using solar or nuclear electric and high-thrust propulsion

    Science.gov (United States)

    Sackett, L. L.; Edelbaum, T. N.; Malchow, H. L.

    1974-01-01

    This manual is a guide for using a computer program which calculates time-optimal trajectories for high- and low-thrust geocentric transfers. Either SEP or NEP may be assumed, and a one- or two-impulse, fixed total delta-V, initial high-thrust phase may be included. A single impulse of specified delta-V may also be included after the low-thrust phase. The low-thrust phase utilizes equinoctial orbital elements to avoid the classical singularities and Kryloff-Boguliuboff averaging to help ensure more rapid computation. The program is written in FORTRAN 4 in double precision for use on an IBM 360 computer. The manual includes a description of the problem treated, input/output information, examples of runs, and source code listings.
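
    For context, equinoctial elements replace the classical Keplerian set $(a, e, i, \omega, \Omega, M)$ so that the equations of motion remain regular at zero eccentricity and zero inclination; a standard (direct) definition, stated here for reference rather than quoted from the manual, is

    $$a = a,\quad h = e\sin(\omega+\Omega),\quad k = e\cos(\omega+\Omega),\quad p = \tan(i/2)\sin\Omega,\quad q = \tan(i/2)\cos\Omega,\quad \lambda = M+\omega+\Omega,$$

    in which the angles that become undefined as $e \to 0$ or $i \to 0$ appear only through smooth combinations.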

  10. Noise-constrained switching times for heteroclinic computing

    Science.gov (United States)

    Neves, Fabio Schittler; Voit, Maximilian; Timme, Marc

    2017-03-01

    Heteroclinic computing offers a novel paradigm for universal computation by collective system dynamics. In such a paradigm, input signals are encoded as complex periodic orbits approaching specific sequences of saddle states. Without inputs, the relevant states together with the heteroclinic connections between them form a network of states—the heteroclinic network. Systems of pulse-coupled oscillators or spiking neurons naturally exhibit such heteroclinic networks of saddles, thereby providing a substrate for general analog computations. Several challenges need to be resolved before it becomes possible to effectively realize heteroclinic computing in hardware. The time scales on which computations are performed crucially depend on the switching times between saddles, which in turn are jointly controlled by the system's intrinsic dynamics and the level of external and measurement noise. The nonlinear dynamics of pulse-coupled systems often strongly deviate from those of time-continuously coupled (e.g., phase-coupled) systems. The factors impacting switching times in pulse-coupled systems are still not well understood. Here we systematically investigate switching times as a function of the levels of noise and intrinsic dissipation in the system. We specifically reveal how local responses to pulses coact with external noise. Our findings confirm that, like in time-continuous phase-coupled systems, piecewise-continuous pulse-coupled systems exhibit switching times that transiently increase exponentially with the number of switches up to some order of magnitude set by the noise level. Complementarily, we show that switching times may constitute a good predictor for the computation reliability, indicating how often an input signal must be reiterated. By characterizing switching times between two saddles in conjunction with the reliability of a computation, our results provide a first step beyond the coding of input signal identities toward a complementary coding for
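
    The noise dependence of switching times is easy to reproduce in a toy model (this is not the paper's pulse-coupled system): a trajectory started near a saddle escapes along the unstable direction $\dot{x} = \lambda x + \sigma\,\xi(t)$ in a time that grows roughly like $\log(1/\sigma)$ as the noise level $\sigma$ decreases.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_switching_time(sigma, lam=1.0, eps=1.0, dt=1e-3, trials=200):
    """Mean time for a noisy trajectory to leave |x| < eps near a saddle."""
    times = []
    for _ in range(trials):
        x, t = 0.0, 0.0
        while abs(x) < eps:
            # Euler-Maruyama step of dx = lam*x*dt + sigma*dW
            x += lam * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return np.mean(times)

for sigma in (1e-2, 1e-4, 1e-6):
    print(f"sigma={sigma:.0e}  mean switching time ~ {mean_switching_time(sigma):.2f}")
```

    Each hundredfold reduction in noise adds a roughly constant increment to the mean switching time, the logarithmic scaling referred to above.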

  11. Adaptive real-time methodology for optimizing energy-efficient computing

    Science.gov (United States)

    Hsu, Chung-Hsing [Los Alamos, NM; Feng, Wu-Chun [Blacksburg, VA

    2011-06-28

    Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method are proposed that adjust CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor-independent, and can be applied either to an entire system as a unit or individually to each process running on the system.
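
    A hedged sketch of the frequency-selection idea described above follows; the sensitivity model, available frequencies, and slowdown budget are illustrative assumptions, not the patented algorithm itself. The key run-time measurement is how CPU-bound the recent workload was, since only that fraction of the interval stretches when the clock slows.

```python
FREQS_GHZ = [0.8, 1.2, 1.6, 2.0, 2.4]   # available P-states (assumed)

def next_frequency(work_time, total_time, f_cur, slowdown_budget=0.05):
    """Pick the lowest frequency keeping estimated slowdown within budget.

    work_time  -- CPU-bound time observed in the last interval (seconds)
    total_time -- length of the interval (seconds)
    Only the CPU-bound fraction beta scales with 1/f, so the estimated
    relative slowdown at frequency f is beta*(f_cur/f) + (1 - beta) - 1.
    """
    beta = work_time / total_time        # performance sensitivity to frequency
    for f in sorted(FREQS_GHZ):
        slowdown = beta * (f_cur / f) + (1.0 - beta) - 1.0
        if slowdown <= slowdown_budget:
            return f
    return max(FREQS_GHZ)

# Example: a mostly memory-bound interval at 2.4 GHz scales down to 2.0 GHz.
print(next_frequency(work_time=0.002, total_time=0.010, f_cur=2.4))
```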

  12. Real time data analysis with the ATLAS Trigger at the LHC in Run-2

    CERN Document Server

    Beauchemin, Pierre-Hugues; The ATLAS collaboration

    2018-01-01

    The trigger selection capabilities of the ATLAS detector have been significantly enhanced for LHC Run-2 in order to cope with the higher event rates and with the large number of simultaneous interactions (pile-up) per proton-proton bunch crossing. A new hardware system, designed to analyse event topologies in real time at Level-1, came to full use in 2017. A hardware-based track reconstruction system, expected to be used in real time in 2018, is designed to provide track information to the high-level software trigger at its full input rate. The high-level trigger selections rely largely on offline-like reconstruction techniques and, in some cases, multivariate analysis methods. Despite the sudden change in LHC operations during the second half of 2017, which caused an increase in pile-up and therefore also in the CPU usage of the trigger algorithms, the set of triggers (the so-called trigger menu) running online has undergone only minor modifications thanks to the robustness and redundancy of the trigger system, a...

  13. The LHC Tier1 at PIC: Experience from first LHC run

    International Nuclear Information System (INIS)

    Flix, J.; Perez-Calero Yzquierdo, A.; Accion, E.; Acin, V.; Acosta, C.; Bernabeu, G.; Bria, A.; Casals, J.; Caubet, M.; Cruz, R.; Delfino, M.; Espinal, X.; Lanciotti, E.; Lopez, F.; Martinez, F.; Mendez, V.; Merino, G.; Pacheco, A.; Planas, E.; Porto, M. C.; Rodriguez, B.; Sedov, A.

    2013-01-01

    This paper summarizes the operational experience of the Tier1 computer center at Port d'Informacio Cientifica (PIC) supporting the commissioning and first run (Run1) of the Large Hadron Collider (LHC). The evolution of the experiment computing models resulting from the higher amounts of data expected after the restart of the LHC is also described. (authors)

  14. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN was the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker, which is presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...

  15. Some neutronics and thermal-hydraulics codes for reactor analysis using personal computers

    International Nuclear Information System (INIS)

    Woodruff, W.L.

    1990-01-01

    Some neutronics and thermal-hydraulics codes formerly available only for mainframe computers may now be run on personal computers. Brief descriptions of the codes are provided. Running times for some of the codes are compared for an assortment of personal and mainframe computers. With some limitations in detail, personal computer versions of the codes can be used to solve many problems of interest in reactor analyses at very modest costs. 11 refs., 4 tabs

  16. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work, first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3)-time algorithm for mapping m modules onto n processors is reduced to O(nm log m) time, and its space requirement is reduced from O(nm^2) to O(m). Run time is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
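
    As a concrete illustration of the kind of mapping problem involved (not the paper's own algorithm), the sketch below assigns a chain of m pipelined modules to n consecutive processors so that the most heavily loaded processor is as light as possible, using the classic binary-search-plus-greedy-probe technique.

```python
def probe(weights, n, cap):
    """Greedy check: can the chain be split into <= n consecutive blocks,
    each with total weight <= cap?"""
    blocks, load = 1, 0
    for w in weights:
        if w > cap:
            return False
        if load + w > cap:
            blocks, load = blocks + 1, 0
        load += w
    return blocks <= n

def min_bottleneck(weights, n):
    """Smallest achievable maximum per-processor load (binary search on cap)."""
    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(weights, n, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Example: 8 pipelined module workloads onto 3 processors
print(min_bottleneck([4, 2, 7, 1, 3, 6, 2, 5], 3))  # -> 13
```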

  17. The Robust Running Ape: Unraveling the Deep Underpinnings of Coordinated Human Running Proficiency

    Directory of Open Access Journals (Sweden)

    John Kiely

    2017-06-01

    Full Text Available In comparison to other mammals, humans are not especially strong, swift or supple. Nevertheless, despite these apparent physical limitations, we are among Nature's most superbly well-adapted endurance runners. Paradoxically, however, notwithstanding this evolutionarily bestowed proficiency, running-related injuries, and overuse syndromes in particular, are widely pervasive. The term 'coordination' is similarly ubiquitous within contemporary coaching, conditioning, and rehabilitation cultures. Various theoretical models of coordination exist within the academic literature. However, the specific neural and biological underpinnings of 'running coordination,' and the nature of their integration, remain poorly elaborated. Conventionally, running is considered a mundane, readily mastered coordination skill. This illusion of coordinative simplicity, however, is founded upon a platform of immense neural and biological complexities. This extensive complexity presents extreme organizational difficulties yet, simultaneously, provides a multiplicity of viable pathways through which the computational and mechanical burden of running can be proficiently dispersed amongst expanded networks of conditioned neural and peripheral tissue collaborators. Learning to adequately harness this available complexity, however, is a painstakingly slowly emerging, practice-driven process, greatly facilitated by innate evolutionary organizing principles serving to constrain otherwise overwhelming complexity to manageable proportions. As we accumulate running experience, persistent plastic remodeling customizes networked neural connectivity and biological tissue properties to best fit our unique neural and architectural idiosyncrasies, and personal histories: thus neural and peripheral tissue plasticity embeds coordination habits. When, however, coordinative processes are compromised—under the integrated influence of fatigue and/or accumulative cycles of injury, overuse

  18. Computational steering of GEM based detector simulations

    Science.gov (United States)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. Such long-running simulations usually run on high-performance computers in batch mode. If the results reveal unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increased turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.
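
    VisIt attaches to running codes through its in-situ coupling library; the generic sketch below shows only the steering pattern itself (poll for new parameters between time steps, periodically publish live state), with file-based exchange standing in for the actual VisIt connection. All names and the toy update rule are illustrative.

```python
import json, os
import numpy as np

PARAMS_FILE, STATE_FILE = "steer_params.json", "live_state.npy"
params = {"voltage": 400.0, "gain": 1.0e4}   # initial settings (illustrative)
field = np.zeros(1000)
rng = np.random.default_rng(42)

for step in range(100_000):
    # 1) Between steps, poll for steering input left by the visualization side.
    if os.path.exists(PARAMS_FILE):
        with open(PARAMS_FILE) as fh:
            params.update(json.load(fh))
        os.remove(PARAMS_FILE)

    # 2) Advance the (toy) simulation one step with the current parameters.
    field += 1e-6 * params["voltage"] * rng.standard_normal(field.size)

    # 3) Periodically publish state so the run can be inspected live instead
    #    of waiting for the batch job to finish.
    if step % 1000 == 0:
        np.save(STATE_FILE, field)
```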

  19. A sub-cubic time algorithm for computing the quartet distance between two general trees

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas

    2011-01-01

    Background: When inferring phylogenetic trees, different algorithms may give different trees. To study such effects a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results: We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions: We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close...
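
    For reference, the quartet distance can be computed naively in O(n^4) time by enumerating all four-leaf subsets and comparing the induced topologies, which the four-point condition reads off from path lengths; this brute-force baseline (a sketch using networkx, not the authors' implementation) is what the paper's sub-cubic algorithm improves on.

```python
from itertools import combinations
import networkx as nx

def quartet_topology(dist, a, b, c, d):
    """Induced topology of leaves {a,b,c,d}: the pairing with the smallest
    sum of path lengths (four-point condition); None for a star quartet."""
    pairings = [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]
    sums = [dist[p][q] + dist[r][s] for (p, q), (r, s) in pairings]
    lo = min(sums)
    return pairings[sums.index(lo)] if sums.count(lo) == 1 else None

def quartet_distance(t1, t2, leaves):
    """Brute-force O(n^4) quartet distance between two trees."""
    d1 = dict(nx.all_pairs_shortest_path_length(t1))
    d2 = dict(nx.all_pairs_shortest_path_length(t2))
    return sum(quartet_topology(d1, *q) != quartet_topology(d2, *q)
               for q in combinations(leaves, 4))

# ab|cd in t1 versus ac|bd in t2: the single quartet differs.
t1 = nx.Graph([("a", "u"), ("b", "u"), ("u", "v"), ("c", "v"), ("d", "v")])
t2 = nx.Graph([("a", "u"), ("c", "u"), ("u", "v"), ("b", "v"), ("d", "v")])
print(quartet_distance(t1, t2, "abcd"))  # -> 1
```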

  20. Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster

    International Nuclear Information System (INIS)

    Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg

    2010-01-01

    The trivially-parallel nature of Monte Carlo (MC) simulations makes them ideally suited for running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority and careful optimization of the parameters used to control job allocation to host machines. To maximize use of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)
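
    Scripted submission is the heart of such a suite; a minimal sketch of generating and submitting a batch of MC jobs to Condor might look like the following (the file layout, executable name, and chosen ClassAd values are illustrative, not the authors' actual tools).

```python
import subprocess
from pathlib import Path

N_JOBS = 500                       # one input deck per job (illustrative)

# HTCondor submit description: $(Process) expands to 0..N_JOBS-1, so each
# job picks up its own input deck and writes its own output files.
submit = """\
universe     = vanilla
executable   = run_mc.sh
arguments    = input_$(Process).inp
output       = logs/job_$(Process).out
error        = logs/job_$(Process).err
log          = logs/batch.log
priority     = 10
requirements = (OpSys == "LINUX")
queue {n}
""".format(n=N_JOBS)

Path("logs").mkdir(exist_ok=True)
Path("mc_batch.sub").write_text(submit)
subprocess.run(["condor_submit", "mc_batch.sub"], check=True)
```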

  1. A cascadic monotonic time-discretized algorithm for finite-level quantum control computation

    Science.gov (United States)

    Ditz, P.; Borzì, A.

    2008-03-01

    A computer package (CNMS) is presented aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes. Quantum optimal control problems arise in particular in quantum optics, where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in its most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals. Program summary: Program title: CNMS; Catalogue identifier: ADEB_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 770; No. of bytes in distributed program, including test data, etc.: 7098; Distribution format: tar.gz; Programming language: MATLAB 6; Computer: AMD Athlon 64 × 2 Dual, 2.21 GHz, 1.5 GB RAM; Operating system: Microsoft Windows XP; Word size: 32; Classification: 4.9; Nature of problem: Quantum control; Solution method: Iterative; Running time: 60-600 sec
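
    A typical cost functional minimized by such monotonic schemes (standard in the quantum optimal control literature rather than quoted from the CNMS documentation) is

    $$J(\psi,\varepsilon) = \tfrac{1}{2}\left\|\psi(T)-\psi_d\right\|^2 + \frac{\gamma}{2}\int_0^T \varepsilon(t)^2\,\mathrm{d}t, \qquad \text{subject to } i\,\partial_t\psi(t) = \big(H_0-\mu\,\varepsilon(t)\big)\psi(t),$$

    where $\psi_d$ is the target state and the penalty weight $\gamma$ encodes the limited laser resources; monotonic schemes update $\varepsilon$ so that $J$ decreases at every iteration.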

  2. Effects of computing time delay on real-time control systems

    Science.gov (United States)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
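
    The delay problem is easy to reproduce in simulation: if the control computed from the state sampled at step k is only applied at step k+d, stability margins shrink. Below is a minimal sketch (a double integrator under proportional-derivative control with an artificial d-step computing delay; the gains and plant are illustrative, not the paper's robot-tracking example).

```python
def simulate(delay_steps, T=2000, dt=0.001, kp=400.0, kd=40.0):
    """Double integrator x'' = u tracking x_ref = 1, with the control
    computed at step k only applied at step k + delay_steps."""
    x, v = 0.0, 0.0
    pending = [0.0] * delay_steps        # controls computed but not yet applied
    for _ in range(T):
        pending.append(kp * (1.0 - x) - kd * v)  # control from current state
        u_now = pending.pop(0)                   # control that is ready now
        v += u_now * dt
        x += v * dt
    return x

for d in (0, 20, 60):
    print(f"delay = {d} steps -> final position {simulate(d):+.3f}")
```

    With these gains the delay margin is a few tens of milliseconds, so the 20-step (20 ms) case still settles near the reference while the 60-step case diverges, illustrating how computing time delay alone can destabilize an otherwise well-tuned loop.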

  3. Non-perturbative running of quark masses in three-flavour QCD

    CERN Document Server

    Campos, Isabel; Pena, Carlos; Preti, David; Ramos, Alberto; Vladikas, Anastassios

    2016-01-01

    We present our preliminary results for the computation of the non-perturbative running of renormalized quark masses in $N_f = 3$ QCD, between the electroweak and hadronic scales, using standard finite-size scaling techniques. The computation is carried out to very high precision, using massless $\mathcal{O}(a)$-improved Wilson quarks. Following the strategy adopted by the ALPHA Collaboration for the running coupling, different schemes are used above and below a scale $\mu_0 \sim m_b$, which differ by using either the Schrödinger Functional or Gradient Flow renormalized coupling. We discuss our results for the running in both regions, and the procedure to match the two schemes.
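
    The quantity being traced is the scale dependence of the renormalized mass, governed in standard notation (not specific to this proceedings) by

    $$\mu\,\frac{\mathrm{d}\,\overline{m}(\mu)}{\mathrm{d}\mu} = \tau\big(\bar g(\mu)\big)\,\overline{m}(\mu),$$

    with $\tau$ the mass anomalous dimension; the finite-size scaling technique measures the corresponding step-scaling function, i.e. the ratio of renormalization constants as the box size doubles from $L$ to $2L$ at fixed renormalized coupling.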

  4. Symmetry in running.

    Science.gov (United States)

    Raibert, M H

    1986-03-14

    Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.
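
    The even/odd symmetry functions referred to can be sketched as follows (taking $t = 0$ at mid-stance and measuring body position relative to the foot; this is a summary of the standard formulation, and sign conventions vary):

    $$x(-t) = -x(t),\qquad z(-t) = z(t),\qquad \theta(-t) = -\theta(t),$$

    so forward position and body pitch are odd functions of time while height is even; a gait obeying these symmetries repeats itself stride after stride with no net acceleration, which is what lets simple controllers maintain fixed forward speed and upright posture.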

  5. Effect of Minimalist Footwear on Running Efficiency

    Science.gov (United States)

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  6. Run-Time HW/SW Scheduling of Data Flow Applications on Reconfigurable Architectures

    Directory of Open Access Journals (Sweden)

    Ghaffari Fakhreddine

    2009-01-01

    Full Text Available This paper presents an efficient dynamic, run-time Hardware/Software scheduling approach. The scheduling heuristic maps the different tasks of a highly dynamic application online, in such a way that the total execution time is minimized. We consider soft real-time, data-flow-graph oriented applications for which the execution time is a function of the nature of the input data. The target architecture is composed of two processors connected to a dynamically reconfigurable hardware accelerator. Our approach takes advantage of the reconfigurability of the considered architecture to adapt the processing to the system dynamics. We compare our heuristic with another similar approach. We present the results of our scheduling method on several image processing applications. Our experiments include simulation and synthesis results on a Virtex-V-based platform. These results show better performance than existing methods.
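
    The core of such an online heuristic is a per-task placement decision made against current resource availability. A simplified sketch is shown below: each arriving task goes wherever it finishes earliest, either a software processor or the reconfigurable region, with a penalty when the accelerator must be reconfigured. All costs are illustrative placeholders, and this greedy rule stands in for, rather than reproduces, the paper's heuristic.

```python
import heapq

def schedule(tasks, n_cpus=2, reconf_overhead=0.5):
    """Greedy earliest-finish-time placement of each arriving task on either
    a software processor or the reconfigurable accelerator."""
    cpu_free = [0.0] * n_cpus            # heap of processor ready times
    heapq.heapify(cpu_free)
    hw_free, hw_loaded = 0.0, None       # accelerator ready time, loaded config
    placements = []
    for name, arrival, t_sw, t_hw in tasks:   # (task, arrival, SW cost, HW cost)
        sw_finish = max(arrival, cpu_free[0]) + t_sw
        penalty = 0.0 if hw_loaded == name else reconf_overhead
        hw_finish = max(arrival, hw_free) + t_hw + penalty
        if hw_finish <= sw_finish:            # run on reconfigurable hardware
            hw_free, hw_loaded = hw_finish, name
            placements.append((name, "HW", hw_finish))
        else:                                 # run in software
            heapq.heapreplace(cpu_free, sw_finish)
            placements.append((name, "SW", sw_finish))
    return placements

print(schedule([("filter", 0.0, 3.0, 1.0),
                ("edge",   0.0, 2.0, 1.0),
                ("filter", 2.0, 3.0, 1.0)]))
# -> [('filter', 'HW', 1.5), ('edge', 'SW', 2.0), ('filter', 'HW', 3.0)]
```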

  7. Radionuclide inventories for short run-time space nuclear reactor systems

    International Nuclear Information System (INIS)

    Coats, R.L.

    1993-01-01

    Space Nuclear Reactor Systems, especially those used for propulsion, often have expected operation run times much shorter than those for land-based nuclear power plants. This produces substantially different radionuclide inventories to be considered in the safety analyses of space nuclear systems. This presentation describes an analysis utilizing ORIGEN2 and DKPOWER to provide comparisons among representative land-based and space systems. These comparisons enable early, conceptual considerations of safety issues and features in the preliminary design phases of operational systems, test facilities, and operations by identifying differences between the requirements for space systems and the established practice for land-based power systems. Early indications are that separation distance is much more effective as a safety measure for space nuclear systems than for power reactors because greater decay of the radionuclide activity occurs during the time to transport the inventory a given distance. In addition, the inventories of long-lived actinides are very low for space reactor systems

  8. Safety, Liveness and Run-time Refinement for Modular Process-Aware Information Systems with Dynamic Sub Processes

    DEFF Research Database (Denmark)

    Debois, Søren; Hildebrandt, Thomas; Slaats, Tijs

    2015-01-01

    We study modularity, run-time adaptation and refinement under safety and liveness constraints in event-based process models with dynamic sub-process instantiation. The study is part of a larger programme to provide semantically well-founded technologies for modelling, implementation and verification of flexible, run-time adaptable process-aware information systems, moved into practice via the Dynamic Condition Response (DCR) Graphs notation co-developed with our industrial partner. Our key contributions are: (1) a formal theory of dynamic sub-process instantiation for declarative, event-based processes under safety and liveness constraints, given as the DCR* process language, equipped with a compositional operational semantics and conservatively extending the DCR Graphs notation; (2) an expressiveness analysis revealing that the DCR* process language is Turing-complete, while the fragment cor...

  9. Modeling subsurface reactive flows using leadership-class computing

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Richard Tran [Computational Earth Sciences Group, Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6015 (United States); Hammond, Glenn E [Hydrology Group, Environmental Technology Division, Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Lichtner, Peter C [Hydrology, Geochemistry, and Geology Group, Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Sripathi, Vamsi [Department of Computer Science, North Carolina State University, Raleigh, NC 27695-8206 (United States); Mahinthakumar, G [Department of Civil, Construction, and Environmental Engineering, North Carolina State University, Raleigh, NC 27695-7908 (United States); Smith, Barry F, E-mail: rmills@ornl.gov, E-mail: glenn.hammond@pnl.gov, E-mail: lichtner@lanl.gov, E-mail: vamsi_s@ncsu.edu, E-mail: gmkumar@ncsu.edu, E-mail: bsmith@mcs.anl.gov [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439-4844 (United States)

    2009-07-01

    We describe our experiences running PFLOTRAN, a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media, on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  10. Modeling subsurface reactive flows using leadership-class computing

    International Nuclear Information System (INIS)

    Mills, Richard Tran; Hammond, Glenn E; Lichtner, Peter C; Sripathi, Vamsi; Mahinthakumar, G; Smith, Barry F

    2009-01-01

    We describe our experiences running PFLOTRAN, a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media, on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.
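
    Fully implicit time-stepping of the kind PFLOTRAN delegates to PETSc solves a nonlinear system at every step. The toy sketch below does backward Euler with a Newton iteration for a 1-D diffusion-reaction problem in plain NumPy; PFLOTRAN itself performs the analogous solves through PETSc's nonlinear solvers on distributed meshes, so this is only the mathematical pattern, not its code.

```python
import numpy as np

# Backward Euler for u_t = D u_xx - k u^2 on [0,1], zero-Dirichlet BCs.
# Each step solves F(u_new) = 0 by Newton's method.
n, D, k, dt = 50, 1.0, 10.0, 1e-3
dx = 1.0 / (n + 1)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2

def residual(u_new, u_old):
    return u_new - u_old - dt * (D * L @ u_new - k * u_new**2)

def jacobian(u_new):
    return np.eye(n) - dt * (D * L - 2.0 * k * np.diag(u_new))

u = np.sin(np.pi * np.linspace(dx, 1.0 - dx, n))   # initial condition
for step in range(100):
    u_old, u_new = u, u.copy()
    for _ in range(10):                            # Newton iteration
        F = residual(u_new, u_old)
        if np.linalg.norm(F) < 1e-10:
            break
        u_new -= np.linalg.solve(jacobian(u_new), F)
    u = u_new
print("max u after 0.1 time units:", u.max())
```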

  11. Joint stiffness and running economy during imposed forefoot strike before and after a long run in rearfoot strike runners.

    Science.gov (United States)

    Melcher, Daniel A; Paquette, Max R; Schilling, Brian K; Bloomer, Richard J

    2017-12-01

    Research has focused on the effects of acute strike pattern modifications on lower extremity joint stiffness and running economy (RE). The effects of strike pattern modifications on running biomechanics have mostly been studied while runners complete short running bouts. This study examined the effects of an imposed forefoot strike (FFS) on RE and on ankle and knee joint stiffness before and after a long run in habitual rearfoot strike (RFS) runners. Joint kinetics and RE were collected before and after a long run. Sagittal joint kinetics were computed from kinematic and ground reaction force data collected during over-ground running trials in 13 male runners. RE was measured during treadmill running. Knee flexion range of motion, knee extensor moment and ankle joint stiffness were lower, while plantarflexor moment and knee joint stiffness were greater, during imposed FFS compared with RFS. The long run did not influence the difference in ankle and knee joint stiffness between strike patterns. Runners were more economical during RFS than imposed FFS, and RE was not influenced by the long run. These findings suggest that using a FFS pattern towards the end of a long run may not be mechanically or metabolically beneficial for well-trained male RFS runners.
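
    Joint stiffness in such studies is conventionally computed as the ratio of the change in joint moment to the change in joint angle over the loading phase of stance (a standard definition, not a detail reported in this abstract):

    $$k_{\mathrm{joint}} = \frac{\Delta M}{\Delta \theta},$$

    with $\Delta M$ the change in sagittal-plane joint moment and $\Delta\theta$ the corresponding change in joint angle.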

  12. ORLIB: a computer code that produces one-energy group, time- and spatially-averaged neutron cross sections

    International Nuclear Information System (INIS)

    Blink, J.A.; Dye, R.E.; Kimlinger, J.R.

    1981-12-01

    Calculation of neutron activation of proposed fusion reactors requires a library of neutron-activation cross sections. One such library is ACTL, which is being updated and expanded by Howerton. If the energy-dependent neutron flux is also known as a function of location and time, the buildup and decay of activation products can be calculated. In practice, hand calculation is impractical without energy-averaged cross sections because of the large number of energy groups. A widely used activation computer code, ORIGEN2, also requires energy-averaged cross sections. Accordingly, we wrote the ORLIB code to collapse the ACTL library, using the flux as a weighting function. The ORLIB code runs on the LLNL Cray computer network. We have also modified ORIGEN2 to accept the expanded activation libraries produced by ORLIB
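
    Collapsing a multigroup library to one energy group with the flux as weighting function amounts to the standard spectrum-averaged cross section

    $$\bar{\sigma} = \frac{\int \sigma(E)\,\phi(E)\,\mathrm{d}E}{\int \phi(E)\,\mathrm{d}E} \approx \frac{\sum_g \sigma_g\,\phi_g}{\sum_g \phi_g},$$

    where the sum runs over the energy groups of the library and $\phi_g$ is the group flux at the location and time of interest.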

  13. Running the running

    OpenAIRE

    Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_\mathrm{s} / \mathrm{d}\log k$ and the running of the running $\beta_\mathrm{s} = \mathrm{d}\alpha_\mathrm{s} / \mathrm{d}\log k$ of the spectral index $n_\mathrm{s}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s}=0.011\pm0.010$ and $\beta_\mathrm{s}=0.027\...
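
    In this parametrization the primordial scalar power spectrum is expanded around a pivot scale $k_*$ in the standard convention

    $$\ln \mathcal{P}(k) = \ln A_\mathrm{s} + (n_\mathrm{s}-1)\ln\frac{k}{k_*} + \frac{\alpha_\mathrm{s}}{2}\ln^2\frac{k}{k_*} + \frac{\beta_\mathrm{s}}{6}\ln^3\frac{k}{k_*},$$

    so $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ are the second and third logarithmic derivatives of $\ln \mathcal{P}(k)$.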

  14. Match running performance and fitness in youth soccer.

    Science.gov (United States)

    Buchheit, M; Mendez-Villanueva, A; Simpson, B M; Bourdon, P C

    2010-11-01

    The activity profiles of highly trained young soccer players were examined in relation to age, playing position and physical capacity. Time-motion analyses (global positioning system) were performed on 77 players (U13-U18; fullbacks [FB], centre-backs [CB], midfielders [MD], wide midfielders [W], second strikers [2ndS] and strikers [S]) during 42 international club games. Total distance covered (TD) and very high-intensity activities (VHIA; >16.1 km·h(-1)) were computed during 186 entire player-matches. Physical capacity was assessed via field test measures (e.g., peak running speed during an incremental field test, VVam-eval). Match running performance showed an increasing trend with age and was position-dependent. Relationships between match running performance and physical capacities were also position-dependent, with poor or non-significant correlations within FB, CB, MD and W (e.g., VHIA vs. VVam-eval: R=0.06 in FB) but large associations within the 2ndS and S positions (e.g., VHIA vs. VVam-eval: R=0.70 in 2ndS). In highly trained young soccer players, the importance of fitness level as a determinant of match running performance should be regarded as a function of playing position.

  15. Accuracy analysis of the State-of-Charge and remaining run-time determination for lithium-ion batteries

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Op het Veld, J.H.G.; Regtien, Paulus P.L.

    2008-01-01

    This paper describes the various error sources in a real-time State-of-Charge (SoC) evaluation system and their effects on the overall accuracy in the calculation of the remaining run-time of a battery-operated system. The SoC algorithm for Li-ion batteries studied in this paper combines direct

  16. Accuracy analysis of the state-of-charge and remaining run-time determination for lithium-ion batteries

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Op het Veld, J.H.G.; Regtien, P.P.L.

    2009-01-01

    This paper describes the various error sources in a real-time State-of-Charge (SoC) evaluation system and their effects on the overall accuracy in the calculation of the remaining run-time of a battery-operated system. The SoC algorithm for Li-ion batteries studied in this paper combines direct

  17. REINFORCEMENT OF DRINKING BY RUNNING: EFFECT OF FIXED RATIO AND REINFORCEMENT TIME.

    Science.gov (United States)

    PREMACK, D; SCHAEFFER, R W; HUNDT, A

    1964-01-01

    Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, and a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT.

  18. Real time data analysis with the ATLAS trigger at the LHC in Run-2

    CERN Document Server

    Beauchemin, Pierre-Hugues; The ATLAS collaboration

    2018-01-01

    The trigger selection capabilities of the ATLAS detector have been significantly enhanced for LHC Run-2 in order to cope with the higher event rates and with the large number of simultaneous interactions (pile-up) per proton-proton bunch crossing. A new hardware system, designed to analyse event topologies in real time at Level-1, came to full use in 2017. A hardware-based track reconstruction system, expected to be used in real time in 2018, is designed to provide track information to the high-level software trigger at its full input rate. The high-level trigger selections rely largely on offline-like reconstruction techniques and, in some cases, multivariate analysis methods. Despite the sudden change in LHC operations during the second half of 2017, which caused an increase in pile-up and therefore also in the CPU usage of the trigger algorithms, the set of triggers (the so-called trigger menu) running online has undergone only minor modifications thanks to the robustness and redundancy of the trigger system, ...

  19. CrocoBLAST: Running BLAST efficiently in the age of next-generation sequencing.

    Science.gov (United States)

    Tristão Ramos, Ravi José; de Azevedo Martins, Allan Cézar; da Silva Delgado, Gabrielle; Ionescu, Crina-Maria; Ürményi, Turán Peter; Silva, Rosane; Koca, Jaroslav

    2017-11-15

    CrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; and queueing, pausing, and resuming BLAST+ calculations without information loss. CrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine. jkoca@ceitec.cz. Supplementary data are available at Bioinformatics online.

  20. Novel real-time alignment and calibration of the LHCb detector in Run II

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Z., E-mail: zhirui.xu@epfl.ch; Tobin, M.

    2016-07-11

    An automatic real-time alignment and calibration strategy of the LHCb detector was developed for Run II. Thanks to the online calibration, tighter event selection criteria can be used in the trigger. Furthermore, the online calibration facilitates the use of hadronic particle identification using the Ring Imaging Cherenkov (RICH) detectors at the trigger level. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  1. Novel real-time alignment and calibration of the LHCb detector in Run II

    CERN Document Server

    AUTHOR|(CDS)2086132; Tobin, Mark

    2016-01-01

    An automatic real-time alignment and calibration strategy of the LHCb detector was developed for Run II. Thanks to the online calibration, tighter event selection criteria can be used in the trigger. Furthermore, the online calibration facilitates the use of hadronic particle identification using the Ring Imaging Cherenkov (RICH) detectors at the trigger level. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  2. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  3. Study and Implementation of a Real-Time Operating System on an ARM-Based Single Board Computer

    Directory of Open Access Journals (Sweden)

    Wiedjaja A

    2014-06-01

    Full Text Available An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and automotive braking systems (auto braking systems), which need a high level of reliability, require an operating system that operates in real time. This study assesses the implementation of Linux-based operating systems on an ARM-based Single Board Computer (SBC), namely the PandaBoard ES with a dual-core ARM Cortex-A9 (TI OMAP 4460). The research was conducted by implementing the general-purpose OS Ubuntu 12.04 (OMAP4-armhf) and the RTOS Linux 3.4.0-rt17+ on the PandaBoard ES, and then comparing the latency of each OS under no-load and full-load conditions. The results show that the maximum latency of the RTOS under full load is 45 µs, much smaller than the maximum latency of the GPOS under full load, 17,712 µs. The lower latency demonstrates that the RTOS is much better than the GPOS at running processes within a bounded period of time.

  4. A rapid estimation of near field tsunami run-up

    Science.gov (United States)

    Riquelme, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime

    2015-01-01

    Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify the knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: the 1992 Mw 7.7 Nicaragua earthquake, the 2001 Mw 8.4 Perú earthquake, the 2003 Mw 8.3 Hokkaido earthquake, the 2007 Mw 8.1 Perú earthquake, the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake and the recent 2014 Mw 8.2 Iquique earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Thus, such calculations will provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.

  5. Mapping real-life applications on run-time reconfigurable NoC-based MPSoC on FPGA.

    NARCIS (Netherlands)

    Singh, A.K.; Kumar, A.; Srikanthan, Th.; Ha, Y.

    2010-01-01

    Multiprocessor systems-on-chip (MPSoC) are required to fulfill the performance demand of modern real-life embedded applications. These MPSoCs are employing Network-on-Chip (NoC) for reasons of efficiency and scalability. Additionally, these systems need to support run-time reconfiguration of their

  6. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
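
    As a taste of what NTP's wire format looks like, the sketch below sends a single SNTP (simplified NTP) request and decodes the server's transmit timestamp. This is only the bare query; none of the clock-discipline and filtering algorithms the book actually covers are shown, and the choice of server is illustrative.

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"          # public NTP pool (illustrative choice)
NTP_EPOCH_OFFSET = 2208988800        # seconds between 1900 (NTP) and 1970 (Unix)

def sntp_time(server=NTP_SERVER):
    # 48-byte request: LI=0, VN=3, Mode=3 (client) packed into the first byte.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5.0)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # Transmit Timestamp: seconds since 1900 in bytes 40-43 (fraction in 44-47).
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

print("NTP:  ", time.ctime(sntp_time()))
print("Local:", time.ctime())
```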

  7. ATLAS Data Preparation in Run 2

    CERN Document Server

    Laycock, Paul; The ATLAS collaboration

    2016-01-01

    In this presentation, the data preparation workflows for Run 2 are presented. Online data quality uses a new hybrid software release that incorporates the latest offline data quality monitoring software for the online environment. This is used to provide fast feedback in the control room during a data acquisition (DAQ) run, via a histogram-based monitoring framework as well as the online Event Display. Data are sent to several streams for offline processing at the dedicated Tier-0 computing facility, including dedicated calibration streams and an "express" physics stream containing approximately 2% of the main physics stream. This express stream is processed as data arrives, allowing a first look at the offline data quality within hours of a run end. A prompt calibration loop starts once an ATLAS DAQ run ends, nominally defining a 48 hour period in which calibrations and alignments can be derived using the dedicated calibration and express streams. The bulk processing of the main physics stream starts on expi...

  8. An investigation of the relation between the 30 meter running time and the femoral volume fraction in the thigh

    Directory of Open Access Journals (Sweden)

    MY Tasmektepligil

    2009-12-01

    Full Text Available Leg components are thought to be related to speed. Only a limited number of studies have, however, examined the interaction between speed and bone size. In this study, we examined the relationship between the time taken by football players to run thirty meters and the fraction of the entire thigh region formed by the femur. Data collected from thirty male football players of average age 17.3 years (between 16 and 19 years old) were analyzed. We first recorded the thirty-meter running times and then estimated the volume fraction of the femur relative to the entire thigh region using stereological methods on magnetic resonance images. Our data showed a strong negative relationship between the 30-meter running times and the volume fraction of bone in the thigh region. Thus, the 30-meter running time decreases as the fraction of bone in the thigh region increases. In other words, speed increases as the bone volume fraction increases. Our data indicate that selecting sportsmen whose femoral volume fractions are high will provide a significant benefit in enhancing performance in those branches of sport which require speed. Moreover, we conclude that training which can increase the bone volume fraction should be practiced when an increase in speed is desired, and that changes in the fraction of thigh region components should be monitored during such training.

  9. 5 CFR 890.101 - Definitions; time computations.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Definitions; time computations. 890.101....101 Definitions; time computations. (a) In this part, the terms annuitant, carrier, employee, employee... in section 8901 of title 5, United States Code, and supplement the following definitions: Appropriate...

  10. Multiple running speed signals in medial entorhinal cortex

    Science.gov (United States)

    Hinman, James R.; Brandon, Mark P.; Climer, Jason R.; Chapman, G. William; Hasselmo, Michael E.

    2016-01-01

    Grid cells in medial entorhinal cortex (MEC) can be modeled using oscillatory interference or attractor dynamic mechanisms that perform path integration, a computation requiring information about running direction and speed. The two classes of computational models often use either an oscillatory frequency or a firing rate that increases as a function of running speed. Yet it is currently not known whether these are two manifestations of the same speed signal or dissociable signals with potentially different anatomical substrates. We examined coding of running speed in MEC and identified these two speed signals to be independent of each other within individual neurons. The medial septum (MS) is strongly linked to locomotor behavior and removal of MS input resulted in strengthening of the firing rate speed signal, while decreasing the strength of the oscillatory speed signal. Thus two speed signals are present in MEC that are differentially affected by disrupted MS input. PMID:27427460

  11. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics was developed for a network connecting a parallel computing server and a client terminal. Using the system, a user at a client terminal can visualize the results of a CFD (Computational Fluid Dynamics) simulation while the actual computation is running on the parallel server. Using a GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization in real time during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer, and compresses the data; the amount of data sent from the parallel computer to the client is therefore much smaller than without compression, so images appear swiftly and comfortably for the user. Parallelization of image data generation is based on the Owner Computation Rule. The GUI on the client is built on a Java applet, so real-time visualization is possible on any client PC on which a Web browser is installed. (author)
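
    The server-side pattern described (render on the parallel machine, compress, then ship pixels to the thin client) can be sketched as follows; the frame size, port, and use of zlib are illustrative assumptions, not details taken from the paper.

```python
import socket
import struct
import zlib
import numpy as np

def serve_frames(host="0.0.0.0", port=9999, shape=(480, 640, 3)):
    """Send length-prefixed, zlib-compressed RGB frames to one client."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    rng = np.random.default_rng(0)
    try:
        for step in range(1000):
            # Stand-in for the pixel image rendered on the parallel server.
            frame = rng.integers(0, 256, size=shape, dtype=np.uint8)
            payload = zlib.compress(frame.tobytes(), level=1)   # fast compression
            conn.sendall(struct.pack("!I", len(payload)) + payload)  # 4-byte length prefix
    finally:
        conn.close()
        srv.close()
```

    The client reverses the steps: read the 4-byte length, read that many bytes, decompress, and blit the frame, which keeps per-frame network traffic far below the raw pixel size.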

  12. Mean platelet volume (MPV) predicts middle distance running performance.

    Science.gov (United States)

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Skafidas, Spyros; Tarperi, Cantor; Guidi, Gian Cesare; Schena, Federico

    2014-01-01

    Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted in 43 amateur runners (15 females, 28 males; median age 47 years), who successfully concluded a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as dependent variable whereas age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running

  13. Mean platelet volume (MPV predicts middle distance running performance.

    Directory of Open Access Journals (Sweden)

    Giuseppe Lippi

    Full Text Available Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully concluded a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as dependent variable whereas age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running

  14. 29 CFR 1921.22 - Computation of time.

    Science.gov (United States)

    2010-07-01

    ... Section 1921.22, Labor, Regulations Relating to Labor (Continued), OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... WORKERS' COMPENSATION ACT, Miscellaneous § 1921.22 Computation of time. Sundays and holidays shall be...

  15. Design and development of FPGA based TCP/IP module for real time computers in nuclear power plants

    International Nuclear Information System (INIS)

    Balasri, G. Janani; Santhana Raj, A.; Gour, Aditya; Murali, N.; Manikandan, J.

    2013-01-01

    VME (Versa Module Europa) bus based Real Time Computers (RTCs) are being developed for the Prototype Fast Breeder Reactor (PFBR), which is in an advanced stage of construction at Kalpakkam. The RTCs have to communicate to the central process computer on the data collected from the field instruments and receive data from the central process computer. A Distributed Digital Control System (DDCS) architecture has been designed for this communication, based on Transmission Control Protocol/Internet Protocol (TCP/IP) over Ethernet. Currently the RTCs use the 'Wiznet Module', a bought-out chip which implements the TCP/IP stack in hardware. This project concentrates on the design and development of a Field Programmable Gate Array (FPGA) based TCP/IP module that runs on MicroBlaze, a 32-bit softcore processor, to take care of the communication in the same way as the Wiznet module. Advantages of switching to an FPGA-based system are its reconfigurability, support for the desired number of sockets, and a design that remains stable even if the FPGA becomes obsolete. (author)

  16. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 250 ms of data from 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  17. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
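
    The chain described above begins with a dense matrix-matrix product, which is exactly the kind of operation a GPU accelerates. The numpy sketch below shows that first stage only; the shapes, array names, and the cupy remark are illustrative assumptions on our part, not the authors' CUDA implementation.

        import numpy as np

        # Illustrative sizes: 1000 electrode channels, 250 ms at 1.2 kHz.
        n_channels, n_samples, n_outputs = 1000, 300, 64

        rng = np.random.default_rng(0)
        raw = rng.standard_normal((n_channels, n_samples))   # one data block
        W = rng.standard_normal((n_outputs, n_channels))     # spatial filter weights

        # Stage 1 of the pipeline: spatial filtering is one matrix-matrix product.
        filtered = W @ raw                                   # (n_outputs, n_samples)

        # On a GPU the same line runs unchanged with `cupy` in place of `numpy`,
        # since cupy mirrors the numpy API; the paper used hand-written CUDA.

    The remaining stages (auto-regressive spectral estimation and feature classification) follow the same pattern: per-channel work with no cross-channel dependencies, which is why they parallelise well.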

  18. A recursive algorithm for computing the inverse of the Vandermonde matrix

    Directory of Open Access Journals (Sweden)

    Youness Aliyari Ghassabeh

    2016-12-01

    Full Text Available The inverse of a Vandermonde matrix has been used for signal processing, polynomial interpolation, curve fitting, wireless communication, and system identification. In this paper, we propose a novel fast recursive algorithm to compute the inverse of a Vandermonde matrix. The algorithm computes the inverse of a higher order Vandermonde matrix using the available lower order inverse matrix with a computational cost of $O(n^2)$. The proposed algorithm is given in a matrix form, which makes it appropriate for hardware implementation. The running time of the proposed algorithm to find the inverse of a Vandermonde matrix using a lower order Vandermonde matrix is compared with the running time of the matrix inversion function implemented in MATLAB.
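
    The abstract reports the O(n^2) cost but not the recursion itself. As a hedged stand-in, the sketch below obtains a Vandermonde inverse in O(n^2) arithmetic via the classical Lagrange-basis route (column i of the inverse holds the coefficients of the i-th Lagrange polynomial); it illustrates why a quadratic-cost inverse is plausible, but it is not the paper's recursive scheme, and all names are ours.

        import numpy as np

        def vandermonde_inverse(nodes):
            # Inverse of V[i, j] = nodes[i] ** j in O(n^2) arithmetic.
            x = np.asarray(nodes, dtype=float)
            n = len(x)
            # Coefficients (lowest degree first) of P(t) = prod_j (t - x_j).
            p = np.array([1.0])
            for xi in x:
                p = np.concatenate(([0.0], p)) - np.concatenate((xi * p, [0.0]))
            inv = np.empty((n, n))
            for i, xi in enumerate(x):
                # Synthetic division: q(t) = P(t) / (t - xi), O(n) per column.
                q = np.empty(n)
                q[n - 1] = p[n]
                for k in range(n - 1, 0, -1):
                    q[k - 1] = p[k] + xi * q[k]
                # Column i holds the coefficients of L_i = q / q(xi).
                inv[:, i] = q / np.prod(xi - np.delete(x, i))
            return inv

        # Sanity check against numpy's Vandermonde constructor.
        x = np.array([0.5, 1.0, 2.0, 3.0])
        V = np.vander(x, increasing=True)
        assert np.allclose(V @ vandermonde_inverse(x), np.eye(len(x)))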

  19. The LHCb Run Control

    Energy Technology Data Exchange (ETDEWEB)

    Alessio, F; Barandela, M C; Frank, M; Gaspar, C; Herwijnen, E v; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P [CERN, 1211 Geneva 23 (Switzerland); Callot, O [LAL, IN2P3/CNRS and Universite Paris 11, Orsay (France); Duval, P-Y [Centre de Physique des Particules de Marseille, Aix-Marseille Universite, CNRS/IN2P3, Marseille (France); Franek, B [Rutherford Appleton Laboratory, Chilton, Didcot, OX11 0QX (United Kingdom); Galli, D, E-mail: Clara.Gaspar@cern.c [Universita di Bologna and INFN, Bologna (Italy)

    2010-04-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented

  20. Educational Technology Network: a computer conferencing system dedicated to applications of computers in radiology practice, research, and education.

    Science.gov (United States)

    D'Alessandro, M P; Ackerman, M J; Sparks, S M

    1993-11-01

    Educational Technology Network (ET Net) is a free, easy to use, on-line computer conferencing system organized and funded by the National Library of Medicine that is accessible via the SprintNet (SprintNet, Reston, VA) and Internet (Merit, Ann Arbor, MI) computer networks. It is dedicated to helping bring together, in a single continuously running electronic forum, developers and users of computer applications in the health sciences, including radiology. ET Net uses the Caucus computer conferencing software (Camber-Roth, Troy, NY) running on a microcomputer. This microcomputer is located in the National Library of Medicine's Lister Hill National Center for Biomedical Communications and is directly connected to the SprintNet and the Internet networks. The advanced computer conferencing software of ET Net allows individuals who are separated in space and time to unite electronically to participate, at any time, in interactive discussions on applications of computers in radiology. A computer conferencing system such as ET Net allows radiologists to maintain contact with colleagues on a regular basis when they are not physically together. Topics of discussion on ET Net encompass all applications of computers in radiological practice, research, and education. ET Net has been in successful operation for 3 years and has a promising future aiding radiologists in the exchange of information pertaining to applications of computers in radiology.

  1. Reconfigurable FPGA architecture for computer vision applications in Smart Camera Networks

    OpenAIRE

    Maggiani , Luca; Salvadori , Claudio; Petracca , Matteo; Pagano , Paolo; Saletti , Roberto

    2013-01-01

    International audience; Smart Camera Networks (SCNs) are nowadays an emerging research field representing the natural evolution of centralized computer vision applications towards fully distributed and pervasive systems. In such a scenario, one of the biggest efforts is in the definition of a flexible and reconfigurable SCN node architecture able to remotely support the possibility of updating the application parameters and changing the running computer vision applications at run-time. In th...

  2. Decreasing Computational Time for VBBinaryLensing by Point Source Approximation

    Science.gov (United States)

    Tirrell, Bethany M.; Visgaitis, Tiffany A.; Bozza, Valerio

    2018-01-01

    The gravitational lens of a binary system produces a magnification map that is more intricate than that of a single-object lens. This map cannot be calculated analytically, and one must rely on computational methods to resolve it. There are generally two methods of computing the microlensed flux of a source. One is based on ray-shooting maps (Kayser, Refsdal, & Stabell 1986), while the other method is based on an application of Green’s theorem. This second method finds the area of an image by calculating a Riemann integral along the image contour. VBBinaryLensing is a C++ contour integration code developed by Valerio Bozza, which utilizes this method. The parameters at which the source object can be treated as a point source, in other words when the source is far enough from the caustic, were of interest as a way to substantially decrease the computational time. The maximum and minimum values of the caustic curves produced were examined to determine the boundaries within which this simplification could be made. The code was then run for a number of different maps, with separation values and accuracies ranging from 10^-1 to 10^-3, to test the theoretical model and determine a safe buffer within which the approximation introduces minimal error. The determined buffer was 1.5+5q, with q being the mass ratio. The theoretical model and the calculated points agreed for all combinations of the separation values and different accuracies except the map with accuracy and separation both equal to 10^-3 for y1 max. An alternative approach has to be found in order to accommodate a wider range of parameters.
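
    The reported buffer lends itself to a one-line test. In the hypothetical sketch below we assume the buffer is measured in source radii and that the distance to the nearest caustic is already known; the abstract states only the 1.5 + 5q coefficient, so everything else here is an illustrative assumption.

        def point_source_is_safe(dist_to_caustic, q, rho):
            """True if the point-source approximation should be accurate.

            dist_to_caustic : distance from source centre to the nearest caustic
                              (Einstein radii), assumed precomputed
            q               : binary mass ratio
            rho             : source radius (Einstein radii)
            """
            # Buffer from the abstract, assumed to scale the source radius.
            return dist_to_caustic > (1.5 + 5.0 * q) * rho

        # Example: q = 1e-3, rho = 0.01 -> buffer of about 0.015 Einstein radii.
        print(point_source_is_safe(0.05, q=1e-3, rho=0.01))   # True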

  3. Short-run and long-run dynamics of farm land allocation

    DEFF Research Database (Denmark)

    Arnberg, Søren; Hansen, Lars Gårn

    2012-01-01

    This study develops and estimates a dynamic multi-output model of farmers’ land allocation decisions that allows for the gradual adjustment of allocations that can result from crop rotation practices and quasi-fixed capital constraints. Estimation is based on micro-panel data from Danish farmers...... that include acreage, output, and variable input utilization at the crop level. Results indicate that there are substantial differences between the short-run and long-run land allocation behaviour of Danish farmers and that there are substantial differences in the time lags associated with different crops...

  4. Evaluating computer program performance on the CRAY-1

    International Nuclear Information System (INIS)

    Rudsinski, L.; Pieper, G.W.

    1979-01-01

    The Advanced Scientific Computers Project of Argonne's Applied Mathematics Division has two objectives: to evaluate supercomputers and to determine their effect on Argonne's computing workload. Initial efforts have focused on the CRAY-1, which is the only advanced computer currently available. Users from seven Argonne divisions executed test programs on the CRAY and made performance comparisons with the IBM 370/195 at Argonne. This report describes these experiences and discusses various techniques for improving run times on the CRAY. Direct translations of code from scalar to vector processor reduced running times as much as two-fold, and this reduction will become more pronounced as the CRAY compiler is developed. Further improvement (two- to ten-fold) was realized by making minor code changes to facilitate compiler recognition of the parallel and vector structure within the programs. Finally, extensive rewriting of the FORTRAN code structure reduced execution times dramatically, in three cases by a factor of more than 20; and even greater reduction should be possible by changing algorithms within a production code. It is concluded that the CRAY-1 would be of great benefit to Argonne researchers. Existing codes could be modified with relative ease to run significantly faster than on the 370/195. More important, the CRAY would permit scientists to investigate complex problems currently deemed infeasible on traditional scalar machines. Finally, an interface between the CRAY-1 and IBM computers such as the 370/195, scheduled by Cray Research for the first quarter of 1979, would considerably facilitate the task of integrating the CRAY into Argonne's Central Computing Facility. 13 tables
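
    As a modern analogy to the scalar-to-vector rewrites described above (not the original FORTRAN), the same transformation in numpy makes the parallel structure explicit instead of hiding it inside an element-by-element loop:

        import numpy as np

        n = 1_000_000
        a = np.random.rand(n)
        b = np.random.rand(n)

        # Scalar style: one element per iteration, opaque to vector hardware.
        c = np.empty(n)
        for i in range(n):
            c[i] = 2.0 * a[i] + b[i]

        # Vector style: the whole operation is stated on arrays, so the
        # library (or, on the CRAY-1, the vectorising compiler) can use
        # vector units; this is the kind of rewrite that yielded the speedups.
        c = 2.0 * a + b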

  5. The design of the run Clever randomized trial: running volume, -intensity and running-related injuries.

    Science.gov (United States)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten

    2016-04-23

    Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate whether a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years with an average of 1-3 running sessions per week over the past 6 months are included. Participants are randomized into two intervention groups: running Schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume by increasing the weekly overall volume. Data on the running performed are collected by GPS. Participants who sustain running-related injuries are diagnosed by a diagnostic team of physiotherapists using standardized diagnostic criteria. The members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing risks analysis.

  6. Run-time Phenomena in Dynamic Software Updating: Causes and Effects

    DEFF Research Database (Denmark)

    Gregersen, Allan Raundahl; Jørgensen, Bo Nørregaard

    2011-01-01

    The development of a dynamic software updating system for statically-typed object-oriented programming languages has turned out to be a challenging task. Despite the fact that the present state of the art in dynamic updating systems, like JRebel, Dynamic Code Evolution VM, JVolve and Javeleon, all provide very transparent and flexible technical solutions to dynamic updating, case studies have shown that designing dynamically updatable applications still remains a challenging task. This challenge has its roots in a number of run-time phenomena that are inherent to dynamic updating of applications written in statically-typed object-oriented programming languages. In this paper, we present our experience from developing dynamically updatable applications using a state-of-the-art dynamic updating system for Java. We believe that the findings presented in this paper provide an important step towards...

  7. The LHCb Run Control

    CERN Document Server

    Alessio, F; Callot, O; Duval, P-Y; Franek, B; Frank, M; Galli, D; Gaspar, C; v Herwijnen, E; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P

    2010-01-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provid...

  8. Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer

    International Nuclear Information System (INIS)

    Caon, M.; Bibbo, G.; Pattison, J.

    1996-01-01

    The possibility of running the EGS4 Monte Carlo code radiation transport system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system. In addition, a virtual memory manager such as QEMM386 was required. It has successfully run on a SUN Sparcstation2. In 1995 faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab

  9. Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer

    Energy Technology Data Exchange (ETDEWEB)

    Caon, M. [Flinders Univ. of South Australia, Bedford Park, SA (Australia); University of South Australia, SA (Australia)]; Bibbo, G. [Women's and Children's Hospital, SA (Australia)]; Pattison, J. [University of South Australia, SA (Australia)]

    1996-09-01

    The possibility of running the EGS4 Monte Carlo code radiation transport system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system. In addition, a virtual memory manager such as QEMM386 was required. It has successfully run on a SUN Sparcstation2. In 1995 faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab.

  10. Delivering LHC software to HPC compute elements

    CERN Document Server

    Blomer, Jakob; Hardi, Nikola; Popescu, Radu

    2017-01-01

    In recent years, there has been growing interest in improving the utilization of supercomputers by running applications of experiments at the Large Hadron Collider (LHC) at CERN when idle cores cannot be assigned to traditional HPC jobs. At the same time, the upcoming LHC machine and detector upgrades will produce some 60 times higher data rates and challenge LHC experiments to use so far untapped compute resources. LHC experiment applications are tailored to run on high-throughput computing resources and they have a different anatomy than HPC applications. LHC applications comprise a core framework that allows hundreds of researchers to plug in their specific algorithms. The software stacks easily accumulate to many gigabytes for a single release. New releases are often produced on a daily basis. To facilitate the distribution of these software stacks to world-wide distributed computing resources, LHC experiments use a purpose-built, global, POSIX file system, the CernVM File System. CernVM-FS pre-processes dat...

  11. Subversion: The Neglected Aspect of Computer Security.

    Science.gov (United States)

    1980-06-01

    it into the memory of the computer. These are called flows on covert channels... A simple covert channel is the running time of a program. Because... program and, in doing so, gives it 'permission' to perform its covert functions. Not only will most computer systems not prevent the employment of such a... R. Schell, Major, USAF, June 1974. 11. Lackey, R.P., "Penetration of Computer Systems, an Overview", Honeywell Computer Journal, Vol. 8, No. 2, 1974

  12. Reinforcement of drinking by running: effect of fixed ratio and reinforcement time

    Science.gov (United States)

    Premack, David; Schaeffer, Robert W.; Hundt, Alan

    1964-01-01

    Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR and a linear decreasing function of CT over the ranges tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT. PMID:14120150

  13. Ada Run Time Support Environments and a common APSE Interface Set. [Ada Programming Support Environment

    Science.gov (United States)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The paper discusses the importance of linking Ada Run Time Support Environments to the Common Ada Programming Support Environment (APSE) Interface Set (CAIS). A non-stop network operating systems scenario is presented to serve as a forum for identifying the important issues. The network operating system exemplifies the issues involved in the NASA Space Station data management system.

  14. 40 CFR 209.12 - Time.

    Science.gov (United States)

    2010-07-01

    ... Section 209.12, Protection of Environment... Issued Under Section 11(d) of the Noise Control Act § 209.12 Time. (a) In computing any period of time... period of time begins to run shall not be included, except as otherwise provided. Saturdays, Sundays, and...

  15. Concurrent schedules of wheel-running reinforcement: choice between different durations of opportunity to run in rats.

    Science.gov (United States)

    Belke, Terry W

    2006-02-01

    How do animals choose between opportunities to run of different durations? Are longer durations preferred over shorter durations because they permit a greater number of revolutions? Are shorter durations preferred because they engender higher rates of running? Will longer durations be chosen because running is less constrained? The present study reports on three experiments that attempted to address these questions. In the first experiment, five male Wistar rats chose between 10-sec and 50-sec opportunities to run on modified concurrent variable-interval (VI) schedules. Across conditions, the durations associated with the alternatives were reversed. Response, time, and reinforcer proportions did not vary from indifference. In a second experiment, eight female Long-Evans rats chose between opportunities to run of equal (30 sec) and unequal durations (10 sec and 50 sec) on concurrent variable-ratio (VR) schedules. As in Experiment 1, between presentations of equal duration conditions, 10-sec and 50-sec durations were reversed. Results showed that response, time, and reinforcer proportions on an alternative did not vary with reinforcer duration. In a third experiment, using concurrent VR schedules, durations were systematically varied to decrease the shorter duration toward 0 sec. As the shorter duration decreased, response, time, and reinforcer proportions shifted toward the longer duration. In summary, differences in durations of opportunities to run did not affect choice behavior in a manner consistent with the assumption that a longer reinforcer is a larger reinforcer.

  16. Semantic 3d City Model to Raster Generalisation for Water Run-Off Modelling

    Science.gov (United States)

    Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.

    2013-09-01

    Water run-off modelling applied within urban areas requires an appropriately detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN-objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5 m cell-spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for different cell-spaced height grids at a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large scale Topographic Base map (BGT, map scale 1:1.000), and the National height model of the Netherlands AHN2 (10 points per square meter on average). Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows
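
    A compact sketch of class-aware down-sampling in the spirit of the procedure described above; the class codes, the priority rule (buildings beat roads beat terrain) and the block factor are assumptions for illustration, not the authors' exact operator.

        import numpy as np

        def generalise(labels, heights, factor=4):
            # Down-sample label and height grids together, preserving barriers.
            priority = {2: 0, 1: 1, 0: 2}   # assumed: 2=building, 1=road, 0=terrain
            H, W = labels.shape[0] // factor, labels.shape[1] // factor
            out_lab = np.zeros((H, W), dtype=labels.dtype)
            out_hgt = np.zeros((H, W), dtype=heights.dtype)
            for i in range(H):
                for j in range(W):
                    lab = labels[i*factor:(i+1)*factor, j*factor:(j+1)*factor]
                    hgt = heights[i*factor:(i+1)*factor, j*factor:(j+1)*factor]
                    # Winning class: highest-priority label present in the block.
                    win = min(np.unique(lab), key=lambda c: priority.get(int(c), 99))
                    out_lab[i, j] = win
                    # Height from the winning class only, keeping barrier crests
                    # (e.g. a thin wall) that plain averaging would smear away.
                    out_hgt[i, j] = hgt[lab == win].max()
            return out_lab, out_hgt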

  17. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the „Todi” HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  18. Using Computer Techniques To Predict OPEC Oil Prices For Period 2000 To 2015 By Time-Series Methods

    Directory of Open Access Journals (Sweden)

    Mohammad Esmail Ahmad

    2015-08-01

    Full Text Available The instability in world and OPEC oil prices results from many factors acting over a long period. The problem can be summarized as follows: oil exports not only constitute a large share of national income (N.I.) but also make up most of the savings of the oil states. Oil prices affect their market through the interaction of oil supply and demand forces. The research hypothesis states that movements in oil prices have caused shocks, crises and economic problems. Because these shocks arise from changes in oil prices, predictions need to be made within the framework of short-run economic planning, using computer techniques and time-series models, in order to avoid them.
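
    The abstract does not name the specific time-series model, so as a hedged illustration of the general approach, the sketch below fits a standard ARIMA model with statsmodels; the price series and the (1, 1, 1) order are placeholders, not the paper's data or specification.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Placeholder series standing in for historical annual OPEC oil prices.
        prices = np.array([18.2, 19.1, 17.8, 20.5, 24.3, 28.1, 27.6, 31.0,
                           36.4, 50.6, 61.0, 69.1, 94.4, 61.1, 77.4, 107.5])

        # AR(1) term, first differencing, MA(1) term: a common starting point.
        fitted = ARIMA(prices, order=(1, 1, 1)).fit()

        # Short-run forecast of the kind used for planning horizons.
        print(fitted.forecast(steps=5))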

  19. Causal Analysis of Railway Running Delays

    DEFF Research Database (Denmark)

    Cerreto, Fabrizio; Nielsen, Otto Anker; Harrod, Steven

    Operating delays and network propagation are inherent characteristics of railway operations. These are traditionally reduced by provision of time supplements or “slack” in railway timetables and operating plans. Supplement allocation policies must trade off reliability in the service commitments...... Denmark (the Danish infrastructure manager). The statistical analysis of the data identifies the minimum running times and the scheduled running time supplements and investigates the evolution of train delays along given train paths. An improved allocation of time supplements would result in smaller...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, not least by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data). In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  1. New strategies of the LHC experiments to meet the computing requirements of the HL-LHC era

    CERN Document Server

    Adamova, Dagmar

    2017-01-01

    The performance of the Large Hadron Collider (LHC) during the ongoing Run 2 is above expectations, both concerning the delivered luminosity and the LHC live time. This resulted in a volume of data much larger than originally anticipated. Based on the current data production levels and the structure of the LHC experiment computing models, the estimates of the data production rates and resource needs were re-evaluated for the era leading into the High Luminosity LHC (HL-LHC), the Run 3 and Run 4 phases of LHC operation. It turns out that the raw data volume will grow 10 times by the HL-LHC era and the processing capacity needs will grow more than 60 times. While the growth of storage requirements might in principle be satisfied with a 20 per cent budget increase and technology advancements, there is a gap of a factor 6 to 10 between the needed and available computing resources. The threat of a lack of computing and storage resources was present already in the beginning of Run 2, but could still be mitigated, e.g....

  2. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  3. Recent achievements in real-time computational seismology in Taiwan

    Science.gov (United States)

    Lee, S.; Liang, W.; Huang, B.

    2012-12-01

    Real-time computational seismology is now achievable; it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and the moment tensor inversion (CMT) technique. A real-time online earthquake simulation service (ROS) is also ready to be opened to researchers and to public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs: RMT obtains the point-source information and ROS completes a 3D simulation in real time. For more information, visit the real-time computational seismology earthquake report webpage (RCS).

  4. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-01-01

    computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result

  5. A rapid estimation of tsunami run-up based on finite fault models

    Science.gov (United States)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, e.g., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for quickness. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake, and the recent 2014 Mw 8.2 Iquique Earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  6. LHCb : Novel real-time alignment and calibration of the LHCb Detector in Run2

    CERN Multimedia

    Tobin, Mark

    2015-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure will improve the quality of the online alignment. For example, the vertex locator is retracted and reinserted for stable beam collisions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The online calibration facilitates the use of hadronic particle identification using the RICH detectors at the trigger level. T...

  7. Voluntary Wheel Running in Mice.

    Science.gov (United States)

    Goh, Jorming; Ladiges, Warren

    2015-12-02

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. Copyright © 2015 John Wiley & Sons, Inc.

  8. Haemoglobin mass and running time trial performance after recombinant human erythropoietin administration in trained men.

    Directory of Open Access Journals (Sweden)

    Jérôme Durussel

    Full Text Available UNLABELLED: Recombinant human erythropoietin (rHuEpo increases haemoglobin mass (Hb(mass and maximal oxygen uptake (v O(2 max. PURPOSE: This study defined the time course of changes in Hb(mass, v O(2 max as well as running time trial performance following 4 weeks of rHuEpo administration to determine whether the laboratory observations would translate into actual improvements in running performance in the field. METHODS: 19 trained men received rHuEpo injections of 50 IU•kg(-1 body mass every two days for 4 weeks. Hb(mass was determined weekly using the optimized carbon monoxide rebreathing method until 4 weeks after administration. v O(2 max and 3,000 m time trial performance were measured pre, post administration and at the end of the study. RESULTS: Relative to baseline, running performance significantly improved by ∼6% after administration (10:30±1:07 min:sec vs. 11:08±1:15 min:sec, p<0.001 and remained significantly enhanced by ∼3% 4 weeks after administration (10:46±1:13 min:sec, p<0.001, while v O(2 max was also significantly increased post administration (60.7±5.8 mL•min(-1•kg(-1 vs. 56.0±6.2 mL•min(-1•kg(-1, p<0.001 and remained significantly increased 4 weeks after rHuEpo (58.0±5.6 mL•min(-1•kg(-1, p = 0.021. Hb(mass was significantly increased at the end of administration compared to baseline (15.2±1.5 g•kg(-1 vs. 12.7±1.2 g•kg(-1, p<0.001. The rate of decrease in Hb(mass toward baseline values post rHuEpo was similar to that of the increase during administration (-0.53 g•kg(-1•wk(-1, 95% confidence interval (CI (-0.68, -0.38 vs. 0.54 g•kg(-1•wk(-1, CI (0.46, 0.63 but Hb(mass was still significantly elevated 4 weeks after administration compared to baseline (13.7±1.1 g•kg(-1, p<0.001. CONCLUSION: Running performance was improved following 4 weeks of rHuEpo and remained elevated 4 weeks after administration compared to baseline. These field performance effects coincided with r

  9. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. One must first design a computational structure which is well suited for a wide range of vision tasks and then develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  10. Instruction timing for the CDC 7600 computer

    International Nuclear Information System (INIS)

    Lipps, H.

    1975-01-01

    This report provides timing information for all instructions of the Control Data 7600 computer, except for instructions of type 01X, to enable the optimization of 7600 programs. The timing rules serve as background information for timing charts which are produced by a program (TIME76) of the CERN Program Library. The rules that co-ordinate the different sections of the CPU are stated in as much detail as is necessary to time the flow of instructions for a given sequence of code. Instruction fetch, instruction issue, and access to small core memory are treated at length, since details are not available from the computer manuals. Annotated timing charts are given for 24 examples, chosen to display the full range of timing considerations. (Author)

  11. Parallel computers and three-dimensional computational electromagnetics

    International Nuclear Information System (INIS)

    Madsen, N.K.

    1994-01-01

    The authors have continued to enhance their ability to use new massively parallel processing computers to solve time-domain electromagnetic problems. New vectorization techniques have improved the performance of their code DSI3D by factors of 5 to 15, depending on the computer used. New radiation boundary conditions and far-field transformations now allow the computation of radar cross-section values for complex objects. A new parallel-data extraction code has been developed that allows the extraction of data subsets from large problems, which have been run on parallel computers, for subsequent post-processing on workstations with enhanced graphics capabilities. A new charged-particle-pushing version of DSI3D is under development. Finally, DSI3D has become a focal point for several new Cooperative Research and Development Agreement activities with industrial companies such as Lockheed Advanced Development Company, Varian, Hughes Electron Dynamics Division, General Atomic, and Cray

  12. Numerical Modelling of Wave Run-Up

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  13. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    International Nuclear Information System (INIS)

    Bonacorsi, D; Neri, M; Boccali, T; Giordano, D; Girone, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  14. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    Science.gov (United States)

    Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.

    2015-12-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  15. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Cheng, Siu-Wing

    2014-09-01

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3+ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11+ε)).

  16. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-05-06

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε))
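
    The time bounds reported in this record, in display form for readability:

        \begin{align*}
          \text{reduction to a motorcycle graph:} \quad & O\bigl(n (\log n)\log r\bigr) \\
          \text{non-degenerate polygons:} \quad & O\bigl(n (\log n)\log r + r^{4/3+\varepsilon}\bigr) \\
          \text{degenerate input:} \quad & O\bigl(n (\log n)\log r + r^{17/11+\varepsilon}\bigr)
        \end{align*}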

  17. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Cheng, Siu-Wing; Mencel, Liam A.; Vigneron, Antoine E.

    2014-01-01

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3+ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11+ε)).

  18. Cloud Computing: A model Construct of Real-Time Monitoring for Big Dataset Analytics Using Apache Spark

    Science.gov (United States)

    Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer

    2018-01-01

    The volume of data being collected, analyzed, and stored has exploded in recent years, in particular in relation to activity on cloud computing platforms, and large-scale processing, analysis and storage platforms such as cloud computing are increasingly widespread. Today, the major challenge is to address how to monitor and control these massive amounts of data and perform analysis in real-time at scale. Traditional methods and model systems are unable to cope with these quantities of data in real-time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which includes machine learning algorithms and Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of a big dataset. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during the procedure of fine-grained monitoring and generates the highest efficacy and cost-saving fault repair through three construction control steps: (I) data collection; (II) analysis engine; and (III) decision engine. We found that running this novel methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or optimization of performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.
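
    A minimal PySpark sketch of the streaming half of such a pipeline, assuming a socket source and a simple threshold rule in place of the paper's trained classifier; the host, port, record format and 30-second start-up threshold are all illustrative.

        from pyspark import SparkContext
        from pyspark.streaming import StreamingContext

        sc = SparkContext(appName="VMHealthMonitor")
        ssc = StreamingContext(sc, batchDuration=1)   # 1 s micro-batches

        # Records arrive as "vm_id,metric,value" lines (assumed format).
        lines = ssc.socketTextStream("localhost", 9999)

        def parse(line):
            vm_id, metric, value = line.split(",")
            return vm_id, metric, float(value)

        # Threshold rule standing in for the paper's machine-learning step:
        # flag VMs whose start-up time exceeds 30 seconds.
        alerts = (lines.map(parse)
                       .filter(lambda r: r[1] == "boot_time" and r[2] > 30.0)
                       .map(lambda r: "VM %s slow start-up: %.1f s" % (r[0], r[2])))
        alerts.pprint()

        ssc.start()
        ssc.awaitTermination()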

  19. Mathematica: A System of Computer Programs

    OpenAIRE

    Maiti, Santanu K.

    2006-01-01

    Starting from the basic level of Mathematica, we illustrate how to use a Mathematica notebook and write a program in the notebook. Next, we investigate in detail the linking of external programs with Mathematica, the so-called MathLink operation. Using this technique we can run very tedious jobs quite efficiently, and the operations become extremely fast. Sometimes it is quite desirable to run jobs in the background of a computer which can take a considerable amount of time to finish, ...

  20. Implementing ASPEN on the CRAY computer

    International Nuclear Information System (INIS)

    Duerre, K.H.; Bumb, A.C.

    1981-01-01

    This paper describes our experience in converting the ASPEN program for use on our CRAY computers at the Los Alamos National Laboratory. The CRAY computer is two-to-five times faster than a CDC-7600 for scalar operations, is equipped with up to two million words of high-speed storage, and has vector processing capability. Thus, the CRAY is a natural candidate for programs that are the size and complexity of ASPEN. Our approach to converting ASPEN and the conversion problems are discussed, including our plans for optimizing the program. Comparisons of run times for test problems between the CRAY and IBM 370 computer versions are presented

  1. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can reduce the running time of the code MCNP effectively. With the MPI message-passing software, MCNP5 can perform parallel computing on a PC cluster running the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with regard to these factors and gives measures to improve MCNP parallel computing performance. (authors)
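
    As an illustration of the history-splitting pattern that MPI-parallel Monte Carlo codes such as MCNP rely on (not MCNP itself), the following mpi4py sketch distributes independent particle histories across ranks and combines the tallies with a reduction; the toy transport model is an assumption.

    ```python
    # Toy Monte Carlo transport tally parallelized with mpi4py -- illustrates
    # the history-splitting pattern MPI-parallel MCNP uses, not MCNP itself.
    # Run with e.g.: mpiexec -n 4 python toy_mc.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    N_TOTAL = 1_000_000                       # total particle histories
    n_local = N_TOTAL // size                 # histories per rank
    rng = np.random.default_rng(seed=rank)    # independent stream per rank

    # Each "history": exponential path length, tally if it crosses x = 1.0.
    paths = rng.exponential(scale=0.8, size=n_local)
    local_tally = np.sum(paths > 1.0)

    total = comm.reduce(local_tally, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"transmission estimate: {total / (n_local * size):.5f}")
    ```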

  2. Running economy and energy cost of running with backpacks.

    Science.gov (United States)

    Scheer, Volker; Cramer, Leoni; Heitkamp, Hans-Christian

    2018-05-02

    Running is a popular recreational activity and additional weight is often carried in backpacks on longer runs. Our aim was to examine running economy and other physiological parameters while running with a 1 kg and a 3 kg backpack at different submaximal running velocities. 10 male recreational runners (age 25 ± 4.2 years, VO2peak 60.5 ± 3.1 ml·kg-1·min-1) performed 5-minute runs on a motorized treadmill at three different submaximal speeds of 70, 80 and 90% of the speed at the anaerobic lactate threshold (LT), without additional weight and carrying a 1 kg and a 3 kg backpack. Oxygen consumption, heart rate, lactate and RPE were measured and analysed. Oxygen consumption, energy cost of running and heart rate increased significantly while running with a backpack weighing 3 kg compared to running without additional weight at 80% of the speed at lactate threshold (sLT) (p=0.026, p=0.009 and p=0.003) and at 90% sLT (p<0.001, p=0.001 and p=0.001). Running with a 1 kg backpack showed a significant increase in heart rate at 80% sLT (p=0.008) and a significant increase in oxygen consumption and heart rate at 90% sLT (p=0.045 and p=0.007) compared to running without additional weight. While running at 70% sLT, running economy and cardiovascular effort increased with weighted backpack running compared to running without additional weight; however, these increases did not reach statistical significance. Running economy deteriorates and cardiovascular effort increases while running with additional backpack weight, especially at higher submaximal running speeds. Backpack weight should therefore be kept to a minimum.

  3. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    Science.gov (United States)

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstalling is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have running legacy instrumentation, the computer is a ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  4. Running into trouble with the time-dependent propagation of a wavepacket

    International Nuclear Information System (INIS)

    Garriz, Abel E; Sztrajman, Alejandro; Mitnik, Darío

    2010-01-01

    The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations. However, we have detected issues that raise difficulties in the practical implementation of these codes, which are especially evident when discrete grid methods are used. One issue (relatively well known) appears at high incident energies, producing a wavepacket slower than expected theoretically. The other issue, which appears at low wavepacket energies, does not affect the time evolution of the propagating wavepacket proper, but produces dramatic effects on its spectral decomposition. The origin of the troubles is investigated, and different ways to deal with these issues are proposed. Finally, we show how this problem is manifested and solved in the practical case of the electronic spectra of a metal surface ionized by an ultrashort laser pulse.

  5. Running into trouble with the time-dependent propagation of a wavepacket

    Energy Technology Data Exchange (ETDEWEB)

    Garriz, Abel E; Sztrajman, Alejandro; Mitnik, Darío, E-mail: dmitnik@df.uba.a [Instituto de Astronomía y Física del Espacio, C.C. 67, Suc. 28, (C1428EGA) Buenos Aires (Argentina)

    2010-07-15

    The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations. However, we have detected issues that raise difficulties in the practical implementation of these codes, which are especially evident when discrete grid methods are used. One issue (relatively well known) appears at high incident energies, producing a wavepacket slower than expected theoretically. The other issue, which appears at low wavepacket energies, does not affect the time evolution of the propagating wavepacket proper, but produces dramatic effects on its spectral decomposition. The origin of the troubles is investigated, and different ways to deal with these issues are proposed. Finally, we show how this problem is manifested and solved in the practical case of the electronic spectra of a metal surface ionized by an ultrashort laser pulse.
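
    The high-energy artifact described above (a wavepacket propagating slower than theory predicts) can be reproduced with a few lines of NumPy/SciPy: on a 3-point finite-difference grid the discrete dispersion relation gives a group velocity of sin(k0 dx)/dx instead of k0 (in units hbar = m = 1), so the measured packet speed falls below the theoretical one at high k0. The grid size, time step, and Crank-Nicolson propagator below are illustrative choices, not the authors' setup.

    ```python
    # Sketch of the grid-induced slowdown: propagate a free Gaussian
    # wavepacket with Crank-Nicolson on a finite-difference grid and compare
    # the measured packet speed with hbar*k0/m. Units hbar = m = 1.
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import splu

    L, N = 200.0, 2048
    dx = L / N
    x = np.linspace(0, L, N, endpoint=False)
    k0, sigma, dt, steps = 2.0, 5.0, 0.02, 500

    # Initial Gaussian packet centered at x0, moving right with momentum k0.
    x0 = L / 4
    psi = np.exp(-((x - x0) ** 2) / (4 * sigma**2) + 1j * k0 * x)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

    # H = -(1/2) d^2/dx^2 via the standard 3-point stencil.
    main = np.full(N, 1.0 / dx**2)
    off = np.full(N - 1, -0.5 / dx**2)
    H = diags([off, main, off], [-1, 0, 1], format="csc")

    # Crank-Nicolson: (I + i dt/2 H) psi_new = (I - i dt/2 H) psi_old
    I = identity(N, format="csc")
    A = splu((I + 0.5j * dt * H).tocsc())
    B = (I - 0.5j * dt * H).tocsc()

    mean_x0 = np.sum(x * np.abs(psi) ** 2) * dx
    for _ in range(steps):
        psi = A.solve(B @ psi)
    mean_x1 = np.sum(x * np.abs(psi) ** 2) * dx

    v_measured = (mean_x1 - mean_x0) / (steps * dt)
    v_theory = k0                         # hbar*k0/m with hbar = m = 1
    v_grid = np.sin(k0 * dx) / dx         # group velocity of the 3-point stencil
    print(f"theory {v_theory:.4f}  grid prediction {v_grid:.4f}  "
          f"measured {v_measured:.4f}")
    ```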

  6. Real-time computational photon-counting LiDAR

    Science.gov (United States)

    Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles

    2018-03-01

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter-accuracy three-dimensional images in real time. The development of low-cost real-time computational LiDAR systems could be important for applications in security, defense, and autonomous vehicles.
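
    The depth-recovery principle used in direct time-of-flight LiDAR is d = c·t/2, with t estimated from a photon-arrival histogram. The following toy sketch (bin width, jitter, and background rate are invented values, not the prototype's specifications) locates the return peak with a matched filter and converts it to range.

    ```python
    # Toy direct time-of-flight depth estimate from a photon-counting
    # histogram: locate the return peak by correlating with an (assumed
    # Gaussian) instrument response, then convert delay to range d = c*t/2.
    import numpy as np

    C = 299_792_458.0            # speed of light, m/s
    BIN = 50e-12                 # 50 ps timing bins (illustrative)
    n_bins = 4000

    rng = np.random.default_rng(0)
    true_depth = 12.34
    peak_bin = int(2 * true_depth / C / BIN)

    # Simulated histogram: Poisson background + jittered signal returns.
    hist = rng.poisson(0.2, n_bins).astype(float)
    arrivals = rng.normal(peak_bin, 3.0, size=400).astype(int)
    np.add.at(hist, arrivals[(arrivals >= 0) & (arrivals < n_bins)], 1)

    # Matched filter: correlate with a Gaussian IRF of the same width.
    irf = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2)
    score = np.correlate(hist - hist.mean(), irf, mode="same")
    est_bin = int(np.argmax(score))

    print(f"estimated depth: {C * est_bin * BIN / 2:.3f} m (true {true_depth} m)")
    ```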

  7. Control bandwidth improvements in GRAVITY fringe tracker by switching to a synchronous real time computer architecture

    Science.gov (United States)

    Abuter, Roberto; Dembet, Roderick; Lacour, Sylvestre; di Lieto, Nicola; Woillez, Julien; Eisenhauer, Frank; Fedou, Pierre; Phan Duc, Than

    2016-08-01

    The new VLTI (Very Large Telescope Interferometer) instrument GRAVITY is equipped with a fringe tracker able to stabilize the K-band fringes on six baselines at the same time. It has been designed to achieve, for average seeing conditions, a residual OPD (Optical Path Difference) lower than 300 nm with objects brighter than K = 10. The control loop implementing the tracking is composed of a four-stage real time computer system comprising: a sensor, where the detector pixels are read in and the OPD and GD (Group Delay) are calculated; a controller, receiving the computed sensor quantities and producing commands for the piezo actuators; a concentrator, which combines the OPD commands with the real time tip/tilt corrections and offloads them to the piezo actuator; and finally a Kalman parameter estimator. This last stage is used to monitor current measurements over a window of a few seconds and estimate new values for the main Kalman control loop parameters. The hardware and software implementation of this design runs asynchronously, with the four computers communicating for data transfer via the Reflective Memory Network. With the purpose of improving the performance of the GRAVITY fringe tracking control loop, a deviation from the standard asynchronous communication mechanism has been proposed and implemented. This new scheme operates the four independent real time computers involved in the tracking loop synchronously, using the Reflective Memory Interrupts as the coordination signal. This synchronous mechanism had the effect of reducing the total pure delay of the loop from 3.5 ms to 2.0 ms, which then translates into a better stabilization of the fringes as the bandwidth of the system is substantially improved. This paper explains in detail the real time architecture of the fringe tracker in both its asynchronous and synchronous implementations. The achieved improvements in reducing the delay via this mechanism will be presented.

  8. Long-distance running, bone density, and osteoarthritis

    International Nuclear Information System (INIS)

    Lane, N.E.; Bloch, D.A.; Jones, H.H.; Marshall, W.H. Jr.; Wood, P.D.; Fries, J.F.

    1986-01-01

    Forty-one long-distance runners aged 50 to 72 years were compared with 41 matched community controls to examine associations of repetitive, long-term physical impact (running) with osteoarthritis and osteoporosis. Roentgenograms of hands, lateral lumbar spine, and knees were assessed without knowledge of running status. A computed tomographic scan of the first lumbar vertebra was performed to quantitate bone mineral content. Runners, both male and female, have approximately 40% more bone mineral than matched controls. Female runners, but not male runners, appear to have somewhat more sclerosis and spur formation in spine and weight-bearing knee x-ray films, but not in hand x-ray films. There were no differences between groups in joint space narrowing, crepitation, joint stability, or symptomatic osteoarthritis. Running is associated with increased bone mineral but not, in this cross-sectional study, with clinical osteoarthritis

  9. Hourly Comparison of GPM-IMERG-Final-Run and IMERG-Real-Time (V-03) over a Dense Surface Network in Northeastern Austria

    Science.gov (United States)

    Sharifi, Ehsan; Steinacker, Reinhold; Saghafian, Bahram

    2017-04-01

    Accurate quantitative daily precipitation estimation is key to meteorological and hydrological applications in hazard forecasting and management. In-situ observations over mountainous areas are mostly limited; however, currently available satellite precipitation products can potentially provide the precipitation estimates needed for meteorological and hydrological applications. Over the years, blended methods that use multiple satellites and multiple sensors have been developed for estimating global precipitation. One of the latest satellite precipitation products is GPM-IMERG (Global Precipitation Measurement, with 30-minute temporal and 0.1-degree spatial resolution), which consists of three products: Final-Run (aimed at research), Real-Time early run, and Real-Time late run. The Integrated Multi-satellite Retrievals for GPM (IMERG) products, built upon the success of TRMM's Multisatellite Precipitation Analysis (TMPA) products, continue to improve spatial and temporal resolution and snowfall estimates. Recently, researchers who evaluated IMERG-Final-Run V-03 and other precipitation products reported better performance for IMERG-Final-Run than for other similar products. In this study two GPM-IMERG products, namely Final-Run and Real-Time late run, were evaluated against a dense synoptic station network (62 stations) over northeastern Austria, for the period from mid-March 2015 to the end of January 2016, at an hourly time scale. Both products were examined against the reference data (stations) in capturing the occurrence of precipitation and the statistical characteristics of precipitation intensity. Both satellite precipitation products underestimated precipitation events of 0.1 mm/hr to 0.4 mm/hr in intensity. For precipitation of 0.4 mm/hr and greater, the trend was reversed and both satellite products overestimated relative to the station-recorded data. IMERG-RT outperformed IMERG-FR for precipitation intensity in the range of 0.1 mm/hr to 0.4 mm/hr while in the range of 1.1 to 1.8 mm

  10. Design and Implementation of a New Run-time Life-cycle for Interactive Public Display Applications

    OpenAIRE

    Cardoso, Jorge C. S.; Perpétua, Alice

    2015-01-01

    Public display systems are becoming increasingly complex. They are moving from passive closed systems to open interactive systems that are able to accommodate applications from several independent sources. This shift needs to be accompanied by a more flexible and powerful application management. In this paper, we propose a run-time life-cycle model for interactive public display applications that addresses several shortcomings of current display systems. Our mo...

  11. On the Correctness of Real-Time Modular Computer Systems Modeling with Stopwatch Automata Networks

    Directory of Open Access Journals (Sweden)

    Alevtina B. Glonina

    2018-01-01

    In this paper, we consider a schedulability analysis problem for real-time modular computer systems (RT MCS). A system configuration is called schedulable if all the jobs finish within their deadlines. The authors propose a stopwatch automata-based general model of RT MCS operation. A model instance for a given RT MCS configuration is a network of stopwatch automata (NSA) and it can be built automatically using the general model. A system operation trace, which is necessary for checking the schedulability criterion, can be obtained from the corresponding NSA trace. The paper substantiates the correctness of the proposed approach. A set of correctness requirements for the models of system components and for the whole system model were derived from RT MCS specifications. The authors proved that if all models of system components satisfy the corresponding requirements, the whole system model built according to the proposed approach satisfies its correctness requirements and is deterministic (i.e. for a given configuration the trace generated by the corresponding model run is uniquely determined). The model determinism implies that any model run can be used for schedulability analysis. This fact is crucial for the approach's efficiency, as the number of possible model runs grows exponentially with the number of jobs in a system. Correctness requirements for the models of system components can be checked automatically by a verifier using the observer automata approach. The authors proved, using the UPPAAL verifier, that all the developed models of system components satisfy the corresponding requirements. User-defined models of system components can also be used for system modeling if they satisfy the requirements.

  12. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    Science.gov (United States)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
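
    A minimal sketch of the coordinate-conversion service idea mentioned above, using only Python's standard http.server plus the pyproj library for the projection math; the choice of pyproj, the UTM zone (11N), and the query format are assumptions for illustration, not the SCEC implementation.

    ```python
    # Minimal sketch of a coordinate-conversion web service in the spirit of
    # the one described above (not the SCEC implementation). Uses stdlib
    # http.server plus pyproj for the actual UTM math; zone 11N is assumed.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs
    from pyproj import Transformer

    # WGS84 lon/lat -> UTM zone 11N (EPSG:32611), as an example target zone.
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32611", always_xy=True)

    class ConvertHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            qs = parse_qs(urlparse(self.path).query)
            try:
                lon = float(qs["lon"][0])
                lat = float(qs["lat"][0])
            except (KeyError, ValueError):
                self.send_error(400, "expected ?lon=<deg>&lat=<deg>")
                return
            easting, northing = to_utm.transform(lon, lat)
            body = f'{{"easting": {easting:.2f}, "northing": {northing:.2f}}}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())

    if __name__ == "__main__":
        # e.g. curl "http://localhost:8000/?lon=-118.29&lat=34.02"
        HTTPServer(("", 8000), ConvertHandler).serve_forever()
    ```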

  13. TimeSet: A computer program that accesses five atomic time services on two continents

    Science.gov (United States)

    Petrakis, P. L.

    1993-01-01

    TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings 0.01 second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.

  14. Running coupling constants of the Luttinger liquid

    International Nuclear Information System (INIS)

    Boose, D.; Jacquot, J.L.; Polonyi, J.

    2005-01-01

    We compute the one-loop expressions of two running coupling constants of the Luttinger model. The obtained expressions have a nontrivial momentum dependence with Landau poles. The reason for the discrepancy between our results and those of other studies, which find that the scaling laws are trivial, is explained

  15. TV time but not computer time is associated with cardiometabolic risk in Dutch young adults.

    Science.gov (United States)

    Altenburg, Teatske M; de Kroon, Marlou L A; Renders, Carry M; Hirasing, Remy; Chinapaw, Mai J M

    2013-01-01

    TV time and total sedentary time have been positively related to biomarkers of cardiometabolic risk in adults. We aim to examine the association of TV time and computer time separately with cardiometabolic biomarkers in young adults. Additionally, the mediating role of waist circumference (WC) is studied. Data of 634 Dutch young adults (18-28 years; 39% male) were used. Cardiometabolic biomarkers included indicators of overweight, blood pressure, blood levels of fasting plasma insulin, cholesterol, glucose, triglycerides and a clustered cardiometabolic risk score. Linear regression analyses were used to assess the cross-sectional association of self-reported TV and computer time with cardiometabolic biomarkers, adjusting for demographic and lifestyle factors. Mediation by WC was checked using the product-of-coefficient method. TV time was significantly associated with triglycerides (B = 0.004; CI = [0.001;0.05]) and insulin (B = 0.10; CI = [0.01;0.20]). Computer time was not significantly associated with any of the cardiometabolic biomarkers. We found no evidence for WC to mediate the association of TV time or computer time with cardiometabolic biomarkers. We found a significantly positive association of TV time with cardiometabolic biomarkers. In addition, we found no evidence for WC as a mediator of this association. Our findings suggest a need to distinguish between TV time and computer time within future guidelines for screen time.

  16. Cardiovascular responses during deep water running versus shallow water running in school children

    Directory of Open Access Journals (Sweden)

    Anerao Urja M, Shinde Nisha K, Khatri SM

    2014-03-01

    Overview: As school-going children, especially adolescents, need a workout routine, it is advisable that the routine be incorporated into the school's class timetable. In India, as a growing number of schools provide swimming as one of their recreational activities, school staff often fail to notice the boredom that is caused by repeating the same activity. Deep as well as shallow water running can be one of the best alternatives to swimming. Hence the present study was conducted to find out the cardiovascular responses in these individuals. Methods: This was a prospective cross-sectional comparative study of 72 healthy school-going male students, grouped into two according to the intervention (deep water running or shallow water running). Cardiovascular parameters such as heart rate (HR), oxygen saturation (SpO2), maximal oxygen consumption (VO2max) and rate of perceived exertion (RPE) were assessed. Results: Significant improvements in cardiovascular parameters were seen in both groups, i.e. with both interventions. Conclusion: Deep water running and shallow water running can be used to improve cardiac function in terms of the various outcome measures used in the study.

  17. Supporting Multiprocessors in the Icecap Safety-Critical Java Run-Time Environment

    DEFF Research Database (Denmark)

    Zhao, Shuai; Wellings, Andy; Korsholm, Stephan Erbs

    The current version of the Safety Critical Java (SCJ) specification defines three compliance levels. Level 0 targets single processor programs while Level 1 and 2 can support multiprocessor platforms. Level 1 programs must be fully partitioned but Level 2 programs can also be more globally scheduled. As of yet, there is no official Reference Implementation for SCJ. However, the icecap project has produced a Safety-Critical Java Run-time Environment based on the Hardware-near Virtual Machine (HVM). This supports SCJ at all compliance levels and provides an implementation of the safety-critical Java (javax.safetycritical) package. This is still work-in-progress and lacks certain key features. Among these is the ability to support multiprocessor platforms. In this paper, we explore two possible options to adding multiprocessor support to this environment: the “green thread” and the “native...

  18. Computing challenges of the CMS experiment

    International Nuclear Information System (INIS)

    Krammer, N.; Liko, D.

    2017-01-01

    The success of the LHC experiments is due to the magnificent performance of the detector systems and the excellent operation of the computing systems. The CMS offline software and computing system is successfully fulfilling the LHC Run 2 requirements. For the increased data rate of future LHC operation, together with high pileup interactions, improvements in the usage of the current computing facilities and new technologies have become necessary. Especially for the challenge of the future HL-LHC, a more flexible and sophisticated computing model is needed. In this presentation, I will discuss the current computing system used in LHC Run 2 and future computing facilities for the HL-LHC runs using flexible computing technologies such as commercial and academic computing clouds. The cloud resources are highly virtualized and can be deployed for a variety of computing tasks, providing the capacity for the increasing needs of large-scale scientific computing.

  19. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  20. Time-of-Flight Cameras in Computer Graphics

    DEFF Research Database (Denmark)

    Kolb, Andreas; Barth, Erhardt; Koch, Reinhard

    2010-01-01

    Computer Graphics, Computer Vision and Human Machine Interaction (HMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real-time geometry...

  1. 29 CFR 4245.8 - Computation of time.

    Science.gov (United States)

    2010-07-01

    29 CFR Labor, Vol. 9 (2010-07-01): Computation of time. Section 4245.8, Regulations Relating to Labor (Continued), PENSION BENEFIT GUARANTY CORPORATION, INSOLVENCY, REORGANIZATION, TERMINATION, AND OTHER RULES APPLICABLE TO MULTIEMPLOYER PLANS, NOTICE OF INSOLVENCY, § 4245.8 Computation of...

  2. Low contrast volume run-off CT angiography with optimized scan time based on double-level test bolus technique – feasibility study

    International Nuclear Information System (INIS)

    Baxa, Jan; Vendiš, Tomáš; Moláček, Jiří; Štěpánková, Lucie; Flohr, Thomas; Schmidt, Bernhard; Korporaal, Johannes G.; Ferda, Jiří

    2014-01-01

    Purpose: To verify the technical feasibility of low contrast volume (40 mL) run-off CT angiography (run-off CTA) with individual scan time optimization based on a double-level test bolus technique. Materials and methods: A prospective study of 92 consecutive patients who underwent run-off CTA performed with 40 mL of contrast medium (injection rate of 6 mL/s) and optimized scan times on a second-generation dual-source CT. Individual optimized scan times were calculated from aortopopliteal transit times obtained with the double-level test bolus technique: a single injection of a 10 mL test bolus and dynamic acquisitions at two levels (abdominal aorta and popliteal arteries). Intraluminal attenuation (HU) was measured at 6 levels (aorta, iliac, femoral and popliteal arteries, middle and distal lower legs) and subjective quality (3-point score) was assessed. Relations between image quality, test bolus parameters and arterial circulation involvement were analyzed. Results: High mean attenuation (HU) values (468; 437; 442; 440; 342; 274) and quality scores were achieved at all monitored levels. In 91 patients (0.99), sufficient diagnostic quality (score 1–2) in the aorta, iliac and femoral arteries was determined. A total of 6 patients (0.07) were not evaluable in the distal lower legs. Only a weak indirect correlation of image quality and test-bolus parameters was found in the iliac, femoral and popliteal levels (r values: −0.263, −0.298 and −0.254). A statistically significant difference in the test-bolus parameters and image quality was found between patients with occlusive and aneurysmal disease. Conclusion: We proved the technical feasibility and sufficient quality of run-off CTA with a low volume of contrast medium and a scan time optimized according to the aortopopliteal transit time calculated from a double-level test bolus.

  3. Barefoot running: biomechanics and implications for running injuries.

    Science.gov (United States)

    Altman, Allison R; Davis, Irene S

    2012-01-01

    Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.

  4. Designing Green Networks and Network Operations Saving Run-the-Engine Costs

    CERN Document Server

    Minoli, Daniel

    2011-01-01

    In recent years the confluence of socio-political trends toward environmental responsibility and the pressing need to reduce Run-the-Engine (RTE) costs has given birth to a nascent discipline of Green IT. A clear and concise introduction to green networks and green network operations, this book examines analytical measures and discusses virtualization, network computing, and web services as approaches for green data centers and networks. It identifies some strategies for green appliances and end devices and examines the methodical steps that can be taken over time to achieve a seamless migration.

  5. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    Science.gov (United States)

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
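
    A compressed sketch of the proposed workflow follows: running Green-Kubo integrals from independent trajectories are averaged, the across-trajectory standard deviation supplies the fit weights, and a saturating double exponential is fit whose plateau is the viscosity estimate. The synthetic curves and the exact functional form below are illustrative assumptions, not the paper's data or parameters.

    ```python
    # Sketch of the time-decomposition fit: average running Green-Kubo
    # integrals from independent trajectories, weight by the per-time
    # standard deviation, and fit a saturating double exponential. The data
    # here are synthetic; in practice each curve comes from one MD trajectory.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    t = np.linspace(0.01, 20.0, 400)        # time, ps
    eta_true = 1.6                          # target plateau value

    # Fake running integrals: saturating curve + noise that grows with time,
    # mimicking the growing variance of long-time GK integrals.
    n_traj = 30
    curves = (eta_true * (1 - 0.7 * np.exp(-t / 1.5) - 0.3 * np.exp(-t / 6.0))
              + rng.normal(0, 0.02, (n_traj, t.size)) * np.sqrt(t))

    mean = curves.mean(axis=0)
    std = curves.std(axis=0, ddof=1)

    def double_exp(t, A, alpha, tau1, tau2):
        # A is the t -> infinity plateau, i.e. the viscosity estimate.
        return A * (1 - alpha * np.exp(-t / tau1) - (1 - alpha) * np.exp(-t / tau2))

    popt, _ = curve_fit(double_exp, t, mean, p0=[1.0, 0.5, 1.0, 5.0],
                        sigma=std, absolute_sigma=True, maxfev=20000)
    print(f"estimated viscosity (plateau): {popt[0]:.3f}")
    ```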

  6. Computation Offloading for Frame-Based Real-Time Tasks under Given Server Response Time Guarantees

    Directory of Open Access Journals (Sweden)

    Anas S. M. Toma

    2014-11-01

    Computation offloading has been adopted to improve the performance of embedded systems by offloading the computation of some tasks, especially computation-intensive tasks, to servers or clouds. This paper explores computation offloading for real-time tasks in embedded systems, provided given response time guarantees from the servers, to decide which tasks should be offloaded to get the results in time. We consider frame-based real-time tasks with the same period and relative deadline. When the execution order of the tasks is given, the problem can be solved in linear time. However, when the execution order is not specified, we prove that the problem is NP-complete. We develop a pseudo-polynomial-time algorithm for deriving feasible schedules, if they exist. An approximation scheme is also developed to trade off the error of the algorithm against its complexity. Our algorithms are extended to minimize the period/relative deadline of the tasks for performance maximization. The algorithms are evaluated with a case study for a surveillance system and synthesized benchmarks.
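
    The hardness result suggests a knapsack-like structure, so a pseudo-polynomial dynamic program is natural. The toy feasibility check below is an illustration, not the paper's algorithm, and assumes a deliberately simplified model: offloaded tasks transmit sequentially over one channel (integer send times s_i), each result returns within the server bound R of its send completing, local tasks (times c_i) run in parallel with the sends, and everything must finish by the frame deadline D.

    ```python
    # Toy pseudo-polynomial feasibility check for frame-based offloading,
    # under simplifying assumptions that are not the paper's exact model (see
    # the lead-in above). Knapsack DP over achievable total send times.
    def feasible(tasks, D, R):
        """tasks: list of (c_i, s_i). Returns an offload set or None."""
        budget = D - R                     # total send time the channel may use
        if budget < 0:
            budget = -1                    # offloading impossible
        # DP: send_time -> (max local work removed, which tasks offloaded).
        best = {0: (0, frozenset())}
        for idx, (c, s) in enumerate(tasks):
            for used, (saved, chosen) in list(best.items()):
                u = used + s
                if u <= budget and (u not in best or best[u][0] < saved + c):
                    best[u] = (saved + c, chosen | {idx})
        total_c = sum(c for c, _ in tasks)
        saved, chosen = max(best.values(), key=lambda v: v[0])
        return chosen if total_c - saved <= D else None

    # Example: three tasks (c_i, s_i), tight deadline D = 9, server bound R = 3.
    print(feasible([(4, 1), (6, 2), (3, 1)], D=9, R=3))
    ```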

  7. Computer utility for interactive instrument control

    International Nuclear Information System (INIS)

    Day, P.

    1975-08-01

    A careful study of the ANL laboratory automation needs in 1967 led to the conclusion that a central computer could support all of the real-time needs of a diverse collection of research instruments. A suitable hardware configuration would require an operating system to provide effective protection, fast real-time response and efficient data transfer. An SDS Sigma 5 satisfied all hardware criteria; however, it was necessary to write an original operating system. Services include program generation, experiment control, real-time analysis, interactive graphics and final analysis. The system is providing real-time support for 21 concurrently running experiments, including an automated neutron diffractometer, a pulsed NMR spectrometer and multi-particle detection systems. It guarantees the protection of each user's interests and dynamically assigns core memory, disk space and 9-track magnetic tape usage. Multiplexor hardware capability allows the transfer of data between a user's device and assigned core area at rates of 100,000 bytes/sec. Real-time histogram generation for a user can proceed at rates of 50,000 points/sec. The facility has been self-running (no computer operator) for five years with a mean time between failures of 10 days and an uptime of 157 hours/week. (auth)

  8. How to run ions in the future?

    International Nuclear Information System (INIS)

    Küchler, D; Manglunki, D; Scrivens, R

    2014-01-01

    In the light of different running scenarios, potential source improvements will be discussed (e.g. one month every year versus two months every other year, and the impact of the different running options [e.g. an extended ion run] on the source). As the oven refills cause most of the down time, the oven design and refilling strategies will be presented. A test stand for off-line developments will be taken into account. The implications of extended runs for the necessary manpower will also be discussed.

  9. Exposure time, running and skill-related performance in international u20 rugby union players during an intensified tournament.

    Directory of Open Access Journals (Sweden)

    Christopher J Carling

    This study investigated exposure time, running and skill-related performance in two international u20 rugby union teams during an intensified tournament: the 2015 Junior World Rugby Championship. Both teams played 5 matches in 19 days. Analyses were conducted using global positioning system (GPS) tracking (Viper 2™, Statsports Technologies Ltd) and event coding (Opta Pro®). Of the 62 players monitored, 36 (57.1%) participated in 4 matches and 23 (36.5%) in all 5 matches, while player availability for selection was 88%. Analyses of team running output (all players completing >60-min play) showed that the total and peak 5-minute high metabolic load distances covered were likely-to-very likely moderately higher in the final match compared to matches 1 and 2 in back and forward players. In individual players with the highest match-play exposure (participation in >75% of total competition playing time and >75-min in each of the final 3 matches), comparisons of performance in matches 4 and 5 versus match 3 (the three most important matches) reported moderate-to-large decreases in total and high metabolic load distance in backs, while similar magnitude reductions occurred in high-speed distance in forwards. In contrast, skill-related performance was unchanged, albeit with trivial and unclear changes, while there were no alterations in either total or high-speed running distance covered at the end of matches. These findings suggest that despite high availability for selection, players were not over-exposed to match-play during an intensified u20 international tournament. They also imply that the teams coped with the running and skill-related demands. Similarly, individual players with the highest exposure to match-play were also able to maintain skill-related performance and end-match running output (despite an overall reduction in the latter). These results support the need for player rotation and monitoring of performance, recovery and intervention strategies during

  10. Exposure time, running and skill-related performance in international u20 rugby union players during an intensified tournament

    Science.gov (United States)

    Carling, Christopher J.; Flanagan, Eamon; O’Doherty, Pearse; Piscione, Julien

    2017-01-01

    Purpose This study investigated exposure time, running and skill-related performance in two international u20 rugby union teams during an intensified tournament: the 2015 Junior World Rugby Championship. Method Both teams played 5 matches in 19 days. Analyses were conducted using global positioning system (GPS) tracking (Viper 2™, Statsports Technologies Ltd) and event coding (Opta Pro®). Results Of the 62 players monitored, 36 (57.1%) participated in 4 matches and 23 (36.5%) in all 5 matches while player availability for selection was 88%. Analyses of team running output (all players completing >60-min play) showed that the total and peak 5-minute high metabolic load distances covered were likely-to-very likely moderately higher in the final match compared to matches 1 and 2 in back and forward players. In individual players with the highest match-play exposure (participation in >75% of total competition playing time and >75-min in each of the final 3 matches), comparisons of performance in matches 4 and 5 versus match 3 (three most important matches) reported moderate-to-large decreases in total and high metabolic load distance in backs while similar magnitude reductions occurred in high-speed distance in forwards. In contrast, skill-related performance was unchanged, albeit with trivial and unclear changes, while there were no alterations in either total or high-speed running distance covered at the end of matches. Conclusions These findings suggest that despite high availability for selection, players were not over-exposed to match-play during an intensified u20 international tournament. They also imply that the teams coped with the running and skill-related demands. Similarly, individual players with the highest exposure to match-play were also able to maintain skill-related performance and end-match running output (despite an overall reduction in the latter). These results support the need for player rotation and monitoring of performance, recovery and

  11. Simulation of nonlinear wave run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2008-01-01

    This paper considers the numerical simulation of nonlinear wave run-up within a highly accurate Boussinesq-type model. Moving wet–dry boundary algorithms based on so-called extrapolating boundary techniques are utilized, and a new variant of this approach is proposed in two horizontal dimensions. As validation, computed results involving the nonlinear run-up of periodic as well as transient waves on a sloping beach are considered in a single horizontal dimension, demonstrating excellent agreement with analytical solutions for both the free surface and horizontal velocity. In two horizontal dimensions, cases involving long wave resonance in a parabolic basin, solitary wave evolution in a triangular channel, and solitary wave run-up on a circular conical island are considered. In each case the computed results compare well against available analytical solutions or experimental measurements. The ability...

  12. Instrument Front-Ends at Fermilab During Run II

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Thomas; Slimmer, David; Voy, Duane; /Fermilab

    2011-07-13

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and WindRiver's VxWorks real-time operating system running in a VME crate processor.

  13. Instrument front-ends at Fermilab during Run II

    International Nuclear Information System (INIS)

    Meyer, T; Slimmer, D; Voy, D

    2011-01-01

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and WindRiver's VxWorks real-time operating system running in a VME crate processor.

  14. Instrument Front-Ends at Fermilab During Run II

    International Nuclear Information System (INIS)

    Meyer, Thomas; Slimmer, David; Voy, Duane

    2011-01-01

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and WindRiver's VxWorks real-time operating system running in a VME crate processor.

  15. Stochastic nonlinear time series forecasting using time-delay reservoir computers: performance and universality.

    Science.gov (United States)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2014-07-01

    Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs, which are constructed out of the sampling of the solution of a time-delay differential equation, and show their good performance in the forecasting of the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type, as well as in the prediction of factual daily market realized volatilities computed with intraday quotes, using as training input daily log-return series of moderate size. We tackle some problems associated with the lack of task-universality of individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs.
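
    A minimal single-node time-delay reservoir can be written in a few lines of NumPy: one nonlinear node with delayed feedback is multiplexed into N virtual nodes by a random input mask, and a ridge-regression readout is trained for one-step-ahead forecasting. The toy series and all parameters below are illustrative; the cited study's VEC-GARCH setup is considerably richer.

    ```python
    # Minimal time-delay reservoir sketch: one nonlinear node with delayed
    # feedback, N virtual nodes created by an input mask, and a ridge readout
    # trained for one-step-ahead forecasting of a toy nonlinear series.
    import numpy as np

    rng = np.random.default_rng(42)
    T, N = 3000, 50                       # time steps, virtual nodes
    eta, gamma, ridge = 0.5, 0.05, 1e-6   # feedback, input gain, regularizer

    # Toy target series: noisy nonlinear AR(1).
    u = np.zeros(T)
    for k in range(1, T):
        u[k] = 0.8 * np.sin(u[k - 1]) + 0.1 * rng.standard_normal()

    mask = rng.choice([-1.0, 1.0], size=N)    # fixed random input mask
    states = np.zeros((T, N))
    x = np.zeros(N)
    for k in range(T):
        for i in range(N):                    # virtual nodes updated in sequence
            fb = x[i]                         # delayed self-feedback, one loop ago
            x[i] = np.tanh(eta * fb + gamma * mask[i] * u[k])
        states[k] = x

    # Ridge-regression readout: predict u[k+1] from the reservoir state at k.
    X, y = states[200:-1], u[201:]            # drop a washout period
    W = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
    pred = states[-1] @ W
    print(f"next-step forecast {pred:+.4f}, mean abs train error "
          f"{np.mean(np.abs(X @ W - y)):.4f}")
    ```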

  16. Spin-based quantum computation in multielectron quantum dots

    OpenAIRE

    Hu, Xuedong; Sarma, S. Das

    2001-01-01

    In a quantum computer the hardware and software are intrinsically connected because the quantum Hamiltonian (or more precisely its time development) is the code that runs the computer. We demonstrate this subtle and crucial relationship by considering the example of an electron-spin-based solid state quantum computer in semiconductor quantum dots. We show that multielectron quantum dots with one valence electron in the outermost shell do not behave simply as an effective single spin system unless ...

  17. Dynamic stability of running: The effects of speed and leg amputations on the maximal Lyapunov exponent

    International Nuclear Information System (INIS)

    Look, Nicole; Arellano, Christopher J.; Grabowski, Alena M.; Kram, Rodger; McDermott, William J.; Bradley, Elizabeth

    2013-01-01

    In this paper, we study dynamic stability during running, focusing on the effects of speed and of the use of a leg prosthesis. We compute and compare the maximal Lyapunov exponents of kinematic time-series data from subjects with and without unilateral transtibial amputations running at a wide range of speeds. We find that the dynamics of the affected leg with the running-specific prosthesis are less stable than the dynamics of the unaffected leg and also less stable than the biological legs of the non-amputee runners. Surprisingly, we find that the center-of-mass dynamics of runners with two intact biological legs are slightly less stable than those of runners with amputations. Our results suggest that while leg asymmetries may be associated with instability, runners may compensate for this effect by increased control of their center-of-mass dynamics.
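
    Maximal Lyapunov exponents of the kind used above are commonly estimated from scalar time series with a Rosenstein-style procedure: delay-embed the series, pair each point with its nearest neighbor outside a temporal exclusion (Theiler) window, and fit the initial slope of the mean log divergence. The sketch below, tested on the logistic map rather than gait data, uses illustrative parameters.

    ```python
    # Compact Rosenstein-style estimate of the maximal Lyapunov exponent from
    # a scalar time series: delay-embed, find each point's nearest neighbor
    # outside a temporal exclusion window, fit the initial divergence slope.
    import numpy as np

    def max_lyapunov(series, dim=5, tau=5, window=50, horizon=10):
        n = len(series) - (dim - 1) * tau
        emb = np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])
        m = n - horizon
        pts = emb[:m]
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        idx = np.arange(m)
        d2[np.abs(idx[:, None] - idx[None, :]) < window] = np.inf  # Theiler window
        nn = d2.argmin(axis=1)
        # Average log separation as the neighbor pairs evolve forward in time.
        div = np.empty(horizon)
        for k in range(horizon):
            sep = np.linalg.norm(emb[idx + k] - emb[nn + k], axis=1)
            div[k] = np.mean(np.log(sep[sep > 0]))
        return np.polyfit(np.arange(horizon), div, 1)[0]  # units: 1 / sample

    # Test on the chaotic logistic map (true lambda = ln 2 per iterate).
    x = np.empty(2000); x[0] = 0.4
    for k in range(1, 2000):
        x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
    print(f"estimated lambda_max: {max_lyapunov(x, dim=2, tau=1, horizon=6):.3f} "
          f"(ln 2 = 0.693)")
    ```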

  18. Design and development of a diversified real time computer for future FBRs

    International Nuclear Information System (INIS)

    Sujith, K.R.; Bhattacharyya, Anindya; Behera, R.P.; Murali, N.

    2014-01-01

    The current safety related computer system of the Prototype Fast Breeder Reactor (PFBR) under construction in Kalpakkam consists of two redundant Versa Module Europa (VME) bus based Real Time Computer systems with a Switch Over Logic Circuit (SOLC). Since both VME systems are identical, the dual redundant system is prone to common cause failure (CCF). The probability of CCF can be reduced by adopting diversity. Design diversity has long been used to protect redundant systems against common-mode failures. The conventional notion of diversity relies on 'independent' generation of 'different' implementations. This paper discusses the design and development of a diversified Real Time Computer which will replace one of the computer systems in the dual redundant architecture. Compact PCI (cPCI) bus systems are widely used in safety critical applications such as avionics, railways and defence, and use diverse electrical signaling and logical specifications; hence cPCI was chosen for the development of the diversified system. Towards the initial development, a CPU card based on an ARM-9 processor, a 16-channel Relay Output (RO) card and a 30-channel Analog Input (AI) card were developed. All the cards mentioned support hot-swap and geographic addressing capability. In order to mitigate the component obsolescence problem, the 32-bit PCI target controller and associated glue logic for the slave I/O cards were indigenously developed using VHDL. U-Boot was selected as the boot loader and ARM Linux 2.6 as the preliminary operating system for the CPU card. Board-specific initialization code for the CPU card was written in ARM assembly language and serial port initialization was written in C language. The boot loader along with the Linux 2.6 kernel and a jffs2 file system was flashed into the CPU card. Test applications written in C language were used to test the various peripherals of the CPU card. Device drivers for the AI and RO cards were developed as Linux kernel modules and an application library was also developed.

  19. Lazy Spilling for a Time-Predictable Stack Cache: Implementation and Analysis

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar; Jordan, Alexander; Brandner, Florian

    2014-01-01

    The growing complexity of modern computer architectures increasingly complicates the prediction of the run-time behavior of software. For real-time systems, where a safe estimation of the program's worst-case execution time is needed, time-predictable computer architectures promise to resolve this problem. ... We show that lazy spilling can be analyzed with little extra effort, which benefits the worst-case spilling behavior that is relevant for a real-time system.

  20. Output improvement of Sg. Piah run-off river hydro-electric station with a new computed river flow-based control system

    International Nuclear Information System (INIS)

    Jidin, Razali; Othman, Bahari

    2013-01-01

    The lower Sg. Piah hydro-electric station is a run-off river hydro scheme with generators capable of producing 55 MW of electricity. It is located 30 km from Sg. Siput, a small town in the state of Perak, Malaysia. The station has two Pelton turbines to harness energy from water that flows through a 7 km tunnel from a small intake dam. A trait of a run-off river hydro station is its small reservoir, which cannot store water for a long duration; the potential energy carried by spillage is therefore wasted if the dam level is not appropriately regulated. To improve the station's annual energy output, a new controller based on the computed river flow has been installed. The controller regulates the dam level with an algorithm based on the river flow, derived indirectly from the intake-dam water level and other plant parameters. The controller has been able to maintain the dam at the optimum water level and regulate the turbines to maximize the total generation output.

  1. Output improvement of Sg. Piah run-off river hydro-electric station with a new computed river flow-based control system

    Science.gov (United States)

    Jidin, Razali; Othman, Bahari

    2013-06-01

    The lower Sg. Piah hydro-electric station is a run-off river hydro scheme with generators capable of producing 55 MW of electricity. It is located 30 km from Sg. Siput, a small town in the state of Perak, Malaysia. The station has two Pelton turbines to harness energy from water that flows through a 7 km tunnel from a small intake dam. A trait of a run-off river hydro station is its small reservoir, which cannot store water for a long duration; the potential energy carried by spillage is therefore wasted if the dam level is not appropriately regulated. To improve the station's annual energy output, a new controller based on the computed river flow has been installed. The controller regulates the dam level with an algorithm based on the river flow, derived indirectly from the intake-dam water level and other plant parameters. The controller has been able to maintain the dam at the optimum water level and regulate the turbines to maximize the total generation output.
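
    The record does not give the controller's algorithm, but the idea of deriving river flow indirectly from the dam level can be illustrated with a reservoir mass balance, Q_river ≈ A(h)·dh/dt + Q_turbine + Q_spill. In the sketch below the constant surface area, sampling interval, and measurements are all invented for illustration.

    ```python
    # Toy illustration of deriving river inflow from the intake-dam level via
    # a mass balance: Q_river ~= A(h) * dh/dt + Q_turbine + Q_spill. Constant
    # surface area and 1-minute samples are assumptions; the actual plant's
    # parameters are not public in this record.
    import numpy as np

    AREA = 25_000.0                 # reservoir surface area, m^2 (assumed)
    DT = 60.0                       # sampling interval, s

    # One hour of (synthetic) measurements.
    level = np.array([102.00, 102.01, 102.03, 102.04, 102.06] * 12)  # m
    q_turbine = np.full(level.size, 28.0)                            # m^3/s
    q_spill = np.zeros(level.size)                                   # m^3/s

    dh_dt = np.gradient(level, DT)                  # m/s
    q_river = AREA * dh_dt + q_turbine + q_spill    # m^3/s

    # Smooth before acting on it: level sensors are noisy, and the dh/dt
    # term amplifies that noise.
    kernel = np.ones(10) / 10
    q_smooth = np.convolve(q_river, kernel, mode="same")
    print(f"estimated river flow: {q_smooth[-5]:.1f} m^3/s")
    ```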

  2. Distance walked and run as improved metrics over time-based energy estimation in epidemiological studies and prevention; evidence from medication use.

    Directory of Open Access Journals (Sweden)

    Paul T Williams

    The guideline physical activity levels are prescribed in terms of time, frequency, and intensity (e.g., 30 minutes of brisk walking, five days a week) or their energy equivalence, and assume that different activities may be combined to meet targeted goals (the exchangeability premise). Habitual runners and walkers may quantify exercise in terms of distance (km/day), and for them, the relationship between activity dose and health benefits may be better assessed in terms of distance rather than time. Analyses were therefore performed to test: 1) whether time-based or distance-based estimates of energy expenditure provide the better metric for relating running and walking to hypertension, high cholesterol, and diabetes medication use (conditions known to be diminished by exercise), and 2) the exchangeability premise. Logistic regression analyses of medication use (dependent variable) vs. metabolic equivalent hours per day (METhr/d) of running, walking and other exercise (independent variables) were performed using cross-sectional data from the National Runners' (17,201 male, 16,173 female) and Walkers' Health Studies (3,434 male, 12,384 female). Estimated METhr/d of running and walking activity were 38% and 31% greater, respectively, when calculated from self-reported time than from distance in men, and 43% and 37% greater in women, respectively. Percent reductions in the odds for hypertension and high cholesterol medication use per METhr/d run or walked were ≥ 2-fold greater when estimated from reported distance (km/wk) than from time (hr/wk). The per METhr/d odds reduction was significantly greater for the distance-based than the time-based estimate for hypertension (runners: P<10^(-5) for males and P=0.003 for females; walkers: P=0.03 for males and P<10^(-4) for females), for high cholesterol medication use in runners (P<10^(-4) for males and P=0.02 for females) and walkers (P=0.01 for males and P=0.08 for females), and for diabetes medication use in male runners (P<10^(-3)). Although causality

  3. Defining epidemics in computer simulation models: How do definitions influence conclusions?

    Directory of Open Access Journals (Sweden)

    Carolyn Orbann

    2017-06-01

    Computer models have proven to be useful tools in studying epidemic disease in human populations. Such models are being used by a broader base of researchers, and it has become more important to ensure that descriptions of model construction and data analyses are clear and communicate important features of model structure. Papers describing computer models of infectious disease often lack a clear description of how the data are aggregated and whether or not non-epidemic runs are excluded from analyses. Given that there is no concrete quantitative definition of what constitutes an epidemic within the public health literature, each modeler must decide on a strategy for identifying epidemics during simulation runs. Here, an SEIR model was used to test how varying the cutoff for considering a run an epidemic changes potential interpretations of simulation outcomes. Varying the cutoff from 0% to 15% of the model population ever infected with the illness generated significant differences in numbers of dead and in timing variables. These results are important for those who use models to form public health policy, in which questions of timing or implementation of interventions might be answered using findings from computer simulation models.
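
    The paper's point is easy to reproduce with a minimal stochastic SEIR (chain-binomial) simulation: conditioning summary statistics on different epidemic cutoffs changes the answers. All parameters below, including the 1% case fatality used to produce a death count, are illustrative assumptions.

    ```python
    # Minimal stochastic SEIR (chain binomial) illustrating how the cutoff
    # used to call a run an "epidemic" shifts the reported statistics.
    import numpy as np

    rng = np.random.default_rng(7)
    N, BETA, SIGMA, GAMMA, CFR = 1000, 0.4, 0.25, 0.2, 0.01

    def run():
        s, e, i = N - 1, 0, 1
        deaths, day_peak, peak, day = 0, 0, 0, 0
        while e + i > 0:
            day += 1
            new_e = rng.binomial(s, 1 - np.exp(-BETA * i / N))
            new_i = rng.binomial(e, 1 - np.exp(-SIGMA))
            new_r = rng.binomial(i, 1 - np.exp(-GAMMA))
            deaths += rng.binomial(new_r, CFR)
            s, e, i = s - new_e, e + new_e - new_i, i + new_i - new_r
            if i > peak:
                peak, day_peak = i, day
        return (N - s) / N, deaths, day_peak   # attack rate, deaths, peak day

    results = np.array([run() for _ in range(2000)])
    for cutoff in (0.0, 0.05, 0.10, 0.15):
        kept = results[results[:, 0] > cutoff]
        print(f"cutoff {cutoff:>4.0%}: {len(kept):4d} runs kept, "
              f"mean deaths {kept[:, 1].mean():6.2f}, "
              f"mean peak day {kept[:, 2].mean():6.1f}")
    ```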

  4. A simplified computing method of pile group to seismic loads using thin layer element

    International Nuclear Information System (INIS)

    Masao, T.; Hama, I.

    1995-01-01

    In the calculation of pile group response, the thin layer method is considered to give the correct solution for isotropic, homogeneous soil material in each layer; on the other hand, the procedure consumes a huge amount of computing time. The dynamic stiffness matrix of the thin layer method is obtained by inverting the flexibility matrix between pile-i and pile-j. This flexibility matrix is full, and its size increases in proportion to the number of piles and thin layers. The greater part of the run time is spent inverting the flexibility matrix against point loading. We propose a method of decreasing the computing run time by reducing the flexibility matrix to banded form. (author)
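
    The run-time argument is general: a banded system solves in O(n·k²) for half-bandwidth k, versus O(n³) for a dense solve. A rough SciPy illustration, with a generic diagonally dominant matrix standing in for the flexibility matrix:

    ```python
    # Dense solve vs. banded solve for the same system: the banded path is
    # O(n k^2) instead of O(n^3). A diagonally dominant random matrix stands
    # in for the banded flexibility/stiffness matrix.
    import numpy as np
    from scipy.linalg import solve, solve_banded

    n, k = 2000, 5                          # size, half-bandwidth
    rng = np.random.default_rng(0)
    A = 10 * np.eye(n)
    for d in range(-k, k + 1):
        A += np.diag(rng.random(n - abs(d)), d)
    b = rng.random(n)

    x_dense = solve(A, b)                   # O(n^3)

    ab = np.zeros((2 * k + 1, n))           # LAPACK band storage
    for d in range(-k, k + 1):
        if d >= 0:
            ab[k - d, d:] = np.diag(A, d)
        else:
            ab[k - d, :n + d] = np.diag(A, d)
    x_band = solve_banded((k, k), ab, b)    # O(n k^2)
    print(np.allclose(x_dense, x_band))     # True
    ```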

  5. Computation of a long-time evolution in a Schroedinger system

    International Nuclear Information System (INIS)

    Girard, R.; Kroeger, H.; Labelle, P.; Bajzer, Z.

    1988-01-01

    We compare different techniques for the computation of a long-time evolution and the S matrix in a Schroedinger system. As an application we consider a two-nucleon system interacting via the Yamaguchi potential. We suggest computation of the time evolution for a very short time using Pade approximants, the long-time evolution being obtained by iterative squaring. Within the technique of strong approximation of Moller wave operators (SAM) we compare our calculation with computation of the time evolution in the eigenrepresentation of the Hamiltonian and with the standard Lippmann-Schwinger solution for the S matrix. We find numerical agreement between these alternative methods for time-evolution computation up to half the number of digits of internal machine precision, and fairly rapid convergence of both techniques towards the Lippmann-Schwinger solution
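
    The short-time-plus-squaring scheme is easy to sketch: build U(dt) from a low-order Padé (Cayley) approximant of exp(-iH dt), which is exactly unitary for Hermitian H, then square it m times to reach t = 2^m dt. The Hamiltonian below is a random Hermitian stand-in, not the Yamaguchi-potential system.

    ```python
    # Short-time evolution from a (1,1) Pade (Cayley) approximant, then
    # iterative squaring: U(2^m dt) = U(dt)^(2^m). H is a random Hermitian
    # stand-in for the physical Hamiltonian.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    n = 64
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2                  # Hermitian

    dt, m = 1e-3, 12                          # very short step, m squarings
    I = np.eye(n)
    U = np.linalg.inv(I + 0.5j * H * dt) @ (I - 0.5j * H * dt)  # unitary
    for _ in range(m):
        U = U @ U                             # double the evolved time

    t = dt * 2**m
    print(np.linalg.norm(U - expm(-1j * H * t)))  # small residual vs expm
    ```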

  6. Cluster Computing For Real Time Seismic Array Analysis.

    Science.gov (United States)

    Martini, M.; Giudicepietro, F.

    A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years, arrays have been widely used in different fields of seismological research. In particular, they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying volcanic microtremor and long-period events, which are critical for getting information on the evolution of volcanic systems. For this reason arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the time-consuming processing techniques have limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by local seismic sources. The cluster is composed of 8 Intel Pentium-III bi-processor PCs working at 550 MHz and has 4 Gigabytes of RAM. It runs under the Linux operating system. The analysis software package is based on the Multiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment, an open-source implementation of the Message Passing Interface (MPI). The software system includes modules devoted to receiving data over the Internet and graphical applications for continuously displaying the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and southeast flanks of the volcano. A real time continuous acquisition system has been simulated by

  7. Determinants of the abilities to jump higher and shorten the contact time in a running 1-legged vertical jump in basketball.

    Science.gov (United States)

    Miura, Ken; Yamamoto, Masayoshi; Tamaki, Hiroyuki; Zushi, Koji

    2010-01-01

    This study was conducted to obtain useful information for developing training techniques for the running 1-legged vertical jump in basketball (lay-up shot jump). The ability to perform the lay-up shot jump and various basic jumps was measured by testing 19 male basketball players. The basic jumps consisted of the 1-legged repeated rebound jump, the 2-legged repeated rebound jump, and the countermovement jump. Jumping height, contact time, and a jumping index (jumping height/contact time) were measured and calculated using a contact mat/computer system that recorded the contact and air times. The jumping index indicates power. No significant correlation existed between the jumping height and contact time of the lay-up shot jump, the 2 components of the lay-up shot jump index. As a result, jumping height and contact time were found to be mutually independent abilities. Contact time in the lay-up shot jump was correlated with contact time in the 1-legged repeated rebound jump and the 2-legged repeated rebound jump at the same significance levels. A significant correlation in jumping height existed between the 1-legged repeated rebound jump and the lay-up shot jump, but not between the lay-up shot jump and either the 2-legged repeated rebound jump or the countermovement jump. The lay-up shot index correlated more strongly with the 1-legged repeated rebound jump index than with the 2-legged repeated rebound jump index. These results suggest that the 1-legged repeated rebound jump is effective in improving both contact time and jumping height in the lay-up shot jump.

  8. Short-run and long-run effects of unemployment on suicides: does welfare regime matter?

    Science.gov (United States)

    Gajewski, Pawel; Zhukovska, Kateryna

    2017-12-01

    Disentangling the immediate effects of an unemployment shock from the long-run relationship has a strong theoretical rationale: different economic and psychological forces are at play at the onset of unemployment and after prolonged unemployment. This study suggests a differing impact of short- and long-run unemployment on suicides in liberal and social-democratic countries. We take a macro-level perspective and simultaneously estimate the short- and long-run relationships between unemployment and suicide, along with the speed of convergence towards the long-run relationship after a shock, in a panel of 10 high-income countries. We also account for unemployment benefit spending, the share of the population aged 15-34, and crisis effects. In the liberal group of countries, only a long-run impact of unemployment on suicides is found to be significant (P = 0.010). In social-democratic countries, suicides are associated with initial changes in unemployment (P = 0.028), but the positive link fades over time and becomes insignificant in the long run. Further, crisis effects are a much stronger determinant of suicides in social-democratic countries. Once the broad welfare regime is controlled for, changes in unemployment-related spending do not matter for preventing suicides. A generous welfare system seems efficient at preventing unemployment-related suicides in the long run, but societies in social-democratic countries might be less psychologically immune to sudden negative changes in their professional lives than people in liberal countries. Accounting for the different short- and long-run effects could thus improve our understanding of the unemployment-suicide link. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  9. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The source term code package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes that assists the calculation of the radioactive materials released from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but since it is written in FORTRAN 77 it can be run on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-850 version of STCP contains 5 codes: MARCH3, TRAPMELT, TCCA, VANESSA and NAVA. The example presented in this report considers a small LOCA in a PWR-type reactor. (M.I.)

  10. Collecting response times using Amazon Mechanical Turk and Adobe Flash.

    Science.gov (United States)

    Simcox, Travis; Fiez, Julie A

    2014-03-01

    Crowdsourcing systems like Amazon's Mechanical Turk (AMT) allow data to be collected from a large sample of people in a short amount of time. This use has garnered considerable interest from behavioral scientists. So far, most experiments conducted on AMT have focused on survey-type instruments because of difficulties inherent in running many experimental paradigms over the Internet. This study investigated the viability of presenting stimuli and collecting response times using Adobe Flash to run ActionScript 3 code in conjunction with AMT. First, the timing properties of Adobe Flash were investigated using a phototransistor and two desktop computers running under several conditions mimicking those that may be present in research using AMT. This experiment revealed some strengths and weaknesses of the timing capabilities of this method. Next, a flanker task and a lexical decision task implemented in Adobe Flash were administered to participants recruited with AMT. The expected effects in these tasks were replicated. Power analyses were conducted to describe the number of participants needed to replicate these effects. A questionnaire was used to investigate previously undescribed computer use habits of 100 participants on AMT. We conclude that a Flash program in conjunction with AMT can be successfully used for running many experimental paradigms that rely on response times, although experimenters must understand the limitations of the method.

  11. Western diet increases wheel running in mice selectively bred for high voluntary wheel running.

    Science.gov (United States)

    Meek, T H; Eisenmann, J C; Garland, T

    2010-06-01

    Mice from a long-term selective breeding experiment for high voluntary wheel running offer a unique model to examine the contributions of genetic and environmental factors in determining the aspects of behavior and metabolism relevant to body-weight regulation and obesity. Starting with generation 16 and continuing through to generation 52, mice from the four replicate high runner (HR) lines have run 2.5-3-fold more revolutions per day as compared with four non-selected control (C) lines, but the nature of this apparent selection limit is not understood. We hypothesized that it might involve the availability of dietary lipids. Wheel running, food consumption (Teklad Rodent Diet (W) 8604, 14% kJ from fat; or Harlan Teklad TD.88137 Western Diet (WD), 42% kJ from fat) and body mass were measured over 1-2-week intervals in 100 males for 2 months starting 3 days after weaning. WD was obesogenic for both HR and C, significantly increasing both body mass and retroperitoneal fat pad mass, the latter even when controlling statistically for wheel-running distance and caloric intake. The HR mice had significantly less fat than C mice, explainable statistically by their greater running distance. On adjusting for body mass, HR mice showed higher caloric intake than C mice, also explainable by their higher running. Accounting for body mass and running, WD initially caused increased caloric intake in both HR and C, but this effect was reversed during the last four weeks of the study. Western diet had little or no effect on wheel running in C mice, but increased revolutions per day by as much as 75% in HR mice, mainly through increased time spent running. The remarkable stimulation of wheel running by WD in HR mice may involve fuel usage during prolonged endurance exercise and/or direct behavioral effects on motivation. Their unique behavioral responses to WD may render HR mice an important model for understanding the control of voluntary activity levels.

  12. Imprecise results: Utilizing partial computations in real-time systems

    Science.gov (United States)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project is described which supports imprecise computations using these techniques. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
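
    A minimal sketch of the milestone idea, assuming a toy iterative task (a series for pi) in place of a real real-time workload: intermediate results are checkpointed periodically, and the last milestone is returned when the deadline expires.

    ```python
    # Milestone-style imprecise computation: checkpoint the running result
    # periodically; on deadline, return the last milestone instead of
    # nothing. The Leibniz series for pi stands in for the real workload.
    import time

    def compute_with_deadline(deadline_s, checkpoint_every=10_000):
        start = time.monotonic()
        milestone = None
        total, k = 0.0, 0
        while True:
            total += (-1) ** k / (2 * k + 1)
            k += 1
            if k % checkpoint_every == 0:
                milestone = 4 * total              # record a milestone
                if time.monotonic() - start > deadline_s:
                    return milestone, k            # imprecise result

    approx_pi, terms = compute_with_deadline(0.05)
    print(f"pi ~ {approx_pi:.6f} after {terms} terms (deadline reached)")
    ```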

  13. Advanced computer-based training

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H D; Martin, H D

    1987-05-01

    The paper presents new techniques of computer-based training for personnel of nuclear power plants. Training on full-scope simulators is further increased by use of dedicated computer-based equipment. An interactive communication system runs on a personal computer linked to a video disc; a part-task simulator runs on 32-bit process computers and shows two versions: as functional trainer or as on-line predictor with an interactive learning system (OPAL), which may be well-tailored to a specific nuclear power plant. The common goal of both developments is the optimization of the cost-benefit ratio for training and equipment.

  14. Advanced computer-based training

    International Nuclear Information System (INIS)

    Fischer, H.D.; Martin, H.D.

    1987-01-01

    The paper presents new techniques of computer-based training for personnel of nuclear power plants. Training on full-scope simulators is further increased by use of dedicated computer-based equipment. An interactive communication system runs on a personal computer linked to a video disc; a part-task simulator runs on 32-bit process computers and shows two versions: as functional trainer or as on-line predictor with an interactive learning system (OPAL), which may be well-tailored to a specific nuclear power plant. The common goal of both developments is the optimization of the cost-benefit ratio for training and equipment. (orig.) [de

  15. Computing Fault-Containment Times of Self-Stabilizing Algorithms Using Lumped Markov Chains

    Directory of Open Access Journals (Sweden)

    Volker Turau

    2018-05-01

    Full Text Available The analysis of self-stabilizing algorithms is often limited to the worst-case stabilization time starting from an arbitrary state, i.e., a state resulting from a sequence of faults. Considering that these algorithms are intended to provide fault tolerance in the long run, this is not the most relevant metric. A common situation is that a running system is in a legitimate state when hit by a single fault, an event with a much higher probability than multiple concurrent faults. Therefore, the worst-case time to recover from a single fault is more relevant than the recovery time from a large number of faults. This paper presents techniques to derive upper bounds for the mean time to recover from a single fault for self-stabilizing algorithms, based on Markov chains in combination with lumping. To illustrate their applicability, the techniques are applied to a new self-stabilizing coloring algorithm.
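
    The quantity being bounded has a compact linear-algebra form. Treating the legitimate states as absorbing and collecting the transition probabilities among the (possibly lumped) illegitimate states in a substochastic matrix Q, the expected number of steps to recover is t = (I - Q)^(-1)·1. The 3-state Q below is illustrative, not taken from the paper's coloring algorithm.

    ```python
    # Expected recovery time from an absorbing Markov chain: Q holds the
    # transition probabilities among the (lumped) illegitimate states; the
    # remaining probability mass in each row flows into the legitimate,
    # absorbing set. Values are illustrative.
    import numpy as np

    Q = np.array([[0.2, 0.5, 0.0],
                  [0.1, 0.3, 0.4],
                  [0.0, 0.2, 0.5]])

    # t = (I - Q)^{-1} 1: expected steps to absorption from each state.
    t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
    print("mean steps to stabilize from each faulty state:", t)
    print("worst-case mean recovery time:", t.max())
    ```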

  16. Nuclear energy as a 'golden bridge'? Constitutional legal problems of the negotiation of the prolongation of the running time against skimming of profits

    International Nuclear Information System (INIS)

    Waldhoff, Christian; Aswege, Hanka von

    2010-01-01

    The coalition agreement of the Christian Democratic Union (CDU), Christian Social Union (CSU) and Free Democratic Party (FDP) of 26 October 2009 characterizes nuclear energy as a bridge technology. The coalition parties declared their intention to prolong the running times of German nuclear power stations until renewable energies can reliably replace them. The conditions for the prolongation of the running times are to be regulated in agreement with the energy supply companies. In this contribution, the authors report on the fiscal and constitutional legal problems of skimming the resulting profits. Constitutional legal problems of earmarking a skimming of profits, as well as of a consensual agreement, are discussed. In the result, no constitutionally reliable fiscal path for skimming the added profits due to prolongation of the running time is evident. The legal earmarking of the levy revenue for the promotion of renewable energies increases the constitutional doubts.

  17. Real-Time Thevenin Impedance Computation

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Jóhannsson, Hjörtur

    2013-01-01

    operating state, and strict time constraints are difficult to adhere to as the complexity of the grid increases. Several suggested approaches for real-time stability assessment require Thevenin impedances to be determined for the observed system conditions. By combining matrix factorization, graph reduction, and parallelization, we develop an algorithm for computing Thevenin impedances an order of magnitude faster than previous approaches. We test the factor-and-solve algorithm with data from several power grids of varying complexity, and we show how the algorithm allows real-time stability assessment of complex power...
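
    The factor-and-solve idea can be sketched in a few lines: factorize the admittance matrix once, then each bus's Thevenin impedance is a diagonal entry of the inverse, obtained by one sparse solve. The matrix below is a random real-valued stand-in; a real grid uses complex admittances, with the paper's graph reduction and parallelization on top.

    ```python
    # Thevenin impedance at bus k as (Y^{-1})_{kk}: factor Y once (the
    # expensive step), then one sparse solve per bus. The random symmetric
    # matrix here is a stand-in for a grid admittance matrix.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 500
    Y = sp.random(n, n, density=0.01, random_state=0, format="csc")
    Y = (Y + Y.T + 10 * sp.identity(n)).tocsc()   # symmetric, well-conditioned

    lu = splu(Y)                                  # factor once
    z_th = np.empty(n)
    for k in range(n):
        e_k = np.zeros(n)
        e_k[k] = 1.0
        z_th[k] = lu.solve(e_k)[k]                # (Y^{-1})_{kk}
    print(z_th[:5])
    ```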

  18. An Innovative Running Wheel-based Mechanism for Improved Rat Training Performance.

    Science.gov (United States)

    Chen, Chi-Chun; Yang, Chin-Lung; Chang, Ching-Ping

    2016-09-19

    This study presents an animal mobility system, equipped with a positioning running wheel (PRW), as a way to quantify the efficacy of exercise activity in reducing the severity of the effects of stroke in rats. The system provides more effective animal exercise training than commercially available systems such as treadmills and motorized running wheels (MRWs). In contrast to an MRW, on which rats can only achieve speeds below 20 m/min, this system permits rats to run at a stable speed of 30 m/min on a more spacious, high-density rubber running track supported by a 15 cm wide acrylic wheel with a diameter of 55 cm. Using a predefined adaptive acceleration curve, the system not only reduces operator error but also trains the rats to run persistently until a specified intensity is reached. To evaluate exercise effectiveness, the real-time position of a rat is detected by four pairs of infrared sensors deployed on the running wheel. Once an adaptive acceleration curve is initiated by a microcontroller, the data obtained by the infrared sensors are automatically recorded and analyzed in a computer. For comparison purposes, 3 weeks of training were conducted on rats using a treadmill, an MRW and a PRW. After surgically inducing middle cerebral artery occlusion (MCAo), modified neurological severity scores (mNSS) and an inclined plane test were used to assess the neurological damage to the rats. The PRW was experimentally validated as the most effective among these animal mobility systems. Furthermore, an exercise effectiveness measure based on rat position analysis showed a high negative correlation between effective exercise and infarct volume, and can be employed to quantify rat training in any type of brain damage reduction experiment.

  19. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on powerful computing resources. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split input files into chunks which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid reduced the payload execution time for mammoth DNA samples from weeks to days.
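
    The split-run-merge pattern itself is generic. A sketch with a toy per-chunk job standing in for PALEOMIX, and a local process pool standing in for PanDA's distributed brokering:

    ```python
    # Generic split-run-merge: slice the input into chunks, run each chunk
    # as an independent job, merge the partial outputs. A toy token count
    # stands in for PALEOMIX; a process pool stands in for PanDA.
    from collections import Counter
    from concurrent.futures import ProcessPoolExecutor

    def process_chunk(lines):
        c = Counter()
        for line in lines:
            c.update(line.split())
        return c

    def split(seq, n_chunks):
        step = max(1, len(seq) // n_chunks)
        return [seq[i:i + step] for i in range(0, len(seq), step)]

    if __name__ == "__main__":
        data = ["acgt acct acgt", "ttga acgt", "acct ttga"] * 1000
        merged = Counter()
        with ProcessPoolExecutor() as pool:
            for partial in pool.map(process_chunk, split(data, 8)):
                merged.update(partial)          # the merge step
        print(merged.most_common(3))
    ```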

  20. INFLUENCE OF FUNCTIONAL ABILITY IN RUNNING 400 AND 800 METERS

    Directory of Open Access Journals (Sweden)

    Abdulla Elezi

    2013-07-01

    Full Text Available The goal of the research was to determine, on the basis of data collected to assess the functional ability of the cardio-respiratory system together with running results, the relations between these sets of variables. Basic statistical indicators of the physiological variables and of the running results were calculated. To determine the relations, regression analysis was used in the manifest space. The criterion variable (running 400 meters) did not demonstrate a statistically significant coefficient of multiple correlation with the predictor variables. The time span of running 400 meters is too short to engage the mechanisms that supply and transform energy for oxidative processes. The criterion variable (running 800 meters) demonstrated a statistically significant coefficient of multiple correlation with the predictor variables; its value was 0.377, tested through an F-test. This is understandable given the time for which the systems responsible for the transfer and transformation of energy are engaged, compared with the time needed for running 400 meters.

  1. Balancing Exploration, Uncertainty Representation and Computational Time in Many-Objective Reservoir Policy Optimization

    Science.gov (United States)

    Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.

    2016-12-01

    As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS

  2. Spying on real-time computers to improve performance

    International Nuclear Information System (INIS)

    Taff, L.M.

    1975-01-01

    The sampled program-counter histogram, an established technique for shortening the execution times of programs, is described for a real-time computer. The use of a real-time clock allows particularly easy implementation. (Auth.)
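
    The technique carries over directly to modern systems: interrupt on a timer, record where execution currently is, and accumulate a histogram. A POSIX-only Python sketch that samples interpreter frames rather than true program-counter values:

    ```python
    # Sampled "program counter" histogram in user space: a profiling timer
    # (SIGPROF, POSIX-only) interrupts execution every millisecond of CPU
    # time, and the handler records the current function and line.
    import collections, math, signal

    hist = collections.Counter()

    def sample(signum, frame):
        hist[f"{frame.f_code.co_name}:{frame.f_lineno}"] += 1

    signal.signal(signal.SIGPROF, sample)
    signal.setitimer(signal.ITIMER_PROF, 0.001, 0.001)

    def hot_loop():
        s = 0.0
        for i in range(1, 2_000_000):
            s += math.sqrt(i)      # most samples should land here
        return s

    hot_loop()
    signal.setitimer(signal.ITIMER_PROF, 0)   # stop sampling
    for where, count in hist.most_common(5):
        print(where, count)
    ```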

  3. COMPARISON OF METHODS FOR SIMULATING TSUNAMI RUN-UP THROUGH COASTAL FORESTS

    Directory of Open Access Journals (Sweden)

    Benazir

    2017-09-01

    Full Text Available The research reviews two numerical methods for modeling the effect of coastal forest on tsunami run-up and proposes an alternative approach. The two existing methods, the Constant Roughness Model (CRM) and the Equivalent Roughness Model (ERM), simulate the effect of the forest through an artificial Manning roughness coefficient. An alternative approach that simulates each tree as a vertical square column is introduced. Simulations were carried out with variations of forest density and layout pattern of the trees. The numerical model was validated using an existing data series of tsunami run-up without forest protection. The study indicated that the alternative method is in good agreement with the ERM method for low forest density. At higher density, and when the trees were planted in a zigzag pattern, the ERM produced significantly higher run-up. For a zigzag pattern at 50% forest density, which represents a watertight wall, both the ERM and CRM methods produced relatively high run-up, which should not happen theoretically; the alternative method, on the other hand, reflected the entire tsunami. In reality, a housing complex can be considered and simulated as a forest with obstacles of various sizes and layouts, where the alternative approach is applicable. The alternative method is more accurate than the existing methods for simulating a coastal forest for tsunami mitigation but consumes considerably more computational time.

  4. A Real-Time Sound Field Rendering Processor

    Directory of Open Access Journals (Sweden)

    Tan Yiyu

    2017-12-01

    Full Text Available Real-time sound field rendering is computationally and memory intensive. Traditional rendering systems based on computer simulations are limited by memory bandwidth and arithmetic units. The computation is time-consuming, and the sample rate of the output sound is low because of the long computation time at each time step. In this work, a processor with a hybrid architecture is proposed to speed up computation and improve the sample rate of the output sound, and an interface is developed for system scalability: many chips can simply be cascaded to enlarge the simulated area. To render a three-minute Beethoven wave sound in a small shoe-box room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the rendering at run time, while a software simulation with OpenMP parallelization takes about 12.70 min on a personal computer (PC) with 32 GB of random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput of the software simulation is about 194 M grids/s, while the prototype machine reaches 51.2 G grids/s even though its clock frequency is much lower than that of the PC. The rendering processor with a processing element (PE) and interfaces consumes about 238,515 gates after fabrication with the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and its power consumption is about 143.8 mW.

  5. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    Naval Underwater Systems Center, Newport, RI. Technical Document TD 5932: A Distributed Computing Network for Real-Time Systems.

  6. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods.

    Science.gov (United States)

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-02-20

    In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into an N-point redundancy map according to the number of adjacent object points having the same 3D value. Based on this redundancy map, N-point principal fringe patterns (PFPs) are newly calculated from the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, the number of object points involved in calculation of the CGH pattern can be dramatically reduced and, as a result, an increase in computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
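
    The run-length step is the familiar one from data compression, applied to adjacent object points with identical values so that one N-point PFP lookup replaces N 1-point lookups. A toy character-based illustration of the encoding itself:

    ```python
    # Run-length encoding of adjacent identical values: the step that lets
    # one N-point fringe lookup replace N 1-point lookups. Characters stand
    # in for 3D object points here.
    def rle(seq):
        runs, i = [], 0
        while i < len(seq):
            j = i
            while j < len(seq) and seq[j] == seq[i]:
                j += 1
            runs.append((seq[i], j - i))    # (value, run length)
            i = j
        return runs

    print(rle("aaaabbbccd"))   # [('a', 4), ('b', 3), ('c', 2), ('d', 1)]
    ```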

  7. A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software

    Science.gov (United States)

    Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.

    2017-10-01

    Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.

  8. Trends in economic growth, poverty and energy in Colombia: long-run and short-run effects

    OpenAIRE

    Cotte Poveda, Alexander; Pardo Martínez, Clara

    2011-01-01

    This research analyses the long-run and short-run relationships among economic growth, poverty and energy in the Colombian case, using time-series methodologies. The results show that increases in gross domestic product and energy supply per capita should lead to a decrease in poverty, which demonstrates that access to modern and adequate energy services helps to decrease poverty and to increase econ...

  9. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    Science.gov (United States)

    Qi, Nathan R.

    2018-01-01

    High capacity and low capacity running rats, HCR and LCR respectively, have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA, as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to the HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that over sustained periods of exercise that LCR can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500

  10. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Science.gov (United States)

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
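
    The shift and crossings rules are straightforward to implement. The sketch below classifies points by side of the median and uses commonly cited thresholds, a longest run above round(log2(n)) + 3 and a crossings count below the 5th percentile of a binomial(n-1, 1/2); these exact cutoffs are an assumption of this sketch, stated in the style of the paper's rules.

    ```python
    # Shift and crossings rules on a run chart: a long one-sided run about
    # the median, or too few median crossings, signals non-random variation.
    import numpy as np
    from scipy.stats import binom

    def run_chart_signals(y):
        side = np.sign(np.asarray(y, float) - np.median(y))
        side = side[side != 0]                    # drop points on the median
        n = len(side)
        change = side[1:] != side[:-1]
        crossings = int(change.sum())
        longest = int(np.diff(np.flatnonzero(np.r_[True, change, True])).max())
        return {
            "longest_run": longest,
            "crossings": crossings,
            "shift_signal": longest > round(np.log2(n)) + 3,
            "crossings_signal": crossings < binom.ppf(0.05, n - 1, 0.5),
        }

    rng = np.random.default_rng(0)
    print(run_chart_signals(rng.normal(0, 1, 24)))              # stable process
    print(run_chart_signals(np.r_[rng.normal(0, 1, 12),
                                  rng.normal(1.5, 1, 12)]))     # shifted process
    ```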

  11. Responding for sucrose and wheel-running reinforcement: effect of pre-running.

    Science.gov (United States)

    Belke, Terry W

    2006-01-10

    Six male albino Wistar rats were placed in running wheels and exposed to a fixed interval 30-s schedule that produced either a drop of 15% sucrose solution or the opportunity to run for 15s as reinforcing consequences for lever pressing. Each reinforcer type was signaled by a different stimulus. To assess the effect of pre-running, animals were allowed to run for 1h prior to a session of responding for sucrose and running. Results showed that, after pre-running, response rates in the later segments of the 30-s schedule decreased in the presence of a wheel-running stimulus and increased in the presence of a sucrose stimulus. Wheel-running rates were not affected. Analysis of mean post-reinforcement pauses (PRP) broken down by transitions between successive reinforcers revealed that pre-running lengthened pausing in the presence of the stimulus signaling wheel running and shortened pauses in the presence of the stimulus signaling sucrose. No effect was observed on local response rates. Changes in pausing in the presence of stimuli signaling the two reinforcers were consistent with a decrease in the reinforcing efficacy of wheel running and an increase in the reinforcing efficacy of sucrose. Pre-running decreased motivation to respond for running, but increased motivation to work for food.

  12. The use of micro-computers in the simulation of ion beam optics

    International Nuclear Information System (INIS)

    Spaedtke, P.; Ivens, D.

    1989-01-01

    With computer simulation codes, specific problems of ion beam optics can be studied, which is useful both in the design of new systems and in the optimization of existing ones. Several such codes have been developed, unfortunately requiring substantial computer resources. Recent advances in mini- and micro-computers have now made it possible to develop simulation codes which can also be run on these small computers. In this paper, some of these codes are presented and their computing time is discussed. (author)

  13. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
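
    The synaptic model being moved into hardware can be sketched in software: each spike increments an input-driven conductance that decays with the synaptic time constant, so charge is injected gradually rather than as an instantaneous jump. Euler integration and all constants below are illustrative:

    ```python
    # Input-driven conductance synapse with a synaptic time constant, in the
    # spirit of the SRM described above. All constants are illustrative.
    import numpy as np

    dt, steps = 1e-4, 2000                  # 0.1 ms steps, 200 ms total
    tau_syn, tau_m = 5e-3, 20e-3            # synaptic / membrane time constants
    e_syn, v_rest, c_m = 0.0, -70e-3, 200e-12
    spikes = {500, 1000, 1200}              # input spike arrival steps

    g, v = 0.0, v_rest
    trace = np.empty(steps)
    for k in range(steps):
        if k in spikes:
            g += 5e-9                       # conductance kick per spike (S)
        g -= dt * g / tau_syn               # exponential decay
        i_syn = g * (e_syn - v)             # input-driven synaptic current
        v += dt * ((v_rest - v) / tau_m + i_syn / c_m)
        trace[k] = v
    print(f"peak depolarization: {(trace.max() - v_rest) * 1e3:.2f} mV")
    ```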

  14. Proposal for grid computing for nuclear applications

    International Nuclear Information System (INIS)

    Faridah Mohamad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim; Zukhaimira Zolkapli

    2013-01-01

    Full-text: The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process. (author)

  15. The psychological benefits of recreational running: a field study.

    Science.gov (United States)

    Szabo, Attila; Abrahám, Júlia

    2013-01-01

    Running yields positive changes in affect, but the external validity of controlled studies has received little attention in the literature. In this inquiry, 50 recreational runners completed the Exercise-Induced Feeling Inventory (Gauvin & Rejeskí, 1993) before and after a bout of self-planned running on an urban running path. Positive changes were seen in all four measures of affect (p run, weekly running time, weekly running distance, and running experience) to the observed changes in affect. The results have revealed that exercise characteristics accounted for only 14-30% of the variance in the recreational runners' affect, in both directions. It is concluded that psychological benefits of recreational running may be linked to placebo (conditioning and/or expectancy) effects.

  16. Physiological demands of running during long distance runs and triathlons.

    Science.gov (United States)

    Hausswirth, C; Lehénaff, D

    2001-01-01

    The aim of this review article is to identify the main metabolic factors which have an influence on the energy cost of running (Cr) during prolonged exercise runs and triathlons. This article proposes a physiological comparison of these 2 exercises and the relationship between running economy and performance. Many terms are used as the equivalent of 'running economy' such as 'oxygen cost', 'metabolic cost', 'energy cost of running', and 'oxygen consumption'. It has been suggested that these expressions may be defined by the rate of oxygen uptake (VO2) at a steady state (i.e. between 60 to 90% of maximal VO2) at a submaximal running speed. Endurance events such as triathlon or marathon running are known to modify biological constants of athletes and should have an influence on their running efficiency. The Cr appears to contribute to the variation found in distance running performance among runners of homogeneous level. This has been shown to be important in sports performance, especially in events like long distance running. In addition, many factors are known or hypothesised to influence Cr such as environmental conditions, participant specificity, and metabolic modifications (e.g. training status, fatigue). The decrease in running economy during a triathlon and/or a marathon could be largely linked to physiological factors such as the enhancement of core temperature and a lack of fluid balance. Moreover, the increase in circulating free fatty acids and glycerol at the end of these long exercise durations bear witness to the decrease in Cr values. The combination of these factors alters the Cr during exercise and hence could modify the athlete's performance in triathlons or a prolonged run.

  17. Parallel computing in genomic research: advances and applications

    Directory of Open Access Journals (Sweden)

    Ocaña K

    2015-11-01

    Full Text Available Kary Ocaña (National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro) and Daniel de Oliveira (Institute of Computing, Fluminense Federal University, Niterói, Brazil). Abstract: Today's genomic experiments have to process so-called "biological big data", now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires the expertise to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. Keywords: high-performance computing, genomic research, cloud computing, grid computing, cluster computing, parallel computing

  18. Computing the Gromov hyperbolicity of a discrete metric space

    KAUST Repository

    Fournier, Hervé

    2015-02-12

    We give exact and approximation algorithms for computing the Gromov hyperbolicity of an n-point discrete metric space. We observe that computing the Gromov hyperbolicity from a fixed base-point reduces to a (max,min) matrix product. Hence, using the (max,min) matrix product algorithm by Duan and Pettie, the fixed base-point hyperbolicity can be determined in O(n^2.69) time. It follows that the Gromov hyperbolicity can be computed in O(n^3.69) time, and a 2-approximation can be found in O(n^2.69) time. We also give a (2 log2 n)-approximation algorithm that runs in O(n^2) time, based on a tree-metric embedding by Gromov. We also show that hyperbolicity at a fixed base-point cannot be computed in O(n^2.05) time, unless there exists a faster algorithm for (max,min) matrix multiplication than currently known.
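
    For orientation, the fixed base-point quantity reduces exactly to the (max,min) product mentioned above: with Gromov products G[y,z] = (y|z)_x, the base-point hyperbolicity is the maximum over y, z of (max_u min(G[y,u], G[u,z]) - G[y,z]). A direct O(n^3) NumPy evaluation, the baseline the paper speeds up:

    ```python
    # Direct O(n^3) evaluation of the fixed base-point hyperbolicity via the
    # (max,min) product of the Gromov-product matrix with itself. The input
    # is any finite metric given by its distance matrix; the example is a
    # tree metric, so the result is 0.
    import numpy as np

    def hyperbolicity_at_base(d, x=0):
        G = 0.5 * (d[x, :, None] + d[None, x, :] - d)   # G[y,z] = (y|z)_x
        delta = 0.0
        for y in range(d.shape[0]):
            # row y of the (max,min) product: max_u min(G[y,u], G[u,z])
            row = np.minimum(G[y, :, None], G).max(axis=0)
            delta = max(delta, float((row - G[y]).max()))
        return delta

    d = np.array([[0, 2, 3, 3],
                  [2, 0, 3, 3],
                  [3, 3, 0, 2],
                  [3, 3, 2, 0]], float)      # a tree metric on 4 points
    print(hyperbolicity_at_base(d))          # 0.0
    ```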

  19. Real time animation of space plasma phenomena

    International Nuclear Information System (INIS)

    Jordan, K.F.; Greenstadt, E.W.

    1987-01-01

    In pursuit of real time animation of computer simulated space plasma phenomena, the code was rewritten for the Massively Parallel Processor (MPP). The program creates a dynamic representation of the global bowshock which is based on actual spacecraft data and designed for three dimensional graphic output. This output consists of time slice sequences which make up the frames of the animation. With the MPP, 16384, 512 or 4 frames can be calculated simultaneously depending upon which characteristic is being computed. The run time was greatly reduced which promotes the rapid sequence of images and makes real time animation a foreseeable goal. The addition of more complex phenomenology in the constructed computer images is now possible and work proceeds to generate these images

  20. Environmentally induced nonstationarity in LIGO science run data

    International Nuclear Information System (INIS)

    Stone, Robert; Mukherjee, Soma

    2009-01-01

    NoiseFloorMon is a data monitoring tool (DMT) implemented at the LIGO sites to monitor instances of non-stationarity in the gravitational-wave data that are correlated with physical environmental monitors. An analysis of the fifth science run is nearly complete, and test runs preceding the sixth science run have also been analyzed. These analyses have identified time intervals in the gravitational-wave channel that indicate non-stationarity due to seismic activity, and these intervals are referred to as data quality flags. In the analyses conducted to date the majority of time segments identified as non-stationary were due to seismic activity at the corner station and the x-arm end station. We present the algorithm and its performance, and discuss the potential for an on-site pipeline that automatically generates data quality flags for use in future data runs.

  1. Effect of foot orthoses on magnitude and timing of rearfoot and tibial motions, ground reaction force and knee moment during running.

    Science.gov (United States)

    Eslami, Mansour; Begon, Mickaël; Hinse, Sébastien; Sadeghi, Heydar; Popov, Peter; Allard, Paul

    2009-11-01

    Changes in the magnitude and timing of rearfoot eversion and tibial internal rotation produced by foot orthoses, and their contributions to vertical ground reaction force and knee joint moments, are not well understood. The objectives of this study were to test whether orthoses modify the magnitude of and time to peak rearfoot eversion, tibial internal rotation, active ground reaction force and knee adduction moment, and to determine whether rearfoot eversion and tibial internal rotation magnitudes are correlated with peak active ground reaction force and knee adduction moment during the first 60% of the stance phase of running. Eleven healthy men ran at 170 steps per minute in shod and shod/orthoses conditions. Video and force-plate data were collected simultaneously to calculate foot joint angular displacement, ground reaction forces and knee adduction moments. Results showed that wearing semi-rigid foot orthoses significantly reduced rearfoot eversion by 40% (4.1 degrees; p=0.001) and peak active ground reaction force by 6% (0.96 N/kg; p=0.008). No significant time differences occurred among the peaks of rearfoot eversion, tibial internal rotation and active ground reaction force in either condition. A positive and significant correlation was observed between peak knee adduction moment and the magnitude of rearfoot eversion during shod (r=0.59; p=0.04) and shod/orthoses running (r=0.65; p=0.02). In conclusion, foot orthoses can reduce rearfoot eversion, and this can be associated with a reduction of the knee adduction moment during the first 60% of the stance phase of running. The finding implies that modifying rearfoot and tibial motions during running may not be related to a reduction of the ground reaction force.

  2. Connectionist agent-based learning in bank-run decision making

    Science.gov (United States)

    Huang, Weihong; Huang, Qiao

    2018-05-01

    It is of the utmost importance for policy makers, bankers, and investors to thoroughly understand the probability of bank runs (PBR), which was often neglected in classical models. Bank runs are not merely due to miscoordination (Diamond and Dybvig, 1983) or deterioration of bank assets (Allen and Gale, 1998) but to various factors. This paper presents simulation results on the nonlinear dynamic probabilities of bank runs based on the global games approach, with the distinct assumption that heterogeneous agents hold highly correlated but unidentical beliefs about the true payoffs. The specific technique used in the simulation is to give agents an integrated cognitive-affective network. It is observed that, even when the economy is good, agents are significantly affected by the cognitive-affective network in reacting to bad news, which might lead to a bank run. Increases in both the late payoff, R, and the early payoff, r, will decrease the effect of the affective process. Increased risk sharing might or might not increase the PBR, and an increase in the late payoff is beneficial for preventing bank runs. This paper is one of the pioneering works that links agent-based computational economics and behavioral economics.

  3. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently, it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems... in the Linux operating system. We are in the process of making this driver part of the mainline Linux kernel.

  4. Instrument front-ends at Fermilab during Run II

    Science.gov (United States)

    Meyer, T.; Slimmer, D.; Voy, D.

    2011-11-01

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and WindRiver's VxWorks real-time operating system running in a VME crate processor. Work supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.

  5. CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme

    Science.gov (United States)

    Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.

    2017-10-01

    LHC Run3 and Run4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met with a realistic budget. To develop innovative techniques we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information Communication and Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programs are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies that are pursued by the LHC communities with the help of industry in closing the technological gap in processing and storage needs expected in Run3 and Run4.

  6. Relativistic Photoionization Computations with the Time Dependent Dirac Equation

    Science.gov (United States)

    2016-10-12

    Naval Research Laboratory, Washington, DC. Memorandum report NRL/MR/6795--16-9698: Relativistic Photoionization Computations with the Time Dependent Dirac Equation, by Daniel F. Gordon and Bahman Hafizi. Subject terms: tunneling photoionization; ionization of inner-shell electrons by laser.

  7. Computation and evaluation of scheduled waiting time for railway networks

    DEFF Research Database (Denmark)

    Landex, Alex

    2010-01-01

    Timetables are affected by scheduled waiting time (SWT), which prolongs travel times for trains and thereby for passengers. SWT occurs when one train hinders another from running at its desired speed. SWT affects both the trains and the passengers in them; passengers may be further affected by longer transfer times to other trains. SWT can be estimated analytically for a given timetable or by simulation of timetables and/or plans of operation. Simulation of SWT has the benefit that it is possible to examine the entire network. This makes it possible to improve the future...

  8. Computing the dilation of edge-augmented graphs in metric spaces

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2010-01-01

    Let G=(V,E) be an undirected graph with n vertices embedded in a metric space. We consider the problem of adding a shortcut edge in G that minimizes the dilation of the resulting graph. The fastest algorithm to date for this problem has O(n^4) running time and uses O(n^2) space. We show how to improve the running time to O(n^3 log n) while maintaining the quadratic space requirement. In fact, our algorithm not only determines the best shortcut but computes the dilation of G ∪ {(u,v)} for every pair of distinct vertices u and v.
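
    As a naive reference for the quantity being optimized: the dilation of an embedded graph is the worst ratio of graph distance to metric distance over all vertex pairs, and the best shortcut is the added edge that minimizes this ratio. A small sketch using networkx (an assumed library choice), showing the dilation before and after one shortcut:

    ```python
    # Naive dilation computation for an embedded graph, and the effect of
    # adding one shortcut edge. Distances are Euclidean between embedded
    # vertex positions.
    import itertools, math
    import networkx as nx

    def dilation(G, pos):
        dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
        return max(dist[u][v] / math.dist(pos[u], pos[v])
                   for u, v in itertools.combinations(G.nodes, 2))

    # A path 0-1-2-3 embedded on the unit square; pair (0, 3) is stretched 3x.
    pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
    G = nx.Graph()
    for u, v in [(0, 1), (1, 2), (2, 3)]:
        G.add_edge(u, v, weight=math.dist(pos[u], pos[v]))
    print(f"dilation: {dilation(G, pos):.3f}")          # 3.000
    G.add_edge(0, 3, weight=math.dist(pos[0], pos[3]))  # add the best shortcut
    print(f"after shortcut: {dilation(G, pos):.3f}")    # 1.414
    ```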

  9. Constraint-led changes in internal variability in running.

    Science.gov (United States)

    Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich

    2012-01-01

    We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of the two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36% to 74%) were observed during tube running, whereas running without tubes after the tube-running block showed no significant differences. The results show that elastic tubes affect variability on a muscular level despite the constant environmental conditions, and they underline the nervous system's adaptability to cope with somewhat unpredictable constraints, since stride duration was unaltered.
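    A sketch of one common way to quantify such stride-to-stride iEMG variability (the paper's exact definition may differ): integrate the rectified EMG over each stride and report the coefficient of variation across strides. The stride boundaries and signal below are synthetic placeholders.

```python
import numpy as np

def iemg_cv(emg, stride_bounds):
    """Coefficient of variation (%) of per-stride integrated rectified EMG.

    emg           -- 1-D surface EMG signal (assumed already band-pass filtered)
    stride_bounds -- list of (start, end) sample indices, one per stride
    """
    iemg = np.array([np.trapz(np.abs(emg[s:e])) for s, e in stride_bounds])
    return 100.0 * iemg.std(ddof=1) / iemg.mean()

# Synthetic check: 20 strides of 500 samples each.
rng = np.random.default_rng(1)
emg = rng.standard_normal(10_000)
print(iemg_cv(emg, [(i * 500, (i + 1) * 500) for i in range(20)]))
```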

  10. The Development of University Computing in Sweden 1965-1985

    Science.gov (United States)

    Dahlstrand, Ingemar

    In 1965-70 the government agency, Statskontoret, set up five university computing centers, as service bureaux financed by grants earmarked for computer use. The centers were well equipped and staffed and caused a surge in computer use. When the yearly flow of grant money stagnated at 25 million Swedish crowns, the centers had to find external income to survive and acquire time-sharing. But the charging system led to the computers not being fully used. The computer scientists lacked equipment for laboratory use. The centers were decentralized and the earmarking abolished. Eventually they got new tasks like running computers owned by the departments, and serving the university administration.

  11. Pre-Exercise Hyperhydration-Induced Bodyweight Gain Does Not Alter Prolonged Treadmill Running Time-Trial Performance in Warm Ambient Conditions

    Directory of Open Access Journals (Sweden)

    Eric D. B. Goulet

    2012-08-01

    This study compared the effect of pre-exercise hyperhydration (PEH) and pre-exercise euhydration (PEE) upon treadmill running time-trial (TT) performance in the heat. Six highly trained runners or triathletes underwent two 18 km TT runs (~28 °C, 25%–30% RH) on a motorized treadmill, in a randomized, crossover fashion, while being euhydrated or after hyperhydration with 26 mL/kg bodyweight (BW) of a 130 mmol/L sodium solution. Subjects then ran four successive 4.5 km blocks alternating between 2.5 km at 1% and 2 km at 6% gradient, while drinking a total of 7 mL/kg BW of a 6% sports drink solution (Gatorade, USA). PEH increased BW by 1.00 ± 0.34 kg (P < 0.01) and, compared with PEE, reduced BW loss from 3.1% ± 0.3% (EUH) to 1.4% ± 0.4% (HYP) (P < 0.01) during exercise. Running TT time did not differ between groups (PEH: 85.6 ± 11.6 min; PEE: 85.3 ± 9.6 min, P = 0.82). Heart rate (5 ± 1 beats/min) and rectal (0.3 ± 0.1 °C) and body (0.2 ± 0.1 °C) temperatures of PEE were higher than those of PEH (P < 0.05). There was no significant difference in abdominal discomfort, perceived exertion or heat stress between groups. Our results suggest that pre-exercise sodium-induced hyperhydration of a magnitude of 1 L does not alter 80–90 min running TT performance under warm conditions in highly trained runners drinking ~500 mL of sports drink during exercise.

  12. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes could run, on several differently configured servers, and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  13. Analysis and Design of Bi-Directional DC-DC Converter in the Extended Run Time DC UPS System Based on Fuel Cell and Supercapacitor

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2009-01-01

    In this paper, an extended run time DC UPS system structure with fuel cell and supercapacitor is investigated. A wide input range bi-directional dc-dc converter is described along with the phase-shift modulation scheme and phase-shift with duty cycle control, in different modes. The deli...

  14. Greedy and metaheuristics for the offline scheduling problem in grid computing

    DEFF Research Database (Denmark)

    Gamst, Mette

    In grid computing, a number of geographically distributed resources connected through a wide area network are utilized as one computational unit. The NP-hard offline scheduling problem in grid computing consists of assigning jobs to resources in advance. In this paper, five greedy heuristics and two metaheuristics... All heuristics solve instances with up to 2000 jobs and 1000 resources, thus the results are useful both with respect to running times and to solution values.
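    To give the flavor of the greedy approach, the sketch below implements a generic list-scheduling rule (an illustrative stand-in, not necessarily one of the paper's five heuristics): jobs are assigned, largest first, to whichever resource would finish them earliest.

```python
def greedy_schedule(job_sizes, resource_speeds):
    """Assign each job to a resource; return the assignment and the makespan."""
    free_at = [0.0] * len(resource_speeds)   # when each resource becomes idle
    assignment = {}
    # Consider the largest jobs first (an LPT-style rule).
    for j in sorted(range(len(job_sizes)), key=lambda j: -job_sizes[j]):
        # Pick the resource on which this job would finish earliest.
        r = min(range(len(resource_speeds)),
                key=lambda r: free_at[r] + job_sizes[j] / resource_speeds[r])
        free_at[r] += job_sizes[j] / resource_speeds[r]
        assignment[j] = r
    return assignment, max(free_at)

print(greedy_schedule([8, 4, 6, 2, 9], [1.0, 2.0]))  # 5 jobs, 2 resources
```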

  15. The Effects of Running Club Membership on Fourth Graders' Achievement of Connecticut State Standard for the Mile Run

    Science.gov (United States)

    Foshay, John D.; Patterson, Melissa

    2010-01-01

    The purpose of this study was to investigate the effects of a running club on the mile run times of fourth grade students. The study was conducted in a suburban elementary school setting in central Connecticut with a student body of 400. The participants for the study included 59 fourth grade students, 30 of whom were boys and 29 of whom were…

  16. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks to mainly solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set a complete, scalable compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.

  17. Pathways to designing and running an operational flood forecasting system: an adventure game!

    Science.gov (United States)

    Arnal, Louise; Pappenberger, Florian; Ramos, Maria-Helena; Cloke, Hannah; Crochemore, Louise; Giuliani, Matteo; Aalbers, Emma

    2017-04-01

    In the design and building of an operational flood forecasting system, a large number of decisions have to be taken. These include technical decisions related to the choice of the meteorological forecasts to be used as input to the hydrological model, the choice of the hydrological model itself (its structure and parameters), the selection of a data assimilation procedure to run in real-time, the use (or not) of a post-processor, and the computing environment to run the models and display the outputs. Additionally, a number of trans-disciplinary decisions are also involved in the process, such as the way the needs of the users will be considered in the modelling setup and how the forecasts (and their quality) will be efficiently communicated to ensure usefulness and build confidence in the forecasting system. We propose to reflect on the numerous, alternative pathways to designing and running an operational flood forecasting system through an adventure game. In this game, the player is the protagonist of an interactive story driven by challenges, exploration and problem-solving. For this presentation, you will have a chance to play this game, acting as the leader of a forecasting team at an operational centre. Your role is to manage the actions of your team and make sequential decisions that impact the design and running of the system in preparation to and during a flood event, and that deal with the consequences of the forecasts issued. Your actions are evaluated by how much they cost you in time, money and credibility. Your aim is to take decisions that will ultimately lead to a good balance between time and money spent, while keeping your credibility high over the whole process. This game was designed to highlight the complexities behind decision-making in an operational forecasting and emergency response context, in terms of the variety of pathways that can be selected as well as the timescale, cost and timing of effective actions.

  18. Computer-controlled neutron time-of-flight spectrometer. Part II

    International Nuclear Information System (INIS)

    Merriman, S.H.

    1979-12-01

    A time-of-flight spectrometer for neutron inelastic scattering research has been interfaced to a PDP-15/30 computer. The computer is used for experimental data acquisition and analysis and for apparatus control. This report was prepared to summarize the functions of the computer and to act as a users' guide to the software system

  19. Computer language evaluation for MFTF SCDS

    International Nuclear Information System (INIS)

    Anderson, R.E.; McGoldrick, P.R.; Wyman, R.H.

    1979-01-01

    The computer languages available for systems and application implementation on the Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) were surveyed and evaluated. Four language processors, CAL (Common Assembly Language), Extended FORTRAN, CORAL 66, and Sequential Pascal (SPASCAL, a subset of Concurrent Pascal [CPASCAL]), are commercially available for the Interdata 7/32 and 8/32 computers that constitute the SCDS. Of these, the Sequential Pascal available from Kansas State University appears best for the job in terms of minimizing implementation time, debugging time, and maintenance time. This improvement in programming productivity is due to the availability of a high-level, block-structured language that includes many compile-time and run-time checks to detect errors. In addition, the advanced data types in the language allow easy description of the program variables. 1 table

  20. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
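    The "basis set expansion - meta-model - Sobol' indices" chain can be sketched compactly. Everything below is an illustrative assumption rather than the paper's implementation: a toy simulator replaces the landslide model, a gradient-boosting surrogate replaces projection pursuit regression, and a simple pick-freeze estimator computes the first-order Sobol' indices on the cheap surrogate.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def simulator(x):                  # stand-in for a long-running landslide model:
    t = np.linspace(0, 1, 200)     # returns a displacement time series
    return x[0] * t + x[1] * np.sin(3 * t) + 0.1 * x[2] * t**2

X = rng.uniform(0, 1, (50, 3))                  # a few tens of long runs
Y = np.array([simulator(x) for x in X])         # (50, 200) output curves

pca = PCA(n_components=2).fit(Y)                # dominant temporal modes
scores = pca.transform(Y)                       # (50, 2) reduced outputs

# One cheap surrogate per retained component.
surrogates = [GradientBoostingRegressor().fit(X, scores[:, k]) for k in range(2)]

def sobol_first_order(f, dim, n=20_000):
    """Pick-freeze estimator of first-order indices, run on the surrogate."""
    A, B = rng.uniform(0, 1, (2, n, dim))
    fA = f(A)
    S = []
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                     # freeze coordinate i
        S.append(np.mean(fA * (f(ABi) - f(B))) / np.var(fA))
    return np.array(S)

for k, m in enumerate(surrogates):
    print(f"mode {k}:", sobol_first_order(m.predict, 3))
```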

  1. 1001 Ways to run AutoDock Vina for virtual screening

    NARCIS (Netherlands)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-01-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides

  2. How to help CERN to run more simulations

    CERN Multimedia

    The LHC@home team

    2016-01-01

    With LHC@home you can actively contribute to the computing capacity of the Laboratory! You may think that CERN's large Data Centre and the Worldwide LHC Computing Grid have enough computing capacity for all the Laboratory's users. However, given the massive amount of data coming from LHC experiments and other sources, additional computing resources are always needed, notably for simulations of physics events, or accelerator and detector upgrades. This is an area where you can help, by installing BOINC and running simulations from LHC@home on your office PC or laptop. These background simulations will not disturb your work, as BOINC can be configured to automatically stop computing when your PC is in use. As mentioned in earlier editions of the Bulletin, contributions from LHC@home volunteers have played a major role in LHC beam simulation studies. The computing capacity they made available corresponds to about half the capacity of the CERN...

  3. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  4. Heterogeneous real-time computing in radio astronomy

    Science.gov (United States)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPUs). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing in the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data are then packaged up and shipped over fast networks to a cluster of general-purpose computers equipped with GPUs, which are used for floating-point intensive computation. Finally, the data are handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.

  5. Ubiquitous computing technology for just-in-time motivation of behavior change.

    Science.gov (United States)

    Intille, Stephen S

    2004-01-01

    This paper describes a vision of health care where "just-in-time" user interfaces are used to transform people from passive to active consumers of health care. Systems that use computational pattern recognition to detect points of decision, behavior, or consequences automatically can present motivational messages to encourage healthy behavior at just the right time. Further, new ubiquitous computing and mobile computing devices permit information to be conveyed to users at just the right place. In combination, computer systems that present messages at the right time and place can be developed to motivate physical activity and healthy eating. Computational sensing technologies can also be used to measure the impact of the motivational technology on behavior.

  6. The SIMRAND 1 computer program: Simulation of research and development projects

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    The SIMRAND I Computer Program (Version 5.0 x 0.3), written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles, is described. The SIMRAND I Computer Program comprises eleven modules: a main routine and ten subroutines. Two additional files are used at compile time; one inserts the system or task equations into the source code, while the other inserts the dimension statements and common blocks. The SIMRAND I Computer Program can be run on most microcomputers or mainframe computers with only minor modifications to the computer code.

  7. Virtualization and cloud computing in dentistry.

    Science.gov (United States)

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.

  8. Improvements of the ALICE high level trigger for LHC Run 2 to facilitate online reconstruction, QA, and calibration

    Energy Technology Data Exchange (ETDEWEB)

    Rohr, David [Frankfurt Institute for Advanced Studies, Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    ALICE is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. Its main goal is the study of matter under extreme pressure and temperature as produced in heavy ion collisions at LHC. The ALICE High Level Trigger (HLT) is an online compute farm of around 200 nodes that performs a real time event reconstruction of the data delivered by the ALICE detectors. The HLT employs a fast FPGA based cluster finder algorithm as well as a GPU based track reconstruction algorithm and it is designed to process the maximum data rate expected from the ALICE detectors in real time. We present new features of the HLT for LHC Run 2 that started in 2015. A new fast standalone track reconstruction algorithm for the Inner Tracking System (ITS) enables the HLT to compute and report to LHC the luminous region of the interactions in real time. We employ a new dynamically reconfigurable histogram component that allows the visualization of characteristics of the online reconstruction using the full set of events measured by the detectors. This improves our monitoring and QA capabilities. During Run 2, we plan to deploy online calibration, starting with the calibration of the TPC (Time Projection Chamber) detector's drift time. First proof of concept tests were successfully performed using data-replay on our development cluster and during the heavy ion period at the end of 2015.

  9. Effects of cognitive stimulation with a self-modeling video on time to exhaustion while running at maximal aerobic velocity: a pilot study.

    Science.gov (United States)

    Hagin, Vincent; Gonzales, Benoît R; Groslambert, Alain

    2015-04-01

    This study assessed whether video self-modeling improves running performance and influences the rate of perceived exertion and heart rate response. Twelve men (M age = 26.8 yr., SD = 6; M body mass index = 22.1 kg·m⁻², SD = 1) performed a time-to-exhaustion running test at 100% maximal aerobic velocity while focusing on a video self-modeling loop to synchronize their stride. Compared to the control condition, there was a significant increase in time to exhaustion. Perceived exertion was lower also, but there was no significant change in mean heart rate. In conclusion, the video self-modeling used as a pacer apparently increased endurance by decreasing perceived exertion without affecting the heart rate.

  10. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    Science.gov (United States)

    Springer, P.

    1993-01-01

    This paper discusses the method by which the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  11. An innovative computer design for modeling forest landscape change in very large spatial extents with fine resolutions

    Science.gov (United States)

    Jian Yang; Hong S. He; Stephen R. Shifley; Frank R. Thompson; Yangjian. Zhang

    2011-01-01

    Although forest landscape models (FLMs) have benefited greatly from ongoing advances of computer technology and software engineering, computing capacity remains a bottleneck in the design and development of FLMs. Computer memory overhead and run time efficiency are primary limiting factors when applying forest landscape models to simulate large landscapes with fine...

  12. Prediction of ROSA-III experiment Run 702

    International Nuclear Information System (INIS)

    Koizumi, Yasuo; Soda, Kunihisa; Kikuchi, Osamu.

    1978-11-01

    The purpose of the ROSA-III experiment with a scaled BWR test facility is to examine primary coolant thermal-hydraulic behavior and performance during a postulated loss-of-coolant accident of a BWR. The results provide information for verification and improvement of reactor safety analysis codes. Run 702 assumes a recirculation line double-ended break at the pump suction with average core power and no ECCS. A prediction of the Run 702 experiment was made with the computer code RELAP-4J. The coolant behavior is determined by the mixture level in the downcomer and by the flow rates and flow directions at the jet pump drive flow nozzle, jet pump suction and discharge; these quantities therefore need to be measured so that predicted results can be compared with experimental ones. The liquid level formation model also needs improvement. (author)

  13. Continuous-Time Symmetric Hopfield Nets are Computationally Universal

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 3 (2003), s. 693-733 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords : continuous-time Hopfield network * Liapunov function * analog computation * computational power * Turing universality Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  14. Decrease in Ground-Run Distance of Small Airplanes by Applying Electrically-Driven Wheels

    Science.gov (United States)

    Kobayashi, Hiroshi; Nishizawa, Akira

    A new takeoff method for small airplanes was proposed. The ground-roll performance of an airplane driven by electrically powered wheels was studied experimentally and computationally. The experiments verified that the ground-run distance was halved by combining the powered wheels with the propeller, without an increase in energy consumption during the ground roll. The computational analysis showed that the ground-run distance of the wheel-driven aircraft was independent of the motor power when the motor capability exceeded the friction between the tires and the ground. Furthermore, the distance was minimized when the angle of attack was set to the value at which the wing generated negative lift.

  15. OPEX: Optimized Eccentricity Computation in Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-11-14

    Real-world graphs have many properties of interest, but often these properties are expensive to compute. We focus on eccentricity, radius and diameter in this work. These properties are useful measures of the global connectivity patterns in a graph. Unfortunately, computing eccentricity for all nodes is O(n^2) for a graph with n nodes. We present OPEX, a novel combination of optimizations which improves the computation time of these properties by orders of magnitude in real-world experiments on graphs of many different sizes. We run OPEX on graphs with up to millions of links. OPEX gives either exact results or bounded approximations, unlike its competitors, which give probabilistic approximations or sacrifice node-level information (eccentricity) to compute graph-level information (diameter).
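    For reference, the exact baseline that OPEX improves on is one full breadth-first search per node. The sketch below (assuming an unweighted, connected graph stored as an adjacency list) derives the eccentricity of every node, then the radius and diameter as their minimum and maximum.

```python
from collections import deque

def eccentricities(adj):
    """Eccentricity of every node via one BFS per node (unweighted, connected graph)."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())      # distance to the farthest node
    return ecc

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path graph
ecc = eccentricities(adj)
print(ecc, "radius:", min(ecc.values()), "diameter:", max(ecc.values()))
```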

  16. Parallelism, fractal geometry and other aspects of computational mathematics

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1991-01-01

    In some fields such as meteorology, theoretical physics, quantum chemistry and hydrodynamics there are problems which involve so much computation that computers with a thousand times the power of a Cray 2 could be fully utilised if they were available. Since it is unlikely that uniprocessors of such power will be available, such large-scale problems could be solved by using systems of computers running in parallel. This approach, of course, requires finding appropriate algorithms for the solution of such problems which can efficiently make use of a large number of computers working in parallel. 11 refs, 10 figs, 1 tab

  17. Just-in-time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    Science.gov (United States)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected, but most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.

  18. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    Science.gov (United States)

    2015-09-13

    Topics covered include space–time (ST) thermo-fluid analysis of a ground vehicle and its tires, ST-SI computational analysis of a vertical-axis wind turbine, multiscale compressible-flow computation with particle tracking, and space–time VMS computation of wind-turbine rotor and tower aerodynamics (contributors include Tezduyar, Spenser McIntyre, Nikolay Kostov, Ryan Kolesar and Casey Habluetzel).

  19. The ACP [Advanced Computer Program] multiprocessor system at Fermilab

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost-effective for many high energy physics problems. The system is based on single-board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other AT&T's 32100; both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high-speed "Branch Bus" to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use and has been tested error-free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  20. Influence of running shoes and cross-trainers on Achilles tendon forces during running compared with military boots.

    Science.gov (United States)

    Sinclair, Jonathan; Taylor, P J; Atkins, S

    2015-06-01

    Military recruits are known to be susceptible to Achilles tendon pathology. The British Army have introduced footwear models, the PT-03 (cross-trainer) and PT1000 (running shoes), in an attempt to reduce the incidence of injuries. The aim of the current investigation was to examine the Achilles tendon forces of the cross-trainer and running shoe in relation to conventional army boots. Ten male participants ran at 4.0 m/s in each footwear condition. Achilles tendon forces were obtained throughout the stance phase of running and compared using repeated-measures ANOVAs. The results showed that the time to peak Achilles tendon force was significantly shorter when running in conventional army boots (0.12 s) in comparison with the cross-trainer (0.13 s) and running shoe (0.13 s). Achilles tendon loading rate was shown to be significantly greater in conventional army boots (38.73 BW/s) in comparison with the cross-trainer (35.14 BW/s) and running shoe (33.57 BW/s). The results of this study suggest that the running shoes and cross-trainer footwear are associated with reductions in Achilles tendon parameters that have been linked to the aetiology of injury, and thus it can be hypothesised that these footwear could be beneficial for military recruits undertaking running exercises.

  1. Constraint-Led Changes in Internal Variability in Running

    OpenAIRE

    Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich

    2012-01-01

    We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36 % ...

  2. Strategic directions of computing at Fermilab

    Science.gov (United States)

    Wolbers, Stephen

    1998-05-01

    Fermilab computing has changed a great deal over the years, driven by the demands of the Fermilab experimental community to record and analyze larger and larger datasets, by the desire to take advantage of advances in computing hardware and software, and by the advances coming from the R&D efforts of the Fermilab Computing Division. The strategic directions of Fermilab Computing continue to be driven by the needs of the experimental program. The current fixed-target run will produce over 100 TBytes of raw data and systems must be in place to allow the timely analysis of the data. The collider run II, beginning in 1999, is projected to produce of order 1 PByte of data per year. There will be a major change in methodology and software language as the experiments move away from FORTRAN and into object-oriented languages. Increased use of automation and the reduction of operator-assisted tape mounts will be required to meet the needs of the large experiments and large data sets. Work will continue on higher-rate data acquisition systems for future experiments and projects. R&D projects will be pursued as necessary to provide software, tools, or systems which cannot be purchased or acquired elsewhere. A closer working relation with other high energy laboratories will be pursued to reduce duplication of effort and to allow effective collaboration on many aspects of HEP computing.

  3. The optimal production-run time for a stock-dependent imperfect production process

    Directory of Open Access Journals (Sweden)

    Jain Divya

    2013-01-01

    This paper develops an inventory model for a hypothesized volume-flexible manufacturing system in which the production rate is stock-dependent and the system produces both perfect and imperfect quality items. The demand rate of perfect quality items is known and constant, whereas the demand rate of imperfect (non-conforming to specifications) quality items is a function of the discount offered in the selling price. In this paper, we determine an optimal production-run time and the optimal discount that should be offered in the selling price to influence the sale of imperfect quality items produced by the manufacturing system. The considered model aims to maximize the net profit obtained through the sales of both perfect and imperfect quality items, subject to certain constraints of the system. The solution procedure suggests the use of the 'Interior Penalty Function Method' to solve the associated constrained maximization problem. Finally, a numerical example demonstrating the applicability of the proposed model has been included.
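    As a generic illustration of the interior penalty idea named in the solution procedure (the profit function, constraint and parameters below are toy placeholders, not the paper's model): maximize the profit plus a logarithmic barrier that keeps the iterate strictly feasible, then shrink the barrier weight toward zero.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profit(T):                  # net profit as a function of run time T
    return 50 * T - 4 * T**2    # toy concave stand-in

def constraint(T):              # feasibility requires g(T) < 0, i.e. T < 5
    return T - 5.0

def interior_penalty_max(mu0=1.0, shrink=0.1, iters=5):
    mu, T = mu0, 1.0
    for _ in range(iters):
        # Maximize profit(T) + mu * log(-g(T)) strictly inside the feasible region.
        obj = lambda T: -(profit(T) + mu * np.log(-constraint(T)))
        res = minimize_scalar(obj, bounds=(1e-6, 5.0 - 1e-6), method="bounded")
        T, mu = res.x, mu * shrink
    return T

print("optimal run time ~", round(interior_penalty_max(), 3))  # -> ~5.0 (constraint binds)
```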

  4. Computing the Dilation of Edge-Augmented Graphs Embedded in Metric Spaces

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G = (V,E) be an undirected graph with n vertices embedded in a metric space. We consider the problem of adding a shortcut edge in G that minimizes the dilation of the resulting graph. The fastest algorithm to date for this problem has O(n^4) running time and uses O(n^2) space. We show how to improve the running time to O(n^3 log n) while maintaining the quadratic space requirement. In fact, our algorithm not only determines the best shortcut but computes the dilation of G ∪ {(u,v)} for every pair of distinct vertices u and v.

  5. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm that runs on multiple interconnected nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications in which a large number of computing nodes are running. We show that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
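    For background, the classical marker-based (Chandy-Lamport style) snapshot on which such protocols build can be simulated in a few lines. This single-threaded sketch with one FIFO inbox per process is an illustrative simplification, not the paper's optimized protocol.

```python
from collections import deque

MARKER = "MARKER"

class Process:
    def __init__(self, pid, state):
        self.pid, self.state = pid, state
        self.inbox = deque()       # FIFO channel (one inbox per process, simplified)
        self.snapshot = None       # recorded local state
        self.channel_log = []      # in-flight messages recorded during the snapshot
        self.recording = False

    def start_snapshot(self, peers):
        self.snapshot = self.state            # record own state first...
        self.recording = True
        for p in peers:                       # ...then flood markers
            p.inbox.append((self.pid, MARKER))

    def receive(self, peers):
        sender, msg = self.inbox.popleft()
        if msg == MARKER:
            if self.snapshot is None:         # first marker: record state, relay markers
                self.snapshot = self.state
                for p in peers:
                    p.inbox.append((self.pid, MARKER))
            self.recording = False            # channel from sender is now closed
        else:
            self.state += msg                 # ordinary application message
            if self.recording:                # arrived after our snapshot, before marker:
                self.channel_log.append((sender, msg))   # part of the channel state

p0, p1 = Process(0, state=10), Process(1, state=20)
p0.start_snapshot([p1])       # p0 records 10 and sends a marker to p1
p0.inbox.append((1, 7))       # a message p1 sent before it saw the marker
p1.receive([p0])              # p1 gets the marker: records 20, relays marker
p0.receive([p1])              # the in-flight 7 is logged as channel state
p0.receive([p1])              # p1's marker closes the channel
print(p0.snapshot, p1.snapshot, p0.channel_log)   # -> 10 20 [(1, 7)]
```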

  6. Effect of injection timing on combustion and performance of a direct injection diesel engine running on Jatropha methyl ester

    Energy Technology Data Exchange (ETDEWEB)

    Jindal, S. [Mechanical Engineering Department, College of Technology & Engineering, Maharana Pratap University of Agriculture and Technology, Udaipur 313001 (India)

    2011-07-01

    The present study aims at evaluating the effect of injection timing on the combustion, performance and emissions of a small diesel engine, commonly used for agricultural purposes, running on pure biodiesel prepared from Jatropha (Jatropha curcas) vegetable oil. The effect of varying the injection timing was evaluated in terms of thermal efficiency, specific fuel consumption, power and mean effective pressure, exhaust temperature, cylinder pressure, rate of pressure rise and heat release rate. It was found that retarding the injection timing by 3 degrees enhances the thermal efficiency by about 8 percent.

  7. Metabolic cost of running is greater on a treadmill with a stiffer running platform.

    Science.gov (United States)

    Smith, James A H; McKerrow, Alexander D; Kohn, Tertius A

    2017-08-01

    Exercise testing on motorised treadmills provides valuable information about running performance and metabolism; however, the impact of treadmill type on these tests has not been investigated. This study compared the energy demand of running on two laboratory treadmills: an HP Cosmos (C) and a Quinton (Q) model, with the latter having a 4.5 times stiffer running platform. Twelve experienced runners ran identical bouts on these treadmills at a range of four submaximal velocities (reported data are for the velocity that approximated 75-81% VO2max). The stiffer treadmill elicited higher oxygen consumption (C: 46.7 ± 3.8; Q: 50.1 ± 4.3 ml·kg⁻¹·min⁻¹), energy expenditure (C: 16.0 ± 2.5; Q: 17.7 ± 2.9 kcal·min⁻¹), carbohydrate oxidation (C: 9.6 ± 3.1; Q: 13.0 ± 3.9 kcal·min⁻¹), heart rate (C: 155 ± 16; Q: 163 ± 16 beats·min⁻¹) and rating of perceived exertion (C: 13.8 ± 1.2; Q: 14.7 ± 1.2), but lower fat oxidation (C: 6.4 ± 2.3; Q: 4.6 ± 2.5 kcal·min⁻¹) (all analysis of variance treadmill comparisons significant), indicating that the metabolic cost of running differs depending on the stiffness of the running platform.

  8. Running as a Key Lifestyle Medicine for Longevity.

    Science.gov (United States)

    Lee, Duck-Chul; Brellenthin, Angelique G; Thompson, Paul D; Sui, Xuemei; Lee, I-Min; Lavie, Carl J

    Running is a popular and convenient leisure-time physical activity (PA) with a significant impact on longevity. In general, runners have a 25%-40% reduced risk of premature mortality and live approximately 3 years longer than non-runners. Recently, specific questions have emerged regarding the extent of the health benefits of running versus other types of PA, and perhaps more critically, whether there are diminishing returns on health and mortality outcomes with higher amounts of running. This review details the findings surrounding the impact of running on various health outcomes and premature mortality, highlights plausible underlying mechanisms linking running with chronic disease prevention and longevity, identifies the estimated additional life expectancy among runners and other active individuals, and discusses whether there is adequate evidence to suggest that longevity benefits are attenuated with higher doses of running.

  9. Virtual network computing: cross-platform remote display and collaboration software.

    Science.gov (United States)

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.

  10. PC graphics generation and management tool for real-time applications

    Science.gov (United States)

    Truong, Long V.

    1992-01-01

    A graphics tool was designed and developed for easy generation and management of personal computer graphics. It also provides methods and 'run-time' software for many common artificial intelligence (AI) or expert system (ES) applications.

  11. Running and osteoarthritis.

    Science.gov (United States)

    Willick, Stuart E; Hansen, Pamela A

    2010-07-01

    The overall health benefits of cardiovascular exercise, such as running, are well established. However, it is also well established that in certain circumstances running can lead to overload injuries of muscle, tendon, and bone. In contrast, it has not been established that running leads to degeneration of articular cartilage, which is the hallmark of osteoarthritis. This article reviews the available literature on the association between running and osteoarthritis, with a focus on clinical epidemiologic studies. The preponderance of clinical reports refutes an association between running and osteoarthritis.

  12. A Novel Earphone Type Sensor for Measuring Mealtime: Consideration of the Method to Distinguish between Running and Meals

    Directory of Open Access Journals (Sweden)

    Kazuhiro Taniguchi

    2017-01-01

    In this study, we describe a technique for estimating meal times using an earphone-type wearable sensor. A small optical sensor composed of a light-emitting diode and phototransistor is inserted into the ear hole of a user and estimates the meal times of the user from the time variations in the amount of light received. This is achieved by emitting light toward the inside of the ear canal and receiving light reflected back from the ear canal. This proposed technique allowed "meals" to be differentiated from having conversations, sneezing, walking, ascending and descending stairs, operating a computer, and using a smartphone. Conventional head-worn devices that measure food intake can vibrate during running, as the body is jolted more violently than during walking; this can result in the misidentification of running as eating by these devices. To solve this problem, we used two of our sensors simultaneously: one in the left ear and one in the right ear. This was based on our finding that measurements from the left and right ear canals have a strong correlation during running but no correlation during eating. This allows running and eating to be distinguished based on correlation coefficients, which can reduce misidentification. Moreover, by using an optical sensor composed of a semiconductor, a small and lightweight device can be created. This measurement technique can also measure body motion associated with running, and the data obtained from the optical sensor inserted into the ear can be used to support a healthy lifestyle regarding both eating and exercise.
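    The left/right correlation test lends itself to a compact sketch. The synthetic signals and the 0.7 threshold below are assumptions for illustration; the paper's actual signal processing will differ.

```python
import numpy as np

def classify(left, right, threshold=0.7):
    """Label an epoch from the correlation between left/right ear-canal signals."""
    r = np.corrcoef(left, right)[0, 1]
    return ("running" if r > threshold else "eating"), r

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 500)
jolt = np.sin(2 * np.pi * 2.8 * t)             # whole-body jolts at ~2.8 strides/s
run_l = jolt + 0.2 * rng.standard_normal(500)  # both ears see the same jolts
run_r = jolt + 0.2 * rng.standard_normal(500)
eat_l = rng.standard_normal(500)               # chewing: canals deform independently
eat_r = rng.standard_normal(500)

print(classify(run_l, run_r))   # high correlation -> "running"
print(classify(eat_l, eat_r))   # low correlation  -> "eating"
```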

  13. Backward running or absence of running from Creutz ratios

    International Nuclear Information System (INIS)

    Giedt, Joel; Weinberg, Evan

    2011-01-01

    We extract the running coupling based on Creutz ratios in SU(2) lattice gauge theory with two Dirac fermions in the adjoint representation. Depending on how the extrapolation to zero fermion mass is performed, either backward running or an absence of running is observed at strong bare coupling. This behavior is consistent with other findings which indicate that this theory has an infrared fixed point.

  14. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Gowardhan, Akshay [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Neuscamman, Stephanie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Donetti, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Walker, Hoyt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Belles, Rich [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Eme, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Homann, Steven [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Simpson, Matthew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Nasstrom, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC)

    2017-05-24

    Aeolus is an efficient three-dimensional computational fluid dynamics code, based on the finite volume method, developed for predicting the transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Average Navier-Stokes (RANS) mode with a run time of several minutes, or in a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities, including a decay chain model and an explosive Radiological Dispersal Device (RDD) source term, are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).

  15. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  16. The gradient flow running coupling with twisted boundary conditions

    International Nuclear Information System (INIS)

    Ramos, Alberto

    2014-09-01

    We study the gradient flow for Yang-Mills theories with twisted boundary conditions. The perturbative behavior of the energy density ⟨E(t)⟩ is used to define a running coupling at a scale given by the linear size of the finite volume box. We compute the non-perturbative running of the pure gauge SU(2) coupling constant and conclude that the technique is well suited for further applications due to the relatively mild cutoff effects of the step scaling function and the high numerical precision that can be achieved in lattice simulations. We also comment on the inclusion of matter fields.

  17. The LHC Computing Grid in the starting blocks

    CERN Multimedia

    Danielle Amy Venton

    2010-01-01

    As the Large Hadron Collider ramps up operations and breaks world records, it is an exciting time for everyone at CERN. To get the computing perspective, the Bulletin this week caught up with Ian Bird, leader of the Worldwide LHC Computing Grid (WLCG). He is confident that everything is ready for the first data.   The metallic globe illustrating the Worldwide LHC Computing GRID (WLCG) in the CERN Computing Centre. The Worldwide LHC Computing Grid (WLCG) collaboration has been in place since 2001 and for the past several years it has continually run the workloads for the experiments as part of their preparations for LHC data taking. So far, the numerous and massive simulations of the full chain of reconstruction and analysis software could only be carried out using Monte Carlo simulated data. Now, for the first time, the system is starting to work with real data and with many simultaneous users accessing them from all around the world. “During the 2009 large-scale computing challenge (...

  18. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.
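    The flavor of such an MPI-based random-walker workload can be mimicked locally. In the sketch below, Python's multiprocessing pool stands in for MPI ranks on grid worker nodes; GaMPI itself and the ARC middleware are not shown.

```python
import numpy as np
from multiprocessing import Pool

def walk(args):
    """Run one independent random walker and return its end position."""
    seed, steps = args
    rng = np.random.default_rng(seed)
    return rng.choice([-1, 1], size=steps).sum()

if __name__ == "__main__":
    with Pool(4) as pool:                          # 4 local "ranks"
        ends = pool.map(walk, [(seed, 10_000) for seed in range(1000)])
    # For an unbiased walk the end-position variance should be close to the
    # number of steps (here ~10,000).
    print("mean end position:", np.mean(ends), "variance:", np.var(ends))
```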

  19. Running from Paris to Beijing: biomechanical and physiological consequences.

    Science.gov (United States)

    Millet, Guillaume Y; Morin, Jean-Benoît; Degache, Francis; Edouard, Pascal; Feasson, Léonard; Verney, Julien; Oullion, Roger

    2009-12-01

    The purpose of this study was to examine the physiological and biomechanical changes occurring in a subject after running 8,500 km in 161 days (i.e. 52.8 km daily). Three weeks before, 3 weeks after (POST) and 5 months after (POST+5) running from Paris to Beijing, energy cost of running (Cr), knee flexor and extensor isokinetic strength and biomechanical parameters (using a treadmill dynamometer) at different velocities were assessed in an experienced ultra-runner. At POST, there was a tendency toward a 'smoother' running pattern, as shown by (a) a higher stride frequency and duty factor, and a reduced aerial time without a change in contact time, (b) a lower maximal vertical force and loading rate at impact and (c) a decrease in both potential and kinetic energy changes at each step. This was associated with a detrimental effect on Cr (+6.2%) and a loss of strength at all angular velocities for both knee flexors and extensors. At POST+5, the subject returned to his original running patterns at low but not at high speeds and maximal strength remained reduced at low angular velocities (i.e. at high levels of force). It is suggested that the running pattern changes observed in the present study were a strategy adopted by the subject to reduce the deleterious effects of long distance running. However, the running pattern changes could partly be linked to the decrease in maximal strength.

  20. Demo: Distributed Real-Time Generative 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, Ammar; Kosta, Sokol; Kyriazis, Nikolaos

    2018-01-01

    This work demonstrates a real-time 3D hand tracking application that runs via computation offloading. The proposed framework enables the application to run on low-end mobile devices such as laptops and tablets, despite the fact that they lack sufficient hardware to perform the required computations locally. The network connection takes the place of a GPGPU accelerator, and sharing resources with a larger workstation becomes the acceleration mechanism. The unique properties of a generative optimizer are examined and constitute a challenging use-case, since the requirement for real...

  1. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    Science.gov (United States)

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  2. Passenger Sharing of the High-Speed Railway from Sensitivity Analysis Caused by Price and Run-time Based on the Multi-Agent System

    Directory of Open Access Journals (Sweden)

    Ma Ning

    2013-09-01

    Full Text Available Purpose: Nowadays, governments around the world are active in constructing high-speed railways. Therefore, it is significant to carry out research on this increasingly prevalent transport mode. Design/methodology/approach: In this paper, we simulate the process of the passenger's travel mode choice by adjusting the ticket fare and the run-time, based on a multi-agent system (MAS). Findings: From the research we draw the conclusion that increasing the run-time appropriately and reducing the ticket fare to some extent are effective ways to enhance the passenger sharing of the high-speed railway. Originality/value: We hope it can provide policy recommendations for the railway sectors in developing long-term plans for high-speed railway in the future.
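    The paper's multi-agent simulation is not described in enough detail to reproduce, but a binary-logit utility is one standard way an agent's mode choice can be made sensitive to fare and run-time. The coefficients and prices below are illustrative assumptions only.

```python
# Illustrative binary-logit mode choice; beta_fare and beta_time are
# assumed sensitivities, not values from the paper.
import math

def hsr_share(fare_hsr, time_hsr, fare_alt, time_alt,
              beta_fare=-0.01, beta_time=-0.02):
    """Probability that a passenger chooses high-speed rail over the alternative."""
    u_hsr = beta_fare * fare_hsr + beta_time * time_hsr
    u_alt = beta_fare * fare_alt + beta_time * time_alt
    return math.exp(u_hsr) / (math.exp(u_hsr) + math.exp(u_alt))

# Reducing the fare (500 -> 400) raises the predicted high-speed rail share:
print(hsr_share(fare_hsr=500, time_hsr=300, fare_alt=300, time_alt=600))
print(hsr_share(fare_hsr=400, time_hsr=300, fare_alt=300, time_alt=600))
```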

  3. Running jobs in the vacuum

    International Nuclear Information System (INIS)

    McNab, A; Stagni, F; Garcia, M Ubeda

    2014-01-01

    We present a model for the operation of computing nodes at a site using Virtual Machines (VMs), in which VMs are created and contextualized for experiments by the site itself. For the experiment, these VMs appear to be produced spontaneously 'in the vacuum', rather than the experiment having to ask the site to create each one. This model takes advantage of the existing pilot job frameworks adopted by many experiments. In the Vacuum model, the contextualization process starts a job agent within the VM and real jobs are fetched from the central task queue as normal. An implementation of the Vacuum scheme, Vac, is presented, in which a VM factory runs on each physical worker node to create and contextualize its set of VMs. With this system, each node's VM factory can decide which experiments' VMs to run, based on site-wide target shares and on a peer-to-peer protocol in which the site's VM factories query each other to discover which VM types they are running. A property of this system is that there is no gatekeeper service, head node, or batch system accepting and then directing jobs to particular worker nodes, avoiding several central points of failure. Finally, we describe tests of the Vac system using jobs from the central LHCb task queue, using the same contextualization procedure for VMs developed by LHCb for clouds.
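    The peer-to-peer protocol itself is not detailed above, but the share-based decision each VM factory makes can be sketched as follows; the policy shown (pick the experiment furthest below its target share) is an illustrative assumption, not necessarily Vac's exact rule.

```python
# Illustrative share-based choice for a VM factory; not Vac's actual policy.
def choose_vm_type(targets, running):
    """Pick the experiment whose share of running VMs is furthest below target.

    targets -- site-wide target shares, e.g. {"lhcb": 0.5, "atlas": 0.5}
    running -- VM counts discovered from peer factories, e.g. {"lhcb": 12, ...}
    """
    total = sum(running.values()) or 1          # avoid division by zero
    deficit = {exp: share - running.get(exp, 0) / total
               for exp, share in targets.items()}
    return max(deficit, key=deficit.get)

print(choose_vm_type({"lhcb": 0.5, "atlas": 0.5}, {"lhcb": 12, "atlas": 20}))
# -> lhcb (it is furthest below its 50% target share)
```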

  4. Experience running a distributed Tier-2 in Spain for the ATLAS experiment

    International Nuclear Information System (INIS)

    March, L; Hoz, S Gonzales de la; Kaci, M; Fassi, F; Fernandez, A; Lamas, A; Salt, J; Sanchez, J; Peso, J del; Fernandez, P; Munoz, L; Pardo, J; Espinal, X; Garitaonandia, H; Mir, M L; Nadal, J; Pacheco, A; Shuskov, S

    2008-01-01

    The main role of the Tier-2s is to provide computing resources for the production of simulated physics events and for distributed data analysis. The Spanish ATLAS Tier-2 is geographically distributed among three HEP institutes: IFAE (Barcelona), IFIC (Valencia) and UAM (Madrid). Currently it has a computing power of 430 kSI2K CPU, a disk storage capacity of 87 TB and a network bandwidth, connecting the three sites and the nearest Tier-1 (PIC), of 1 Gb/s. These resources will be increased over time according to the ATLAS Computing Model, in parallel with those of all ATLAS Tier-2s. Since 2002, it has participated in the different Data Challenge exercises. Currently, it is delivering around 1.5% of the whole ATLAS collaboration's production in the framework of the Computing System Commissioning exercise. Distributed data management is also arising as an important issue in the daily activities of the Tier-2. The distribution across three sites has proven useful due to increased service redundancy, faster solution of problems, and the sharing of computing expertise and know-how. Experience gained in running the distributed Tier-2 in preparation for the LHC start-up will be presented.

  5. 22 CFR 1429.21 - Computation of time for filing papers.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Computation of time for filing papers. 1429.21... MISCELLANEOUS AND GENERAL REQUIREMENTS General Requirements § 1429.21 Computation of time for filing papers. In... subchapter requires the filing of any paper, such document must be received by the Board or the officer or...

  6. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  7. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  8. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
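    The multilevel LTS-Newmark scheme is more involved than can be shown here, but the bookkeeping that motivates it is simple: each element's CFL-limited step is binned into a power-of-two level relative to the global minimum step. The sketch below assumes this common LTS convention; the Courant number and element sizes are illustrative.

```python
# Sketch of LTS level assignment: per-element CFL steps binned into
# power-of-two multiples of the global minimum step. Courant number and
# element sizes are illustrative.
import numpy as np

def lts_levels(h, c, courant=0.5):
    """Return per-element LTS level p (dt_elem ~ 2**p * dt_min) and dt_min."""
    dt = courant * np.asarray(h, dtype=float) / c   # per-element CFL limit
    p = np.floor(np.log2(dt / dt.min())).astype(int)
    return p, dt.min()

h = np.array([1.0, 1.0, 0.01, 0.5, 2.0])    # element sizes with ~100x contrast
levels, dt_min = lts_levels(h, c=1.0)
print(levels)   # [6 6 0 5 7]: the finest element takes 2**6 steps per level-6 step
```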

  9. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends regarding network architecture and the various protocols that govern data transfers and guarantee reliable communication among all nodes. A hierarchical network structure has been proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In such an architecture, all computers at the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of computer network can be considered a local area computer network (LACN) suitable for a nuclear power plant control system, since such a plant has geographically dispersed subsystems. Network expansion would be straightforward: the common channel is simply extended for each added computer (HOST). All the nodes are designed around a microprocessor chip to provide the required intelligence. The node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer. The private section would naturally tend to have some variations in its hardware details to match the requirements of individual host computers.

  10. Introduction to Xgrid: Cluster Computing for Everyone

    OpenAIRE

    Breen, Barbara J.; Lindner, John F.

    2010-01-01

    Xgrid is the first distributed computing architecture built into a desktop operating system. It allows you to run a single job across multiple computers at once. All you need is at least one Macintosh computer running Mac OS X v10.4 or later. (Mac OS X Server is not required.) We provide explicit instructions and example code to get you started, including examples of how to distribute your computing jobs, even if your initial cluster consists of just two old laptops in your basement.

  11. Computer simulations of long-time tails: what's new?

    NARCIS (Netherlands)

    Hoef, van der M.A.; Frenkel, D.

    1995-01-01

    Twenty-five years ago Alder and Wainwright discovered, by simulation, the 'long-time tails' in the velocity autocorrelation function of a single particle in a fluid [1]. Since then, few qualitatively new results on long-time tails have been obtained by computer simulations. However, within the

  12. Energy consumption program: A computer model simulating energy loads in buildings

    Science.gov (United States)

    Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.

    1978-01-01

    The JPL energy consumption computer program, developed as a useful tool in the ongoing building modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of the computations is not sacrificed, however, since the results lie within a ±10 percent margin compared with readings from energy meters. The program is carefully structured to reduce both the user's time and running cost by requesting minimum information from the user and reducing many internal time-consuming computational loops. Many unique features were added to handle two-level electronics control rooms not found in any other program.

  13. Graph run-length matrices for histopathological image segmentation.

    Science.gov (United States)

    Tosun, Akif Burak; Gunduz-Demir, Cigdem

    2011-03-01

    The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
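    For readers unfamiliar with run-length matrices, the following sketch shows the classical gray-level version that the paper generalizes: counting runs of identical labels (here along image rows, whereas the paper counts runs of cytological component labels on a graph). The image and parameters are illustrative.

```python
# Classical gray-level run-length matrix: M[g, r-1] counts horizontal runs of
# gray level g with length r. The paper's features instead count runs of
# cytological component labels along a graph.
import numpy as np
from itertools import groupby

def run_length_matrix(img, n_levels, max_run):
    m = np.zeros((n_levels, max_run), dtype=int)
    for row in img:
        for level, run in groupby(row):
            length = min(len(list(run)), max_run)
            m[level, length - 1] += 1
    return m

img = np.array([[0, 0, 1, 1, 1],
                [2, 2, 2, 2, 0]])
print(run_length_matrix(img, n_levels=3, max_run=5))
# row 0 contributes a run of two 0s and a run of three 1s, and so on.
```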

  14. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
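    The spiking model itself is beyond a short sketch, but the underlying computational task (extracting an interaural time difference from two monaural signals) can be made concrete with the classical cross-correlation baseline below; the sample rate and delay are illustrative assumptions.

```python
# Classical cross-correlation baseline for interaural time difference (ITD);
# not the paper's spiking model. Sample rate and delay are assumptions.
import numpy as np

fs = 44_100                        # sample rate (Hz)
true_itd = 20                      # interaural delay in samples (~0.45 ms)
rng = np.random.default_rng(0)
left = rng.standard_normal(4096)   # signal at the left ear
right = np.roll(left, true_itd)    # right ear hears it true_itd samples later

lags = np.arange(-50, 51)
corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
print("estimated ITD (samples):", lags[int(np.argmax(corr))])   # -> 20
```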

  15. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines, for disaster mitigation purposes. To receive real benefits from these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source predictions based on arriving tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational demand of solving the non-linear shallow water equations for inundation predictions is large, it has become tractable through recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
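    The solver described above is nonlinear, two-dimensional and nested, but its numerical core can be illustrated with a toy one-dimensional linear shallow-water model on a staggered grid; the depth, grid spacing and initial hump below are illustrative assumptions.

```python
# Toy 1D linear shallow-water model on a staggered grid (leap-frog style
# updates); the study's solver is nonlinear, 2D and nested. All numbers here
# are illustrative.
import numpy as np

g, depth = 9.81, 4000.0                  # gravity (m/s^2), ocean depth (m)
dx, nx = 5000.0, 200                     # grid spacing (m), number of cells
dt = 0.5 * dx / np.sqrt(g * depth)       # CFL-limited time step (s)

x = np.arange(nx)
eta = np.exp(-((x - nx / 2) ** 2) / 50.0)  # initial sea-surface hump (m)
u = np.zeros(nx + 1)                       # velocities on staggered interfaces

for _ in range(500):
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx   # momentum equation
    eta -= dt * depth * (u[1:] - u[:-1]) / dx       # continuity equation

print("max surface elevation after 500 steps:", eta.max())
```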

  16. Muscle injury after low-intensity downhill running reduces running economy.

    Science.gov (United States)

    Baumann, Cory W; Green, Michael S; Doyle, J Andrew; Rupp, Jeffrey C; Ingalls, Christopher P; Corona, Benjamin T

    2014-05-01

    Contraction-induced muscle injury may reduce running economy (RE) by altering motor unit recruitment, lowering contraction economy, and disturbing running mechanics, any of which may have a deleterious effect on endurance performance. The purpose of this study was to determine if RE is reduced 2 days after performing injurious, low-intensity exercise in 11 healthy active men (27.5 ± 5.7 years; 50.05 ± 1.67 VO2peak). Running economy was determined at treadmill speeds eliciting 65 and 75% of the individual's peak rate of oxygen uptake (VO2peak) 1 day before and 2 days after injury induction. Lower extremity muscle injury was induced with a 30-minute downhill treadmill run (6 × 5-minute runs, 2 minutes rest, −12% grade, and 12.9 km·h⁻¹) that elicited 55% VO2peak. Maximal quadriceps isometric torque was reduced immediately and 2 days after the downhill run by 18 and 10%, and a moderate degree of muscle soreness was present. Two days after the injury, steady-state VO2 and metabolic work (VO2 L·km⁻¹) were significantly greater (4-6%) during the 65% VO2peak run. Additionally, postinjury VCO2, VE and rating of perceived exertion were greater at 65% but not at 75% VO2peak, whereas whole blood-lactate concentrations did not change pre-injury to postinjury at either intensity. In conclusion, low-intensity downhill running reduces RE at 65% but not 75% VO2peak. The results of this study and other studies indicate that the magnitude to which RE is altered after downhill running depends on the severity of the injury and the intensity of the RE test.

  17. Excessive Progression in Weekly Running Distance and Risk of Running-related Injuries

    DEFF Research Database (Denmark)

    Nielsen, R.O.; Parner, Erik Thorlund; Nohr, Ellen Aagaard

    2014-01-01

    Study Design: An explorative, 1-year prospective cohort study. Objective: To examine whether an association between a sudden change in weekly running distance and running-related injury varies according to injury type. Background: It is widely accepted that a sudden increase in running distance is strongly related to injury in runners, but the scientific knowledge supporting this assumption is limited. Methods: A volunteer sample of 874 healthy novice runners who started a self-structured running regimen were provided a global-positioning-system watch. After each running session during the study period, participants were categorized into 1 of the following exposure groups, based on the progression of their weekly running distance: less than 10% or regression, 10% to 30%, or more than 30%. The primary outcome was running-related injury. Results: A total of 202 runners sustained a running-related injury...

  18. Quantum walk computation

    International Nuclear Information System (INIS)

    Kendon, Viv

    2014-01-01

    Quantum versions of random walks have diverse applications that are motivating experimental implementations as well as theoretical studies. Recent results showing that quantum walks are “universal for quantum computation” relate to algorithms to be run on quantum computers. We consider whether an experimental implementation of a quantum walk could provide useful computation before we have a universal quantum computer.

  19. 5 CFR 831.703 - Computation of annuities for part-time service.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computation of annuities for part-time... part-time service. (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. In this...

  20. Toward a Run-to-Run Adaptive Artificial Pancreas: In Silico Results.

    Science.gov (United States)

    Toffanin, Chiara; Visentin, Roberto; Messori, Mirko; Palma, Federico Di; Magni, Lalo; Cobelli, Claudio

    2018-03-01

    Contemporary and future outpatient long-term artificial pancreas (AP) studies need to cope with the well-known large intra- and interday glucose variability occurring in type 1 diabetic (T1D) subjects. Here, we propose an adaptive model predictive control (MPC) strategy to account for it and test it in silico. A run-to-run (R2R) approach adapts the subcutaneous basal insulin delivery during the night and the carbohydrate-to-insulin ratio (CR) during the day, based on performance indices calculated from subcutaneous continuous glucose sensor data. In particular, R2R aims, first, to reduce the percentage of time in hypoglycemia and, secondarily, to improve the percentage of time in euglycemia and average glucose. In silico simulations are performed by using the University of Virginia/Padova T1D simulator enriched by incorporating three novel features: intra- and interday variability of insulin sensitivity, different distributions of CR at breakfast, lunch, and dinner, and the dawn phenomenon. After about two months of using the R2R approach with a scenario characterized by a random 30% variation of the nominal insulin sensitivity, the time in range and the time in tight range are increased by 11.39% and 44.87%, respectively, and the time spent above 180 mg/dl is reduced by 48.74%. An adaptive MPC algorithm based on R2R shows great potential in silico to capture intra- and interday glucose variability by improving both overnight and postprandial glucose control without increasing hypoglycemia. Making an AP adaptive is key for long-term real-life outpatient studies. These good in silico results are very encouraging and worth testing in vivo.
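    The published R2R tuning is not reproduced above, so the following sketch only illustrates the shape of such a run-to-run update: once per day, the overnight basal rate is adapted from the previous day's CGM-derived time-in-hypoglycemia and time-in-hyperglycemia. The thresholds and gains are made-up assumptions.

```python
# Illustrative run-to-run (R2R) basal adaptation; thresholds and gains are
# made-up assumptions, not the published tuning.
def r2r_basal_update(basal, pct_time_hypo, pct_time_hyper,
                     k_down=0.10, k_up=0.05):
    """Adapt the overnight basal rate once per day from CGM-derived indices."""
    if pct_time_hypo > 1.0:        # any appreciable hypoglycemia: back off first
        return basal * (1.0 - k_down)
    if pct_time_hyper > 30.0:      # persistently high: increase cautiously
        return basal * (1.0 + k_up)
    return basal                   # otherwise leave the profile unchanged

basal = 1.0   # U/h
for hypo, hyper in [(0.0, 45.0), (2.5, 10.0), (0.0, 20.0)]:
    basal = r2r_basal_update(basal, hypo, hyper)
    print(round(basal, 3))         # 1.05, then 0.945, then unchanged
```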

  1. Evaluation of the 1996 predictions of the run-timing of wild migrant spring/summer yearling chinook in the Snake River Basin using Program RealTime

    International Nuclear Information System (INIS)

    Townsend, R.L.; Yasuda, D.; Skalski, J.R.

    1997-03-01

    This report is a post-season analysis of the accuracy of the 1996 predictions from the program RealTime. Observed 1996 migration data collected at Lower Granite Dam were compared to the predictions made by RealTime for the spring outmigration of wild spring/summer chinook. Appendix A displays the graphical reports of the RealTime program that were interactively accessible via the World Wide Web during the 1996 migration season. Final reports are available at address http://www.cqs.washington.edu/crisprt/. The CRISP model incorporated the predictions of the run status to move the timing forecasts further down the Snake River to Little Goose, Lower Monumental and McNary Dams. An analysis of the dams below Lower Granite Dam is available separately

  2. ATLAS Strip Detector: Operational Experience and Run1-> Run2 Transition

    CERN Document Server

    Nagai, Koichi; The ATLAS collaboration

    2014-01-01

    The Large Hadron Collider operated very successfully during Run1 and provided many opportunities for physics studies. Consolidation work is currently under way toward operation at $\sqrt{s}=14~\mathrm{TeV}$ in Run2. The ATLAS experiment achieved excellent performance in Run1 operation, delivering remarkable physics results. The SemiConductor Tracker contributed to the precise measurement of the momentum of charged particles. This paper describes the operational experience of the SemiConductor Tracker in Run1 and the preparation for Run2 operation during LS1.

  3. The Preferred Movement Path Paradigm: Influence of Running Shoes on Joint Movement.

    Science.gov (United States)

    Nigg, Benno M; Vienneau, Jordyn; Smith, Aimée C; Trudeau, Matthieu B; Mohr, Maurice; Nigg, Sandro R

    2017-08-01

    (A) To quantify differences in lower extremity joint kinematics for groups of runners subjected to different running footwear conditions, and (B) to quantify differences in lower extremity joint kinematics on an individual basis for runners subjected to different running footwear conditions. Three-dimensional ankle and knee joint kinematics were collected for 35 heel-toe runners when wearing three different running shoes and when running barefoot. Absolute mean differences in ankle and knee joint kinematics were computed between running shoe conditions. The percentage of individual runners who displayed differences below a 2°, 3°, and 5° threshold were also calculated. The results indicate that the mean kinematics of the ankle and knee joints were similar between running shoe conditions. Aside from ankle dorsiflexion and knee flexion, the percentage of runners maintaining their movement path between running shoes (i.e., less than 3°) was in the order of magnitude of about 80% to 100%. Many runners showed ankle and knee joint kinematics that differed between a conventional running shoe and barefoot by more than 3°, especially for ankle dorsiflexion and knee flexion. Many runners stay in the same movement path (the preferred movement path) when running in various different footwear conditions. The percentage of runners maintaining their preferred movement path depends on the magnitude of the change introduced by the footwear condition.

  4. The $SU(\infty)$ twisted gradient flow running coupling

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori

    2015-01-01

    We measure the running of the $SU(\infty)$ 't Hooft coupling by performing a step scaling analysis of the Twisted Eguchi-Kawai (TEK) model, the SU($N$) gauge theory on a single site lattice with twisted boundary conditions. The computation relies on the conjecture that finite volume effects for SU($N$) gauge theories defined on a 4-dimensional twisted torus are controlled by an effective size parameter $\tilde l = l \sqrt{N}$, with $l$ the torus period. We set the scale for the running coupling in terms of $\tilde l$ and use the gradient flow to define a renormalized 't Hooft coupling $\lambda(\tilde l)$. In the TEK model, this idea allows the determination of the running of the coupling through a step scaling procedure that uses the rank of the group as a size parameter. The continuum renormalized coupling constant is extracted in the zero lattice spacing limit, which in the TEK model corresponds to the large $N$ limit taken at fixed value of $\lambda(\tilde l)$. The coupling constant is thus expected to coinc...

  5. Highly coherent free-running dual-comb chip platform.

    Science.gov (United States)

    Hébert, Nicolas Bourbeau; Lancaster, David G; Michaud-Belleau, Vincent; Chen, George Y; Genest, Jérôme

    2018-04-15

    We characterize the frequency noise performance of a free-running dual-comb source based on an erbium-doped glass chip running two adjacent mode-locked waveguide lasers. This compact laser platform, contained in only a 1.2 L volume, rejects common-mode environmental noise by 20 dB thanks to the proximity of the two laser cavities. Furthermore, it displays a remarkably low mutual frequency noise floor around 10 Hz²/Hz, which is enabled by its large-mode-area waveguides and low Kerr nonlinearity. As a result, it reaches a free-running mutual coherence time of 1 s, since mode-resolved dual-comb spectra are generated even on this time scale. This design greatly simplifies dual-comb interferometers by enabling mode-resolved measurements without any phase lock.

  6. Using Integration and Autonomy to Teach an Elementary Running Unit

    Science.gov (United States)

    Sluder, J. Brandon; Howard-Shaughnessy, Candice

    2015-01-01

    Cardiovascular fitness is an important aspect of overall fitness, health, and wellness, and running can be an excellent lifetime physical activity. One of the most simple and effective means of exercise, running raises heart rate in a short amount of time and can be done with little to no cost for equipment. There are many benefits to running,…

  7. Real-Time MENTAT programming language and architecture

    Science.gov (United States)

    Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.

    1989-01-01

    Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users; we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  9. The Influence of Running on Foot Posture and In-Shoe Plantar Pressures.

    Science.gov (United States)

    Bravo-Aguilar, María; Gijón-Noguerón, Gabriel; Luque-Suarez, Alejandro; Abian-Vicen, Javier

    2016-03-01

    Running can be considered a high-impact practice, and most people practicing continuous running experience lower-limb injuries. The aim of this study was to determine the influence of 45 min of running on foot posture and plantar pressures. The sample comprised 116 healthy adults (92 men and 24 women) with no foot-related injuries. The mean ± SD age of the participants was 28.31 ± 6.01 years; body mass index, 23.45 ± 1.96; and training time, 11.02 ± 4.22 h/wk. Outcome measures were collected before and after 45 min of running at an average speed of 12 km/h, and included the Foot Posture Index (FPI) and a baropodometric analysis. The results show that foot posture can be modified after 45 min of running: the mean ± SD FPI decreased from 6.15 ± 2.61 to 4.86 ± 2.65. Peak plantar pressures in the forefoot decreased after running. The pressure-time integral decreased during the heel strike phase in the internal edge of the foot. In addition, a decrease was found in the pressure-time integral during the heel-off phase in the internal and rearfoot edges. The findings suggest that after 45 min of running, a pronated foot tends to change into a more neutral position, and decreased plantar pressures were found after the run.

  10. Multiyear interactive computer almanac, 1800-2050

    CERN Document Server

    United States. Naval Observatory

    2005-01-01

    The Multiyear Interactive Computer Almanac (MICA Version 2.2.2) is a software system created by the U.S. Naval Observatory's Astronomical Applications Department that runs on modern Windows and Macintosh computers, designed especially for astronomers, surveyors, meteorologists, navigators and others who regularly need accurate information on the positions, motions, and phenomena of celestial objects. MICA produces high-precision astronomical data in tabular form, tailored for the times and locations specified by the user. Unlike traditional almanacs, MICA computes these data in real time, eliminating the need for table look-ups and additional hand calculations. MICA tables can be saved as standard text files, enabling their use in other applications. Several important new features have been added to this edition of MICA, including: extended date coverage from 1800 to 2050; a redesigned user interface; a graphical sky map; a phenomena calculator (eclipses, transits, equinoxes, solstices, conjunctions, oppo...

  11. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs

    KAUST Repository

    Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; Liu Crouch, Feifei; Jacob, Robert L.; Moyer, Elisabeth J.

    2014-01-01

    ...functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures...
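    The record above is truncated, but the emulation idea it describes can be sketched: summarize each CO2 trajectory by a few features, fit a regression on a handful of training runs, and predict new scenarios. The features, trajectories and outputs below are illustrative assumptions, not the paper's statistical model.

```python
# Illustrative emulator: regress a climate quantity on features of the past
# CO2 trajectory using a few training runs. Features, trajectories and the
# GCM outputs are all made-up assumptions.
import numpy as np

def features(co2_path):
    """Summary features of a CO2 trajectory up to the current year."""
    return np.array([co2_path[-1],                 # current concentration
                     co2_path[-1] - co2_path[0],   # total increase so far
                     np.trapz(co2_path)])          # integrated exposure

years = np.arange(100)
trajectories = [280 * np.exp(r * years / 100) for r in (0.2, 0.5, 0.8)]
X = np.array([features(c) for c in trajectories])
y = np.array([1.0, 2.1, 3.4])                      # made-up GCM anomalies (K)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # fit the statistical model
new_path = 280 * np.exp(0.65 * years / 100)        # a scenario not simulated
print("emulated anomaly (K):", features(new_path) @ coef)
```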

  12. Performance comparison between Java and JNI for optimal implementation of computational micro-kernels

    OpenAIRE

    Halli , Nassim; Charles , Henri-Pierre; Méhaut , Jean-François

    2015-01-01

    International audience; General purpose CPUs used in high performance computing (HPC) support a vector instruction set and an out-of-order engine dedicated to increase the instruction level parallelism. Hence, related optimizations are currently critical to improve the performance of applications requiring numerical computation. Moreover, the use of a Java run-time environment such as the HotSpot Java Virtual Machine (JVM) in high performance computing is a promising alternative. It benefits ...

  13. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    Science.gov (United States)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  14. Running on a lower-body positive pressure treadmill

    DEFF Research Database (Denmark)

    Raffalt, Peter C; Hovgaard-Hansen, Line; Jensen, Bente Rona

    2013-01-01

    This study investigated maximal oxygen consumption (VO2max) and time to exhaustion while running on a lower-body positive pressure treadmill (LBPPT) at normal body weight (BW), as well as how BW support affects respiratory responses, ground reaction forces, and stride characteristics.

  15. Parallel computing in genomic research: advances and applications.

    Science.gov (United States)

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic literature review surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  16. Dedicated OO expertise applied to Run II software projects

    International Nuclear Information System (INIS)

    Amidei, D.

    2000-01-01

    The change in software language and methodology by CDF and D0 to object-oriented from procedural Fortran is significant. Both experiments requested dedicated expertise that could be applied to software design, coding, advice and review. The Fermilab Run II offline computing outside review panel agreed strongly with the request and recommended that the Fermilab Computing Division hire dedicated OO expertise for the CDF/D0/Computing Division joint project effort. This was done and the two experts have been an invaluable addition to the CDF and D0 upgrade software projects and to the Computing Division in general. These experts have encouraged common approaches and increased the overall quality of the upgrade software. Advice on OO techniques and specific advice on C++ coding has been used. Recently a set of software reviews has been accomplished. This has been a very successful instance of a targeted application of computing expertise, and constitutes a very interesting study of how to move toward modern computing methodologies in HEP

  17. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100; both include the corresponding floating point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing 'nodes' sit are connected via a high speed 'Branch Bus' to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  18. WinSCP for Windows File Transfers | High-Performance Computing | NREL

    Science.gov (United States)

    WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.

  19. Running and Osteoarthritis: Does Recreational or Competitive Running Increase the Risk?

    Science.gov (United States)

    2017-06-01

    Exercise, like running, is good for overall health and, specifically, our hearts, lungs, muscles, bones, and brains. However, some people are concerned about the impact of running on long-term joint health. Does running lead to higher rates of arthritis in knees and hips? While many researchers find that running protects bone health, others are concerned that this exercise poses a high risk for age-related changes to hips and knees. A study published in the June 2017 issue of JOSPT suggests that the difference in these outcomes depends on the frequency and intensity of running. J Orthop Sports Phys Ther 2017;47(6):391. doi:10.2519/jospt.2017.0505.

  20. The effect of footwear on running performance and running economy in distance runners.

    Science.gov (United States)

    Fuller, Joel T; Bellenger, Clint R; Thewlis, Dominic; Tsiros, Margarita D; Buckley, Jonathan D

    2015-03-01

    The effect of footwear on running economy has been investigated in numerous studies. However, no systematic review and meta-analysis has synthesised the available literature, and the effect of footwear on running performance is not known. The aim of this systematic review and meta-analysis was to investigate the effect of footwear on running performance and running economy in distance runners, by reviewing controlled trials that compare different footwear conditions or compare footwear with barefoot. The Web of Science, Scopus, MEDLINE, CENTRAL (Cochrane Central Register of Controlled Trials), EMBASE, AMED (Allied and Complementary Medicine), CINAHL and SPORTDiscus databases were searched from inception up until April 2014. Included articles reported on controlled trials that examined the effects of footwear or footwear characteristics (including shoe mass, cushioning, motion control, longitudinal bending stiffness, midsole viscoelasticity, drop height and comfort) on running performance or running economy and were published in a peer-reviewed journal. Of the 1,044 records retrieved, 19 studies were included in the systematic review and 14 studies were included in the meta-analysis. No studies were identified that reported effects on running performance. Individual studies reported significant but trivial beneficial effects on running economy for comfortable and stiff-soled shoes [standardised mean difference (SMD)], a beneficial effect on running economy for cushioned shoes (SMD = 0.37), a beneficial effect on running economy for training in minimalist shoes (SMD = 0.79), and beneficial effects on running economy for light shoes and barefoot running compared with heavy shoes. Certain models of footwear and footwear characteristics can improve running economy. Future research in footwear performance should include measures of running performance.
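    For context, a standardised mean difference (SMD) like those quoted above is typically computed as Cohen's d, the difference in group means divided by the pooled standard deviation; the sketch below uses hypothetical oxygen-cost numbers.

```python
# Cohen's d form of the standardised mean difference (SMD); numbers are
# hypothetical oxygen costs of running (ml/kg/km) in heavy vs. light shoes.
import math

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """Difference in group means over the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

print(round(smd(205.0, 8.0, 12, 201.0, 7.5, 12), 2))   # -> 0.52
```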

  1. Short- and long-run adjustments in U.S. petroleum consumption

    International Nuclear Information System (INIS)

    Huntington, Hillard G.

    2010-01-01

    Long-run adjustments in petroleum consumption are not only larger than short-run adjustments. They may also be motivated by entirely different price events. This analysis shows that new price peaks have both short-run and long-run consumption responses, a result that is starkly different than price changes that track previous price paths. It also establishes significant trend effects where gasoline and residual fuel oil consumption decline over time. The analysis explores these adjustments by establishing long-run cointegrating relationships for different petroleum product groupings. An important implication is that price increases above historical levels may be providing substantially greater incentives for significant long-run demand adjustments than would be the case otherwise. (author)

  2. Short- and long-run adjustments in U.S. petroleum consumption

    Energy Technology Data Exchange (ETDEWEB)

    Huntington, Hillard G. [Executive Director Energy Modeling Forum 450 Terman Center 380 Panama Mall Stanford University Stanford, CA 94305-4026 (United States)

    2010-01-15

    Long-run adjustments in petroleum consumption are not only larger than short-run adjustments. They may also be motivated by entirely different price events. This analysis shows that new price peaks have both short-run and long-run consumption responses, a result that is starkly different than price changes that track previous price paths. It also establishes significant trend effects where gasoline and residual fuel oil consumption decline over time. The analysis explores these adjustments by establishing long-run cointegrating relationships for different petroleum product groupings. An important implication is that price increases above historical levels may be providing substantially greater incentives for significant long-run demand adjustments than would be the case otherwise. (author)
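    The cointegration machinery referred to in both records can be illustrated with an Engle-Granger style test on synthetic series; statsmodels provides one standard implementation, though this is not necessarily the author's exact procedure, and the data below are simulated.

```python
# Engle-Granger cointegration test on synthetic price/consumption series;
# one standard tool for long-run relationships, not necessarily the author's
# exact procedure.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(42)
log_price = np.cumsum(rng.standard_normal(200))              # random walk
consumption = -0.5 * log_price + 0.3 * rng.standard_normal(200)

t_stat, p_value, _ = coint(consumption, log_price)
print(f"Engle-Granger p-value: {p_value:.3f}")   # small value => cointegrated
```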

  3. Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.

    Science.gov (United States)

    Viker, Tomas; Richardson, Matt X

    2013-01-01

    Research with cyclists suggests a decreased load on the lower limbs when the shoe cleat is placed more posteriorly, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30 min simulated draft-legal triathlon cycling leg, followed by a maximal 5 km run, on two occasions: once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breath frequency, heart rate, cadence and power output were measured during cycling, while heart rate, contact time, 200 m lap time and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5 km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as was contact time during km 1 and 5, and heart rate and running speed for km 5 for the two cleat positions. Running speed during km 1 differed by 2.1% ± 1.8%; overall, the results do not support beneficial effects of an aft cleat position on subsequent running in a short distance triathlon.

  4. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, testing and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centers, for their huge computing demands. Computing and storage are already provided on the exponentially expanding cluster for several other running physics experiments. (orig.)

  5. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  6. The Run-2 ATLAS Trigger System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222798; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...

  7. Effect of Compression Garments on Physiological Responses After Uphill Running.

    Science.gov (United States)

    Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc

    2018-03-01

    Limited practical recommendations related to wearing compression garments can be drawn from the literature for athletes at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running, with different pressures and distributions of applied compression. In a randomized, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation, at an intensity of 75% VO2max, wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all the trials, compression garments were worn during the 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments, with the largest time difference being between the medium grade and the high reverse grade compression garments. A positive trend toward increased peak torque of plantar flexion (60°·s⁻¹, 120°·s⁻¹) was found in the medium grade compression garments, with a difference between 24 and 48 hours post run. The largest pain tolerance shift in the gastrocnemius muscle occurred with the medium grade compression garments 24 hours post run, the shift being +11.37% for the lateral head and +6.63% for the medial head. In conclusion, a beneficial trend toward improved running performance and decreased muscle soreness within 24 hours post exercise was apparent in the medium grade compression garments.

  8. Effect of Compression Garments on Physiological Responses After Uphill Running

    Directory of Open Access Journals (Sweden)

    Struhár Ivan

    2018-03-01

    Full Text Available Limited practical recommendations related to wearing compression garments can be drawn from the literature for athletes at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running, with different pressures and distributions of applied compression. In a randomized, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation, at an intensity of 75% VO2max, wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all the trials, compression garments were worn during the 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments, with the largest time difference being between the medium grade and the high reverse grade compression garments. A positive trend toward increased peak torque of plantar flexion (60°·s⁻¹, 120°·s⁻¹) was found in the medium grade compression garments, with a difference between 24 and 48 hours post run. The largest pain tolerance shift in the gastrocnemius muscle occurred with the medium grade compression garments 24 hours post run, the shift being +11.37% for the lateral head and +6.63% for the medial head. In conclusion, a beneficial trend toward improved running performance and decreased muscle soreness within 24 hours post exercise was apparent in the medium grade compression garments.

  9. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed to predict the three-dimensional arrangement of the atoms of a protein from its sequence. However, the computational complexity of this problem makes it mandatory to search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and current trends in protein folding simulations from both perspectives, hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.

  10. Dual-Priced Modal Transition Systems with Time Durations

    DEFF Research Database (Denmark)

    Beneš, Nikola; Kretínsky, Jan; Larsen, Kim Guldstrand

    2012-01-01

    Modal transition systems are a well-established specification formalism for high-level modelling of component-based software systems. We present a novel extension of the formalism called modal transition systems with durations, where time durations are modelled as controllable or uncontrollable intervals. We further equip the model with two kinds of quantitative aspects: each action has its own running cost per time unit, and actions may require several hardware components of different costs. We ask the question: given a fixed budget for the hardware components, what is the implementation with the cheapest long-run average reward? We give an algorithm for computing such optimal implementations via a reduction to a new extension of mean payoff games with time durations and analyse the complexity of the algorithm.
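
    As a toy illustration of the two cost dimensions involved (not the authors' game-theoretic algorithm), the Python sketch below enumerates candidate duration choices for a small cyclic implementation, checks an invented hardware budget, and picks the schedule with the cheapest long-run average running cost; all action names, durations and costs are assumptions made up for the example.

        from itertools import product

        # action -> (running cost per time unit, required hardware components)
        ACTIONS = {
            "sense":   (2.0, {"adc"}),
            "compute": (5.0, {"cpu"}),
            "send":    (3.0, {"radio"}),
        }
        HW_COST = {"adc": 10, "cpu": 40, "radio": 25}

        # candidate durations per action: controllable intervals yield a
        # choice, uncontrollable ones a single fixed value (all invented)
        DURATIONS = {
            "sense":   [1, 2],
            "compute": [3],
            "send":    [1, 4],
        }

        def long_run_average(durations):
            # average running cost per time unit over one cycle
            total_cost = sum(ACTIONS[a][0] * d for a, d in durations.items())
            return total_cost / sum(durations.values())

        def cheapest_implementation(budget):
            needed = set().union(*(req for _, req in ACTIONS.values()))
            if sum(HW_COST[c] for c in needed) > budget:
                return None  # the required hardware alone busts the budget
            best = None
            for choice in product(*DURATIONS.values()):
                durations = dict(zip(DURATIONS, choice))
                avg = long_run_average(durations)
                if best is None or avg < best[0]:
                    best = (avg, durations)
            return best

        print(cheapest_implementation(budget=80))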

  11. Strategic directions of computing at Fermilab

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    Fermilab computing has changed a great deal over the years, driven by the demands of the Fermilab experimental community to record and analyze larger and larger datasets, by the desire to take advantage of advances in computing hardware and software, and by the advances coming from the R&D efforts of the Fermilab Computing Division. The strategic directions of Fermilab Computing continue to be driven by the needs of the experimental program. The current fixed-target run will produce over 100 TBytes of raw data and systems must be in place to allow the timely analysis of the data. The collider run II, beginning in 1999, is projected to produce of order 1 PByte of data per year. There will be a major change in methodology and software language as the experiments move away from FORTRAN and into object-oriented languages. Increased use of automation and the reduction of operator-assisted tape mounts will be required to meet the needs of the large experiments and large data sets. Work will continue on higher-rate data acquisition systems for future experiments and projects. R&D projects will be pursued as necessary to provide software, tools, or systems which cannot be purchased or acquired elsewhere. A closer working relation with other high energy laboratories will be pursued to reduce duplication of effort and to allow effective collaboration on many aspects of HEP computing.

  12. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Full Text Available Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D model trees because the tree model itself has a very complicated structure and many sophisticated factors need to be considered during the simulation. Though there is some work on simulating 3D trees and their motion, little of it is used in computer games due to the high demand for real-time performance. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees in wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the high requirement for real time in computer games.
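
    A minimal sketch of the curve representation the paper builds on, under the assumption that a Ball B-Spline Curve can be treated as an ordinary B-spline whose control points carry a centre (x, y, z) plus a radius, so that evaluating it yields a skeleton curve with thickness suitable for a branch. The control data below are invented; scipy's BSpline is used for evaluation.

        import numpy as np
        from scipy.interpolate import BSpline

        degree = 3
        # control "balls": x, y, z, radius -- invented data for one branch
        ctrl = np.array([
            [0.0, 0.0, 0.0, 0.30],
            [0.1, 0.0, 1.0, 0.25],
            [0.3, 0.1, 2.0, 0.18],
            [0.2, 0.2, 3.0, 0.10],
            [0.0, 0.3, 4.0, 0.05],
        ])
        n = len(ctrl)
        # clamped uniform knot vector: len(knots) = n + degree + 1
        knots = np.concatenate([np.zeros(degree),
                                np.linspace(0.0, 1.0, n - degree + 1),
                                np.ones(degree)])
        bbsc = BSpline(knots, ctrl, degree)

        u = np.linspace(0.0, 1.0, 50)
        samples = bbsc(u)                   # shape (50, 4)
        centres, radii = samples[:, :3], samples[:, 3]
        print(centres[0], radii[0])         # branch root and its thickness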

  13. BOINC service for volunteer cloud computing

    International Nuclear Information System (INIS)

    Høimyr, N; Blomer, J; Buncic, P; Giovannozzi, M; Gonzalez, A; Harutyunyan, A; Jones, P L; Karneyeu, A; Marquina, M A; Mcintosh, E; Segal, B; Skands, P; Grey, F; Lombraña González, D; Zacharov, I

    2012-01-01

    For the past couple of years, a team at CERN and partners from the Citizen Cyberscience Centre (CCC) have been working on a project that enables general physics simulation programs to run in a virtual machine on volunteer PCs around the world. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework. Based on CERNVM and the job management framework Co-Pilot, this project was made available for public beta-testing in August 2011 with Monte Carlo simulations of LHC physics under the name “LHC at home 2.0” and the BOINC project: “Test4Theory”. At the same time, CERN's efforts on Volunteer Computing for LHC machine studies have been intensified; this project has previously been known as LHC at home, and has been running the “Sixtrack” beam dynamics application for the LHC accelerator, using a classic BOINC framework without virtual machines. CERN-IT has set up a BOINC server cluster, and has provided and supported the BOINC infrastructure for both projects. CERN intends to evolve the setup into a generic BOINC application service that will allow scientists and engineers at CERN to profit from volunteer computing. This paper describes the experience with the two different approaches to volunteer computing as well as the status and outlook of a general BOINC service.

  14. Five training sessions improves 3000 meter running performance.

    Science.gov (United States)

    Riiser, A; Ripe, S; Aadland, E

    2015-12-01

    The primary aim of the present study was to evaluate the effect of two weeks of endurance training on 3000-meter running performance. Secondarily, we wanted to assess the relationship between baseline running performance and the change in running performance over the intervention period. We assigned 36 military recruits to a training group (N.=28) and a control group. The training group was randomly allocated to one of three sub-groups: 1) a 3000 meter group (test race); 2) a 4x4-minutes high-intensity interval group; 3) a continuous training group. The training group exercised five times over a two-week period. The training group improved its 3000 meter running performance by 50 seconds (6%) compared to the control group (P=0.003). Moreover, all sub-groups improved their performance by 37 to 73 seconds (4-8%) compared to the control group (P < 0.05), and baseline running performance was inversely related to the improvement in the training group. We conclude that five endurance training sessions improved 3000 meter running performance and that the slowest runners achieved the greatest improvement in running performance.

  15. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
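
    The following sketch illustrates the general idea on synthetic data (it is not the authors' statistical model): a temperature anomaly is expressed as a simple function of the past CO2 trajectory, the coefficients are fit by least squares on two "precomputed" training scenarios, and a third scenario is then emulated essentially instantaneously. All scenarios, features and coefficients are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        def features(co2):
            # log CO2 now plus an exponentially weighted past trajectory
            logc = np.log(co2 / 280.0)
            past = np.zeros_like(logc)
            for t in range(1, len(logc)):
                past[t] = 0.9 * past[t - 1] + 0.1 * logc[t - 1]
            return np.column_stack([np.ones_like(logc), logc, past])

        def scenario(peak):
            # a toy forcing pathway and a noisy "GCM" response to train on
            years = np.arange(2000, 2101)
            co2 = 280.0 + (peak - 280.0) * (years - 2000) / 100.0
            X = features(co2)
            temp = X @ np.array([0.1, 2.5, 1.5])          # invented truth
            return X, temp + rng.normal(0.0, 0.05, len(years))

        # fit on two "precomputed" runs, then emulate a third instantly
        (X1, y1), (X2, y2) = scenario(550), scenario(900)
        beta, *_ = np.linalg.lstsq(np.vstack([X1, X2]),
                                   np.concatenate([y1, y2]), rcond=None)

        X_new, y_new = scenario(700)
        print("emulation RMSE:", np.sqrt(np.mean((X_new @ beta - y_new) ** 2)))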

  16. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields, from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers. Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib...

  17. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, denoted Tc) of a double-precision computation of a variable-parameters logistic map (VPLM) is studied. First, using the proposed method, reliable solutions for the logistic map are obtained. Second, for a VPLM with time-dependent, non-stationary parameters, 10000 samples of reliable experiments are constructed and the mean Tc is computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
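
    A sketch of how such a reliable computation time can be estimated, assuming the reference trajectory is computed at much higher precision with mpmath and Tc is taken as the first step at which the double-precision trajectory drifts beyond a tolerance; the tolerance, parameter schedule and initial values are invented for the example.

        from mpmath import mp, mpf

        mp.dps = 50  # 50-digit trajectory serves as the "reliable" reference

        def reliable_time(x0, steps=2000, tol=1e-3):
            xd = float(x0)          # double-precision trajectory
            xm = mpf(x0)            # high-precision reference, same start
            for t in range(1, steps + 1):
                r = 3.7 + 0.02 * (t % 10)  # invented time-varying parameter
                xd = r * xd * (1.0 - xd)
                xm = mpf(r) * xm * (1 - xm)
                if abs(xd - float(xm)) > tol:
                    return t        # double precision no longer reliable
            return steps

        print([reliable_time(0.1 + 0.08 * i) for i in range(10)])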

  18. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) 3-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of the atoms in the crystal lattice (crystal deformation). Nowadays, it is common to use graphics devices for intensive computational problems. There are several ways to use this extreme processing performance, but never before has it been so easy to program these devices. The CUDA (Compute Unified Device Architecture) introduced by the nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS, ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds it up so that the critical calculation code segment runs 10 times faster. Although the GPU is a very powerful tool, it has a strongly parallel structure. This means that we have to create an algorithm which works on several processors without deadlock. Our code currently uses 256 threads and shared and constant on-chip memory instead of global memory, which is 100 times slower. It is possible to implement the total algorithm on the GPU, therefore we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs the same instructions.
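
    The guest-atom random walk described above is naturally data-parallel, one thread per atom, which is exactly what the GPU port exploits. The NumPy-vectorised toy below (not the authors' C++/CUDA code; lattice hops, atom counts and step counts are invented) shows that per-atom independence.

        import numpy as np

        rng = np.random.default_rng(42)
        n_atoms, n_steps = 10_000, 1_000

        # the six nearest-neighbour hops on a simple cubic lattice
        hops = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                         [0, -1, 0], [0, 0, 1], [0, 0, -1]])

        # net displacement of every guest atom from its starting site
        disp = np.zeros((n_atoms, 3))
        for _ in range(n_steps):
            # one random hop per atom -- atoms are mutually independent,
            # the same structure a one-thread-per-atom GPU kernel exploits
            disp += hops[rng.integers(0, 6, size=n_atoms)]

        msd = np.mean(np.sum(disp ** 2, axis=1))
        print("mean squared displacement:", msd)  # ~ n_steps for a free walk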

  19. Universal quantum computation by discontinuous quantum walk

    International Nuclear Information System (INIS)

    Underwood, Michael S.; Feder, David L.

    2010-01-01

    Quantum walks are the quantum-mechanical analog of random walks, in which a quantum "walker" evolves between initial and final states by traversing the edges of a graph, either in discrete steps from node to node or via continuous evolution under the Hamiltonian furnished by the adjacency matrix of the graph. We present a hybrid scheme for universal quantum computation in which a quantum walker takes discrete steps of continuous evolution. This "discontinuous" quantum walk employs perfect quantum-state transfer between two nodes of specific subgraphs chosen to implement a universal gate set, thereby ensuring unitary evolution without requiring the introduction of an ancillary coin space. The run time is linear in the number of simulated qubits and gates. The scheme allows multiple runs of the algorithm to be executed almost simultaneously by starting walkers one time step apart.
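
    A minimal sketch of the building block the scheme relies on: perfect quantum-state transfer under continuous evolution exp(-iAt) generated by the adjacency matrix of a graph, shown here for the smallest possible example, a two-node path, where the transfer is perfect at t = π/2.

        import numpy as np
        from scipy.linalg import expm

        # adjacency matrix of the two-node path graph P2
        A = np.array([[0.0, 1.0], [1.0, 0.0]])
        psi0 = np.array([1.0, 0.0], dtype=complex)  # walker starts on node 0

        U = expm(-1j * A * (np.pi / 2))  # continuous evolution for t = pi/2
        psi = U @ psi0
        print(np.abs(psi) ** 2)          # ~[0, 1]: perfect transfer to node 1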

  20. The SU(∞) twisted gradient flow running coupling

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); Departamento de Física Teórica, C-15, Universidad Autónoma de Madrid,E-28049-Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan)

    2015-01-09

    We measure the running of the SU(∞) ’t Hooft coupling by performing a step scaling analysis of the Twisted Eguchi-Kawai (TEK) model, the SU(N) gauge theory on a single site lattice with twisted boundary conditions. The computation relies on the conjecture that finite volume effects for SU(N) gauge theories defined on a 4-dimensional twisted torus are controlled by an effective size parameter l-tilde=l√N, with l the torus period. We set the scale for the running coupling in terms of l-tilde and use the gradient flow to define a renormalized ’t Hooft coupling λ(l-tilde). In the TEK model, this idea allows the determination of the running of the coupling through a step scaling procedure that uses the rank of the group as a size parameter. The continuum renormalized coupling constant is extracted in the zero lattice spacing limit, which in the TEK model corresponds to the large N limit taken at fixed value of λ(l-tilde). The coupling constant is thus expected to coincide with that of the ordinary pure gauge theory at N=∞. The idea is shown to work and permits us to follow the evolution of the coupling over a wide range of scales. At weak coupling we find a remarkable agreement with the perturbative two-loop formula for the running coupling.
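
    As a hedged illustration of the perturbative formula the lattice results are compared against, the sketch below integrates the two-loop beta function of the pure-gauge 't Hooft coupling in the large-N limit and doubles the scale l repeatedly, mimicking a step-scaling sequence. The coefficients are the standard large-N pure-gauge values (quoted here as an assumption), the starting value is invented, and this is in no way the lattice computation itself.

        import numpy as np
        from scipy.integrate import solve_ivp

        # two-loop coefficients for the pure-gauge 't Hooft coupling at
        # large N (standard perturbative values, quoted as an assumption)
        B0 = 11.0 / (24.0 * np.pi ** 2)
        B1 = 17.0 / (192.0 * np.pi ** 4)

        def beta(log_l, lam):
            # d(lambda)/d(ln l) > 0: the coupling grows with the scale l
            return B0 * lam ** 2 + B1 * lam ** 3

        lam = 2.0                        # invented value at a reference l0
        for step in range(1, 6):
            sol = solve_ivp(beta, [0.0, np.log(2.0)], [lam], rtol=1e-10)
            lam = sol.y[0, -1]
            print(f"lambda(2^{step} l0) = {lam:.4f}")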

  1. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found for the remaining variables, with r > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
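
    A sketch of the modelling approach on synthetic numbers (not the study's data): the 10 km time is regressed on allometrically adjusted predictors PTV^0.72 and RE^0.60 by ordinary least squares; all ranges, units and the "true" relation are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 18
        ptv = rng.uniform(16.0, 21.0, n)    # peak treadmill velocity, km/h
        re = rng.uniform(180.0, 230.0, n)   # running economy, invented units

        # invented "true" relation plus noise, in minutes
        time10k = 60.0 - 2.5 * ptv ** 0.72 + 0.08 * re ** 0.60 \
            + rng.normal(0.0, 1.0, n)

        # ordinary least squares on the allometrically adjusted predictors
        X = np.column_stack([np.ones(n), ptv ** 0.72, re ** 0.60])
        beta, *_ = np.linalg.lstsq(X, time10k, rcond=None)
        pred = X @ beta
        r2 = 1.0 - np.sum((time10k - pred) ** 2) \
            / np.sum((time10k - time10k.mean()) ** 2)
        print("coefficients:", beta.round(3), " R^2:", round(r2, 3))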

  2. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  3. Fog computing job scheduling optimization based on bees swarm

    Science.gov (United States)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions such as job scheduling aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between the CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and genetic algorithms in terms of CPU execution time and allocated memory.
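
    The sketch below conveys the flavour of a bees-inspired scheduler on an invented instance; it is a drastic simplification, not the authors' Bees Life Algorithm. Scout assignments explore at random, the best "site" is refined by local forager moves, and the objective trades off CPU completion time against allocated memory.

        import random

        random.seed(7)
        N_TASKS, N_NODES = 20, 4
        cpu_need = [random.uniform(1.0, 5.0) for _ in range(N_TASKS)]
        mem_need = [random.uniform(0.1, 1.0) for _ in range(N_TASKS)]  # GB
        node_speed = [1.0, 1.5, 2.0, 0.8]          # CPU units per second

        def cost(assign):
            # makespan across nodes plus a peak-memory term, weighted 50/50
            finish, mem = [0.0] * N_NODES, [0.0] * N_NODES
            for task, node in enumerate(assign):
                finish[node] += cpu_need[task] / node_speed[node]
                mem[node] += mem_need[task]
            return 0.5 * max(finish) + 0.5 * max(mem)

        def forage(site):
            # local move: reassign one random task to a random node
            cand = site[:]
            cand[random.randrange(N_TASKS)] = random.randrange(N_NODES)
            return cand

        best = [random.randrange(N_NODES) for _ in range(N_TASKS)]
        for _ in range(200):
            # scouts explore at random; foragers refine the best site found
            scout = [random.randrange(N_NODES) for _ in range(N_TASKS)]
            site = min(best, scout, key=cost)
            for _ in range(20):
                cand = forage(site)
                if cost(cand) < cost(site):
                    site = cand
            if cost(site) < cost(best):
                best = site

        print("best cost found:", round(cost(best), 3))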

  4. Relating timed and register automata

    Directory of Open Access Journals (Sweden)

    Diego Figueira

    2010-11-01

    Full Text Available Timed automata and register automata are well-known models of computation over timed and data words respectively. The former has clocks that allow testing the lapse of time between two events, whilst the latter includes registers that can store data values for later comparison. Although these two models behave differently in appearance, several decision problems have the same (un)decidability and complexity results for both models. As a prominent example, emptiness is decidable for alternating automata with one clock or register, both with non-primitive recursive complexity. This is not by chance. This work confirms that there is indeed a tight relationship between the two models. We show that a run of a timed automaton can be simulated by a register automaton, and conversely that a run of a register automaton can be simulated by a timed automaton. Our results allow us to transfer complexity and decidability results back and forth between these two kinds of models. We justify the usefulness of these reductions by obtaining new results on register automata.

  5. Optical Design Using Small Dedicated Computers

    Science.gov (United States)

    Sinclair, Douglas C.

    1980-09-01

    Since the time of the 1975 International Lens Design Conference, we have developed a series of optical design programs for Hewlett-Packard desktop computers. The latest programs in the series, OSLO-25G and OSLO-45G, have most of the capabilities of general-purpose optical design programs, including optimization based on exact ray-trace data. The computational techniques used in the programs are similar to ones used in other programs, but the creative environment experienced by a designer working directly with these small dedicated systems is typically much different from that obtained with shared-computer systems. Some of the differences are due to the psychological factors associated with using a system having zero running cost, while others are due to the design of the program, which emphasizes graphical output and ease of use, as opposed to computational speed.

  6. Habitual Minimalist Shod Running Biomechanics and the Acute Response to Running Barefoot.

    Science.gov (United States)

    Tam, Nicholas; Darragh, Ian A J; Divekar, Nikhil V; Lamberts, Robert P

    2017-09-01

    The aim of the study was to determine whether habitual minimalist shoe runners present with purportedly favorable running biomechanics that reduce running injury risk, such as a lower initial loading rate. Eighteen minimalist and 16 traditionally cushioned shod runners were assessed when running both in their preferred training shoe and barefoot. Ankle and knee joint kinetics and kinematics, initial rate of loading, and footstrike angle were measured. Sagittal ankle and knee joint stiffness were also calculated. Results of a two-factor ANOVA showed no group difference in initial rate of loading when participants were running either shod or barefoot; however, the initial loading rate increased for both groups when running barefoot (p=0.008). Differences in footstrike angle were observed between groups when running shod, but not when barefoot (minimalist: 8.71±8.99 vs. traditional: 17.32±11.48 degrees, p=0.002). Lower ankle joint stiffness was found in both groups when running barefoot (p=0.025). These findings illustrate that risk factors for injury potentially differ between the two groups. Shoe construction differences do change mechanical demands; however, once habituated to the demands of a given shoe condition, certain acute favorable or unfavorable responses may be moderated. The purported benefit of minimalist running shoes in mimicking habitual barefoot running is questioned, and the risk of injury may not be attenuated. © Georg Thieme Verlag KG Stuttgart · New York.

  7. BOINC service for volunteer cloud computing

    CERN Document Server

    Høimyr, N; Buncic, P; Giovannozzi, M; Gonzalez, A; Harutyunyan, A; Jones, P L; Karneyeu, A; Marquina, M A; Mcintosh, E; Segal, B; Skands, P; Grey, F; Lombraña González, D; Zacharov, I; CERN. Geneva. IT Department

    2012-01-01

    For the past couple of years, a team at CERN and partners from the Citizen Cyberscience Centre (CCC) have been working on a project that enables general physics simulation programs to run in a virtual machine on volunteer PCs around the world. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework. Based on CERNVM and the job management framework Co-Pilot, this project was made available for public beta-testing in August 2011 with Monte Carlo simulations of LHC physics under the name "LHC@home 2.0" and the BOINC project: "Test4Theory". At the same time, CERN's efforts on Volunteer Computing for LHC machine studies have been intensified; this project has previously been known as LHC@home, and has been running the "Sixtrack" beam dynamics application for the LHC accelerator, using a classic BOINC framework without virtual machines. CERN-IT has set up a BOINC server cluster, and has provided and supported the BOINC infrastructure for both projects. CERN intends to evolve the setup i...

  8. Electroweak physics prospects for CDF in Run II

    International Nuclear Information System (INIS)

    Eric James

    2003-01-01

    The CDF collaboration will vigorously pursue a comprehensive program of electroweak physics during Run II at the Tevatron, based strongly on the successful Run I program. The Run IIa integrated luminosity goal of 2 fb–1 will lead to a CDF dataset twenty times larger than that collected in Run I. In addition, an increase in the energy of the colliding beams from √s = 1.80 TeV to √s = 1.96 TeV for Run II provides a 10% increase in the W and Z boson production cross sections and a corresponding enlargement of the electroweak event samples. In the near term, CDF expects to collect a dataset with 2-3 times the integrated luminosity of Run I by September of 2003. Utilizing these new datasets CDF will be able to make improved, precision measurements of Standard Model electroweak parameters including M_W, M_top, Γ_W, and sin²θ_W^eff. The goal of these measurements will be to improve our understanding of the self-consistency of the Standard Model and knowledge of the Higgs boson mass within the model. The top plot in Fig. 1 illustrates our current knowledge of the Standard Model Higgs mass based on measurements of M_W and M_top. The constraints imposed by combined CDF and D0 Run I measurements of M_W (80.456 ± 0.059 GeV/c2) and M_top (174.3 ± 5.1 GeV/c2) are illustrated by the shaded oval region on the plot. The hatched rectangle shows the additional constraint imposed by the recent LEP2 measurement of M_W. The bottom plot in Fig. 1 illustrates the expected improvement in these constraints based on Run II CDF measurements utilizing a 2 fb–1 dataset. The shaded oval region in this plot is based on current estimates of a 40 MeV/c2 uncertainty for measuring M_W and a 2-3 GeV/c2 uncertainty for measuring M_top.

  9. Using latency as a QoS indicator for global cloud computing services

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Riaz, Tahir; Dubalski, Bozydar

    2013-01-01

    Many globally distributed cloud computing (CC) applications and services running over the Internet, between globally dispersed clients and servers, will require certain levels of QoS in order to deliver a sufficiently smooth user experience. This would be essential for real-time streaming...

  10. Operating system for a real-time multiprocessor propulsion system simulator

    Science.gov (United States)

    Cole, G. L.

    1984-01-01

    The Real Time Multiprocessor Operating System (RTMPOS) was evaluated for its success in the development and evaluation of experimental hardware and software systems for real-time interactive simulation of air-breathing propulsion systems. The RTMPOS provides the user with a versatile, interactive means for loading, running, debugging and obtaining results from a multiprocessor-based simulator. A front-end processor (FEP) serves as the simulator controller and interface between the user and the simulator. These functions are facilitated by the RTMPOS, which resides on the FEP. The RTMPOS acts in conjunction with the FEP's manufacturer-supplied disk operating system, which provides typical utilities like an assembler, linkage editor, text editor, file handling services, etc. Once a simulation is formulated, the RTMPOS provides for engineering-level, run-time operations such as loading, modifying and specifying the computation flow of programs, simulator mode control, data handling and run-time monitoring. Run-time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. The RTMPOS is programmed mainly in PASCAL along with some assembly language routines. The RTMPOS software is easily modified to be applicable to hardware from different manufacturers.

  11. Electricity prices and fuel costs. Long-run relations and short-run dynamics

    International Nuclear Information System (INIS)

    Mohammadi, Hassan

    2009-01-01

    The paper examines the long-run relation and short-run dynamics between electricity prices and three fossil fuel prices - coal, natural gas and crude oil - using annual data for the U.S. for 1960-2007. The results suggest (1) a stable long-run relation between real prices for electricity and coal; (2) bi-directional long-run causality between coal and electricity prices; (3) insignificant long-run relations between electricity and crude oil and/or natural gas prices; and (4) no evidence of asymmetries in the adjustment of electricity prices to deviations from equilibrium. A number of implications are addressed. (author)
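
    A sketch of the kind of test underlying such long-run results, run here on synthetic series constructed to share a common stochastic trend (statsmodels' Engle-Granger cointegration test; all data and parameters are invented, not the paper's):

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(3)
        n = 48                                       # annual data, 1960-2007

        trend = np.cumsum(rng.normal(0.0, 1.0, n))   # common stochastic trend
        coal = trend + rng.normal(0.0, 0.3, n)             # "coal price"
        electricity = 0.8 * trend + rng.normal(0.0, 0.3, n)  # "electricity"

        t_stat, p_value, _ = coint(electricity, coal)
        print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")
        # a small p-value supports a stable long-run cointegrating relation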

  12. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2012-01-01

    With the analysis of the first 5 fb–1 culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c2, the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb–1. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by the LHC to date is 7.6×10^33 cm–2s–1, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...

  13. Reconfiguration in FPGA-Based Multi-Core Platforms for Hard Real-Time Applications

    DEFF Research Database (Denmark)

    Pezzarossa, Luca; Schoeberl, Martin; Sparsø, Jens

    2016-01-01

    In general-purpose computing multi-core platforms, hardware accelerators and reconfiguration are means to improve performance; i.e., the average-case execution time of a software application. In hard real-time systems, such average-case speed-up is not in itself relevant - it is the worst-case execution time of the tasks of an application that determines the system's ability to respond in time. To support this focus, the platform must provide service guarantees for both communication and computation resources. In addition, many hard real-time applications have multiple modes of operation, and each mode has specific requirements. An interesting perspective on reconfigurable computing is to exploit run-time reconfiguration to support mode changes. In this paper we explore approaches to reconfiguration of communication and computation resources in the T-CREST hard real-time multi-core platform...

  14. Implementation of Tree and Butterfly Barriers with Optimistic Time Management Algorithms for Discrete Event Simulation

    Science.gov (United States)

    Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia

    The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This recovery mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute the GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with in an efficient manner by Samadi's algorithm [8], which works fine in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly barriers with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
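
    A simplified sketch of the GVT idea (not Samadi's algorithm): the GVT is the minimum over all local virtual times and the timestamps of messages still in transit, and once known it bounds how far back any rollback can reach, so older saved state can be reclaimed. All values and names below are invented.

        from dataclasses import dataclass, field

        @dataclass
        class LogicalProcess:
            lvt: float                # local virtual time
            saved: list = field(default_factory=list)  # (timestamp, state)

        def compute_gvt(processes, in_transit):
            # minimum over local clocks and in-flight message timestamps
            return min([p.lvt for p in processes] + list(in_transit))

        def fossil_collect(process, gvt):
            # simplified memory reclamation: keep only states at or past GVT
            process.saved = [(t, s) for t, s in process.saved if t >= gvt]

        lps = [LogicalProcess(12.0, [(3.0, "a"), (9.0, "b"), (14.0, "c")]),
               LogicalProcess(8.5), LogicalProcess(20.0)]
        gvt = compute_gvt(lps, in_transit=[7.2])
        fossil_collect(lps[0], gvt)
        print(gvt, lps[0].saved)      # 7.2; states at 9.0 and 14.0 survive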

  15. Liquidity Runs

    NARCIS (Netherlands)

    Matta, R.; Perotti, E.

    2016-01-01

    Can the risk of losses upon premature liquidation produce bank runs? We show how a unique run equilibrium driven by asset liquidity risk arises even under minimal fundamental risk. To study the role of illiquidity we introduce realistic norms on bank default, such that mandatory stay is triggered...

  16. Analysis and Synthesis of Communication-Intensive Heterogeneous Real-Time Systems

    DEFF Research Database (Denmark)

    Pop, Paul

    2003-01-01

    Embedded computer systems are now everywhere: from alarm clocks to PDAs, from mobile phones to cars, almost all the devices we use are controlled by embedded computer systems. An important class of embedded computer systems is that of real-time systems, which have to fulfill strict timing requirements. As real-time systems become more complex, they are often implemented using distributed heterogeneous architectures. The main objective of this thesis is to develop analysis and synthesis methods for communication-intensive heterogeneous hard real-time systems. The systems are heterogeneous... is the synthesis of the communication infrastructure, which has a significant impact on the overall system performance and cost. To reduce the time-to-market of products, the design of real-time systems seldom starts from scratch. Typically, designers start from an already existing system, running certain...

  17. The FERMI-Elettra distributed real-time framework

    International Nuclear Information System (INIS)

    Pivetta, L.; Gaio, G.; Passuello, R.; Scalamera, G.

    2012-01-01

    FERMI-Elettra is a Free Electron Laser (FEL) based on a 1.5 GeV linac. The pulsed operation of the accelerator and the necessity to characterize and control each electron bunch requires synchronous acquisition of the beam diagnostics together with the ability to drive actuators in real-time at the linac repetition rate. The Adeos/Xenomai real-time extensions have been adopted in order to add real-time capabilities to the Linux based control system computers running the Tango software. A software communication protocol based on Gigabit Ethernet and known as Network Reflective Memory (NRM) has been developed to implement a shared memory across the whole control system, allowing computers to communicate in real-time. The NRM architecture, the real-time performance and the integration in the control system are described. (authors)

  18. Computing return times or return periods with rare event algorithms

    Science.gov (United States)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, saving several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
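
    A sketch of the block-maximum estimator discussed above, applied to a simulated Ornstein-Uhlenbeck process: divide the trajectory into blocks of length T_block, let p(a) be the fraction of blocks whose maximum exceeds the level a, and estimate the return time as r(a) = -T_block / ln(1 - p(a)). The discretisation and block parameters below are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        dt, n_steps = 0.01, 1_000_000
        x = np.empty(n_steps)
        x[0] = 0.0
        noise = rng.normal(0.0, np.sqrt(dt), n_steps - 1)
        for i in range(n_steps - 1):    # Euler-Maruyama for dX = -X dt + dW
            x[i + 1] = x[i] - x[i] * dt + noise[i]

        block_len = 10_000              # steps per block (T_block = 100)
        blocks = x[: n_steps // block_len * block_len].reshape(-1, block_len)
        maxima = blocks.max(axis=1)

        for a in [1.5, 2.0, 2.5]:
            p = np.mean(maxima > a)     # fraction of blocks exceeding a
            if 0.0 < p < 1.0:
                r = -block_len * dt / np.log1p(-p)
                print(f"level {a}: estimated return time {r:.0f}")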

  19. Near real-time digital holographic microscope based on GPU parallel computing

    Science.gov (United States)

    Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan

    2018-01-01

    A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) and digital holographic microscopy are combined. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with the parallel computing technology based on CUDA, so it is especially suitable for measurements of particle fields at the micrometer and nanometer scale. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at the micrometer scale, with an average velocity error lower than 10%. With graphics processing units (GPUs), the computing time for 100 reconstruction planes (512×512 grids) is less than 120 ms, while it is 4.9 s using the traditional CPU-based reconstruction method; the reconstruction speed has thus been raised by a factor of 40. In other words, the system can handle holograms at 8.3 frames per second, realizing near real-time measurement and display of the particle velocity field. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of the software and hardware.
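
    For illustration, the sketch below performs the single-plane numerical reconstruction that such a GPU pipeline parallelises over many planes, using the angular spectrum method on a 512×512 grid. The wavelength, pixel pitch, distance and the random stand-in hologram are all assumptions, and the actual microscope code is CUDA-based rather than NumPy.

        import numpy as np

        N = 512
        wavelength = 632.8e-9    # metres (assumed He-Ne source)
        pitch = 3.45e-6          # pixel pitch in metres (assumed)
        z = 5e-3                 # reconstruction distance in metres

        hologram = np.random.rand(N, N)  # stand-in for a recorded hologram

        fx = np.fft.fftfreq(N, d=pitch)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        transfer = np.exp(1j * z * kz) * (arg > 0)  # drop evanescent waves

        field = np.fft.ifft2(np.fft.fft2(hologram) * transfer)
        intensity = np.abs(field) ** 2   # one reconstructed focal plane
        print(intensity.shape)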

  20. The ATLAS Data Management System Rucio: Supporting LHC Run-2 and beyond

    CERN Document Server

    Barisits, Martin-Stefan; The ATLAS collaboration; Garonne, Vincent

    2017-01-01

    With this contribution we present the recent developments made to Rucio, the data management system of the high-energy physics experiment ATLAS. Already managing 260 Petabytes of both official and user data, Rucio has seen incremental improvements throughout LHC Run-2 and is currently laying the groundwork for HEP computing in the HL-LHC era. The focus of this contribution is on (a) the automations that have been put in place, such as data rebalancing and dynamic replication of user data, as well as their supporting infrastructure, such as real-time networking metrics and transfer time predictions; (b) the flexible approach towards the inclusion of heterogeneous storage systems, including object stores, while unifying the potential access paths using generally available tools and protocols; (c) the improvements made to the real-time monitoring of the system to alleviate the work of our human shifters; and (d) the adoption of Rucio by two other experiments, AMS and Xenon1t. We conclude by presenting operational numbe...