WorldWideScience

Sample records for source running time

  1. The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.; Lin, Alexander S.

    2016-01-01

    This paper describes the design and use of the Trick Simulation Toolkit, a simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes Trick's design goals and how the development environment attempts to achieve those goals. It describes how Trick is used in some of the many training and engineering simulations at NASA. Finally, it describes the Trick NASA/Open source project on GitHub.

  2. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations for improving EnergyPlus run time from the modeler's perspective and for choosing adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  3. Implementing Run-Time Evaluation of Distributed Timing Constraints in a Real-Time Environment

    DEFF Research Database (Denmark)

    Kristensen, C. H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing run-time evaluation of timing constraints in distributed real-time environments.

  4. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high‐quality compile‐time analysis with low‐cost run‐time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler’s automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run‐time testing, analysis of control flow, or some combination of the two. We present a new compile‐time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed not only to improve the results of compile‐time parallelization, but also to produce low‐cost, directed run‐time tests that allow the system to defer binding of parallelization until run time when safety cannot be proven statically. We call this approach predicated array data‐flow analysis. We augment array data‐flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data‐flow values. Predicated array data‐flow analysis allows the compiler to derive “optimistic” data‐flow values guarded by predicates; these predicates can be used to derive a run‐time test guaranteeing the safety of parallelization.
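
    The core idea, deferring the parallelize-or-not decision to a cheap run-time predicate when compile-time analysis cannot prove independence, can be illustrated with a small sketch. The function below is a hypothetical example (not SUIF's analysis): it uses an overlap test between source and destination ranges as the run-time predicate and falls back to the original sequential loop when independence cannot be established.

```python
import numpy as np

def scaled_add_region(a, x, y, src_lo, src_hi, dst_lo, dst_hi):
    """Compute y[dst_lo:dst_hi] += a * x[src_lo:src_hi].

    x and y may be views of the same underlying buffer, so the loop is only
    safe to parallelize (here: vectorize) if the two regions do not overlap.
    The overlap check is the run-time predicate guarding the fast path.
    """
    n = src_hi - src_lo
    assert dst_hi - dst_lo == n
    independent = not np.shares_memory(x[src_lo:src_hi], y[dst_lo:dst_hi])
    if independent:
        # Predicate holds at run time: execute the "parallel" (vectorized) version.
        y[dst_lo:dst_hi] += a * x[src_lo:src_hi]
    else:
        # Predicate fails: keep the original sequential semantics.
        for i in range(n):
            y[dst_lo + i] += a * x[src_lo + i]

buf = np.arange(10.0)
scaled_add_region(2.0, buf, buf, 0, 4, 4, 8)   # disjoint views: fast path
scaled_add_region(2.0, buf, buf, 0, 4, 2, 6)   # overlapping views: safe fallback
```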

  5. 16 CFR 803.10 - Running of time.

    Science.gov (United States)

    2010-01-01

    Title 16 Commercial Practices, Vol. 1 (2010-01-01): Running of time. Section 803.10, Commercial Practices, FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976, TRANSMITTAL RULES, § 803.10 Running of time. (a...

  6. Aspects for Run-time Component Integration

    DEFF Research Database (Denmark)

    Truyen, Eddy; Jørgensen, Bo Nørregaard; Joosen, Wouter

    2000-01-01

    Component framework technology has become the cornerstone of building a family of systems and applications. A component framework defines a generic architecture into which specialized components can be plugged. As such, the component framework leverages the glue that connects the different inserted components... to dynamically integrate into the architecture of middleware systems new services that support non-functional aspects such as security, transactions, and real-time.

  7. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The source term package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials released from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it has been written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-8500 version of STCP contains 5 codes: March 3, Trapmelt, Tcca, Vanessa and Nava. The example presented in this report has taken into consideration a small LOCA accident in a PWR type reactor. (M.I.)

  8. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

    This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes...

  9. Optimal Infinite Runs in One-Clock Priced Timed Automata

    DEFF Research Database (Denmark)

    David, Alexandre; Ejsing-Duun, Daniel; Fontani, Lisa

    We address the problem of finding an infinite run with the optimal cost-time ratio in a one-clock priced timed automaton and provide an algorithmic solution. Through refinements of the quotient graph obtained by strong time-abstracting bisimulation partitioning, we construct a graph with time...

  10. Accuracy versus run time in an adiabatic quantum search

    International Nuclear Information System (INIS)

    Rezakhani, A. T.; Pimachev, A. K.; Lidar, D. A.

    2010-01-01

    Adiabatic quantum algorithms are characterized by their run time and accuracy. The relation between the two is essential for quantifying adiabatic algorithmic performance yet is often poorly understood. We study the dynamics of a continuous time, adiabatic quantum search algorithm and find rigorous results relating the accuracy and the run time. Proceeding with estimates, we show that under fairly general circumstances the adiabatic algorithmic error exhibits a behavior with two discernible regimes: The error decreases exponentially for short times and then decreases polynomially for longer times. We show that the well-known quadratic speedup over classical search is associated only with the exponential error regime. We illustrate the results through examples of evolution paths derived by minimization of the adiabatic error. We also discuss specific strategies for controlling the adiabatic error and run time.

  11. Combining monitoring with run-time assertion checking

    NARCIS (Netherlands)

    Gouw, Stijn de

    2013-01-01

    We develop a new technique for Run-time Checking for two object-oriented languages: Java and the Abstract Behavioral Specification language ABS. In object-oriented languages, objects communicate by sending each other messages. Assuming encapsulation, the behavior of objects is completely

  12. LHCb's Real-Time Alignment in Run II

    CERN Multimedia

    Batozskaya, Varvara

    2015-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure will improve the quality of the online alignment. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configur...

  13. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...

  14. LHCb's Real-Time Alignment in Run2

    CERN Multimedia

    Batozskaya, Varvara

    2015-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performances. During Run2, LHCb will have a new real-time detector alignment and calibration to reach equivalent performances in the online and offline reconstruction. This offers the opportunity to optimise the event selection by applying stronger constraints as well as hadronic particle identification at the trigger level. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger.

  15. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution. Revision 3

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1994-06-01

    The purpose is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs (Ref. 7) and Quiet Time Runs Program (described in Section 3.6). The Filter/Stripper Test Runs and Quiet Time Runs program involves a 12,000 gallon feed tank containing an agitator, a 4,000 gallon flush tank, a variable speed pump, associated piping and controls, and equipment within both the Filter and the Stripper Building.

  16. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    Energy Technology Data Exchange (ETDEWEB)

    Neswold, Richard [Fermilab

    2018-01-01

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.

  17. SASD and the CERN/SPS run-time coordinator

    International Nuclear Information System (INIS)

    Morpurgo, G.

    1990-01-01

    Structured Analysis and Structured Design (SASD) provides us with a handy way of specifying the flow of data between the different modules (functional units) of a system. But the formalism loses its immediacy when the control flow has to be taken into account as well. Moreover, due to the lack of appropriate software infrastructure, very often the actual implementation of the system does not reflect the module decoupling and independence so much emphasized at the design stage. In this paper the run-time coordinator, a complete software infrastructure to support a real decoupling of the functional units, is described. Special attention is given to the complementarity of our approach and the SASD methodology. (orig.)

  18. Success Run Waiting Times and Fuss-Catalan Numbers

    Directory of Open Access Journals (Sweden)

    S. J. Dilworth

    2015-01-01

    We present power series expressions for all the roots of the auxiliary equation of the recurrence relation for the distribution of the waiting time for the first run of k consecutive successes in a sequence of independent Bernoulli trials, that is, the geometric distribution of order k. We show that the series coefficients are Fuss-Catalan numbers and write the roots in terms of the generating function of the Fuss-Catalan numbers. Our main result is a new exact expression for the distribution, which is more concise than previously published formulas. Our work extends the analysis by Feller, who gave asymptotic results. We obtain quantitative improvements of the error estimates obtained by Feller.
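
    The distribution in question (the geometric distribution of order k) is easy to compute numerically, which helps when checking any closed-form expression. The sketch below uses a simple streak-state recursion rather than the paper's Fuss-Catalan root expansion; names and parameters are illustrative.

```python
def success_run_pmf(k, p, n_max):
    """P(first run of k consecutive successes ends exactly at trial n), n = 1..n_max.
    State s = current streak length (0..k-1); a success from state k-1 absorbs."""
    dist = [0.0] * k
    dist[0] = 1.0
    pmf = []
    for _ in range(n_max):
        absorbed = dist[k - 1] * p           # the run of k completes at this trial
        new = [0.0] * k
        for s in range(k):
            new[0] += dist[s] * (1.0 - p)    # a failure resets the streak
        for s in range(k - 1):
            new[s + 1] += dist[s] * p        # a success extends the streak
        dist = new
        pmf.append(absorbed)
    return pmf

pmf = success_run_pmf(k=3, p=0.5, n_max=200)
print(sum(pmf))                                       # ~1.0: nearly all mass captured
print(sum((n + 1) * q for n, q in enumerate(pmf)))    # ~14 = (1 - p^k) / ((1 - p) * p^k)
```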

  19. Icelandic Public Pensions: Why time is running out

    Directory of Open Access Journals (Sweden)

    Ólafur Ísleifsson

    2011-12-01

    The aim of this paper is to analyse the Icelandic public sector pension system enjoying a third party guarantee. Defined benefit funds fundamentally differ from defined contribution pension funds without a third party guarantee as is the case with the Icelandic general labour market pension funds. We probe the special nature of the public sector pension funds and make a comparison to the defined contribution pension funds of the general labour market. We explore the financial and economic effects of the third party guarantee of the funds, their investment performance and other relevant factors. We seek an answer to the question why time is running out for the country’s largest pension fund that currently faces the prospect of becoming empty by the year 2022.

  20. Running Speed Can Be Predicted from Foot Contact Time during Outdoor over Ground Running

    NARCIS (Netherlands)

    de Ruiter, C.J.; van Oeveren, B.; Francke, A.; Zijlstra, P.; van Dieen, J.H.

    2016-01-01

    The number of validation studies of commercially available foot pods that provide estimates of running speed is limited and these studies have been conducted under laboratory conditions. Moreover, internal data handling and algorithms used to derive speed from these pods are proprietary and thereby

  1. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures.

    Science.gov (United States)

    Long, Leroy L; Srinivasan, Manoj

    2013-04-06

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available is large, humans walk the whole distance. If the time available is small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients, a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
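
    The energy-minimization argument can be reproduced in a few lines of code: when the metabolic power-versus-speed curve is non-convex (for example near the walk-run transition, or near rest), a mixture of two speeds on the lower convex hull can cost less than steady locomotion at the required average speed. The power curves and resting cost below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def metabolic_power(v):
    """Hypothetical metabolic power (W/kg) at speed v (m/s): the cheaper of a
    quadratic 'walking' curve and a linear 'running' curve. Taking the minimum
    of the two gaits makes the curve non-convex around the walk-run transition."""
    walk = 2.0 + 1.5 * v ** 2
    run = 3.5 + 3.8 * v
    return min(walk, run)

def best_strategy(distance, total_time, rest_power=1.2,
                  speeds=np.linspace(0.1, 5.0, 50)):
    """Compare steady locomotion with all two-speed mixtures (v = 0 means rest)
    that cover `distance` metres in exactly `total_time` seconds."""
    v_req = distance / total_time
    best = (metabolic_power(v_req) * total_time, f"steady at {v_req:.2f} m/s")
    for v1 in np.concatenate(([0.0], speeds)):
        for v2 in speeds:
            if v2 <= v1:
                continue
            t2 = (distance - v1 * total_time) / (v2 - v1)   # time spent at v2
            t1 = total_time - t2
            if t1 < 0 or t2 < 0:
                continue
            p1 = rest_power if v1 == 0.0 else metabolic_power(v1)
            cost = p1 * t1 + metabolic_power(v2) * t2
            if cost < best[0]:
                best = (cost, f"{t1:.0f} s at {v1:.2f} m/s + {t2:.0f} s at {v2:.2f} m/s")
    return best

print(best_strategy(1400.0, 500.0))   # near the walk-run transition: a mixture wins
print(best_strategy(150.0, 500.0))    # very low average speed: a walk-rest mixture wins
```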

  2. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    Science.gov (United States)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We will show that our method performs at least as well as any static scheduling method. It also reduces the total amount of dynamic pre-emptions compared with run-time methods like deadline-monotonic scheduling.

  3. Run-time middleware to support real-time system scenarios

    NARCIS (Netherlands)

    Goossens, K.; Koedam, M.; Sinha, S.; Nelson, A.; Geilen, M.

    2015-01-01

    Systems on Chip (SOC) are powerful multiprocessor systems capable of running multiple independent applications, often with both real-time and non-real-time requirements. Scenarios exist at two levels: first, combinations of independent applications, and second, different states of a single

  4. How Many Times Should One Run a Computational Simulation?

    DEFF Research Database (Denmark)

    Seri, Raffaello; Secchi, Davide

    2017-01-01

    This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces statistical...

  5. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models. The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at CSN; all these algorithms run automatically and are triggered by the W-phase point source inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule Earthquake, the 2014 Mw 8.2 Iquique Earthquake, the 2015 Mw 8.3 Illapel Earthquake and the Mw 7.6 Melinka Earthquake. We obtain many solutions as time elapses; for each one of those we calculate the run-up using an analytical formula. Our results are in agreement with some FFMs already accepted by the scientific community as well as with run-up observations in the field.

  6. Accuracy analysis of the State-of-Charge and remaining run-time determination for lithium-ion batteries

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Op het Veld, J.H.G.; Regtien, Paulus P.L.

    2008-01-01

    This paper describes the various error sources in a real-time State-of-Charge (SoC) evaluation system and their effects on the overall accuracy in the calculation of the remaining run-time of a battery-operated system. The SoC algorithm for Li-ion batteries studied in this paper combines direct
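
    As a back-of-the-envelope illustration of why SoC accuracy matters for the run-time indication, the sketch below computes a naive remaining run time (remaining charge divided by load current) and shows how an SoC error propagates into it. The capacity and load figures are made-up examples, and the paper's actual SoC algorithm is not reproduced here.

```python
def remaining_run_time_hours(soc_percent, capacity_mAh, load_mA):
    """Naive remaining run-time estimate: remaining charge divided by load current."""
    remaining_mAh = capacity_mAh * soc_percent / 100.0
    return remaining_mAh / load_mA

# Example: how a 5% SoC error propagates into the run-time indication
# (1100 mAh capacity and 300 mA load are arbitrary illustrative values).
nominal = remaining_run_time_hours(50.0, 1100.0, 300.0)      # ~1.83 h
pessimistic = remaining_run_time_hours(45.0, 1100.0, 300.0)  # ~1.65 h
print(nominal - pessimistic)   # ~0.18 h (~11 min) of indicated run time lost to the SoC error
```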

  7. Accuracy analysis of the state-of-charge and remaining run-time determination for lithium-ion batteries

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Op het Veld, J.H.G.; Regtien, P.P.L.

    2009-01-01

    This paper describes the various error sources in a real-time State-of-Charge (SoC) evaluation system and their effects on the overall accuracy in the calculation of the remaining run-time of a battery-operated system. The SoC algorithm for Li-ion batteries studied in this paper combines direct

  8. An enhanced Ada run-time system for real-time embedded processors

    Science.gov (United States)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  9. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1993-10-01

    In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter-stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include hazards, both personnel and environmental, associated with handling the chemical simulants; the presence of flammable materials; and the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.

  10. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M.K.

    1993-10-01

    In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter-stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include hazards, both personnel and environmental, associated with handling the chemical simulants; the presence of flammable materials; and the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.

  11. Run-time verification of behavioural conformance for conversational web services

    OpenAIRE

    Dranidis, Dimitris; Ramollari, Ervin; Kourtesis, Dimitrios

    2009-01-01

    Web services exposing run-time behaviour that deviates from their behavioural specifications represent a major threat to the sustainability of a service-oriented ecosystem. It is therefore critical to verify the behavioural conformance of services during run-time. This paper discusses a novel approach for run-time verification of Web services. It proposes the utilisation of Stream X-machines for constructing formal behavioural specifications of Web services which can be exploited for verifyin...

  12. Time limit and time at VO2max during a continuous and an intermittent run.

    Science.gov (United States)

    Demarie, S; Koralsztein, J P; Billat, V

    2000-06-01

    The purpose of this study was to verify, by track field tests, whether sub-elite runners (n=15) could (i) reach their VO2max while running at v50%delta, i.e. midway between the speed associated with lactate threshold (vLAT) and that associated with maximal aerobic power (vVO2max), and (ii) whether an intermittent exercise provokes a maximal and/or supramaximal oxygen consumption for longer than a continuous one. Within three days, subjects underwent a multistage incremental test during which their vVO2max and vLAT were determined; they then performed two additional testing sessions, where continuous and intermittent running exercises at v50%delta were performed up to exhaustion. Subjects' gas exchange and heart rate were continuously recorded by means of a telemetric apparatus. Blood samples were taken from the fingertip and analysed for blood lactate concentration. In the continuous and the intermittent tests peak VO2 exceeded VO2max values, as determined during the incremental test. However, in the intermittent exercise, peak VO2, time to exhaustion and time at VO2max reached significantly higher values, while blood lactate accumulation showed significantly lower values than in the continuous one. The v50%delta is sufficient to stimulate VO2max in both intermittent and continuous running. The intermittent exercise proves more effective than the continuous one in increasing maximal aerobic power, allowing a longer time at VO2max and eliciting a higher peak VO2 with lower lactate accumulation.

  13. Change in skeletal muscle stiffness after running competition is dependent on both running distance and recovery time: a pilot study.

    Science.gov (United States)

    Sadeghi, Seyedali; Newman, Cassidy; Cortes, Daniel H

    2018-01-01

    Long-distance running competitions impose a large amount of mechanical loading and strain leading to muscle edema and delayed onset muscle soreness (DOMS). Damage to various muscle fibers, metabolic impairments and fatigue have been linked to explain how DOMS impairs muscle function. Disruptions of muscle fiber during DOMS exacerbated by exercise have been shown to change muscle mechanical properties. The objective of this study is to quantify changes in mechanical properties of different muscles in the thigh and lower leg as a function of running distance and time after competition. A custom implementation of the Focused Comb-Push Ultrasound Shear Elastography (F-CUSE) method was used to evaluate shear modulus in runners before and after a race. Twenty-two healthy individuals (age: 23 ± 5 years) were recruited using convenience sampling and split into three race categories: short distance (nine subjects, 3-5 miles), middle distance (10 subjects, 10-13 miles), and long distance (three subjects, 26+ miles). Shear Wave Elastography (SWE) measurements were taken on both legs of each subject on the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), soleus, lateral gastrocnemius (LG), medial gastrocnemius (MG), biceps femoris (BF) and semitendinosus (ST) muscles. For statistical analyses, a linear mixed model was used, with recovery time and running distance as fixed variables, while shear modulus was used as the dependent variable. Recovery time had a significant effect on the soleus (p = 0.05), while running distance had considerable effect on the biceps femoris (p = 0.02), vastus lateralis (p < 0.01) and semitendinosus (p = 0.02) muscles. Sixty-seven percent of muscles exhibited a decreasing stiffness trend from before competition to immediately after competition. The preliminary results suggest that SWE could potentially be used to quantify changes of muscle mechanical properties as a way for measuring recovery procedures for runners.

  14. Change in skeletal muscle stiffness after running competition is dependent on both running distance and recovery time: a pilot study

    Directory of Open Access Journals (Sweden)

    Seyedali Sadeghi

    2018-03-01

    Long-distance running competitions impose a large amount of mechanical loading and strain leading to muscle edema and delayed onset muscle soreness (DOMS). Damage to various muscle fibers, metabolic impairments and fatigue have been linked to explain how DOMS impairs muscle function. Disruptions of muscle fiber during DOMS exacerbated by exercise have been shown to change muscle mechanical properties. The objective of this study is to quantify changes in mechanical properties of different muscles in the thigh and lower leg as function of running distance and time after competition. A custom implementation of the Focused Comb-Push Ultrasound Shear Elastography (F-CUSE) method was used to evaluate shear modulus in runners before and after a race. Twenty-two healthy individuals (age: 23 ± 5 years) were recruited using convenience sampling and split into three race categories: short distance (nine subjects, 3–5 miles), middle distance (10 subjects, 10–13 miles), and long distance (three subjects, 26+ miles). Shear Wave Elastography (SWE) measurements were taken on both legs of each subject on the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), soleus, lateral gastrocnemius (LG), medial gastrocnemius (MG), biceps femoris (BF) and semitendinosus (ST) muscles. For statistical analyses, a linear mixed model was used, with recovery time and running distance as fixed variables, while shear modulus was used as the dependent variable. Recovery time had a significant effect on the soleus (p = 0.05), while running distance had considerable effect on the biceps femoris (p = 0.02), vastus lateralis (p < 0.01) and semitendinosus muscles (p = 0.02). Sixty-seven percent of muscles exhibited a decreasing stiffness trend from before competition to immediately after competition. The preliminary results suggest that SWE could potentially be used to quantify changes of muscle mechanical properties as a way for measuring recovery procedures for runners.

  15. Leisure-time running reduces all-cause and cardiovascular mortality risk.

    Science.gov (United States)

    Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N

    2014-08-05

    Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Even modest weekly running conferred benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds, was associated with benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  16. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte Carlo...
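
    A minimal sketch of the idea follows: compute the Mann-Whitney U between two adjacent moving windows and normalize it to Z. The pairing of adjacent windows and the use of the standard normal approximation (rather than the Monte Carlo normalization mentioned above) are assumptions made for the illustration.

```python
import math

def mann_whitney_z(x, y):
    """Mann-Whitney U for samples x and y, normalized to a Z statistic with the
    usual normal approximation (ties counted as 0.5)."""
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    n1, n2 = len(x), len(y)
    mean = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mean) / sd

def running_mann_whitney_z(series, window):
    """Slide two adjacent windows over the series and return one Z value per step;
    large |Z| flags a shift in level between consecutive windows."""
    return [mann_whitney_z(series[i:i + window], series[i + window:i + 2 * window])
            for i in range(len(series) - 2 * window + 1)]

series = [0.1, 0.3, 0.2, 0.4, 0.2, 1.1, 1.3, 1.2, 1.4, 1.2]
print(running_mann_whitney_z(series, window=3))   # Z drops sharply at the level shift
```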

  17. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    Science.gov (United States)

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
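
    For readers who want to reproduce such a derivation empirically, the snippet below times a plain bubble sort for a few input sizes; doubling the input size should roughly quadruple the time, consistent with the algorithm's O(n^2) behaviour.

```python
import random, time

def bubble_sort(a):
    """Plain bubble sort: repeatedly swap adjacent out-of-order elements."""
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

# Empirical run-time growth: doubling n should roughly quadruple the time.
for n in (500, 1000, 2000):
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    bubble_sort(data)
    print(n, round(time.perf_counter() - start, 3), "s")
```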

  18. Lower bounds on the run time of the univariate marginal distribution algorithm on OneMax

    DEFF Research Database (Denmark)

    Krejca, Martin S.; Witt, Carsten

    2017-01-01

    The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of Ω(μ√n + n log n), where μ is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.
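
    For context, a bare-bones UMDA on OneMax looks as follows; the frequency clamping to [1/n, 1 - 1/n] is the standard margin used in most run-time analyses. This is a generic textbook-style sketch, not code from the paper, and the parameter choices in the example call are arbitrary.

```python
import random

def umda_onemax(n, mu, lam, max_gens=10_000):
    """Univariate Marginal Distribution Algorithm on OneMax (count of 1-bits).
    Each generation: sample lam bit strings from the product distribution,
    keep the mu best, set each frequency to the fraction of 1s among them,
    and clamp frequencies to [1/n, 1 - 1/n]."""
    freqs = [0.5] * n
    for gen in range(1, max_gens + 1):
        pop = [[1 if random.random() < f else 0 for f in freqs] for _ in range(lam)]
        pop.sort(key=sum, reverse=True)
        if sum(pop[0]) == n:                      # optimum found
            return gen * lam                      # total fitness evaluations used
        selected = pop[:mu]
        for i in range(n):
            ones = sum(ind[i] for ind in selected) / mu
            freqs[i] = min(max(ones, 1.0 / n), 1.0 - 1.0 / n)
    return None

print(umda_onemax(n=100, mu=50, lam=100))
```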

  19. Discount-Optimal Infinite Runs in Priced Timed Automata

    DEFF Research Database (Denmark)

    Fahrenberg, Uli; Larsen, Kim Guldstrand

    2009-01-01

    We introduce a new discounting semantics for priced timed automata. Discounting provides a way to model optimal-cost problems for infinite traces and has applications in optimal scheduling and other areas. In the discounting semantics, prices decrease exponentially, so that the contribution...

  20. Design-time application mapping and platform exploration for MP-SoC customised run-time management

    NARCIS (Netherlands)

    Ykman-Couvreur, Ch.; Nollet, V.; Marescaux, T.M.; Brockmeyer, E.; Catthoor, F.; Corporaal, H.

    2007-01-01

    In a Multi-Processor System-on-Chip (MP-SoC) environment, a customized run-time management layer should be incorporated on top of the basic Operating System services to alleviate the run-time decision-making and to globally optimise costs (e.g. energy consumption) across all active

  1. Safety provision for nuclear power plants during remaining running time

    International Nuclear Information System (INIS)

    Rossnagel, Alexander; Hentschel, Anja

    2012-01-01

    With the phasing-out of the industrial use of nuclear energy for power generation, the risk posed by nuclear power plants has not been eliminated in principle, but only limited in time. Therefore, the nine remaining nuclear power plants must also be operated according to the state of science and technology for the remaining ten years. Regulatory authorities must substantiate the safety requirements for each nuclear power plant and enforce these requirements by means of various regulatory measures. The consequences of Fukushima must be included in the assessment of the safety level of nuclear power plants in Germany. In this respect, the regulatory authorities have the important tasks of investigating and assessing the safety risks as well as developing instructions and orders.

  2. On the Use of Running Trends as Summary Statistics for Univariate Time Series and Time Series Association

    OpenAIRE

    Trottini, Mario; Vigo, Isabel; Belda, Santiago

    2015-01-01

    Given a time series, running trends analysis (RTA) involves evaluating least squares trends over overlapping time windows of L consecutive time points, with overlap by all but one observation. This produces a new series called the “running trends series,” which is used as summary statistics of the original series for further analysis. In recent years, RTA has been widely used in climate applied research as summary statistics for time series and time series association. There is no doubt that ...
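
    RTA as described above is straightforward to compute; the sketch below evaluates the least-squares slope over every window of L consecutive points, with windows overlapping by all but one observation. The synthetic series in the example is illustrative only.

```python
import numpy as np

def running_trends(series, L):
    """Least-squares slope over every window of L consecutive points,
    windows overlapping by all but one observation."""
    series = np.asarray(series, dtype=float)
    t = np.arange(L)
    return np.array([np.polyfit(t, series[i:i + L], 1)[0]
                     for i in range(len(series) - L + 1)])

# Example: a noisy series whose underlying slope flips sign halfway through
rng = np.random.default_rng(0)
x = np.concatenate([0.1 * np.arange(50), 5 - 0.1 * np.arange(50)]) + rng.normal(0, 0.2, 100)
print(running_trends(x, L=20)[:5])   # early windows have slopes near +0.1
```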

  3. Effect of treadmill versus overground running on the structure of variability of stride timing.

    Science.gov (United States)

    Lindsay, Timothy R; Noakes, Timothy D; McGregor, Stephen J

    2014-04-01

    Gait timing dynamics of treadmill and overground running were compared. Nine trained runners ran treadmill and track trials at 80, 100, and 120% of preferred pace for 8 min. each. Stride time series were generated for each trial. To each series, detrended fluctuation analysis (DFA), power spectral density (PSD), and multiscale entropy (MSE) analysis were applied to infer the regime of control along the randomness-regularity axis. Compared to overground running, treadmill running exhibited a higher DFA and PSD scaling exponent, as well as lower entropy at non-preferred speeds. This indicates a more ordered control for treadmill running, especially at non-preferred speeds. The results suggest that the treadmill itself brings about greater constraints and requires increased voluntary control. Thus, the quantification of treadmill running gait dynamics does not necessarily reflect movement in overground settings.

  4. Evolution of the open-source data management system Rucio for LHC Run-3 and beyond ATLAS

    CERN Document Server

    Barisits, Martin-Stefan; The ATLAS collaboration

    2018-01-01

    Rucio, the distributed data management system of the ATLAS collaboration, already manages more than 330 Petabytes of physics data on the grid. Rucio has seen incremental improvements throughout LHC Run-2 and is currently being prepared for the HL-LHC era of the experiment. Alongside these improvements, the system is evolving into a full-scale generic data management system for applications beyond ATLAS, or even beyond high energy physics. This contribution focuses on the development roadmap of Rucio for LHC Run-3, covering event-level data management, generic metadata support, and increased usage of networks and tapes. At the same time Rucio is evolving beyond the original ATLAS use-case. This includes authentication beyond the WLCG ecosystem, generic database compatibility, deployment and packaging of the software stack in containers, and a project paradigm shift to a full-scale open source project.

  5. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have drawn interest, with a detailed analysis report being made. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs.
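
    The abstract does not spell out the TPR formulation, so the sketch below shows one common reading of "two-phase regression": a piecewise-linear fit with a single breakpoint found by grid search, used to extrapolate a task's elapsed time to 100% progress. The function names and the progress/elapsed-time framing are assumptions made for illustration, not the paper's method.

```python
import numpy as np

def two_phase_fit(x, y):
    """Fit y ~ piecewise-linear in x with one breakpoint chosen by grid search
    over the observed x values (assumes several distinct x values).
    Returns (breakpoint, left_coeffs, right_coeffs)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for bp in np.unique(x)[1:-2]:                 # keep at least two x values per side
        left, right = x <= bp, x > bp
        cl = np.polyfit(x[left], y[left], 1)
        cr = np.polyfit(x[right], y[right], 1)
        sse = (np.sum((np.polyval(cl, x[left]) - y[left]) ** 2) +
               np.sum((np.polyval(cr, x[right]) - y[right]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, bp, cl, cr)
    _, bp, cl, cr = best
    return bp, cl, cr

def predict_finish_time(progress, elapsed):
    """Toy use: fit elapsed time vs. task progress and extrapolate to progress = 1.0."""
    bp, cl, cr = two_phase_fit(progress, elapsed)
    coeffs = cr if bp < 1.0 else cl
    return np.polyval(coeffs, 1.0)

# Hypothetical progress fractions vs. seconds elapsed; the rate changes near 50%.
prog = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
elap = [5, 10, 15, 20, 30, 40, 50, 60]
print(predict_finish_time(prog, elap))   # extrapolated total run time (~80 s)
```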

  6. System and Component Software Specification, Run-time Verification and Automatic Test Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  7. Strong normalization by type-directed partial evaluation and run-time code generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1998-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.

  8. ANALYSIS OF POSSIBILITY TO AVOID A RUNNING-DOWN ACCIDENT BY TIMELY BRAKING

    Directory of Open Access Journals (Sweden)

    Sarayev, A.

    2013-06-01

    The circumstances under which the driver can stop the vehicle by applying timely braking before reaching the pedestrian crossing, or can decrease the speed to a safe limit so as to avoid a running-down accident, are considered.

  9. Strong Normalization by Type-Directed Partial Evaluation and Run-Time Code Generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1997-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.

  10. Investigations of timing during the schedule and reinforcement intervals with wheel-running reinforcement.

    Science.gov (United States)

    Belke, Terry W; Christie-Fougere, Melissa M

    2006-11-01

    Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement and the duration of the opportunity to run was varied across values of 15, 30, and 60s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset of the wheel-running reinforcement period. Further research is required to assess if timing occurs during a wheel-running reinforcement period.

  11. Design and Implementation of a New Run-time Life-cycle for Interactive Public Display Applications

    OpenAIRE

    Cardoso, Jorge C. S.; Perpétua, Alice

    2015-01-01

    Public display systems are becoming increasingly complex. They are moving from passive closed systems to open interactive systems that are able to accommodate applications from several independent sources. This shift needs to be accompanied by a more flexible and powerful application management. In this paper, we propose a run-time life-cycle model for interactive public display applications that addresses several shortcomings of current display systems. Our mo...

  12. Design Flow Instantiation for Run-Time Reconfigurable Systems: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Qu

    2007-12-01

    A reconfigurable system is a promising alternative for delivering both flexibility and performance at the same time. New reconfigurable technologies and technology-dependent tools have been developed, but a complete overview of the whole design flow for run-time reconfigurable systems is missing. In this work, we present a design flow instantiation for such systems using a real-life application. The design flow is roughly divided into two parts: system level and implementation. At system level, our support for hardware resource estimation and performance evaluation is applied. At implementation level, technology-dependent tools are used to realize the run-time reconfiguration. The design case is part of a WCDMA decoder on a commercially available reconfigurable platform. The results show that using run-time reconfiguration can save over 40% area when compared to a functionally equivalent fixed system and achieve a 30 times speedup in processing time when compared to a functionally equivalent pure software design.

  13. Running speed during training and percent body fat predict race time in recreational male marathoners.

    Science.gov (United States)

    Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52) and percent body fat remained significant predictors of marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. The present results suggest that low body fat and running speed during training close to race pace (about 11 km/h) are two key factors for a fast marathon race time in recreational male marathon runners.
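
    The quoted regression is simple enough to evaluate directly; the helper below just restates it, and the example inputs (15% body fat, 11 km/h training speed) are arbitrary illustrative values.

```python
def predicted_marathon_time(body_fat_pct, training_speed_kmh):
    """Race time in minutes from the regression model quoted in the abstract
    (r^2 = 0.44); inputs are percent body fat and training speed in km/h."""
    return 326.3 + 2.394 * body_fat_pct - 12.06 * training_speed_kmh

# Example: a runner with 15% body fat who trains at about 11 km/h
print(predicted_marathon_time(15.0, 11.0))   # ~229.6 min, i.e. roughly 3 h 50 min
```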

  14. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  15. Run-Time and Compiler Support for Programming in Adaptive Parallel Environments

    Directory of Open Access Journals (Sweden)

    Guy Edjlali

    1997-01-01

    For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of high-performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
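
    The data-redistribution step described above amounts to recomputing a block distribution for the new processor count and shipping the overlapping slices. The sketch below shows that bookkeeping for a 1-D block-distributed array; it is a generic illustration, not the interface of the library discussed in the article.

```python
def block_bounds(n, p):
    """Contiguous block distribution of n elements over p processors:
    returns a list of (lo, hi) index ranges, one per processor."""
    base, extra = divmod(n, p)
    bounds, lo = [], 0
    for rank in range(p):
        hi = lo + base + (1 if rank < extra else 0)
        bounds.append((lo, hi))
        lo = hi
    return bounds

def redistribution_plan(n, p_old, p_new):
    """For each old owner, list the index slices it must send to each new owner
    when the processor count changes at run time."""
    old, new = block_bounds(n, p_old), block_bounds(n, p_new)
    plan = []
    for src, (slo, shi) in enumerate(old):
        for dst, (dlo, dhi) in enumerate(new):
            lo, hi = max(slo, dlo), min(shi, dhi)
            if lo < hi and src != dst:
                plan.append((src, dst, (lo, hi)))
    return plan

print(redistribution_plan(n=12, p_old=4, p_new=3))
# [(1, 0, (3, 4)), (2, 1, (6, 8)), (3, 2, (9, 12))]
```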

  16. Relationship between running kinematic changes and time limit at vVO2max

    Directory of Open Access Journals (Sweden)

    Leonardo De Lucca

    2012-06-01

    Exhaustive running at maximal oxygen uptake velocity (vVO2max) can alter running kinematic parameters and increase energy cost over time. The aims of the present study were to compare characteristics of ankle and knee kinematics during running at vVO2max and to verify the relationship between changes in kinematic variables and time limit (Tlim). Eleven male volunteers, recreational players of team sports, performed an incremental running test until volitional exhaustion to determine vVO2max and a constant velocity test at vVO2max. Subjects were filmed continuously from the left sagittal plane at 210 Hz for further kinematic analysis. The maximal plantar flexion during swing (p<0.01) was the only variable that increased significantly from beginning to end of the run. Increase in ankle angle at contact was the only variable related to Tlim (r=0.64; p=0.035) and explained 34% of the performance in the test. These findings suggest that the individuals under study maintained a stable running style at vVO2max and that increase in plantar flexion explained the performance in this test when it was applied in non-runners.

  17. Adaptive Embedded Systems – Challenges of Run-Time Resource Management

    DEFF Research Database (Denmark)

    Understanding and efficiently controlling the dynamic behavior of adaptive embedded systems is a challenging endeavor. The challenges come from the often very complicated interplay between the application, the application mapping, and the underlying hardware architecture. With MPSoC, we have the technology to design and fabricate dynamically reconfigurable hardware platforms. However, such platforms will pose new challenges to tools and methods to efficiently explore these platforms at run-time. This talk will address some of the challenges of run-time resource management in adaptive embedded systems.

  18. Shorter Ground Contact Time and Better Running Economy: Evidence From Female Kenyan Runners.

    Science.gov (United States)

    Mooses, Martin; Haile, Diresibachew W; Ojiambo, Robert; Sang, Meshack; Mooses, Kerli; Lane, Amy R; Hackney, Anthony C

    2018-06-25

    Mooses, M, Haile, DW, Ojiambo, R, Sang, M, Mooses, K, Lane, AR, and Hackney, AC. Shorter ground contact time and better running economy: evidence from female Kenyan runners. J Strength Cond Res XX(X): 000-000, 2018 - Previously, it has been concluded that the improvement in running economy (RE) might be considered as a key to continued improvement in performance when no further increase in VO2max is observed. To date, RE has been extensively studied among male East African distance runners. By contrast, there is a paucity of data on the RE of female East African runners. A total of 10 female Kenyan runners performed 3 × 1,600-m steady-state run trials on a flat outdoor clay track (400-m lap) at intensities that corresponded to their everyday training intensities for easy, moderate, and fast running. Running economy together with gait characteristics was determined. Participants showed moderate to very good RE in the first (202 ± 26 ml·kg-1·km-1) and second (188 ± 12 ml·kg-1·km-1) run trials, respectively. Correlation analysis revealed a significant relationship between ground contact time (GCT) and RE in the second run (r = 0.782; p = 0.022), which represented the intensity of the anaerobic threshold. This study is the first to report the RE and gait characteristics of East African female athletes measured under everyday training settings. We provide evidence that GCT is associated with the superior RE of the female Kenyan runners.

  19. Comparing internal and external run-time coupling of CFD and building energy simulation software

    NARCIS (Netherlands)

    Djunaedy, E.; Hensen, J.L.M.; Loomans, M.G.L.C.

    2004-01-01

    This paper describes a comparison between internal and external run-time coupling of CFD and building energy simulation software. Internal coupling can be seen as the "traditional" way of developing software, i.e. the capabilities of existing software are expanded by merging codes. With external

  20. Ada Run Time Support Environments and a common APSE Interface Set. [Ada Programming Support Environment

    Science.gov (United States)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The paper discusses the importance of linking Ada Run Time Support Environments to the Common Ada Programming Support Environment (APSE) Interface Set (CAIS). A non-stop network operating systems scenario is presented to serve as a forum for identifying the important issues. The network operating system exemplifies the issues involved in the NASA Space Station data management system.

  1. Differences in ground contact time explain the less efficient running economy in North African runners.

    Science.gov (United States)

    Santos-Concejero, J; Granados, C; Irazusta, J; Bidaurrazaga-Letona, I; Zabala-Lili, J; Tam, N; Gil, S M

    2013-09-01

    The purpose of this study was to investigate the relationship between biomechanical variables and running economy in North African and European runners. Eight North African and 13 European male runners of the same athletic level ran 4-minute stages on a treadmill at varying set velocities. During the test, biomechanical variables such as ground contact time, swing time, stride length, stride frequency, stride angle and the different sub-phases of ground contact were recorded using an optical measurement system. Additionally, oxygen uptake was measured to calculate running economy. The European runners were more economical than the North African runners at 19.5 km·h⁻¹, presented lower ground contact times at 18 km·h⁻¹ and 19.5 km·h⁻¹, and experienced a later propulsion sub-phase at 10.5 km·h⁻¹, 12 km·h⁻¹, 15 km·h⁻¹, 16.5 km·h⁻¹ and 19.5 km·h⁻¹ than the North African runners (P < 0.05). Running economy at 19.5 km·h⁻¹ was negatively correlated with swing time (r = -0.53) and stride angle (r = -0.52), whereas it was positively correlated with ground contact time (r = 0.53). Within the constraints of extrapolating these findings, the less efficient running economy in North African runners may imply that their outstanding performance at international athletic events is not linked to running efficiency. Further, the differences in metabolic demand seem to be associated with differing biomechanical characteristics during ground contact, including longer contact times.

  2. Infrared observations of gravitational-wave sources in Advanced LIGO's second observing run

    Science.gov (United States)

    Pound Singer, Leo; Kasliwal, Mansi; Lau, Ryan; Cenko, Bradley; Global Relay of Observatories Watching Transients Happen (GROWTH)

    2018-01-01

    Advanced LIGO observed gravitational waves (GWs) from a binary black hole merger in its first observing run (O1) in September 2015. It is anticipated that LIGO and Virgo will soon detect the first binary neutron star mergers. The most promising electromagnetic counterparts to such events are kilonovae: fast, faint transients powered by the radioactive decay of the r-process ejecta. Joint gravitational-wave and electromagnetic observations of such transients hold the key to many longstanding problems, from the nature of short GRBs to the cosmic production sites of the r-process elements to "standard siren" cosmology. Due to the large LIGO/Virgo error regions of 100 deg², synoptic survey telescopes have dominated the search for LIGO counterparts. Due to the paucity of infrared instruments with multi-deg² fields of view, infrared observations have been lacking. Near-infrared emission should not only be a more robust signature of kilonovae than optical emission (independent of viewing angle), but should also be several magnitudes brighter and be detectable for much longer, weeks after merger rather than days. In Advanced LIGO's second observing run, we used the FLAMINGOS-2 instrument on Gemini-South to hunt for the near-infrared emission from GW sources by targeted imaging of the most massive galaxies in the LIGO/Virgo localization volumes. We present the results of this campaign, rates, and the interpretation of our near-infrared imaging and spectroscopy. We show that leveraging large-scale structure and targeted imaging of the most massive ~10 galaxies in a LIGO/Virgo localization volume may be a surprisingly effective strategy for finding the electromagnetic counterpart.

  3. Short- and long-run time-of-use price elasticities in Swiss residential electricity demand

    International Nuclear Information System (INIS)

    Filippini, Massimo

    2011-01-01

    This paper presents an empirical analysis of the residential demand for electricity by time-of-day. This analysis has been performed using aggregate data at the city level for 22 Swiss cities for the period 2000-2006. For this purpose, we estimated two log-log demand equations for peak and off-peak electricity consumption using static and dynamic partial adjustment approaches. These demand functions were estimated using several econometric approaches for panel data, for example LSDV and RE for static models, and LSDV and corrected LSDV estimators for dynamic models. The aim of this empirical analysis has been to highlight some of the characteristics of the Swiss residential electricity demand. The estimated short-run own-price elasticities are lower than 1, whereas in the long run these values are higher than 1. The estimated short-run and long-run cross-price elasticities are positive. This result shows that peak and off-peak electricity are substitutes. In this context, time-differentiated prices should provide an economic incentive to customers so that they can modify consumption patterns by reducing peak demand and shifting electricity consumption from peak to off-peak periods. - Highlights: → Empirical analysis of the residential demand for electricity by time-of-day. → Estimators for dynamic panel data. → Peak and off-peak residential electricity are substitutes.
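
    The static and dynamic specifications mentioned above are the standard log-log demand forms of the energy-demand literature; a minimal sketch of the dynamic partial adjustment equation and the implied short- and long-run elasticities, in generic notation that is not necessarily the paper's own, is:

        % Dynamic (partial adjustment) log-log demand for peak-period electricity;
        % the off-peak equation is analogous. Symbols are generic placeholders.
        \ln E^{peak}_{it} = \alpha_i + \lambda \ln E^{peak}_{i,t-1}
                          + \beta_{p} \ln P^{peak}_{it} + \beta_{o} \ln P^{off}_{it}
                          + \gamma' \ln X_{it} + \varepsilon_{it}
        % Short-run own-price elasticity:  \beta_{p}
        % Long-run own-price elasticity:   \beta_{p} / (1 - \lambda)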

  4. Integrating software testing and run-time checking in an assertion verification framework

    OpenAIRE

    Mera, E.; López García, Pedro; Hermenegildo, Manuel V.

    2009-01-01

    We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling runtime checks for (parts of) assertions which cannot be verified at compile-time via program transformation. This transformation allows checking preconditions and postconditions, including conditional...
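
    The run-time checking part of this idea can be illustrated with a small Python sketch: assertions that could not be discharged at compile time are turned into wrappers that check preconditions and postconditions during execution. This is only a loose analogue of the unified assertion language described above (the framework itself targets logic programs), and the function and predicate names below are invented for illustration.

        import functools

        def runtime_check(pre=None, post=None):
            """Wrap a function with run-time pre/postcondition checks, mimicking a
            program transformation applied to assertions not verified statically."""
            def transform(func):
                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    if pre is not None and not pre(*args, **kwargs):
                        raise AssertionError(f"precondition of {func.__name__} violated")
                    result = func(*args, **kwargs)
                    if post is not None and not post(result, *args, **kwargs):
                        raise AssertionError(f"postcondition of {func.__name__} violated")
                    return result
                return wrapper
            return transform

        @runtime_check(pre=lambda xs: all(x >= 0 for x in xs),
                       post=lambda result, xs: result >= 0)
        def total(xs):
            return sum(xs)

        print(total([1, 2, 3]))    # passes both checks
        # total([-1, 2])           # would raise AssertionError at run time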

  5. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

    This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate real-time behaviour at run-time because the real-time aspects of distributed process control systems are considered to be among the hardest and most interesting to handle.

  6. Novel Real-time Calibration and Alignment Procedure for LHCb Run II

    CERN Multimedia

    Prouve, Claire

    2016-01-01

    In order to achieve optimal detector performance the LHCb experiment has introduced a novel real-time detector alignment and calibration strategy for Run II of the LHC. For the alignment tasks, data is collected and processed at the beginning of each fill, while the calibrations are performed for each run. This real-time alignment and calibration allows the same constants to be used in both the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. Additionally, the newly computed alignment and calibration constants can be used instantly in the trigger, making it more efficient. The online alignment and calibration of the RICH detectors also enables the use of hadronic particle identification in the trigger. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the LHCb trigger. An overview of all alignment and calibration tasks is presented and their performance is shown.

  7. Implementing Run-time Evaluation of Distributed Timing Constraints in a Micro Kernel

    DEFF Research Database (Denmark)

    Kristensen, C.H.; Drejer, N.; Nielsen, Jens Frederik Dalsgaard

    In the present paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.

  8. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that
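
    The run-time autotuning component described above essentially times candidate configurations of the heterogeneous system and keeps the fastest one. A minimal, dependency-free Python sketch of that idea follows; the toy kernel and its block_size parameter are invented placeholders, not part of CUSH.

        import time

        def autotune(kernel, candidate_configs, workload, repeats=3):
            """Benchmark each candidate configuration on the given workload and
            return the configuration with the lowest median run time."""
            results = []
            for cfg in candidate_configs:
                timings = []
                for _ in range(repeats):
                    start = time.perf_counter()
                    kernel(workload, **cfg)
                    timings.append(time.perf_counter() - start)
                timings.sort()
                results.append((timings[len(timings) // 2], cfg))
            return min(results, key=lambda r: r[0])[1]

        # Toy "kernel": the block size changes how the data is traversed.
        def sum_blocks(data, block_size):
            total = 0.0
            for i in range(0, len(data), block_size):
                total += sum(data[i:i + block_size])
            return total

        configs = [{"block_size": b} for b in (64, 256, 1024, 4096)]
        best = autotune(sum_blocks, configs, list(range(100_000)))
        print("fastest configuration:", best)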

  9. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently, it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems. We present an approach to protecting sensitive assets at run-time that we denote split-enforcement, and provide an implementation for ARM-powered devices using ARM TrustZone security extensions. We design, build, and evaluate a prototype Trusted Cell that provides trusted services. We also present the first generic TrustZone driver...

  10. Android Application Install-time Permission Validation and Run-time Malicious Pattern Detection

    OpenAIRE

    Ma, Zhongmin

    2014-01-01

    The open source structure of Android applications introduces security vulnerabilities that can be readily exploited by third-party applications. We address certain vulnerabilities at both installation and runtime using machine learning. Effective classification techniques with neural networks can be used to verify the application categories on installation. We devise a novel application category verification methodology that involves machine learning the application permissions...
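
    The install-time part of such an approach treats the permissions requested by an application as a feature vector and checks them against the declared category. The Python sketch below uses a toy permission list and a nearest-centroid rule in place of the neural networks mentioned in the abstract; all names, categories and training data are hypothetical.

        # Toy install-time check: represent an app by a binary vector of requested
        # permissions and verify its declared category against the closest category
        # centroid learned from known-good apps.
        PERMISSIONS = ["INTERNET", "SEND_SMS", "READ_CONTACTS", "CAMERA", "ACCESS_FINE_LOCATION"]

        def to_vector(perms):
            return [1.0 if p in perms else 0.0 for p in PERMISSIONS]

        TRAINING = {
            "photo":     [["CAMERA", "INTERNET"], ["CAMERA"]],
            "messaging": [["INTERNET", "SEND_SMS", "READ_CONTACTS"],
                          ["SEND_SMS", "READ_CONTACTS"]],
        }

        def centroid(vectors):
            return [sum(col) / len(vectors) for col in zip(*vectors)]

        CENTROIDS = {cat: centroid([to_vector(p) for p in perms])
                     for cat, perms in TRAINING.items()}

        def verify_category(declared_category, requested_permissions):
            v = to_vector(requested_permissions)
            def dist(c):
                return sum((a - b) ** 2 for a, b in zip(v, c))
            best = min(CENTROIDS, key=lambda cat: dist(CENTROIDS[cat]))
            return best == declared_category

        # A "photo" app that suddenly wants SMS and contacts looks suspicious.
        print(verify_category("photo", ["CAMERA", "SEND_SMS", "READ_CONTACTS"]))  # False
        print(verify_category("photo", ["CAMERA", "INTERNET"]))                   # True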

  11. LHCb : Novel real-time alignment and calibration of the LHCb Detector in Run2

    CERN Multimedia

    Tobin, Mark

    2015-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure will improve the quality of the online alignment. For example, the vertex locator is retracted and reinserted for stable beam collisions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new realtime alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The online calibration facilitates the use of hadronic particle identification using the RICH detectors at the trigger level. T...

  12. Novel real-time alignment and calibration of the LHCb detector in Run II

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Z., E-mail: zhirui.xu@epfl.ch; Tobin, M.

    2016-07-11

    An automatic real-time alignment and calibration strategy of the LHCb detector was developed for Run II. Thanks to the online calibration, tighter event selection criteria can be used in the trigger. Furthermore, the online calibration facilitates the use of hadronic particle identification using the Ring Imaging Cherenkov (RICH) detectors at the trigger level. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  13. Novel real-time alignment and calibration of the LHCb detector in Run II

    CERN Document Server

    AUTHOR|(CDS)2086132; Tobin, Mark

    2016-01-01

    An automatic real-time alignment and calibration strategy of the LHCb detector was developed for Run II. Thanks to the online calibration, tighter event selection criteria can be used in the trigger. Furthermore, the online calibration facilitates the use of hadronic particle identification using the Ring Imaging Cherenkov (RICH) detectors at the trigger level. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  14. Repetitively pulsed UV radiation source based on a run-away electron preionised diffuse discharge in nitrogen

    Energy Technology Data Exchange (ETDEWEB)

    Baksht, E Kh; Burachenko, A G; Lomaev, M I; Panchenko, A N; Tarasenko, V F [Institute of High Current Electronics, Siberian Branch, Russian Academy of Sciences, Tomsk (Russian Federation)

    2015-04-30

    An extended repetitively pulsed source of spontaneous UV radiation is fabricated, which may also be used for producing laser radiation. Voltage pulses with an incident wave amplitude of up to 30 kV, a half-amplitude duration of ∼4 ns and a rise time of ∼2.5 ns are applied to a gap with a nonuniform electric field. For an excitation region length of 35 cm and a nitrogen pressure of 30 – 760 Torr, a diffusive discharge up to a pulse repetition rate of 2 kHz is produced without using an additional system for gap preionisation. An investigation is made of the plasma of the run-away electron preionised diffuse discharge. Using a CCD camera it is found that the dense diffused plasma fills the gap in a time shorter than 1 ns. X-ray radiation is recorded from behind the foil anode throughout the pressure range under study; a supershort avalanche electron beam is recorded by the collector electrode at pressures below 100 Torr. (laser applications and other topics in quantum electronics)

  15. Repetitively pulsed UV radiation source based on a run-away electron preionised diffuse discharge in nitrogen

    Science.gov (United States)

    Baksht, E. Kh; Burachenko, A. G.; Lomaev, M. I.; Panchenko, A. N.; Tarasenko, V. F.

    2015-04-01

    An extended repetitively pulsed source of spontaneous UV radiation is fabricated, which may also be used for producing laser radiation. Voltage pulses with an incident wave amplitude of up to 30 kV, a half-amplitude duration of ~4 ns and a rise time of ~2.5 ns are applied to a gap with a nonuniform electric field. For an excitation region length of 35 cm and a nitrogen pressure of 30 - 760 Torr, a diffusive discharge up to a pulse repetition rate of 2 kHz is produced without using an additional system for gap preionisation. An investigation is made of the plasma of the run-away electron preionised diffuse discharge. Using a CCD camera it is found that the dense diffused plasma fills the gap in a time shorter than 1 ns. X-ray radiation is recorded from behind the foil anode throughout the pressure range under study; a supershort avalanche electron beam is recorded by the collector electrode at pressures below 100 Torr.

  16. Novel Real-time Alignment and Calibration of the LHCb detector in Run2

    Science.gov (United States)

    Martinelli, Maurizio; LHCb Collaboration

    2017-10-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  17. Run-Time HW/SW Scheduling of Data Flow Applications on Reconfigurable Architectures

    Directory of Open Access Journals (Sweden)

    Ghaffari Fakhreddine

    2009-01-01

    This paper presents an efficient dynamic and run-time Hardware/Software scheduling approach. This scheduling heuristic consists of mapping online the different tasks of a highly dynamic application in such a way that the total execution time is minimized. We consider soft real-time data-flow graph oriented applications for which the execution time is a function of the nature of the input data. The target architecture is composed of two processors connected to a dynamically reconfigurable hardware accelerator. Our approach takes advantage of the reconfiguration property of the considered architecture to adapt the treatment to the system dynamics. We compare our heuristic with another similar approach. We present the results of our scheduling method on several image processing applications. Our experiments include simulation and synthesis results on a Virtex V-based platform. These results show better performance than existing methods.
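
    The essence of such a run-time heuristic is to map each task, as it becomes ready, onto the resource that is expected to finish it earliest. The Python sketch below shows a greatly simplified greedy online policy for two processors and one reconfigurable accelerator; the task names and per-resource execution times are invented, and the sketch ignores data-flow dependencies and reconfiguration overheads that the real heuristic must account for.

        # Greedy online mapping of data-flow tasks onto two CPUs and one
        # reconfigurable hardware accelerator: each arriving task goes to the
        # resource that finishes it earliest, given the current resource loads.
        def schedule(tasks, resources):
            finish_time = {r: 0.0 for r in resources}
            mapping = {}
            for name, exec_times in tasks:            # tasks arrive in data-flow order
                best = min(resources,
                           key=lambda r: finish_time[r] + exec_times[r])
                finish_time[best] += exec_times[best]
                mapping[name] = best
            return mapping, max(finish_time.values())  # mapping and makespan

        resources = ["cpu0", "cpu1", "fpga"]
        tasks = [  # per-resource execution times (ms); the accelerator favours filters
            ("filter_a", {"cpu0": 8.0, "cpu1": 8.0, "fpga": 2.0}),
            ("filter_b", {"cpu0": 8.0, "cpu1": 8.0, "fpga": 2.0}),
            ("control",  {"cpu0": 1.0, "cpu1": 1.0, "fpga": 5.0}),
            ("merge",    {"cpu0": 4.0, "cpu1": 4.0, "fpga": 3.0}),
        ]
        mapping, makespan = schedule(tasks, resources)
        print(mapping, makespan)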

  18. Novel real-time alignment and calibration of the LHCb detector in Run2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00144085

    2017-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructur...

  19. Real-time alignment and calibration of the LHCb Detector in Run II

    CERN Multimedia

    Dujany, Giulio

    2016-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run2, LHCb has a new real-time detector alignment and calibration to allow equivalent performance in the online and offline reconstruction to be reached. This offers the opportunity to optimise the event selection by applying stronger constraints, and to use hadronic particle identification at the trigger level. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operational and physics performance points of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.

  20. Real-time alignment and calibration of the LHCb Detector in Run II

    CERN Multimedia

    Dujany, Giulio

    2015-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run2, LHCb will have a new real-time detector alignment and calibration to allow equivalent performance in the online and offline reconstruction to be reached. This offers the opportunity to optimise the event selection by applying stronger constraints, and to use hadronic particle identification at the trigger level. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operational and physics performance points of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.

  1. Real time data analysis with the ATLAS Trigger at the LHC in Run-2

    CERN Document Server

    Beauchemin, Pierre-Hugues; The ATLAS collaboration

    2018-01-01

    The trigger selection capabilities of the ATLAS detector have been significantly enhanced for the LHC Run-2 in order to cope with the higher event rates and with the large number of simultaneous interactions (pile-up) per proton-proton bunch crossing. A new hardware system, designed to analyse real-time event topologies at Level-1, came to full use in 2017. A hardware-based track reconstruction system, expected to be used in real time in 2018, is designed to provide track information to the high-level software trigger at its full input rate. The high-level trigger selections rely largely on offline-like reconstruction techniques, and in some cases multivariate analysis methods. Despite the sudden change in LHC operations during the second half of 2017, which caused an increase in pile-up and therefore also in CPU usage of the trigger algorithms, the set of triggers (the so-called trigger menu) running online has undergone only minor modifications thanks to the robustness and redundancy of the trigger system, a...

  2. Real time data analysis with the ATLAS trigger at the LHC in Run-2

    CERN Document Server

    Beauchemin, Pierre-Hugues; The ATLAS collaboration

    2018-01-01

    The trigger selection capabilities of the ATLAS detector have been significantly enhanced for the LHC Run-2 in order to cope with the higher event rates and with the large number of simultaneous interactions (pile-up) per proton-proton bunch crossing. A new hardware system, designed to analyse real-time event topologies at Level-1, came to full use in 2017. A hardware-based track reconstruction system, expected to be used in real time in 2018, is designed to provide track information to the high-level software trigger at its full input rate. The high-level trigger selections rely largely on offline-like reconstruction techniques, and in some cases multi-variate analysis methods. Despite the sudden change in LHC operations during the second half of 2017, which caused an increase in pile-up and therefore also in CPU usage of the trigger algorithms, the set of triggers (the so-called trigger menu) running online has undergone only minor modifications thanks to the robustness and redundancy of the trigger system, ...

  3. The Advanced Photon Source injection timing system

    International Nuclear Information System (INIS)

    Lenkszus, F.R.; Laird, R.

    1995-01-01

    The Advanced Photon Source consists of five accelerators. The injection timing system provides the signals required to cause a bunch emitted from the electron gun to navigate through intermediate accelerators to a specific bucket (1 out of 1296) within the storage ring. Two linacs and a positron accumulator ring operate at 60Hz while a booster synchrotron ramps and injects into the storage ring at 2Hz. The distributed, modular VME/VXI-based injection timing system is controlled by two EPICS-based input/output controllers (IOCs). Over 40 VME/VXI cards have been developed to implement the system. Card types range from 352MHz VXI timing modules to VME-based fiber optic fanouts and logic translators/drivers. All timing is distributed with fiber optics. Timing references are derived directly from machine low-level rf of 9.77MHz and 352MHz. The timing references provide triggers to programmable delay generators. Three grades of timing are provided. Precision timing is derived from commercial digital delay generators, intermediate precision timing is obtained from VXI 8-channel digital delay generators which provide timing with 25ns peak-to-peak jitter, and modest precision timing is provided by the APS event system. The timing system is fully integrated into the APS EPICS-based control system

  4. Reinforcement of drinking by running: effect of fixed ratio and reinforcement time

    Science.gov (United States)

    Premack, David; Schaeffer, Robert W.; Hundt, Alan

    1964-01-01

    Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT. PMID:14120150

  5. REINFORCEMENT OF DRINKING BY RUNNING: EFFECT OF FIXED RATIO AND REINFORCEMENT TIME.

    Science.gov (United States)

    PREMACK, D; SCHAEFFER, R W; HUNDT, A

    1964-01-01

    Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT.

  6. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definitions of error types, and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice.
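
    For error type c), the monitoring approach amounts to timestamping the events emitted by application tasks and checking deadline constraints between them while the system runs. The following Python sketch illustrates the principle only; the event names and the 50 ms deadline are hypothetical.

        # Run-time detection of timing errors: the system software records
        # timestamped events emitted by application tasks and checks that each
        # response event follows its trigger within a deadline.
        class TimingMonitor:
            def __init__(self, trigger, response, deadline_s):
                self.trigger, self.response, self.deadline_s = trigger, response, deadline_s
                self.pending = []          # timestamps of unanswered triggers
                self.violations = []

            def observe(self, event, timestamp):
                if event == self.trigger:
                    self.pending.append(timestamp)
                elif event == self.response and self.pending:
                    start = self.pending.pop(0)
                    if timestamp - start > self.deadline_s:
                        self.violations.append((start, timestamp))

        monitor = TimingMonitor("sensor_sampled", "valve_actuated", deadline_s=0.050)
        trace = [("sensor_sampled", 0.000), ("valve_actuated", 0.030),
                 ("sensor_sampled", 0.100), ("valve_actuated", 0.180)]  # 80 ms: too late
        for event, t in trace:
            monitor.observe(event, t)
        print(monitor.violations)   # [(0.1, 0.18)]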

  7. Radionuclide inventories for short run-time space nuclear reactor systems

    International Nuclear Information System (INIS)

    Coats, R.L.

    1993-01-01

    Space Nuclear Reactor Systems, especially those used for propulsion, often have expected operation run times much shorter than those for land-based nuclear power plants. This produces substantially different radionuclide inventories to be considered in the safety analyses of space nuclear systems. This presentation describes an analysis utilizing ORIGEN2 and DKPOWER to provide comparisons among representative land-based and space systems. These comparisons enable early, conceptual considerations of safety issues and features in the preliminary design phases of operational systems, test facilities, and operations by identifying differences between the requirements for space systems and the established practice for land-based power systems. Early indications are that separation distance is much more effective as a safety measure for space nuclear systems than for power reactors because greater decay of the radionuclide activity occurs during the time to transport the inventory a given distance. In addition, the inventories of long-lived actinides are very low for space reactor systems

  8. Design of the Advanced Light Source timing system

    International Nuclear Information System (INIS)

    Fahmie, M.

    1993-05-01

    The Advanced Light Source (ALS) is a third generation synchrotron radiation facility, and as such, has several unique timing requirements. Arbitrary Storage Ring filling patterns and high single bunch purity requirements demand a highly stable, low jitter timing system with the flexibility to reconfigure on a pulse-to-pulse basis. This modular system utilizes a highly linear Gauss Clock with ''on the fly'' programmable setpoints to track a free-running Booster ramping magnet and provides digitally programmable sequencing and delay for Electron Gun, Linac, Booster Ring, and Storage Ring RF, Pulsed Magnet, and Instrumentation systems. It has proven itself over the last year of accelerator operation to be reliable and rock solid

  9. Run-time Phenomena in Dynamic Software Updating: Causes and Effects

    DEFF Research Database (Denmark)

    Gregersen, Allan Raundahl; Jørgensen, Bo Nørregaard

    2011-01-01

    The development of a dynamic software updating system for statically-typed object-oriented programming languages has turned out to be a challenging task. Despite the fact that the present state of the art in dynamic updating systems, like JRebel, Dynamic Code Evolution VM, JVolve and Javeleon, all provide very transparent and flexible technical solutions to dynamic updating, case studies have shown that designing dynamically updatable applications still remains a challenging task. This challenge has its roots in a number of run-time phenomena that are inherent to dynamic updating of applications written in statically-typed object-oriented programming languages. In this paper, we present our experience from developing dynamically updatable applications using a state-of-the-art dynamic updating system for Java. We believe that the findings presented in this paper provide an important step towards...

  10. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
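
    At its core, such a simulation executive is a loop that calls user-supplied model jobs at their scheduled rates and advances the state with a numerical integrator. The Python sketch below illustrates that idea with a falling-ball model and a simple Euler integrator; it is not Trick's actual API, and all names are illustrative.

        def run(state, derivative, jobs, dt, t_end):
            """Minimal time-stepped executive: cyclic jobs run at fixed rates and a
            simple Euler step advances the state each frame."""
            steps = int(round(t_end / dt))
            for i in range(steps):
                t = i * dt
                for period, job in jobs:                  # scheduled (cyclic) jobs
                    if i % int(round(period / dt)) == 0:
                        job(t, state)
                rates = derivative(t, state)              # derivative + Euler integration
                for key, rate in rates.items():
                    state[key] += rate * dt
            return state

        # User-supplied model: a ball in free fall, plus a 0.1 s logging job.
        def ball_derivative(t, s):
            return {"pos": s["vel"], "vel": -9.81}

        def log_job(t, s):
            print(f"t={t:.2f}s  pos={s['pos']:.2f} m  vel={s['vel']:.2f} m/s")

        final = run({"pos": 100.0, "vel": 0.0}, ball_derivative,
                    jobs=[(0.1, log_job)], dt=0.01, t_end=0.5)
        print("final state:", final)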

  11. Novel time-dependent alignment of the ATLAS Inner Detector in the LHC Run 2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00386283; The ATLAS collaboration

    2016-01-01

    ATLAS is a multipurpose experiment at the LHC proton-proton collider. Its physics goals require an unbiased and high resolution measurement of the charged particle kinematic parameters. These critically depend on the layout and performance of the tracking system and the quality of the alignment of its components. For the LHC Run 2, the system has been upgraded with the installation of a new pixel layer, the Insertable B-layer (IBL). The ATLAS Inner Detector alignment framework has been adapted and upgraded to correct for very short time-scale movements of the sub-detectors. In particular, a mechanical distortion of the IBL staves of up to 20 μm and a vertical displacement of the Pixel detector of ~6 μm have been observed during data-taking. The techniques used to correct for these effects and to match the required Inner Detector performance will be presented.

  12. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently, it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems. We implement our support in the Linux operating system and are in the process of making this driver part of the mainline Linux kernel.

  13. Supporting Multiprocessors in the Icecap Safety-Critical Java Run-Time Environment

    DEFF Research Database (Denmark)

    Zhao, Shuai; Wellings, Andy; Korsholm, Stephan Erbs

    The current version of the Safety Critical Java (SCJ) specification defines three compliance levels. Level 0 targets single-processor programs while Level 1 and 2 can support multiprocessor platforms. Level 1 programs must be fully partitioned but Level 2 programs can also be more globally scheduled. As of yet, there is no official Reference Implementation for SCJ. However, the icecap project has produced a Safety-Critical Java Run-time Environment based on the Hardware-near Virtual Machine (HVM). This supports SCJ at all compliance levels and provides an implementation of the safety-critical Java (javax.safetycritical) package. This is still work-in-progress and lacks certain key features. Among these is the ability to support multiprocessor platforms. In this paper, we explore two possible options for adding multiprocessor support to this environment: the “green thread” and the “native thread” approaches.

  14. When Free Isn't Free: The Realities of Running Open Source in School

    Science.gov (United States)

    Derringer, Pam

    2009-01-01

    Despite the last few years' growth in awareness of open-source software in schools and the potential savings it represents, its widespread adoption is still hampered. Randy Orwin, technology director of the Bainbridge Island School District in Washington State and a strong open-source advocate, cautions that installing an open-source…

  15. Running vacuum in the Universe and the time variation of the fundamental constants of Nature

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsch, Harald [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitaet Muenchen, Physik-Department, Munich (Germany); Sola, Joan [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitat de Barcelona, Departament de Fisica Quantica i Astrofisica, Barcelona, Catalonia (Spain); Universitat de Barcelona (ICCUB), Institute of Cosmos Sciences, Barcelona, Catalonia (Spain); Nunes, Rafael C. [Universidade Federal de Juiz de Fora, Dept. de Fisica, Juiz de Fora, MG (Brazil)

    2017-03-15

    We compute the time variation of the fundamental constants (such as the ratio of the proton mass to the electron mass, the strong coupling constant, the fine-structure constant and Newton's constant) within the context of the so-called running vacuum models (RVMs) of the cosmic evolution. Recently, compelling evidence has been provided that these models are able to fit the main cosmological data (SNIa+BAO+H(z)+LSS+BBN+CMB) significantly better than the concordance ΛCDM model. Specifically, the vacuum parameters of the RVM (i.e. those responsible for the dynamics of the vacuum energy) prove to be nonzero at a confidence level ≳ 3σ. Here we use such remarkable status of the RVMs to make definite predictions on the cosmic time variation of the fundamental constants. It turns out that the predicted variations are close to the present observational limits. Furthermore, we find that the time evolution of the dark matter particle masses should be crucially involved in the total mass variation of our Universe. A positive measurement of this kind of effect could be interpreted as strong support for the "micro-macro connection" (viz. the dynamical feedback between the evolution of the cosmological parameters and the time variation of the fundamental constants of the microscopic world), previously proposed by two of us (HF and JS). (orig.)

  16. Haemoglobin mass and running time trial performance after recombinant human erythropoietin administration in trained men.

    Directory of Open Access Journals (Sweden)

    Jérôme Durussel

    Recombinant human erythropoietin (rHuEpo) increases haemoglobin mass (Hbmass) and maximal oxygen uptake (VO2max). PURPOSE: This study defined the time course of changes in Hbmass and VO2max as well as running time trial performance following 4 weeks of rHuEpo administration to determine whether the laboratory observations would translate into actual improvements in running performance in the field. METHODS: 19 trained men received rHuEpo injections of 50 IU·kg⁻¹ body mass every two days for 4 weeks. Hbmass was determined weekly using the optimized carbon monoxide rebreathing method until 4 weeks after administration. VO2max and 3,000 m time trial performance were measured pre and post administration and at the end of the study. RESULTS: Relative to baseline, running performance significantly improved by ∼6% after administration (10:30±1:07 min:sec vs. 11:08±1:15 min:sec, p<0.001) and remained significantly enhanced by ∼3% 4 weeks after administration (10:46±1:13 min:sec, p<0.001), while VO2max was also significantly increased post administration (60.7±5.8 mL·min⁻¹·kg⁻¹ vs. 56.0±6.2 mL·min⁻¹·kg⁻¹, p<0.001) and remained significantly increased 4 weeks after rHuEpo (58.0±5.6 mL·min⁻¹·kg⁻¹, p = 0.021). Hbmass was significantly increased at the end of administration compared to baseline (15.2±1.5 g·kg⁻¹ vs. 12.7±1.2 g·kg⁻¹, p<0.001). The rate of decrease in Hbmass toward baseline values post rHuEpo was similar to that of the increase during administration (-0.53 g·kg⁻¹·wk⁻¹, 95% confidence interval (CI) (-0.68, -0.38) vs. 0.54 g·kg⁻¹·wk⁻¹, CI (0.46, 0.63)), but Hbmass was still significantly elevated 4 weeks after administration compared to baseline (13.7±1.1 g·kg⁻¹, p<0.001). CONCLUSION: Running performance was improved following 4 weeks of rHuEpo and remained elevated 4 weeks after administration compared to baseline. These field performance effects coincided with r...

  17. Effect of Light/Dark Cycle on Wheel Running and Responding Reinforced by the Opportunity to Run Depends on Postsession Feeding Time

    Science.gov (United States)

    Belke, T. W.; Mondona, A. R.; Conrad, K. M.; Poirier, K. F.; Pickering, K. L.

    2008-01-01

    Do rats run and respond at a higher rate to run during the dark phase when they are typically more active? To answer this question, Long Evans rats were exposed to a response-initiated variable interval 30-s schedule of wheel-running reinforcement during light and dark cycles. Wheel-running and local lever-pressing rates increased modestly during…

  18. Real time source term and dose assessment

    International Nuclear Information System (INIS)

    Breznik, B.; Kovac, A.; Mlakar, P.

    2001-01-01

    The Dose Projection Programme is a tool for decision making in the case of a nuclear emergency. The essential input data for a quick emergency evaluation in the case of a hypothetical pressurised water reactor accident are the following: source term, core damage assessment, fission product radioactivity, release source term and critical exposure pathways for the early phase of the release. A reduced number of radionuclides and simplified calculations can be used in the dose calculation algorithm. A simple expert-system personal computer programme has been developed for the Krsko Nuclear Power Plant for dose projection within a radius of a few kilometers from the pressurised water reactor in the early phase of an accident. The input data are instantaneous data of core activity, core damage indicators, release fractions, reduction factor of the release pathways, spray operation, release timing, and dispersion coefficient. The main dose projection steps are: accurate in-core radioactivity determination using reactor power input; core damage and in-containment source term assessment based on quick indications from instrumentation or on activity analysis data; the user defines the release pathway for a typical PWR accident scenario; dose calculation is performed only for the exposure pathway critical for decisions about evacuation or sheltering in the early phase of an accident. (author)
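
    The calculation chain described above (core inventory, damage fraction, release fraction, pathway reduction, dispersion, dose conversion) can be written down as a short toy calculation. Every nuclide, coefficient and scenario value in the Python sketch below is an illustrative placeholder, not plant or licensing data.

        # Toy early-phase dose projection: effective dose at a receptor is summed
        # over nuclides as (core activity x damage fraction x release fraction
        # x pathway reduction x dispersion coefficient x dose conversion factor).
        # All numbers are illustrative placeholders, not plant or licensing data.
        core_activity_bq = {"I-131": 1.0e18, "Cs-137": 5.0e17}            # in-core inventory
        dose_factor_sv_per_bq_s_m3 = {"I-131": 7.4e-9, "Cs-137": 4.0e-9}  # placeholder factors

        def projected_dose(damage_fraction, release_fraction, reduction_factor,
                           dispersion_s_per_m3, exposure_pathway_weight=1.0):
            dose_sv = 0.0
            for nuclide, activity in core_activity_bq.items():
                released = activity * damage_fraction * release_fraction * reduction_factor
                time_integrated_concentration = released * dispersion_s_per_m3
                dose_sv += (time_integrated_concentration
                            * dose_factor_sv_per_bq_s_m3[nuclide]
                            * exposure_pathway_weight)
            return dose_sv

        # Hypothetical scenario: 10% core damage, 1% release fraction, spray
        # reduction 0.1, short-range dispersion coefficient 1e-6 s/m^3.
        print(f"projected dose: {projected_dose(0.10, 0.01, 0.1, 1e-6):.2e} Sv")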

  19. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk–run–rest mixtures

    Science.gov (United States)

    Long, Leroy L.; Srinivasan, Manoj

    2013-01-01

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
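
    The central argument is that when metabolic cost is a non-convex function of speed, a time-weighted mixture of a slow and a fast speed can use less energy than steady travel at the required average speed. The short Python illustration below uses made-up walking and running cost curves; the numbers are not from the paper.

        # Energy-minimal locomotion with a non-convex cost curve: mixing a slow
        # walking speed and a faster running speed can beat steady movement at the
        # same average speed. The cost curves are invented for illustration.
        def energy_per_metre(v):                       # J/kg/m, toy numbers
            walk = 1.9 + 3.0 * (v - 1.2) ** 2          # cheap near 1.2 m/s
            run = 3.5 + 0.4 * (v - 3.2) ** 2           # cheap near 3.2 m/s
            return min(walk, run)                      # choose the cheaper gait

        def power(v):                                  # J/kg/s
            return energy_per_metre(v) * v

        def steady_cost(distance, duration):
            return power(distance / duration) * duration

        def mixture_cost(distance, duration, v_slow, v_fast):
            # split the time so that v_slow*t_slow + v_fast*t_fast = distance
            t_fast = (distance - v_slow * duration) / (v_fast - v_slow)
            t_slow = duration - t_fast
            if t_fast < 0 or t_slow < 0:
                return float("inf")
            return power(v_slow) * t_slow + power(v_fast) * t_fast

        distance, duration = 1000.0, 500.0             # 1 km in 500 s -> 2 m/s average
        print("steady  :", round(steady_cost(distance, duration), 1), "J/kg")
        print("mixture :", round(mixture_cost(distance, duration, 1.2, 3.2), 1), "J/kg")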

  20. Real time analysis with the upgraded LHCb trigger in Run III

    Science.gov (United States)

    Szumlak, Tomasz

    2017-10-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1.1 MHz, a rate at which the entire detector is read out. In a second level, implemented in a farm of around 20k parallel-processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software-based trigger system is being developed and it will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also receive a factor of 5 increase in the instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1.1 MHz readout bottleneck, combined with the higher instantaneous luminosity. Many charm hadron signals can be recorded at up to 50 times higher rates. LHCb is implementing a new paradigm in the form of real-time data analysis, in which abundant signals are recorded in a reduced event format that can be fed directly to the physics analyses. These data do not need any further offline event reconstruction, which allows a larger fraction of the grid computing resources to be devoted to Monte Carlo productions. We discuss how this real-time analysis model is absolutely critical to the LHCb upgrade, and how it will evolve during Run-II.

  1. Personal best marathon time and longest training run, not anthropometry, predict performance in recreational 24-hour ultrarunners.

    Science.gov (United States)

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald

    2011-08-01

    In recent studies, a relationship between both low body fat and low thicknesses of selected skinfolds and running performance has been demonstrated for distances from 100 m to the marathon, but not in ultramarathons. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bivariate and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometers run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r² = 0.46) by the following equation: (performance in a 24-hour run, km) = 234.7 + 0.481 × (longest training session before the 24-hour run, km) - 0.594 × (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance but not anthropometric variables. To achieve maximum kilometers in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
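
    The reported regression can be wrapped in a small helper to show how it would be applied in practice; the example inputs below are arbitrary and simply echo the training recommendations given in the abstract.

        def predicted_24h_distance_km(longest_training_run_km, marathon_pb_min):
            """Prediction equation reported for recreational 24-hour ultrarunners
            (r^2 = 0.46): 234.7 + 0.481*(longest run, km) - 0.594*(marathon PB, min)."""
            return 234.7 + 0.481 * longest_training_run_km - 0.594 * marathon_pb_min

        # Example: a 60 km longest training run and a 3 h 20 min (200 min) marathon PB.
        print(round(predicted_24h_distance_km(60, 200), 1), "km")   # roughly 145 km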

  2. A new view of responses to first-time barefoot running.

    OpenAIRE

    Wilkinson, Mick; Caplan, Nick; Akenhead, Richard; Hayes, Phil

    2015-01-01

    We examined acute alterations in gait and oxygen cost from shod-to-barefoot running in habitually-shod well-trained runners with no prior experience of running barefoot. Thirteen runners completed six-minute treadmill runs shod and barefoot on separate days at a mean speed of 12.5 km·h-1. Steady-state oxygen cost in the final minute was recorded. Kinematic data were captured from 30-consecutive strides. Mean differences between conditions were estimated with 90% confidence intervals. When bar...

  3. ESTIMATION OF THE RUNNING COSTS OF AUTONOMOUS ENERGY SOURCES IN TROLLEYBUSES

    Directory of Open Access Journals (Sweden)

    Piotr Hołyszko

    2016-11-01

    The article analyses the performance characteristics and operating costs of three types of trolleybuses equipped with alternative energy sources, which are used by the MPK (Municipal Transport Company) in Lublin. The selected applications are adapted for driving off the traction network in emergency mode as well as for servicing the regular route. Two of them are based on electrochemical batteries and one uses a system with an electric generator driven by an internal combustion engine.

  4. The optimal production-run time for a stock-dependent imperfect production process

    Directory of Open Access Journals (Sweden)

    Jain Divya

    2013-01-01

    This paper develops an inventory model for a hypothesized volume-flexible manufacturing system in which the production rate is stock-dependent and the system produces both perfect and imperfect quality items. The demand rate of perfect quality items is known and constant, whereas the demand rate of imperfect (non-conforming to specifications) quality items is a function of the discount offered in the selling price. In this paper, we determine the optimal production-run time and the optimal discount that should be offered in the selling price to influence the sale of imperfect quality items produced by the manufacturing system. The considered model aims to maximize the net profit obtained through the sales of both perfect and imperfect quality items subject to certain constraints of the system. The solution procedure suggests the use of the ‘Interior Penalty Function Method’ to solve the associated constrained maximization problem. Finally, a numerical example demonstrating the applicability of the proposed model has been included.
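
    The ‘Interior Penalty Function Method’ mentioned above replaces the constrained maximization by a sequence of unconstrained problems whose objective adds a barrier term that grows without bound as the constraint boundary is approached. The Python sketch below applies the idea to a toy one-dimensional problem, not to the paper's inventory model.

        import math

        # Toy problem: maximise f(x) = -(x - 3)^2 + 10 subject to g(x) = 2 - x >= 0.
        # The log-barrier keeps iterates strictly feasible while mu -> 0.
        def f(x):
            return -(x - 3.0) ** 2 + 10.0

        def g(x):
            return 2.0 - x

        def maximise_subproblem(mu, lo=0.0, hi=2.0 - 1e-9, iters=200):
            # F(x) = f(x) + mu*ln g(x) is strictly concave on the feasible interval,
            # so a simple ternary search finds its maximiser.
            def F(x):
                return f(x) + mu * math.log(g(x))
            for _ in range(iters):
                m1 = lo + (hi - lo) / 3.0
                m2 = hi - (hi - lo) / 3.0
                if F(m1) < F(m2):
                    lo = m1
                else:
                    hi = m2
            return (lo + hi) / 2.0

        def interior_penalty_maximise(mu=1.0, shrink=0.1, outer_iters=6):
            x = None
            for _ in range(outer_iters):
                x = maximise_subproblem(mu)   # solve the barrier-augmented problem
                mu *= shrink                  # tighten the barrier
            return x

        x_opt = interior_penalty_maximise()
        print(round(x_opt, 4), round(f(x_opt), 4))    # approaches x = 2, f(x) = 9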

  5. Running out of time: exploring women's motivations for social egg freezing.

    Science.gov (United States)

    Baldwin, Kylie; Culley, Lorraine; Hudson, Nicky; Mitchell, Helene

    2018-04-12

    Few qualitative studies have explored women's use of social egg freezing. Derived from an interview study of 31 participants, this article explores the motivations of women using this technology. Semi-structured interviews were conducted with 31 users of social egg freezing resident in the UK (n = 23), USA (n = 7) and Norway (n = 1). Interviews were face to face (n = 16), through Skype and Facetime (n = 9) or by telephone (n = 6). Data were analyzed using interpretive thematic analysis. Women's use of egg freezing was shaped by fears of running out of time to form a conventional family, difficulties in finding a partner and concerns about "panic partnering", together with a desire to avoid future regrets and blame. For some women, use of egg freezing was influenced by recent fertility or health diagnoses as well as critical life events. A fifth of the participants also disclosed an underlying fertility or health issue as affecting their decision. The study provides new insights into the complex motivations women have for banking eggs. It identifies how women's use of egg freezing was an attempt to "preserve fertility" in the absence of the particular set of "life conditions" they regarded as crucial for pursuing parenthood. It also demonstrates that few women were motivated by a desire to enhance their career and that the boundaries between egg freezing for medical and for social reasons may be more porous than first anticipated.

  6. A Test Run of the EGSIEM Near Real-Time Service Based on GRACE Mission Data

    Science.gov (United States)

    Kvas, A.; Gruber, C.; Gouweleeuw, B.; Guntner, A.; Mayer-Gürr, T.; Flechtner, F. M.

    2017-12-01

    To enable the use of GRACE and GRACE-FO data for rapid monitoring applications, the EGSIEM (European Gravity Service for Improved Emergency Management) project, funded by the Horizon 2020 Framework Program for Research and Innovation of the European Union, has implemented a demonstrator for a near real-time (NRT) gravity field service. The goal of this service is to provide daily gravity field solutions with a maximum latency of five days. For this purpose, two independent approaches were developed at the German Research Centre for Geosciences (GFZ) and Graz University of Technology (TUG). Based on these daily gravity field solutions, statistical flood and drought indicators are derived by the EGSIEM Hydrological Service, developed at GFZ. The NRT products are subsequently provided to the Center for Satellite based Crisis Information (ZKI) at the German Aerospace Center as well as the Global Flood Awareness System (GloFAS) at the Joint Research Center of the European Commission. In the first part of this contribution, the performance of the service based on a statistical analysis of historical flood events during the GRACE period is evaluated. Then, results from the six month long operational test run of the service which started on April 1st 2017 are presented and a comparison between historical and operational gravity products and flood indicators is made.

  7. A Run-Time Verification Framework for Smart Grid Applications Implemented on Simulation Frameworks

    Energy Technology Data Exchange (ETDEWEB)

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2013-05-18

    Smart grid applications are implemented and tested with simulation frameworks as the developers usually do not have access to large sensor networks to be used as a test bed. The developers are forced to map the implementation onto these frameworks, which results in a deviation between the architecture and the code. In turn, this deviation makes it hard to verify behavioral constraints that are described at the architectural level. We have developed the ConArch toolset to support the automated verification of architecture-level behavioral constraints. A key feature of ConArch is its programmable mapping from the architecture to the implementation. Here, developers implement queries to identify the points in the target program that correspond to architectural interactions. ConArch generates run-time observers that monitor the flow of execution between these points and verifies whether this flow conforms to the behavioral constraints. We illustrate how the programmable mappings can be exploited for verifying behavioral constraints of a smart grid application that is implemented with two simulation frameworks.
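
    The generated run-time observers essentially watch execution reach the mapped program points and check that the observed flow matches an architecture-level constraint. The Python sketch below is a hypothetical analogue, not ConArch itself: a decorator marks the mapped points and a small state machine checks the allowed ordering.

        # Hypothetical analogue of an architecture-level behavioural constraint:
        # "connect() must happen before send(), and close() ends the interaction".
        # A decorator marks the mapped program points; the observer checks the
        # observed flow of execution against the allowed transitions.
        ALLOWED = {
            ("start", "connect"): "connected",
            ("connected", "send"): "connected",
            ("connected", "close"): "start",
        }

        class FlowObserver:
            def __init__(self):
                self.state = "start"
                self.violations = []

            def notify(self, point):
                nxt = ALLOWED.get((self.state, point))
                if nxt is None:
                    self.violations.append(f"'{point}' not allowed in state '{self.state}'")
                else:
                    self.state = nxt

        observer = FlowObserver()

        def mapped_point(name):                      # marks an architectural interaction
            def wrap(func):
                def inner(*args, **kwargs):
                    observer.notify(name)
                    return func(*args, **kwargs)
                return inner
            return wrap

        @mapped_point("connect")
        def connect(): pass

        @mapped_point("send")
        def send(data): pass

        @mapped_point("close")
        def close(): pass

        send("too early")        # violates the constraint
        connect(); send("ok"); close()
        print(observer.violations)   # ["'send' not allowed in state 'start'"]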

  8. Effect of advanced injection timing on emission characteristics of diesel engine running on natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Nwafor, O.M.I. [Department of Mechanical Engineering, Federal University of Technology, Owerri, Imo State (Nigeria)

    2007-11-15

    There has been a growing concern on the emission of greenhouse gases into the atmosphere, whose consequence is global warming. The sources of greenhouse gases have been identified, of which the major contributor is the combustion of fossil fuel. Researchers have intensified efforts towards identifying greener alternative fuel substitutes for the present fossil fuel. Natural gas is now being investigated as a potential alternative fuel for diesel engines. Natural gas appears more attractive due to its high octane number and perhaps due to its environmentally friendly nature. The test results showed that alternative fuels exhibit longer ignition delay, with slow burning rates. Longer delays will lead to unacceptable rates of pressure rise with the result of diesel knock. This work examines the effect of advanced injection timing on the emission characteristics of a dual-fuel engine. The engine has a standard injection timing of 30° BTDC. The injection was first advanced by 5.5°, giving an injection timing of 35.5° BTDC. The engine performance was erratic at this timing. The injection was then advanced by 3.5°. The engine performance was smooth at this timing, especially at low loading conditions. The ignition delay was reduced through advanced injection timing but tended to incur a slight increase in fuel consumption. The CO and CO{sub 2} emissions were reduced through advanced injection timing. (author)

  9. QRTEngine: An easy solution for running online reaction time experiments using Qualtrics.

    Science.gov (United States)

    Barnhoorn, Jonathan S; Haasnoot, Erwin; Bocanegra, Bruno R; van Steenbergen, Henk

    2015-12-01

    Performing online behavioral research is gaining increased popularity among researchers in psychological and cognitive science. However, the currently available methods for conducting online reaction time experiments are often complicated and typically require advanced technical skills. In this article, we introduce the Qualtrics Reaction Time Engine (QRTEngine), an open-source JavaScript engine that can be embedded in the online survey development environment Qualtrics. The QRTEngine can be used to easily develop browser-based online reaction time experiments with accurate timing within current browser capabilities, and it requires only minimal programming skills. After introducing the QRTEngine, we briefly discuss how to create and distribute a Stroop task. Next, we describe a study in which we investigated the timing accuracy of the engine under different processor loads using external chronometry. Finally, we show that the QRTEngine can be used to reproduce classic behavioral effects in three reaction time paradigms: a Stroop task, an attentional blink task, and a masked-priming task. These findings demonstrate that QRTEngine can be used as a tool for conducting online behavioral research even when this requires accurate stimulus presentation times.

  10. Novel real-time alignment and calibration of LHCb detector for Run II and tracking for the upgrade.

    CERN Document Server

    AUTHOR|(CDS)2091576

    2016-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run II. Data collected at the start of the fill is processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. The procedure aims to improve the quality of the online selection and performance stability. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. A similar scheme is planned to be used for Run III foreseen to start in 2020. At that time LHCb will run at an instantaneous luminosity of $2 \times 10^{33}$ cm$^{-2}$ s$^{-1}$ and a fully software based trigger strategy will be used. The new running conditions and the tighter timing constraints in the software trigger (only 13 ms per event are available) represent a big challenge for track reconstruction. The new software based trigger strategy implies a full detector read-out at the collision rate of 40 MHz. High performance ...

  11. Primary and secondary effects of real-time feedback to reduce vertical loading rate during running.

    Science.gov (United States)

    Baggaley, M; Willy, R W; Meardon, S A

    2017-05-01

    Gait modifications are often proposed to reduce average loading rate (AVLR) during running. While many modifications may reduce AVLR, little work has investigated secondary gait changes. Thirty-two rearfoot runners [16M, 16F, 24.7 (3.3) years, 22.72 (3.01) kg/m², >16 km/week] ran at a self-selected speed (2.9 ± 0.3 m/s) on an instrumented treadmill, while 3D mechanics were calculated via real-time data acquisition. Real-time visual feedback was provided in a randomized order to cue a forefoot strike (FFS), a minimum 7.5% decrease in step length, or a minimum 15% reduction in AVLR. AVLR was reduced by FFS (mean difference = 26.4 BW/s; 95% CI = 20.1, 32.7; P < 0.001), shortened step length (8.4 BW/s; 95% CI = 2.9, 14.0; P = 0.004), and cues to reduce AVLR (14.9 BW/s; 95% CI = 10.2, 19.6; P < 0.001). FFS, shortened step length, and cues to reduce AVLR all reduced eccentric knee joint work per km [(-48.2 J/kg*m; 95% CI = -58.1, -38.3; P < 0.001), (-35.5 J/kg*m; 95% CI = -42.4, -28.6; P < 0.001), (-23.1 J/kg*m; 95% CI = -33.3, -12.9; P < 0.001)]. However, FFS and cues to reduce AVLR also increased eccentric ankle joint work per km [(54.49 J/kg*m; 95% CI = 45.3, 63.7; P < 0.001), (9.20 J/kg*m; 95% CI = 1.7, 16.7; P = 0.035)]. Potentially injurious secondary effects associated with FFS and cues to reduce AVLR may undermine their clinical utility. Alternatively, a shortened step length resulted in small reductions in AVLR, without any potentially injurious secondary effects. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Evaluation of JRC source term methodology using MAAP5 as a fast-running crisis tool for a BWR4 Mark I reactor

    International Nuclear Information System (INIS)

    Vela-García, M.; Simola, K.

    2016-01-01

    JRC participated in the OECD/NEA FASTRUN benchmark reviewing fast-running software tools to model fission product releases during accidents at nuclear power plants. The main goal of fast-running software tools is to foresee the accident progression, so that mitigating actions can be taken and the population can be adequately protected. Within the FASTRUN, JRC used the MAAP 4.0.8 code and developed a methodology to obtain the source term (as activity released per radioisotope) of PWR and BWR station black-out accident scenarios. The modifications made in the MAAP models were limited to a minimum number of important parameters. This aims at reproducing a crisis situation with a limited time to adapt a generic input deck. This paper presents further studies, where JRC analysed the FASTRUN BWR scenario using MAAP 5.0.2 that has the capability of calculating doses. A sensitivity study was performed with the MAAP 5.0.2 DOSE package deactivated, using the same methodology as in the case of MAAP 4.0.8 for source term calculation. The results were close to the reference LTSBO SOARCA case, independently of the methodology used. One of the benefits of using the MAAP code is the short runtime of the simulations.

  13. Running the running

    OpenAIRE

    Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_{\mathrm{s}} / \mathrm{d}\log k$ and the running of the running $\beta_{\mathrm{s}} = \mathrm{d}\alpha_{\mathrm{s}} / \mathrm{d}\log k$ of the spectral index $n_{\mathrm{s}}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s}=0.011\pm0.010$ and $\beta_\mathrm{s}=0.027\...
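
    For reference, the running and the running of the running enter the primordial scalar power spectrum through the conventional expansion about a pivot scale $k_*$; the parameterization below is the standard one used in such analyses, quoted here as background rather than taken from the abstract.

```latex
P_\zeta(k) = A_\mathrm{s}
  \left(\frac{k}{k_*}\right)^{\,n_\mathrm{s} - 1
  + \frac{1}{2}\,\alpha_\mathrm{s}\ln(k/k_*)
  + \frac{1}{6}\,\beta_\mathrm{s}\ln^{2}(k/k_*)}
```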

  14. Biomechanical characteristics of skeletal muscles and associations between running speed and contraction time in 8- to 13-year-old children.

    Science.gov (United States)

    Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena

    2017-02-01

    Objective To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method This longitudinal study analyzed tensiomyographic measurements of vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion Joint category plots showed that contraction times of biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.

  15. Low contrast volume run-off CT angiography with optimized scan time based on double-level test bolus technique – feasibility study

    International Nuclear Information System (INIS)

    Baxa, Jan; Vendiš, Tomáš; Moláček, Jiří; Štěpánková, Lucie; Flohr, Thomas; Schmidt, Bernhard; Korporaal, Johannes G.; Ferda, Jiří

    2014-01-01

    Purpose: To verify the technical feasibility of low contrast volume (40 mL) run-off CT angiography (run-off CTA) with the individual scan time optimization based on double-level test bolus technique. Materials and methods: A prospective study of 92 consecutive patients who underwent run-off CTA performed with 40 mL of contrast medium (injection rate of 6 mL/s) and optimized scan times on a second generation of dual-source CT. Individual optimized scan times were calculated from aortopopliteal transit times obtained on the basis of double-level test bolus technique – the single injection of 10 mL test bolus and dynamic acquisitions in two levels (abdominal aorta and popliteal arteries). Intraluminal attenuation (HU) was measured in 6 levels (aorta, iliac, femoral and popliteal arteries, middle and distal lower-legs) and subjective quality (3-point score) was assessed. Relations of image quality, test bolus parameters and arterial circulation involvement were analyzed. Results: High mean attenuation (HU) values (468; 437; 442; 440; 342; 274) and quality score in all monitored levels was achieved. In 91 patients (0.99) the sufficient diagnostic quality (score 1–2) in aorta, iliac and femoral arteries was determined. A total of 6 patients (0.07) were not evaluable in distal lower-legs. Only the weak indirect correlation of image quality and test-bolus parameters was proved in iliac, femoral and popliteal levels (r values: −0.263, −0.298 and −0.254). The statistically significant difference of the test-bolus parameters and image quality was proved in patients with occlusive and aneurysmal disease. Conclusion: We proved the technical feasibility and sufficient quality of run-off CTA with low volume of contrast medium and optimized scan time according to aortopopliteal transit time calculated from double-level test bolus

  16. Mapping real-life applications on run-time reconfigurable NoC-based MPSoC on FPGA.

    NARCIS (Netherlands)

    Singh, A.K.; Kumar, A.; Srikanthan, Th.; Ha, Y.

    2010-01-01

    Multiprocessor systems-on-chip (MPSoC) are required to fulfill the performance demand of modern real-life embedded applications. These MPSoCs are employing Network-on-Chip (NoC) for reasons of efficiency and scalability. Additionally, these systems need to support run-time reconfiguration of their

  17. An investigation of the relation between the 30 meter running time and the femoral volume fraction in the thigh

    Directory of Open Access Journals (Sweden)

    MY Tasmektepligil

    2009-12-01

    Leg components are thought to be related to speed. Only a limited number of studies have, however, examined the interaction between speed and bone size. In this study, we examined the relationship between the time taken by football players to run thirty meters and the fraction which the femur forms of the entire thigh region. Data collected from thirty male football players of average age 17.3 years (range 16-19 years) were analyzed. We first recorded the thirty meter running times and then estimated the volume fraction of the femur relative to the entire thigh region using stereological methods on magnetic resonance images. Our data showed that there was a strong negative relationship between the 30 meter running times and the volume fraction of the bone in the thigh region. Thus, 30 meter running time decreases as the fraction of the bone in the thigh region increases. In other words, speed increases as the fraction of bone volume increases. Our data indicate that selecting sportsmen whose femoral volume fractions are high will provide a significant benefit to enhancing performance in those branches of sport which require speed. Moreover, we concluded that training which can increase the bone volume fraction should be practiced when an increase in speed is desired and that the changes in the fraction of thigh region components should be monitored during such training.

  18.  Running speed during training and percent body fat predict race time in recreational male marathoners

    Directory of Open Access Journals (Sweden)

    Barandun U

    2012-07-01

    Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results: After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × percent body fat (%) - 12.06 × speed in training (km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. Conclusion: The present results suggest that low body fat and running speed during training close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathon runners. Keywords: body fat, skinfold thickness, anthropometry, endurance, athlete
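
    As a quick worked example, the regression reported above can be evaluated directly. The input figures below are illustrative, not data from the study.

```python
# Predicted marathon time from the regression reported in the abstract
# (r^2 = 0.44). The example inputs are illustrative, not study data.
def predicted_marathon_minutes(percent_body_fat: float, training_speed_kmh: float) -> float:
    return 326.3 + 2.394 * percent_body_fat - 12.06 * training_speed_kmh

# e.g. 15% body fat and training runs at about race pace (11 km/h)
minutes = predicted_marathon_minutes(15.0, 11.0)
print(f"{minutes:.0f} min (~{minutes / 60:.1f} h)")   # ~230 min (~3.8 h)
```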

  19. Long-run sectoral development time series evidence for the German economy

    OpenAIRE

    Dietrich, Andreas; Krüger, Jens J.

    2008-01-01

    In economic development, long-run structural change among the three main sectors of an economy follows a typical pattern with the primary sector (agriculture, mining) first dominating, followed by the secondary sector (manufacturing) and finally by the tertiary sector (services) in terms of employment and value added. We reconsider the verbal theoretical work of Fourastié and build a simple model encompassing its main features, most notably the macroeconomic influences on the sectoral develop...

  20. Optimal design and real time control of the integrated urban run-off system

    DEFF Research Database (Denmark)

    Harremoës, Poul; Rauch, Wolfgang

    1999-01-01

    Traditional design of urban run-off systems is based on fixed rules with respect to the points of demarcation between the three systems involved: the sewer system, the treatment plant and the receiving water. An alternative to fixed rules is to model the total system. There is still uncertainty...... and evaluation of competing alternatives for design. However, the complexity of these systems is such that the parameters associated with pollution are hardly identifiable on the basis of reasonable monitoring programmes. The empirical-iterative approach: structures are built on simplified assumptions...

  1. The effect of time constraints and running phases on combined event pistol shooting performance.

    Science.gov (United States)

    Dadswell, Clare; Payton, Carl; Holmes, Paul; Burden, Adrian

    2016-01-01

    The combined event is a crucial aspect of the modern pentathlon competition, but little is known about how shooting performance changes through the event. This study aimed to identify (i) how performance-related variables changed within each shooting series and (ii) how performance-related variables changed between each shooting series. Seventeen modern pentathletes completed combined event trials. An optoelectronic shooting system recorded score and pistol movement, and force platforms recorded centre of pressure movement 1 s prior to every shot. Heart rate and blood lactate values were recorded throughout the event. Whilst heart rate and blood lactate significantly increased between series (P  0.05). Thus, combined event shooting performance following each running phase appears similar to shooting performance following only 20 m of running. This finding has potential implications for the way in which modern pentathletes train for combined event shooting, and highlights the need for modern pentathletes to establish new methods with which to enhance shooting accuracy.

  2. Time-lapse controlled-source electromagnetics using interferometry

    NARCIS (Netherlands)

    Hunziker, J.W.; Slob, E.C.; Wapenaar, C.P.A.

    In time-lapse controlled-source electromagnetics, it is crucial that the source and the receivers are positioned at exactly the same location at all times of measurement. We use interferometry by multidimensional deconvolution (MDD) to overcome problems in repeatability of the source location.

  3. TARGET-DIRECTED RUNNING IN GYMNASTICS: THE ROLE OF THE SPRINGBOARD POSITION AS AN INFORMATIONAL SOURCE TO REGULATE HANDSPRINGS ON VAULT

    Directory of Open Access Journals (Sweden)

    T. Heinen

    2011-11-01

    Empirical evidence highlights the role of visual information in the control of gymnastics vaulting and thus argues against a purely stereotyped approach run. However, there is no evidence regarding which informational source this regulation is based on. The aim of this study was to examine the position of the springboard as an informational source in the regulation of the handspring on vault. The hypothesis tested was that the action of running towards the springboard brings about changes in the approach run kinematics and handspring kinematics that relate directly to the position of the springboard. Therefore, kinematics of N = 14 female expert gymnasts' handsprings on vault and their approach runs were examined while manipulating the position of the springboard. The results revealed that expert gymnasts placed their feet on average in the same position on the springboard and adapted to the springboard position during the last three steps of the approach run. A smaller springboard distance to the front edge of the vaulting table resulted in a different hand placement on the vaulting table, a shorter first flight phase, a take-off angle closer to 90° and a longer second flight phase. Findings suggest that the position of the springboard is a relevant informational source in gymnastics vaulting. We state that knowledge about relationships between informational sources in the environment and the resulting regulatory processes in athletes may help coaches to develop specific training programmes in order to optimize performance in complex skills.

  4. Safety, Liveness and Run-time Refinement for Modular Process-Aware Information Systems with Dynamic Sub Processes

    DEFF Research Database (Denmark)

    Debois, Søren; Hildebrandt, Thomas; Slaats, Tijs

    2015-01-01

    We study modularity, run-time adaptation and refinement under safety and liveness constraints in event-based process models with dynamic sub-process instantiation. The study is part of a larger programme to provide semantically well-founded technologies for modelling, implementation and verification of flexible, run-time adaptable process-aware information systems, moved into practice via the Dynamic Condition Response (DCR) Graphs notation co-developed with our industrial partner. Our key contributions are: (1) A formal theory of dynamic sub-process instantiation for declarative, event-based processes under safety and liveness constraints, given as the DCR* process language, equipped with a compositional operational semantics and conservatively extending the DCR Graphs notation; (2) an expressiveness analysis revealing that the DCR* process language is Turing-complete, while the fragment cor...

  5. Effect of injection timing on combustion and performance of a direct injection diesel engine running on Jatropha methyl ester

    Energy Technology Data Exchange (ETDEWEB)

    Jindal, S. [Mechanical Engineering Department, College of Technology & Engineering, Maharana Pratap University of Agriculture and Technology, Udaipur 313001 (India)

    2011-07-01

    The present study aims at evaluating the effect of injection timing on the combustion, performance and emissions of a small power diesel engine, commonly used for agriculture purposes, running on pure biodiesel prepared from Jatropha (Jatropha curcas) vegetable oil. The effect of varying injection timing was evaluated in terms of thermal efficiency, specific fuel consumption, power and mean effective pressure, exhaust temperature, cylinder pressure, rate of pressure rise and the heat release rate. It was found that retarding the injection timing by 3 degrees enhances the thermal efficiency by about 8 percent.

  6. Upper Bounds Prediction of the Execution Time of Programs Running on ARM Cortex-A Systems

    OpenAIRE

    Fedotova , Irina; Krause , Bernd; Siemens , Eduard

    2017-01-01

    This paper describes the application of statistical analysis of the timing behavior of a generic real-time task model. Using a specific processor of the ARM Cortex-A series and an empirical approach to retrieving time values, an algorithm to predict the upper bounds of the time acquisition operation has been formulated. For the experimental verification of the algorithm, we have used the robust Measurement-Based Probabili...
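
    The abstract names a measurement-based probabilistic approach; the sketch below is only a much simpler stand-in that illustrates the general idea of deriving an empirical upper bound from repeated time measurements. The helper names and the quantile-based bound are assumptions, not the paper's algorithm (which relies on probabilistic, extreme-value-style analysis of the tail).

```python
import time

def measure_ns(task, runs=10_000):
    """Collect execution-time samples (in nanoseconds) for one task."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter_ns()
        task()
        samples.append(time.perf_counter_ns() - t0)
    return samples

def empirical_upper_bound(samples, quantile=0.9999):
    """Crude empirical bound: a high quantile of the observed samples.
    Measurement-based probabilistic methods instead model the tail."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(quantile * len(ordered)))
    return ordered[index]

samples = measure_ns(lambda: sum(range(1000)))
print("observed maximum:", max(samples), "ns")
print("empirical bound :", empirical_upper_bound(samples), "ns")
```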

  7. How to run ions in the future?

    International Nuclear Information System (INIS)

    Küchler, D; Manglunki, D; Scrivens, R

    2014-01-01

    In the light of different running scenarios, potential source improvements will be discussed (e.g. one month every year versus two months every other year, and the impact of the different running options [e.g. an extended ion run] on the source). As the oven refills cause most of the down time, the oven design and refilling strategies will be presented. A test stand for off-line developments will be taken into account. Also the implications on the necessary manpower for extended runs will be discussed.

  8. Running speed during training and percent body fat predict race time in recreational male marathoners

    OpenAIRE

    Knechtle, Beat; Barandun,; Knechtle,Patrizia; Klipstein,; Rüst,Christoph Alexander; Rosemann,Thomas; Lepers,Romuald

    2012-01-01

     Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners.Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times.Results...

  9. Study of run time errors of the ATLAS Pixel Detector in the 2012 data taking period

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00339072

    2013-05-16

    The high resolution silicon Pixel detector is critical in event vertex reconstruction and in particle track reconstruction in the ATLAS detector. During pixel data taking operation, some modules (Silicon Pixel sensor + Front End Chip + Module Control Chip (MCC)) go to an auto-disable state, where the modules don't send data for storage. Modules become operational again after reconfiguration. The source of the problem is not fully understood. One possible source of the problem is traced to the occurrence of single event upsets (SEU) in the MCC. Such a module goes to either a Timeout or Busy state. This report is a study of the different types and rates of errors occurring in the Pixel data taking operation. Also, the study includes the error rate dependency on the Pixel detector geometry.

  10. Run-time anomaly detection and mitigation in information-rich cyber-physical systems

    Data.gov (United States)

    National Aeronautics and Space Administration — Next generation space missions require autonomous systems to operate without human intervention for long periods of times in highly dynamic environments. Such...

  11. Temporal analysis and scheduling of hard real-time radios running on a multi-processor

    NARCIS (Netherlands)

    Moreira, O.

    2012-01-01

    On a multi-radio baseband system, multiple independent transceivers must share the resources of a multi-processor, while meeting each its own hard real-time requirements. Not all possible combinations of transceivers are known at compile time, so a solution must be found that either allows for

  12. Precise and accurate train run data: Approximation of actual arrival and departure times

    DEFF Research Database (Denmark)

    Richter, Troels; Landex, Alex; Andersen, Jonas Lohmann Elkjær

    with the approximated actual arrival and departure times. As a result, all future statistics can now either be based on track circuit data with high precision or approximated actual arrival times with a high accuracy. Consequently, performance analysis will be more accurate, punctuality statistics more correct, KPI...

  13. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN has been the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...

  14. Exposure time, running and skill-related performance in international u20 rugby union players during an intensified tournament.

    Directory of Open Access Journals (Sweden)

    Christopher J Carling

    This study investigated exposure time, running and skill-related performance in two international u20 rugby union teams during an intensified tournament: the 2015 Junior World Rugby Championship. Both teams played 5 matches in 19 days. Analyses were conducted using global positioning system (GPS) tracking (Viper 2™, Statsports Technologies Ltd) and event coding (Opta Pro®). Of the 62 players monitored, 36 (57.1%) participated in 4 matches and 23 (36.5%) in all 5 matches while player availability for selection was 88%. Analyses of team running output (all players completing >60-min play) showed that the total and peak 5-minute high metabolic load distances covered were likely-to-very likely moderately higher in the final match compared to matches 1 and 2 in back and forward players. In individual players with the highest match-play exposure (participation in >75% of total competition playing time and >75-min in each of the final 3 matches), comparisons of performance in matches 4 and 5 versus match 3 (three most important matches) reported moderate-to-large decreases in total and high metabolic load distance in backs while similar magnitude reductions occurred in high-speed distance in forwards. In contrast, skill-related performance was unchanged, albeit with trivial and unclear changes, while there were no alterations in either total or high-speed running distance covered at the end of matches. These findings suggest that despite high availability for selection, players were not over-exposed to match-play during an intensified u20 international tournament. They also imply that the teams coped with the running and skill-related demands. Similarly, individual players with the highest exposure to match-play were also able to maintain skill-related performance and end-match running output (despite an overall reduction in the latter). These results support the need for player rotation and monitoring of performance, recovery and intervention strategies during

  15. Exposure time, running and skill-related performance in international u20 rugby union players during an intensified tournament

    Science.gov (United States)

    Carling, Christopher J.; Flanagan, Eamon; O’Doherty, Pearse; Piscione, Julien

    2017-01-01

    Purpose This study investigated exposure time, running and skill-related performance in two international u20 rugby union teams during an intensified tournament: the 2015 Junior World Rugby Championship. Method Both teams played 5 matches in 19 days. Analyses were conducted using global positioning system (GPS) tracking (Viper 2™, Statsports Technologies Ltd) and event coding (Opta Pro®). Results Of the 62 players monitored, 36 (57.1%) participated in 4 matches and 23 (36.5%) in all 5 matches while player availability for selection was 88%. Analyses of team running output (all players completing >60-min play) showed that the total and peak 5-minute high metabolic load distances covered were likely-to-very likely moderately higher in the final match compared to matches 1 and 2 in back and forward players. In individual players with the highest match-play exposure (participation in >75% of total competition playing time and >75-min in each of the final 3 matches), comparisons of performance in matches 4 and 5 versus match 3 (three most important matches) reported moderate-to-large decreases in total and high metabolic load distance in backs while similar magnitude reductions occurred in high-speed distance in forwards. In contrast, skill-related performance was unchanged, albeit with trivial and unclear changes, while there were no alterations in either total or high-speed running distance covered at the end of matches. Conclusions These findings suggest that despite high availability for selection, players were not over-exposed to match-play during an intensified u20 international tournament. They also imply that the teams coped with the running and skill-related demands. Similarly, individual players with the highest exposure to match-play were also able to maintain skill-related performance and end-match running output (despite an overall reduction in the latter). These results support the need for player rotation and monitoring of performance, recovery and

  16. QRTEngine: An easy solution for running online reaction time experiments using Qualtrics

    NARCIS (Netherlands)

    Barnhoorn, Jonathan Sebastiaan; Haasnoot, Erwin; Bocanegra, Bruno R.; van Steenbergen, Henk

    2015-01-01

    Performing online behavioral research is gaining increased popularity among researchers in psychological and cognitive science. However, the currently available methods for conducting online reaction time experiments are often complicated and typically require advanced technical skills. In this

  17. Source fault model of the 2011 off the pacific coast of Tohoku Earthquake, estimated from the detailed distribution of tsunami run-up heights

    International Nuclear Information System (INIS)

    Matsuta, Nobuhisa; Suzuki, Yasuhiro; Sugito, Nobuhiko; Nakata, Takashi; Watanabe, Mitsuhisa

    2015-01-01

    The distribution of tsunami run-up heights generally has spatial variations, because run-up heights are controlled by coastal topography including local-scale landforms such as natural levees, in addition to land use. Focusing on relationships among coastal topography, land conditions, and tsunami run-up heights of historical tsunamis—Meiji Sanriku (1896 A.D.), Syowa Sanriku (1933 A.D.), and Chilean Sanriku (1960 A.D.) tsunamis—along the Sanriku coast, it is found that the wavelength of a tsunami determines inundation areas as well as run-up heights. Small bays facing the Pacific Ocean are sensitive to short wavelength tsunamis, and large bays are sensitive to long wavelength tsunamis. The tsunami observed off Kamaishi during the 2011 off the Pacific coast of Tohoku Earthquake was composed of both short and long wavelength components. We examined run-up heights of the Tohoku tsunami, and found that: (1) coastal areas north of Kamaishi and south of Yamamoto were mainly attacked by short wavelength tsunamis; and (2) no evidence of short wavelength tsunamis was observed from Ofunato to the Oshika Peninsula. This observation coincides with the geomorphologically proposed source fault model, and indicates that the extraordinarily large slip along the shallow part of the plate boundary off Sendai, proposed by seismological and geodetic analyses, is not needed to explain the run-up heights of the Tohoku tsunami. To better understand spatial variations of tsunami run-up heights, submarine crustal movements, and source faults, a detailed analysis is required of coastal topography, land conditions, and submarine tectonic landforms from the perspective of geomorphology. (author)

  18. Fission-neutrons source with fast neutron-emission timing

    Energy Technology Data Exchange (ETDEWEB)

    Rusev, G., E-mail: rusev@lanl.gov; Baramsai, B.; Bond, E.M.; Jandel, M.

    2016-05-01

    A neutron source with fast timing has been built to help with detector-response measurements. The source is based on the neutron emission from the spontaneous fission of {sup 252}Cf. The time is provided by registering the fission fragments in a layer of a thin scintillation film with a signal rise time of 1 ns. The scintillation light output is measured by two silicon photomultipliers with rise time of 0.5 ns. Overall time resolution of the source is 0.3 ns. Design of the source and test measurements using it are described. An example application of the source for determining the neutron/gamma pulse-shape discrimination by a stilbene crystal is given.

  19. A free-running, time-based readout method for particle detectors

    International Nuclear Information System (INIS)

    Goerres, A; Ritman, J; Stockmanns, T; Bugalho, R; Francesco, A Di; Gastón, C; Gonçalves, F; Rolo, M D; Silva, J C da; Silva, R; Varela, J; Veckalns, V; Mazza, G; Mignone, M; Pietro, V Di; Riccardi, A; Rivetti, A; Wheadon, R

    2014-01-01

    For the EndoTOFPET-US experiment, the TOFPET ASIC has been developed as a front-end chip to read out data from silicon photomultipliers (SiPM) [1]. It introduces a time of flight information into the measurement of a PET scanner and hence reduces radiation exposure of the patient [2]. The chip is designed to work with a high event rate up to 100 kHz and a time resolution of 50 ps LSB. Using two threshold levels, it can measure the leading edge of the event pulse precisely while successfully suppressing dark counts from the SiPM. This also enables a time over threshold determination, leading to a charge measurement of the signal's pulse. The same, time-based concept is chosen for the PASTA chip used in the PANDA experiment. This high-energy particle detector contains sub-systems for specific measurement goals. The innermost of these is the Micro Vertex Detector, a silicon-based tracking system. The PASTA chip's approach is much like the TOFPET ASIC with some differences. The most significant ones are a changed amplifying part for different input signals as well as protection for radiation effects of the high-radiation environment. Apart from that, the simple and general concept combined with a small area and low power consumption support the choice for using this approach

  20. A free-running, time-based readout method for particle detectors

    Science.gov (United States)

    Goerres, A.; Bugalho, R.; Di Francesco, A.; Gastón, C.; Gonçalves, F.; Mazza, G.; Mignone, M.; Di Pietro, V.; Riccardi, A.; Ritman, J.; Rivetti, A.; Rolo, M. D.; da Silva, J. C.; Silva, R.; Stockmanns, T.; Varela, J.; Veckalns, V.; Wheadon, R.

    2014-03-01

    For the EndoTOFPET-US experiment, the TOFPET ASIC has been developed as a front-end chip to read out data from silicon photomultipliers (SiPM) [1]. It introduces a time of flight information into the measurement of a PET scanner and hence reduces radiation exposure of the patient [2]. The chip is designed to work with a high event rate up to 100 kHz and a time resolution of 50 ps LSB. Using two threshold levels, it can measure the leading edge of the event pulse precisely while successfully suppressing dark counts from the SiPM. This also enables a time over threshold determination, leading to a charge measurement of the signal's pulse. The same, time-based concept is chosen for the PASTA chip used in the PANDA experiment. This high-energy particle detector contains sub-systems for specific measurement goals. The innermost of these is the Micro Vertex Detector, a silicon-based tracking system. The PASTA chip's approach is much like the TOFPET ASIC with some differences. The most significant ones are a changed amplifying part for different input signals as well as protection for radiation effects of the high-radiation environment. Apart from that, the simple and general concept combined with a small area and low power consumption support the choice for using this approach.
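
    As a rough illustration of the time-over-threshold (ToT) idea mentioned above (a generic sketch with synthetic pulses, not the TOFPET or PASTA front-end behaviour), the interval a pulse spends above a fixed threshold grows with the pulse amplitude and can therefore serve as a charge estimate:

```python
# Illustration of time-over-threshold (ToT): the interval during which a
# pulse stays above a threshold grows with the pulse amplitude/charge.
# The synthetic triangular pulse is an assumption for illustration only.
def time_over_threshold(samples, dt, threshold):
    above = [i for i, v in enumerate(samples) if v > threshold]
    return (above[-1] - above[0] + 1) * dt if above else 0.0

def triangular_pulse(amplitude, rise=10, fall=40):
    up = [amplitude * i / rise for i in range(rise)]
    down = [amplitude * (1 - i / fall) for i in range(fall)]
    return up + down

dt = 1.0          # sample spacing in arbitrary time units
threshold = 5.0
for amplitude in (10.0, 20.0, 40.0):
    tot = time_over_threshold(triangular_pulse(amplitude), dt, threshold)
    print(f"amplitude {amplitude:>5}: ToT = {tot} time units")
# Larger pulses stay above threshold longer, so ToT encodes the charge.
```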

  1. Run-time Adaptable VLIW Processors : Resources, Performance, Power Consumption, and Reliability Trade-offs

    NARCIS (Netherlands)

    Anjam, F.

    2013-01-01

    In this dissertation, we propose to combine programmability with reconfigurability by implementing an adaptable programmable VLIW processor in a reconfigurable hardware. The approach allows applications to be developed at high-level (C language level), while at the same time, the processor

  2. Deriving Tools from Real-time Runs: A New CCMC Support for SEC and AFWA

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  3. Running into trouble with the time-dependent propagation of a wavepacket

    International Nuclear Information System (INIS)

    Garriz, Abel E; Sztrajman, Alejandro; Mitnik, Darío

    2010-01-01

    The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations. However, we have detected issues raising difficulties in the practical implementation of these codes which are especially evident when discrete grid methods are used. One issue, relatively well known, appears at high incident energies, producing a wavepacket slower than expected theoretically. The other issue, which appears at low wavepacket energies, does not affect the time evolution of the propagating wavepacket proper, but produces dramatic effects on its spectral decomposition. The origin of the troubles is investigated, and different ways to deal with these issues are proposed. Finally, we show how this problem is manifested and solved in the practical case of the electronic spectra of a metal surface ionized by an ultrashort laser pulse.

  4. Running into trouble with the time-dependent propagation of a wavepacket

    Energy Technology Data Exchange (ETDEWEB)

    Garriz, Abel E; Sztrajman, Alejandro; Mitnik, Darío, E-mail: dmitnik@df.uba.a [Instituto de Astronomía y Física del Espacio, C.C. 67, Suc. 28, (C1428EGA) Buenos Aires (Argentina)

    2010-07-15

    The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations. However, we have detected issues raising difficulties in the practical implementation of these codes which are especially evident when discrete grid methods are used. One issue, relatively well known, appears at high incident energies, producing a wavepacket slower than expected theoretically. The other issue, which appears at low wavepacket energies, does not affect the time evolution of the propagating wavepacket proper, but produces dramatic effects on its spectral decomposition. The origin of the troubles is investigated, and different ways to deal with these issues are proposed. Finally, we show how this problem is manifested and solved in the practical case of the electronic spectra of a metal surface ionized by an ultrashort laser pulse.
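
    As general background (offered as a standard explanation of this kind of slow-wavepacket symptom on discrete grids, not as the paper's specific diagnosis), the 3-point finite-difference Laplacian has a dispersion error that slows down high-momentum components:

```latex
E(k) = \frac{\hbar^{2}}{m\,\Delta x^{2}}\bigl[1-\cos(k\,\Delta x)\bigr],
\qquad
v_{g}(k) = \frac{1}{\hbar}\frac{\partial E}{\partial k}
         = \frac{\hbar}{m\,\Delta x}\,\sin(k\,\Delta x)
         \;\le\; \frac{\hbar k}{m},
```

    so on a grid of spacing $\Delta x$ the numerical group velocity falls below the analytic value $\hbar k/m$ as $k\,\Delta x$ grows.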

  5. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    Science.gov (United States)

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.

  6. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junghoon Lee

    2011-03-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.
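
    As a rough illustration of the kind of instrumentation such a run-time monitor performs (a minimal sketch with assumed names, not the RTM described above), one can wrap application calls to collect per-call timing statistics and expose them to a separate optimization step:

```python
import collections
import functools
import time

# Minimal run-time monitoring sketch: wrap library calls, collect per-call
# wall-clock statistics, and expose them for a separate optimization step.
# Names and structure are illustrative assumptions, not the RTM's API.
class RunTimeMonitor:
    def __init__(self):
        self.stats = collections.defaultdict(list)

    def instrument(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                self.stats[func.__name__].append(time.perf_counter() - t0)
        return wrapper

    def summary(self):
        return {name: (len(ts), sum(ts) / len(ts)) for name, ts in self.stats.items()}

monitor = RunTimeMonitor()

@monitor.instrument
def process_chunk(data):
    return sum(x * x for x in data)

for _ in range(100):
    process_chunk(range(10_000))
print(monitor.summary())   # {'process_chunk': (100, <mean seconds per call>)}
```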

  7. Interface Testing for RTOS System Tasks based on the Run-Time Monitoring

    International Nuclear Information System (INIS)

    Sung, Ahyoung; Choi, Byoungju

    2006-01-01

    Safety critical embedded system requires high dependability of not only hardware but also software. It is intricate to modify embedded software once embedded. Therefore, it is necessary to have rigorous regulations to assure the quality of safety critical embedded software. IEEE V and V (Verification and Validation) process is recommended for software dependability, but a more quantitative evaluation method like software testing is necessary. In case of safety critical embedded software, it is essential to have a test that reflects unique features of the target hardware and its operating system. The safety grade PLC (Programmable Logic Controller) is a safety critical embedded system where hardware and software are tightly coupled. The PLC has HdS (Hardware dependent Software) and it is tightly coupled with RTOS (Real Time Operating System). Especially, system tasks that are tightly coupled with target hardware and RTOS kernel have large influence on the dependability of the entire PLC. Therefore, interface testing for system tasks that reflects the features of target hardware and RTOS kernel becomes the core of the PLC integration test. Here, we define interfaces as overlapped parts between two different layers on the system architecture. In this paper, we identify interfaces for system tasks and apply the identified interfaces to the safety grade PLC. Finally, we show the test results through the empirical study

  8. Interface Testing for RTOS System Tasks based on the Run-Time Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Ahyoung; Choi, Byoungju [Ewha University, Seoul (Korea, Republic of)

    2006-07-01

    Safety critical embedded system requires high dependability of not only hardware but also software. It is intricate to modify embedded software once embedded. Therefore, it is necessary to have rigorous regulations to assure the quality of safety critical embedded software. IEEE V and V (Verification and Validation) process is recommended for software dependability, but a more quantitative evaluation method like software testing is necessary. In case of safety critical embedded software, it is essential to have a test that reflects unique features of the target hardware and its operating system. The safety grade PLC (Programmable Logic Controller) is a safety critical embedded system where hardware and software are tightly coupled. The PLC has HdS (Hardware dependent Software) and it is tightly coupled with RTOS (Real Time Operating System). Especially, system tasks that are tightly coupled with target hardware and RTOS kernel have large influence on the dependability of the entire PLC. Therefore, interface testing for system tasks that reflects the features of target hardware and RTOS kernel becomes the core of the PLC integration test. Here, we define interfaces as overlapped parts between two different layers on the system architecture. In this paper, we identify interfaces for system tasks and apply the identified interfaces to the safety grade PLC. Finally, we show the test results through the empirical study.

  9. Real time analysis with the upgraded LHCb trigger in Run-III

    CERN Multimedia

    Szumlak, Tomasz

    2016-01-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1 MHz, a rate at which the entire detector is read out. At a second level, implemented in a farm of around 20k parallel processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software based trigger system is being developed and it will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also receive a factor of 5 increase in the instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1 MHz readout bottleneck, combined with the high...

  10. Effect of the coefficient of friction of a running surface on sprint time in a sled-towing exercise.

    Science.gov (United States)

    Linthorne, Nicholas P; Cooper, James E

    2013-06-01

    This study investigated the effect of the coefficient of friction of a running surface on an athlete's sprint time in a sled-towing exercise. The coefficients of friction of four common sports surfaces (a synthetic athletics track, a natural grass rugby pitch, a 3G football pitch, and an artificial grass hockey pitch) were determined from the force required to tow a weighted sled across the surface. Timing gates were then used to measure the 30-m sprint time for six rugby players when towing a sled of varied weight across the surfaces. There were substantial differences between the coefficients of friction for the four surfaces (μ = 0.21-0.58), and in the sled-towing exercise the athlete's 30-m sprint time increased linearly with increasing sled weight. The hockey pitch (which had the lowest coefficient of friction) produced a substantially lower rate of increase in 30-m sprint time, but there were no significant differences between the other surfaces. The results indicate that although an athlete's sprint time in a sled-towing exercise is affected by the coefficient of friction of the surface, the relationship between the athlete's rate of increase in 30-m sprint time and the coefficient of friction is more complex than expected.
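
    For intuition, the extra horizontal resistance the athlete must overcome scales directly with the measured coefficient of friction. The sketch below uses the μ range reported above together with an assumed sled load, not data from the study.

```python
# Horizontal friction force the athlete must overcome when towing a sled,
# F = mu * N, over the range of surface friction coefficients reported
# above (mu = 0.21-0.58). The sled load is an illustrative value.
G = 9.81               # gravitational acceleration, m/s^2
sled_mass_kg = 20.0    # example sled load, not a value from the study

for mu in (0.21, 0.58):
    friction_n = mu * sled_mass_kg * G
    print(f"mu = {mu:.2f}: friction force ≈ {friction_n:.0f} N")
# mu = 0.21: ~41 N (lowest-friction surface)
# mu = 0.58: ~114 N (highest-friction surface)
```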

  11. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
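
    The model-selection step can be illustrated with the standard least-squares form of the Akaike information criterion; the parameter counts and misfit numbers below are synthetic placeholders, not the paper's implementation.

```python
import math

def aic_least_squares(rss: float, n: int, k: int) -> float:
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k, with k free parameters."""
    return n * math.log(rss / n) + 2 * k

# Synthetic example: misfit of single- vs double-point-source W-phase fits.
n = 500                           # number of waveform samples (illustrative)
rss_single, k_single = 12.0, 6    # one moment tensor (assumed parameter count)
rss_double, k_double = 9.0, 12    # two moment tensors, centroids held fixed

aic_single = aic_least_squares(rss_single, n, k_single)
aic_double = aic_least_squares(rss_double, n, k_double)
best = "double" if aic_double < aic_single else "single"
print(f"AIC single = {aic_single:.1f}, AIC double = {aic_double:.1f} -> prefer {best}")
```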

  12. Impact of Different Time Series Streamflow Data on Energy Generation of a Run-of-River Hydropower Plant

    Science.gov (United States)

    Kentel, E.; Cetinkaya, M. A.

    2013-12-01

    Global issues such as population increase, power supply crises, oil prices, and social and environmental concerns have been forcing countries to search for alternative energy sources such as renewable energy to satisfy sustainable development goals. Hydropower is the most common form of renewable energy in the world. Hydropower does not require any fuel, produces relatively little pollution and waste, and is a reliable energy source with relatively low operating cost. In order to estimate the average annual energy production of a hydropower plant, sufficient and dependable streamflow data are required. The goal of this study is to investigate the impact of streamflow data on the annual energy generation of Balkusan HEPP, which is a small run-of-river hydropower plant at Karaman, Turkey. Two different stream gaging stations are located in the vicinity of Balkusan HEPP and these two stations have different observation periods: one from 1986 to 2004 and the other from 2000 to 2009. These two observation periods show different climatic characteristics. Thus, annual energy estimations based on data from these two different stations differ considerably. Additionally, neither of these stations is located at the power plant axis, so streamflow observations from these two stream gaging stations need to be transferred to the plant axis. This requirement introduces further errors into the energy estimations. The impact of different streamflow data and of the transfer of streamflow observations to the plant axis on the annual energy generation of a small hydropower plant is investigated in this study.
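
    The sensitivity to the choice of streamflow record can be made concrete with the standard run-of-river energy estimate; the head, efficiency, design flow and the two synthetic flow series below are assumptions for illustration, not values from the study.

```python
# Annual energy of a run-of-river plant from a daily streamflow series:
# P = eta * rho * g * min(Q, Q_design) * H, integrated over the year.
# Head, efficiency, design flow and the flow values are illustrative.
RHO, G = 1000.0, 9.81      # water density (kg/m^3), gravity (m/s^2)
ETA, HEAD_M = 0.85, 120.0  # overall efficiency and gross head (assumed)
Q_DESIGN = 4.0             # design discharge, m^3/s (assumed)

def annual_energy_gwh(daily_flows_m3s):
    energy_j = 0.0
    for q in daily_flows_m3s:
        p_watt = ETA * RHO * G * min(q, Q_DESIGN) * HEAD_M
        energy_j += p_watt * 86_400          # one day of generation
    return energy_j / 3.6e12                 # J -> GWh

wet_year = [3.5] * 365                       # two synthetic flow records
dry_year = [2.0] * 365
print(annual_energy_gwh(wet_year), annual_energy_gwh(dry_year))
# The same plant yields noticeably different annual energy for the two records.
```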

  13. Radiation Tolerant Low Power Precision Time Source, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The availability of small, low power atomic clocks is now a reality for ground-based and airborne navigation systems. Kernco's Low Power Precision Time Source...

  14. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.

  15. Time-dependent source model of the Lusi mud volcano

    Science.gov (United States)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5 km depth as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.
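
    A minimal sketch of a linear volume-change inversion in the spirit of the description above is given below, using a Mogi-type point-source Green's function and non-negative least squares. This is not the authors' code; the station geometry, noise level, and choice of Green's function are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def mogi_uz(x, y, xs, ys, depth, nu=0.25):
    """Vertical surface displacement per unit volume change of a point source."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2 + depth ** 2
    return (1.0 - nu) / np.pi * depth / r2 ** 1.5

# Stations along a profile and two candidate sources (shallow and deep), in km
x_sta = np.linspace(-5.0, 5.0, 20)
y_sta = np.zeros_like(x_sta)
sources = [(0.0, 0.0, 1.0), (0.0, 0.0, 5.0)]
G = np.column_stack([mogi_uz(x_sta, y_sta, *s) for s in sources])

true_dv = np.array([3.0, 1.0])                       # synthetic volume changes
d_obs = G @ true_dv + 0.01 * np.random.default_rng(0).standard_normal(x_sta.size)

dv_est, _ = nnls(G, d_obs)                           # non-negative volume changes
print("estimated volume changes:", dv_est)
```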

  16. The Electromagnetic Field of Elementary Time-Dependent Toroidal Sources

    International Nuclear Information System (INIS)

    Afanas'ev, G.N.; Stepanovskij, Yu.P.

    1994-01-01

    The radiation field of toroidal-like time-dependent current configurations is investigated. Time-dependent charge-current sources are found outside of which the electromagnetic field strengths vanish but the potentials survive. This can be used to carry out time-dependent Aharonov-Bohm-like experiments and for information transfer. Using the Neumann-Helmholtz parametrization of the current density, we present the time-dependent electromagnetic field in a form convenient for applications. 17 refs

  17. Real-time dual-comb spectroscopy with a free-running bidirectionally mode-locked fiber laser

    Science.gov (United States)

    Mehravar, S.; Norwood, R. A.; Peyghambarian, N.; Kieu, K.

    2016-06-01

    The dual-comb technique has enabled exciting applications in high resolution spectroscopy, precision distance measurements, and 3D imaging, and offers major advantages over traditional methods. For example, dual-comb spectroscopy provides orders-of-magnitude improvement in acquisition speed over standard Fourier-transform spectroscopy while still preserving high resolution. Wider adoption of the technique has, however, been hindered by the need for complex and expensive ultrafast laser systems. Here, we present a simple and robust dual-comb system that employs a free-running bidirectionally mode-locked fiber laser operating at telecommunication wavelengths. Two femtosecond frequency combs (with a small difference in repetition rates) are generated from a single laser cavity to ensure mutual coherence and common-noise cancellation. As a result, we have achieved real-time absorption spectroscopy measurements with accurate frequency referencing and a relatively high signal-to-noise ratio, without the need for complex servo locking.

  18. Evaluation of the 1996 predictions of the run-timing of wild migrant spring/summer yearling chinook in the Snake River Basin using Program RealTime

    International Nuclear Information System (INIS)

    Townsend, R.L.; Yasuda, D.; Skalski, J.R.

    1997-03-01

    This report is a post-season analysis of the accuracy of the 1996 predictions from the program RealTime. Observed 1996 migration data collected at Lower Granite Dam were compared to the predictions made by RealTime for the spring outmigration of wild spring/summer chinook. Appendix A displays the graphical reports of the RealTime program that were interactively accessible via the World Wide Web during the 1996 migration season. Final reports are available at address http://www.cqs.washington.edu/crisprt/. The CRISP model incorporated the predictions of the run status to move the timing forecasts further down the Snake River to Little Goose, Lower Monumental and McNary Dams. An analysis of the dams below Lower Granite Dam is available separately

  19. Comparison of source moment tensor recovered by diffraction stacking migration and source time reversal imaging

    Science.gov (United States)

    Zhang, Q.; Zhang, W.

    2017-12-01

    Diffraction stacking migration is an automatic location method widely used in microseismic monitoring of hydraulic fracturing. It stacks thousands of waveforms to enhance the signal-to-noise ratio of weak events. For surface monitoring, diffraction stacking suffers from polarity reversals among receivers caused by the radiation pattern of the moment-tensor source. Joint determination of location and source mechanism has been proposed to overcome the polarity problem, but it requires significantly more computation. As an effective method to recover the source moment tensor, time reversal imaging based on the wave equation can locate a microseismic event by applying interferometry to the image to extract the source position. However, time reversal imaging is very time consuming compared to diffraction stacking location because it requires wave-equation simulation. In this study, we compare the images from diffraction stacking and time reversal imaging to check whether diffraction stacking can recover a moment tensor similar to that from time reversal imaging. We found that the image produced by taking the largest imaging value at each point along the time axis does not exhibit the radiation pattern, whereas, at the same level of computational efficiency, the image produced for each trial origin time can reproduce a radiation pattern similar to that of the time reversal imaging procedure. Thus it is potentially possible to locate the source position with the diffraction stacking method for general moment tensor sources.
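
    The basic diffraction-stack location step described above can be sketched as follows. This is not the authors' implementation: a uniform velocity is assumed for the travel times, and taking absolute amplitudes before stacking is one simple way of ignoring the polarity reversals mentioned in the abstract.

```python
import numpy as np

def diffraction_stack(traces, dt, receivers, grid, origin_time, v=3000.0):
    """traces: (n_rec, n_samp); receivers/grid: 3-D coordinates in metres."""
    best_val, best_xyz = -np.inf, None
    for xyz in grid:
        tt = np.linalg.norm(receivers - xyz, axis=1) / v           # travel times (s)
        idx = np.clip(np.round((origin_time + tt) / dt).astype(int),
                      0, traces.shape[1] - 1)
        stack = np.abs(traces[np.arange(len(traces)), idx]).sum()  # |.| ignores polarity
        if stack > best_val:
            best_val, best_xyz = stack, xyz
    return best_xyz, best_val

# Toy usage with random data and a three-point depth grid
rng = np.random.default_rng(1)
rec = rng.uniform(-1000.0, 1000.0, size=(8, 3)); rec[:, 2] = 0.0
grid = np.array([[0.0, 0.0, z] for z in (500.0, 1000.0, 1500.0)])
traces = rng.standard_normal((8, 2000))
print(diffraction_stack(traces, dt=1e-3, receivers=rec, grid=grid, origin_time=0.5))
```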

  20. A time-domain digitally controlled oscillator composed of a free running ring oscillator and flying-adder

    International Nuclear Information System (INIS)

    Liu Wei; Zhang Shengdong; Wang Yangyuan; Li Wei; Ren Peng; Lin Qinglong

    2009-01-01

    A time-domain digitally controlled oscillator (DCO) is proposed. The DCO is composed of a free-running ring oscillator (FRO) and a flying-adder (FA) with two integrated lap selectors. With a coiled cell array that allows uniform loading capacitances of the delay cells, the FRO produces 32 outputs with consistent tap spacing that serve as reference clocks for the FA. The FA uses the outputs from the FRO to generate the DCO output according to the control number, resulting in a linear dependence of the output period, rather than the frequency, on the digital control word input. The proposed DCO thus ensures good conversion linearity in the time domain and is suitable for time-domain all-digital phase-locked loop applications. The DCO was implemented in a standard 0.13 μm digital logic CMOS process. The measurement results show that the DCO has a linear and monotonic tuning curve with a gain variation of less than 10% and a very low root-mean-square period jitter of 9.3 ps in the output clocks. The DCO works well at supply voltages ranging from 0.6 to 1.2 V and consumes 4 mW of power with a 500 MHz output frequency at a 1.2 V supply voltage.

  1. Recent run-time experience and investigation of impurities in turbines circuit of Helium plant of SST-1

    International Nuclear Information System (INIS)

    Panchal, P.; Panchal, R.; Patel, R.

    2013-01-01

    One of the key sub-systems of the Steady State Superconducting Tokamak (SST-1) is the cryogenic helium refrigerator/liquefier system rated at 1.3 kW at 4.5 K. The helium plant consists of three screw compressors, an oil removal system, a purifier, and a cold box with three turbo expanders (turbines) and a helium cold circulator. During the recent SST-1 plasma campaigns, we observed a pressure drop of the order of 3 bar between the wheel outlet of turbine A and the wheel inlet of turbine B. This significantly higher pressure drop across the turbines reduced the speed of turbines A and B and in turn reduced the overall plant capacity. The helium circuits in the plant have a 10-micron filter at the mouth of turbine B. Initially, the major suspected causes of the blockage were air impurities, dust particles, or collapse of the filter. Several breaks in plant operation were taken to warm the turbine circuits up to 90 K to remove condensed air impurities at the filter, but this exercise did not clear the filter blockage in the turbine circuits. A detailed investigation involving air/water regeneration and rinsing of the cold box, as well as purification of the helium gas in the buffer tanks, was carried out to remove air impurities from the cold box. A trial run of the cold box was then executed in liquefier mode with the turbines down to cryogenic temperatures, which resolved the blockage in the turbine circuits. The paper describes the run-time experience of the helium plant with helium impurities in the turbine circuits, the methods used to remove the impurities, the demonstrated turbine performance, and the lessons learnt during this operation. (author)

  2. Automatic classification of time-variable X-ray sources

    International Nuclear Information System (INIS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-01-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources, and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% for a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
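
    The classification setup described above maps naturally onto standard tooling; a minimal sketch with scikit-learn is shown below. It is not the authors' pipeline, and the feature matrix and labels are synthetic stand-ins for the time-series, spectral, and multi-wavelength features of the 2XMMi sources.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((873, 20))   # 873 sources, 20 placeholder features
y = rng.integers(0, 7, size=873)     # 7 placeholder classes

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)          # 10-fold cross-validation
print(f"10-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

clf.fit(X, y)
print(clf.predict_proba(X[:5]).round(2))            # probabilistic classification
```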

  3. Disrupting gatekeeping practices: Journalists' source selection in times of crisis.

    Science.gov (United States)

    van der Meer, Toni G L A; Verhoeven, Piet; Beentjes, Johannes W J; Vliegenthart, Rens

    2017-10-01

    As gatekeepers, journalists have the power to select the sources that get a voice in crisis coverage. The aim of this study is to find out how journalists select sources during a crisis. In a survey, journalists were asked how they assess the following sources during an organizational crisis: news agencies, an organization undergoing a crisis, and the general public. The sample consisted of 214 Dutch experienced journalists who at least once covered a crisis. Using structural equation modeling, sources' likelihood of being included in the news was predicted using five source characteristics: credibility, knowledge, willingness, timeliness, and the relationship with the journalist. Findings indicated that during a crisis, news agencies are most likely to be included in the news, followed by the public, and finally the organization. The significance of the five source characteristics is dependent on source type. For example, to be used in the news, news agencies and organizations should be mainly evaluated as knowledgeable, whereas information from the public should be both credible and timely. In addition, organizations should not be seen as too willing or too eager to communicate. The findings imply that, during a crisis, journalists remain critical gatekeepers; however, they rely mainly on familiar sources.

  4. Automatic classification of time-variable X-ray sources

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia)

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources, and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% for a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  5. Time-stretch microscopy based on time-wavelength sequence reconstruction from wideband incoherent source

    International Nuclear Information System (INIS)

    Zhang, Chi; Xu, Yiqing; Wei, Xiaoming; Tsia, Kevin K.; Wong, Kenneth K. Y.

    2014-01-01

    Time-stretch microscopy has emerged as an ultrafast optical imaging concept offering an unprecedented combination of imaging speed and sensitivity. However, a dedicated wideband and coherent optical pulse source with high shot-to-shot stability has been mandatory for time-wavelength mapping—the enabling process for ultrahigh-speed wavelength-encoded image retrieval. From a practical point of view, exploring methods to relax the stringent requirements (e.g., temporal stability and coherence) on the source of time-stretch microscopy is thus of great value. In this paper, we demonstrated time-stretch microscopy by reconstructing the time-wavelength mapping sequence from a wideband incoherent source. Utilizing the time-lens focusing mechanism mediated by a narrow-band pulse source, this approach allows generation of a wideband incoherent source, with the spectral efficiency enhanced by a factor of 18. As a proof-of-principle demonstration, time-stretch imaging with a scan rate in the MHz range and diffraction-limited resolution is achieved based on the wideband incoherent source. We note that the concept of time-wavelength sequence reconstruction from a wideband incoherent source can also be generalized to other high-speed optical real-time measurements, where wavelength acts as the information carrier

  6. Disrupting gatekeeping practices: Journalists’ source selection in times of crisis

    Science.gov (United States)

    van der Meer, Toni G.L.A.; Verhoeven, Piet; Beentjes, Johannes W.J.; Vliegenthart, Rens

    2016-01-01

    As gatekeepers, journalists have the power to select the sources that get a voice in crisis coverage. The aim of this study is to find out how journalists select sources during a crisis. In a survey, journalists were asked how they assess the following sources during an organizational crisis: news agencies, an organization undergoing a crisis, and the general public. The sample consisted of 214 Dutch experienced journalists who at least once covered a crisis. Using structural equation modeling, sources’ likelihood of being included in the news was predicted using five source characteristics: credibility, knowledge, willingness, timeliness, and the relationship with the journalist. Findings indicated that during a crisis, news agencies are most likely to be included in the news, followed by the public, and finally the organization. The significance of the five source characteristics is dependent on source type. For example, to be used in the news, news agencies and organizations should be mainly evaluated as knowledgeable, whereas information from the public should be both credible and timely. In addition, organizations should not be seen as too willing or too eager to communicate. The findings imply that, during a crisis, journalists remain critical gatekeepers; however, they rely mainly on familiar sources. PMID:29278263

  7. Blind source separation problem in GPS time series

    Science.gov (United States)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2016-04-01

    A critical point in the analysis of ground displacement time series, such as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistics includes several approaches that can be considered part of data-driven methods. A widely used technique is principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while retaining most of the explained variance of the dataset. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly because PCA minimizes the misfit calculated using an L2 norm (χ2), looking for a new Euclidean space where the projected data are uncorrelated. Independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition
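
    The contrast between PCA and ICA drawn above can be illustrated on a toy "GPS-like" mixture; the sketch below uses scikit-learn's FastICA rather than the vbICA method of the study, and the seasonal and post-seismic signals, mixing matrix, and noise level are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

t = np.linspace(0.0, 5.0, 500)                              # time in years
seasonal = np.sin(2 * np.pi * t)                            # annual signal
postseismic = 1.0 - np.exp(-t / 0.5)                        # exponential decay
S = np.column_stack([seasonal, postseismic])                # true sources

mixing = np.array([[1.0, 0.3], [0.5, 1.2], [0.8, -0.7]])    # 3 synthetic stations
X = S @ mixing.T + 0.05 * np.random.default_rng(0).standard_normal((500, 3))

pca_components = PCA(n_components=2).fit_transform(X)
ica_components = FastICA(n_components=2, random_state=0).fit_transform(X)
# The ICA components should resemble the seasonal and post-seismic sources more
# closely than the orthogonal, variance-ordered PCA components.
```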

  8. Liquidity Runs

    NARCIS (Netherlands)

    Matta, R.; Perotti, E.

    2016-01-01

    Can the risk of losses upon premature liquidation produce bank runs? We show how a unique run equilibrium driven by asset liquidity risk arises even under minimal fundamental risk. To study the role of illiquidity we introduce realistic norms on bank default, such that mandatory stay is triggered

  9. Running Club

    CERN Multimedia

    Running Club

    2010-01-01

    The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...

  10. Time-correlated neutron analysis of a multiplying HEU source

    International Nuclear Information System (INIS)

    Miller, E.C.; Kalter, J.M.; Lavelle, C.M.; Watson, S.M.; Kinlaw, M.T.; Chichester, D.L.; Noonan, W.A.

    2015-01-01

    The ability to quickly identify and characterize special nuclear material remains a national security challenge. In counter-proliferation applications, identifying the neutron multiplication of a sample can be a good indication of the level of threat. Currently neutron multiplicity measurements are performed with moderated 3He proportional counters. These systems rely on the detection of thermalized neutrons, a process which obscures both energy and time information from the source. Fast neutron detectors, such as liquid scintillators, have the ability to detect events on nanosecond time scales, providing more information on the temporal structure of the arriving signal, and provide an alternative method for extracting information from the source. To explore this possibility, a series of measurements were performed on the Idaho National Laboratory's MARVEL assembly, a configurable HEU source. The source assembly was measured in a variety of different HEU configurations and with different reflectors, covering a range of neutron multiplications from 2 to 8. The data were collected with liquid scintillator detectors and digitized for offline analysis. A gap-based approach for identifying the bursts of detected neutrons associated with the same fission chain was used. Using this approach, we are able to study various statistical properties of individual fission chains. One of these properties is the distribution of neutron arrival times within a given burst. We have observed two interesting empirical trends. First, this distribution exhibits a weak, but definite, dependence on source multiplication. Second, there are distinctive differences in the distribution depending on the presence and type of reflector. Both of these phenomena might prove to be useful when assessing an unknown source. The physical origins of these phenomena can be illuminated with the help of MCNPX-PoliMi simulations
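
    The gap-based burst identification mentioned above amounts to splitting the detection-time series wherever the inter-arrival gap exceeds a threshold. The sketch below is not the authors' code; the detection times and the gap criterion are hypothetical.

```python
import numpy as np

def group_bursts(times, max_gap):
    """times: sorted detection times (s); returns one array of times per burst."""
    times = np.asarray(times, dtype=float)
    breaks = np.where(np.diff(times) > max_gap)[0] + 1
    return np.split(times, breaks)

# Toy usage: two bursts separated by a long quiet interval
detections = [0.0, 2e-7, 5e-7, 9e-7, 1.0e-2, 1.00003e-2]
for i, burst in enumerate(group_bursts(detections, max_gap=1e-6)):
    print(f"burst {i}: {len(burst)} neutrons, spread {burst[-1] - burst[0]:.1e} s")
```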

  11. Time-correlated neutron analysis of a multiplying HEU source

    Energy Technology Data Exchange (ETDEWEB)

    Miller, E.C., E-mail: Eric.Miller@jhuapl.edu [Johns Hopkins University Applied Physics Laboratory, Laurel, MD (United States); Kalter, J.M.; Lavelle, C.M. [Johns Hopkins University Applied Physics Laboratory, Laurel, MD (United States); Watson, S.M.; Kinlaw, M.T.; Chichester, D.L. [Idaho National Laboratory, Idaho Falls, ID (United States); Noonan, W.A. [Johns Hopkins University Applied Physics Laboratory, Laurel, MD (United States)

    2015-06-01

    The ability to quickly identify and characterize special nuclear material remains a national security challenge. In counter-proliferation applications, identifying the neutron multiplication of a sample can be a good indication of the level of threat. Currently neutron multiplicity measurements are performed with moderated 3He proportional counters. These systems rely on the detection of thermalized neutrons, a process which obscures both energy and time information from the source. Fast neutron detectors, such as liquid scintillators, have the ability to detect events on nanosecond time scales, providing more information on the temporal structure of the arriving signal, and provide an alternative method for extracting information from the source. To explore this possibility, a series of measurements were performed on the Idaho National Laboratory's MARVEL assembly, a configurable HEU source. The source assembly was measured in a variety of different HEU configurations and with different reflectors, covering a range of neutron multiplications from 2 to 8. The data was collected with liquid scintillator detectors and digitized for offline analysis. A gap based approach for identifying the bursts of detected neutrons associated with the same fission chain was used. Using this approach, we are able to study various statistical properties of individual fission chains. One of these properties is the distribution of neutron arrival times within a given burst. We have observed two interesting empirical trends. First, this distribution exhibits a weak, but definite, dependence on source multiplication. Second, there are distinctive differences in the distribution depending on the presence and type of reflector. Both of these phenomena might prove to be useful when assessing an unknown source. The physical origins of these phenomena can be illuminated with help of MCNPX-PoliMi simulations.

  12. Time-correlated neutron analysis of a multiplying HEU source

    Science.gov (United States)

    Miller, E. C.; Kalter, J. M.; Lavelle, C. M.; Watson, S. M.; Kinlaw, M. T.; Chichester, D. L.; Noonan, W. A.

    2015-06-01

    The ability to quickly identify and characterize special nuclear material remains a national security challenge. In counter-proliferation applications, identifying the neutron multiplication of a sample can be a good indication of the level of threat. Currently neutron multiplicity measurements are performed with moderated 3He proportional counters. These systems rely on the detection of thermalized neutrons, a process which obscures both energy and time information from the source. Fast neutron detectors, such as liquid scintillators, have the ability to detect events on nanosecond time scales, providing more information on the temporal structure of the arriving signal, and provide an alternative method for extracting information from the source. To explore this possibility, a series of measurements were performed on the Idaho National Laboratory's MARVEL assembly, a configurable HEU source. The source assembly was measured in a variety of different HEU configurations and with different reflectors, covering a range of neutron multiplications from 2 to 8. The data was collected with liquid scintillator detectors and digitized for offline analysis. A gap based approach for identifying the bursts of detected neutrons associated with the same fission chain was used. Using this approach, we are able to study various statistical properties of individual fission chains. One of these properties is the distribution of neutron arrival times within a given burst. We have observed two interesting empirical trends. First, this distribution exhibits a weak, but definite, dependence on source multiplication. Second, there are distinctive differences in the distribution depending on the presence and type of reflector. Both of these phenomena might prove to be useful when assessing an unknown source. The physical origins of these phenomena can be illuminated with help of MCNPX-PoliMi simulations.

  13. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  14. Energy expended and knee joint load accumulated when walking, running, or standing for the same amount of time.

    Science.gov (United States)

    Miller, Ross H; Edwards, W Brent; Deluzio, Kevin J

    2015-01-01

    Evidence suggests prolonged bouts of sitting are unhealthy, and some public health messages have recently recommended replacing sitting with more standing. However, the relative benefits of replacing sitting with standing compared to locomotion are not known. Specifically, the biomechanical consequences of standing compared to other sitting-alternatives like walking and running are not well known and are usually not considered in studies on sitting. We compared the total knee joint load accumulated (TKJLA) and the total energy expended (TEE) when performing either walking, running, or standing for a common exercise bout duration (30 min). Walking and running both (unsurprisingly) had much more TEE than standing (+300% and +1100%, respectively). TKJLA was similar between walking and standing and 74% greater in running. The results suggest that standing is a poor replacement for walking and running if one wishes to increase energy expenditure, and may be particularly questionable for use in individuals at risk for knee osteoarthritis due to its surprisingly high TKJLA (just as high as walking, 56% of the load in running) and the type of loading (continuous compression) it places on cartilage. However, standing has health benefits as an "inactivity interrupter" that extend beyond its direct energy expenditure. We suggest that future studies on standing as an inactivity intervention consider the potential biomechanical consequences of standing more often throughout the day, particularly in the case of prolonged bouts of standing. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Timing jitter measurements at the SLC electron source

    International Nuclear Information System (INIS)

    Sodja, J.; Browne, M.J.; Clendenin, J.E.

    1989-03-01

    The SLC thermionic gun and electron source produce a beam of up to 15 × 10^10 e− in a single S-band bunch. A 170 keV, 2 ns FWHM pulse out of the gun is compressed by means of two subharmonic buncher cavities followed by an S-band buncher and a standard SLAC accelerating section. Ceramic gaps in the beam pipe at the output of the gun allow a measure of the beam intensity and timing. A measurement at these gaps of the timing jitter, with a resolution of <10 ps, is described. 3 refs., 5 figs

  16. THE MATRYOSHKA RUN. II. TIME-DEPENDENT TURBULENCE STATISTICS, STOCHASTIC PARTICLE ACCELERATION, AND MICROPHYSICS IMPACT IN A MASSIVE GALAXY CLUSTER

    International Nuclear Information System (INIS)

    Miniati, Francesco

    2015-01-01

    We use the Matryoshka run to study the time-dependent statistics of structure-formation-driven turbulence in the intracluster medium of a 10^15 M☉ galaxy cluster. We investigate the turbulent cascade in the inner megaparsec for both compressional and incompressible velocity components. The flow maintains approximate conditions of fully developed turbulence, with departures thereof settling in about an eddy-turnover time. Turbulent velocity dispersion remains above 700 km s^-1 even at low mass accretion rate, with the fraction of compressional energy between 10% and 40%. The normalization and the slope of the compressional turbulence are susceptible to large variations on short timescales, unlike the incompressible counterpart. A major merger occurs around redshift z ≅ 0 and is accompanied by a long period of enhanced turbulence, ascribed to temporal clustering of mass accretion related to spatial clustering of matter. We test models of stochastic acceleration by compressional modes for the origin of diffuse radio emission in galaxy clusters. The turbulence simulation model constrains an important unknown of this complex problem and brings forth its dependence on the elusive microphysics of the intracluster plasma. In particular, the specifics of the plasma collisionality and the dissipation physics of weak shocks affect the cascade of compressional modes with strong impact on the acceleration rates. In this context radio halos emerge as complex phenomena in which a hierarchy of processes acting on progressively smaller scales are at work. Stochastic acceleration by compressional modes implies statistical correlation of radio power and spectral index with merging cores distance, both testable in principle with radio surveys

  17. The Reliability and Validity of a Four-Minute Running Time-Trial in Assessing V˙O2max and Performance

    Directory of Open Access Journals (Sweden)

    Kerry McGawley

    2017-05-01

    Introduction: Traditional graded-exercise tests to volitional exhaustion (GXTs) are limited by the need to establish starting workloads, stage durations, and step increments. Short-duration time-trials (TTs) may be easier to implement and more ecologically valid in terms of real-world athletic events. The purpose of the current study was to assess the reliability and validity of maximal oxygen uptake (V˙O2max) and performance measured during a traditional GXT (STEP) and a four-minute running time-trial (RunTT). Methods: Ten recreational runners (age: 32 ± 7 years; body mass: 69 ± 10 kg) completed five STEP tests with a verification phase (VER) and five self-paced RunTTs on a treadmill. The order of the STEP/VER and RunTT trials was alternated and counter-balanced. Performance was measured as time to exhaustion (TTE) for STEP and VER and distance covered for RunTT. Results: The coefficient of variation (CV) for V˙O2max was similar between STEP, VER, and RunTT (1.9 ± 1.0, 2.2 ± 1.1, and 1.8 ± 0.8%, respectively), but varied for performance between the three types of test (4.5 ± 1.9, 9.7 ± 3.5, and 1.8 ± 0.7% for STEP, VER, and RunTT, respectively). Bland-Altman limits of agreement (bias ± 95% limits) showed V˙O2max to be 1.6 ± 3.6 mL·kg−1·min−1 higher for STEP vs. RunTT. Peak HR was also significantly higher during STEP compared with RunTT (P = 0.019). Conclusion: A four-minute running time-trial appears to provide more reliable performance data in comparison to an incremental test to exhaustion, but may underestimate V˙O2max.
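
    The reliability statistics reported above (within-subject CV and Bland-Altman limits of agreement) can be computed as in the sketch below. This is not the study's analysis code, and the example values are hypothetical.

```python
import numpy as np

def cv_percent(trials):
    """trials: (n_subjects, n_repeats) array; mean within-subject CV in percent."""
    trials = np.asarray(trials, dtype=float)
    return float(np.mean(trials.std(axis=1, ddof=1) / trials.mean(axis=1)) * 100.0)

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements a and b."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical VO2max values (mL/kg/min) from repeated trials and the two tests
repeats = np.array([[55.1, 54.6, 55.9], [60.3, 59.8, 61.0]])
print(f"within-subject CV: {cv_percent(repeats):.1f}%")

step  = np.array([55.1, 60.3, 52.8, 58.0])
runtt = np.array([53.9, 58.5, 51.6, 56.2])
print("bias, lower LoA, upper LoA:", bland_altman(step, runtt))
```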

  18. Time-Reversal Study of the Hemet (CA) Tremor Source

    Science.gov (United States)

    Larmat, C. S.; Johnson, P. A.; Guyer, R. A.

    2010-12-01

    Since its first observation by Nadeau & Dolenc (2005) and Gomberg et al. (2008), tremor along the San Andreas fault system is thought to be a probe into the frictional state of the deep part of the fault (e.g. Shelly et al., 2007). Tremor is associated with slow, otherwise deep, aseismic slip events that may be triggered by faint signals such as passing waves from remote earthquakes or solid Earth tides. Well-resolved tremor source location is key to constraining frictional models of the fault. However, tremor source location is challenging because of the high-frequency and highly scattered nature of the tremor signal, which is characterized by a lack of isolated phase arrivals. Time Reversal (TR) methods are emerging as a useful tool for location. The unique requirement is a good velocity model for the different time-reversed phases to arrive coherently at the source point. We present location results for a tremor source near the town of Hemet, CA, which was triggered by the 2002 M 7.9 Denali Fault earthquake (Gomberg et al., 2008) and by the 2009 M 6.9 Gulf of California earthquake. We performed TR in a volume model of 88 (N-S) x 70 (W-E) x 60 km (Z) using the full-wave 3D wave-propagation package SPECFEM3D (Komatitsch et al., 2002). The results for the 2009 episode indicate a deep source (at about 22 km) located about 4 km SW of the fault surface scarp. We perform STA/SLA and correlation analysis in order to obtain independent confirmation of the Hemet tremor source. We gratefully acknowledge the support of the U. S. Department of Energy through the LANL/LDRD Program for this work.

  19. Nuclear energy as a 'golden bridge'? Constitutional legal problems of the negotiation of the prolongation of the running time against skimming of profits

    International Nuclear Information System (INIS)

    Waldhoff, Christian; Aswege, Hanka von

    2010-01-01

    The coalition agreement of the Christian Democratic Union (CDU), the Christian Social Union (CSU) and the Free Democratic Party (FDP) of 26 October 2009 characterizes nuclear energy as a bridge technology. The coalition parties declare their intention to prolong the running times of German nuclear power stations until they can be reliably replaced by renewable energies. The conditions for the prolongation of the running times are to be regulated in agreement with the energy supply companies. In the contribution under consideration, the authors report on the fiscal-law problems of the skimming of profits. Constitutional problems of earmarking such a skimming of profits, as well as of a consensual agreement, are discussed in this contribution. In the result, no way of skimming the additional profits arising from the prolongation of the running times that is reliable under financial constitutional law is evident. Legally earmarking the resulting revenue for the promotion of renewable energies increases the constitutional doubts.

  20. Can renewable energy sources be financed through competitive power markets in the long run?; Koennen sich erneuerbare Energien langfristig auf wettbewerblich organisierten Strommaerkten finanzieren?

    Energy Technology Data Exchange (ETDEWEB)

    Kopp, Oliver; Essler-Frey, Anke; Engelhorn, Thorsten [MVV Energie AG, Mannheim (Germany)

    2012-12-15

    In this paper we address the issue of whether renewable energy sources can be integrated into power markets if the use of renewable energies is extended at the desired speed. Market integration means that renewable energy sources have to cover their full costs from revenues on competitive markets. In the first part of this paper, we evaluate the long-term revenues of intermittent renewable energy sources using a high resolution power market model. Considering the renewable targets of the German lead study of 2010, we show that due to the merit order effect, intermittent renewable energy sources, such as wind power and photovoltaic, cannot be financed through power markets alone, even if their full costs fall below those of conventional power plants. This is also true for scenarios with high CO{sub 2}-prices and increasing spot market prices. In the second part of this paper, we discuss whether in the long run additional instruments such as green certificates or capacity markets would allow for a more competitive financing of renewable energy sources. Center stage in the discussion is the question under which circumstances these instruments increase competitive pricing and decentralised market decisions. (orig.)

  1. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2012-01-01

    On Wednesday 14 March, the machine group successfully injected beams into the LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β*=0.6m, settings that have been used routinely since then. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. That sample has since been topped up to two million, allowing further refinements of the Tracker Alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to reach 1380 bunches and peak instantaneous luminosities around 6.5E33 at the beginning of fills. The average bunch charges reached ~1.5E11 protons per bunch, which results in an initial pile-up of ~30 interactions per crossing. During the ...

  2. Hard real-time quick EXAFS data acquisition with all open source software on a commodity personal computer

    International Nuclear Information System (INIS)

    So, I.; Siddons, D.P.; Caliebe, W.A.; Khalid, S.

    2007-01-01

    We describe here the data acquisition subsystem of the Quick EXAFS (QEXAFS) experiment at the National Synchrotron Light Source of Brookhaven National Laboratory. For ease of future growth and flexibility, almost all software components are open source with very active maintainers. Among them are Linux running on an x86 desktop computer, RTAI for real-time response, the COMEDI driver for the data acquisition hardware, Qt and PyQt for the graphical user interface, PyQwt for plotting, and Python for scripting. The signal (A/D) and energy-reading (IK220 encoder) devices in the PCI computer are also EPICS enabled. The control system scans the monochromator energy through a networked EPICS motor. With the real-time kernel, the system is capable of a deterministic data-sampling period of tens of microseconds with a typical timing jitter of several microseconds. At the same time, Linux runs other non-real-time processes handling the user interface. A modern Qt-based controls frontend enhances productivity. The fast plotting and zooming of data in time or energy coordinates let the experimenters verify the quality of the data before detailed analysis. Python scripting is built in for automation. The typical data rate for continuous runs is around 10 MB/min
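
    The Python scripting mentioned above can be illustrated with a small step-scan over EPICS channel access. This is only a sketch of the general idea, not the beamline's software (which performs continuous quick scans); it assumes the pyepics package, and the PV names are hypothetical placeholders.

```python
import time
import epics  # pyepics

MONO_PV = "BL:MONO:Energy"    # hypothetical monochromator energy PV
SIGNAL_PV = "BL:DET:Counts"   # hypothetical detector readback PV

def scan_energy(e_start, e_stop, n_points, dwell_s=0.1):
    """Step the monochromator and read the signal at each energy point."""
    energies, signals = [], []
    step = (e_stop - e_start) / (n_points - 1)
    for i in range(n_points):
        e = e_start + i * step
        epics.caput(MONO_PV, e, wait=True)   # move and wait for completion
        time.sleep(dwell_s)                  # settle / integrate
        energies.append(e)
        signals.append(epics.caget(SIGNAL_PV))
    return energies, signals
```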

  3. The acute effects of a caffeine-containing supplement on bench press strength and time to running exhaustion.

    Science.gov (United States)

    Beck, Travis W; Housh, Terry J; Malek, Moh H; Mielke, Michelle; Hendrix, Russell

    2008-09-01

    The purpose of the present study was to examine the acute effects of a caffeine-containing supplement (SUPP) on one-repetition maximum (1-RM) bench press strength and time to running exhaustion (TRE) at a velocity that corresponded to 85% of the peak oxygen uptake (V̇O2peak). The study used a double-blinded, placebo-controlled, crossover design. Thirty-one men (mean +/- SD age = 23.0 +/- 2.6 years) were randomly assigned to take either the SUPP or placebo (PLAC) first. The SUPP contained 201 mg of caffeine, and the PLAC was microcrystalline cellulose. All subjects were tested for 1-RM bench press strength and TRE at 45 minutes after taking either the SUPP or PLAC. After 1 week of rest, the subjects returned to the laboratory and ingested the opposite substance (SUPP or PLAC) from what was taken during the previous visit. The 1-RM bench press and TRE tests were then performed in the same manner as before. The results indicated that the SUPP had no effect on 1-RM bench press strength or TRE at 85% V̇O2peak. It is possible that the acute effects of caffeine are affected by differences in training status and/or the relative intensity of the exercise task. Future studies should examine these issues, in addition to testing the acute effects of various caffeine doses on performance during maximal strength, power, and aerobic activities. These findings do not, however, support the use of caffeine as an ergogenic aid in untrained to moderately trained individuals.

  4. Analysis and Design of Bi-Directional DC-DC Converter in the Extended Run Time DC UPS System Based on Fuel Cell and Supercapacitor

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2009-01-01

    Abstract-In this paper, an extended run time DC UPS system structure with fuel cell and supercapacitor is investigated. A wide input range bi-directional dc-dc converter is described along with the phase-shift modulation scheme and phase-shift with duty cycle control, in different modes...

  5. Towards an accurate real-time locator of infrasonic sources

    Science.gov (United States)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source via media with stochastic and rapidly space-varying conditions. Hence, their travel time, their amplitude at sensor recordings, and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of an a priori probability distribution function (APDF) of the propagation model parameters with a likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a decreased computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate, provided adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, but this leads us to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histogram" (CRH). The other is the outcome of previous findings of linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to a regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability
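
    A toy version of the probabilistic location idea sketched above is given below: a brute-force grid search over candidate locations and origin times, scoring each with a Gaussian likelihood built from a mean celerity and its spread. This is not the BISL algorithm itself; the station positions, arrival times, and celerity statistics are hypothetical.

```python
import numpy as np

stations = np.array([[0.0, 0.0], [300.0, 50.0], [150.0, 400.0]])  # km
arrivals = np.array([620.0, 180.0, 905.0])                         # s (placeholder)
cel_mean, cel_std = 0.30, 0.04                                      # km/s

def log_posterior(src_xy, t0):
    dist = np.linalg.norm(stations - src_xy, axis=1)
    pred = t0 + dist / cel_mean                               # predicted arrivals
    sigma = np.maximum(dist, 1.0) * cel_std / cel_mean**2     # travel-time spread
    return float(np.sum(-0.5 * ((arrivals - pred) / sigma) ** 2 - np.log(sigma)))

xs = np.linspace(-200.0, 500.0, 36)
ys = np.linspace(-200.0, 500.0, 36)
t0s = np.linspace(-300.0, 300.0, 31)
best = max(((x, y, t0) for x in xs for y in ys for t0 in t0s),
           key=lambda p: log_posterior(np.array(p[:2]), p[2]))
print("maximum a posteriori location (km) and origin time (s):", best)
```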

  6. Improving wheat productivity through source and timing of nitrogen fertilization

    International Nuclear Information System (INIS)

    Jan, M.T.; Khan, A.; Afridi, M.Z.; Arif, M.; Khan, M.J.; Farhatullah; Jan, D.; Saeed, M.

    2011-01-01

    Efficient nitrogen (N) fertilizer management is critical for improved production of wheat (Triticum aestivum L.) and can be achieved through the source and timing of N application. Thus, an experiment was carried out at the Research Farm of KPK Agricultural University Peshawar during 2005-06 to test the effects of source and timing of N application on yield and yield components of wheat. The nitrogen sources were ammonium (NH4) and nitrate (NO3), applied at the rate of 100 kg ha-1 at three different stages, i.e., at sowing (S1), tillering (S2) and boot stage (S3). Ammonium N increased yield components but did not affect the final grain yield. Split N application at sowing, tillering and boot stages increased productive tillers m-2 and thousand-grain weight, whereas grain yield was higher when N was applied at tillering and boot stages. Nitrogen fertilization increased grain yield by 20% compared to the control regardless of N application time. It was concluded from the experiment that split application of NH4-N performed better than full-dose application and/or NO3-N for improved wheat productivity and is thus recommended for general practice in the agro-climatic conditions of Peshawar. (author)

  7. Using Real Time Workshop for rapid and reliable control implementation in the Frascati Tokamak Upgrade Feedback Control System running under RTAI-GNU/Linux

    International Nuclear Information System (INIS)

    Centioli, C.; Iannone, F.; Ledauphin, M.; Panella, M.; Pangione, L.; Podda, S.; Vitale, V.; Zaccarian, L.

    2005-01-01

    The Feedback Control System running at FTU has recently been ported from a commercial platform (based on LynxOS) to an open-source GNU/Linux-based RTAI-LXRT platform, thereby obtaining significant performance and cost improvements. Based on the new open-source platform, it is now possible to experiment with novel control strategies aimed at improving the robustness and accuracy of the feedback control. Nevertheless, the implementation of control ideas still requires a great deal of coding of the control algorithms which, if carried out manually, may be prone to coding errors and is therefore time consuming both in the development phase and in the subsequent validation tests consisting of dedicated experiments carried out on FTU. In this paper, we report on recent developments based on Mathworks' Simulink and Real Time Workshop (RTW) packages to obtain a user-friendly environment where the real-time code implementing novel control algorithms can be easily generated, tested and validated. Thanks to this new tool, the control designer only needs to specify the block diagram of the control task (namely, a high-level and functional description of the new algorithm under consideration), and the corresponding real-time code generation and testing is completely automated without any need for dedicated experiments. In the paper, the work carried out to adapt Real Time Workshop to our RTAI-LXRT context will be illustrated. A necessary re-organization of the previous real-time software, aimed at incorporating the code coming from the adapted RTW, will also be discussed. Moreover, we will report on a performance comparison between the code obtained using the automated RTW-based procedure and the hand-written, appropriately optimised C code; at the moment, a preliminary performance comparison using dummy algorithms has shown that the code automatically generated from RTW is faster (by about 30%) than the manually written one. This preliminary result combined with the

  8. Uncertainty in Earthquake Source Imaging Due to Variations in Source Time Function and Earth Structure

    KAUST Repository

    Razafindrakoto, H. N. T.

    2014-03-25

    One way to improve the accuracy and reliability of kinematic earthquake source imaging is to investigate the origin of uncertainty and to minimize their effects. The difficulties in kinematic source inversion arise from the nonlinearity of the problem, nonunique choices in the parameterization, and observational errors. We analyze particularly the uncertainty related to the choice of the source time function (STF) and the variability in Earth structure. We consider a synthetic data set generated from a spontaneous dynamic rupture calculation. Using Bayesian inference, we map the solution space of peak slip rate, rupture time, and rise time to characterize the kinematic rupture in terms of posterior density functions. Our test to investigate the effect of the choice of STF reveals that all three tested STFs (isosceles triangle, regularized Yoffe with acceleration time of 0.1 and 0.3 s) retrieve the patch of high slip and slip rate around the hypocenter. However, the use of an isosceles triangle as STF artificially accelerates the rupture to propagate faster than the target solution. It additionally generates an artificial linear correlation between rupture onset time and rise time. These appear to compensate for the dynamic source effects that are not included in the symmetric triangular STF. The exact rise time for the tested STFs is difficult to resolve due to the small amount of radiated seismic moment in the tail of STF. To highlight the effect of Earth structure variability, we perform inversions including the uncertainty in the wavespeed only, and variability in both wavespeed and layer depth. We find that little difference is noticeable between the resulting rupture model uncertainties from these two parameterizations. Both significantly broaden the posterior densities and cause faster rupture propagation particularly near the hypocenter due to the major velocity change at the depth where the fault is located.

  9. Uncertainty in Earthquake Source Imaging Due to Variations in Source Time Function and Earth Structure

    KAUST Repository

    Razafindrakoto, H. N. T.; Mai, Paul Martin

    2014-01-01

    One way to improve the accuracy and reliability of kinematic earthquake source imaging is to investigate the origin of uncertainty and to minimize their effects. The difficulties in kinematic source inversion arise from the nonlinearity of the problem, nonunique choices in the parameterization, and observational errors. We analyze particularly the uncertainty related to the choice of the source time function (STF) and the variability in Earth structure. We consider a synthetic data set generated from a spontaneous dynamic rupture calculation. Using Bayesian inference, we map the solution space of peak slip rate, rupture time, and rise time to characterize the kinematic rupture in terms of posterior density functions. Our test to investigate the effect of the choice of STF reveals that all three tested STFs (isosceles triangle, regularized Yoffe with acceleration time of 0.1 and 0.3 s) retrieve the patch of high slip and slip rate around the hypocenter. However, the use of an isosceles triangle as STF artificially accelerates the rupture to propagate faster than the target solution. It additionally generates an artificial linear correlation between rupture onset time and rise time. These appear to compensate for the dynamic source effects that are not included in the symmetric triangular STF. The exact rise time for the tested STFs is difficult to resolve due to the small amount of radiated seismic moment in the tail of STF. To highlight the effect of Earth structure variability, we perform inversions including the uncertainty in the wavespeed only, and variability in both wavespeed and layer depth. We find that little difference is noticeable between the resulting rupture model uncertainties from these two parameterizations. Both significantly broaden the posterior densities and cause faster rupture propagation particularly near the hypocenter due to the major velocity change at the depth where the fault is located.
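
    Of the source time functions compared above, the isosceles triangle is the simplest to write down; the sketch below builds one normalized so that its time integral equals the total slip. This is only an illustration, not the inversion code, and the regularized Yoffe function is not reproduced here.

```python
import numpy as np

def triangle_stf(t, onset, rise_time, total_slip=1.0):
    """Isosceles-triangle slip-rate function; its area equals total_slip."""
    half = rise_time / 2.0
    peak = 2.0 * total_slip / rise_time               # peak slip rate
    x = np.asarray(t, dtype=float) - onset
    up = np.clip(x / half, 0.0, 1.0)                  # rising ramp
    down = np.clip((rise_time - x) / half, 0.0, 1.0)  # falling ramp
    return peak * np.minimum(up, down) * ((x >= 0.0) & (x <= rise_time))

t = np.linspace(0.0, 5.0, 1001)
s = triangle_stf(t, onset=1.0, rise_time=2.0, total_slip=0.5)
print("integral (should equal the total slip):", np.trapz(s, t))
```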

  10. Triathlon: running injuries.

    Science.gov (United States)

    Spiker, Andrea M; Dixit, Sameer; Cosgarea, Andrew J

    2012-12-01

    The running portion of the triathlon represents the final leg of the competition and, by some reports, the most important part in determining a triathlete's overall success. Although most triathletes spend most of their training time on cycling, running injuries are the most common injuries encountered. Common causes of running injuries include overuse, lack of rest, and activities that aggravate biomechanical predisposers of specific injuries. We discuss the running-associated injuries in the hip, knee, lower leg, ankle, and foot of the triathlete, and the causes, presentation, evaluation, and treatment of each.

  11. Patellofemoral Joint Loads During Running at the Time of Return to Sport in Elite Athletes With ACL Reconstruction.

    Science.gov (United States)

    Herrington, Lee; Alarifi, Saud; Jones, Richard

    2017-10-01

    Patellofemoral joint pain and degeneration are common in patients who undergo anterior cruciate ligament reconstruction (ACLR). The presence of patellofemoral joint pain significantly affects the patient's ability to continue sport participation and may even affect participation in activities of daily living. The mechanisms behind patellofemoral joint pain and degeneration are unclear, but previous research has identified altered patellofemoral joint loading in individuals with patellofemoral joint pain when running. It is unclear whether this process occurs after ACLR. To assess the patellofemoral joint stresses during running in ACLR knees and compare the findings to the noninjured knee and matched control knees. Controlled laboratory study. Thirty-four elite sports practitioners who had undergone ACLR and 34 age- and sex-matched controls participated in the study. The participants' running gait was assessed via 3D motion capture, and knee loads and forces were calculated by use of inverse dynamics. A significant difference was found in knee extensor moment, knee flexion angles, patellofemoral contact force (about 23% greater), and patellofemoral contact pressure (about 27% greater) between the ACLR and the noninjured limb (P ≤ .04) and between the ACLR and the control limb (P ≤ .04); no significant differences were found between the noninjured and control limbs (P ≥ .44). Significantly greater levels of patellofemoral joint stress and load were found in the ACLR knee compared with the noninjured and control knees. Altered levels of patellofemoral stress in the ACLR knee during running may predispose individuals to patellofemoral joint pain.

  12. Effects of selective breeding for increased wheel-running behavior on circadian timing of substrate oxidation and ingestive behavior

    NARCIS (Netherlands)

    Jonas, I.; Vaanholt, L. M.; Doornbos, M.; Garland, T.; Scheurink, A. J. W.; Nyakas, C.; van Dijk, G.; Garland Jr., T.

    2010-01-01

    Fluctuations in substrate preference and utilization across the circadian cycle may be influenced by the degree of physical activity and nutritional status. In the present study, we assessed these relationships in control mice and in mice from a line selectively bred for high voluntary wheel-running behavior.

  13. Source modeling and inversion with near real-time GPS: a GITEWS perspective for Indonesia

    Science.gov (United States)

    Babeyko, A. Y.; Hoechner, A.; Sobolev, S. V.

    2010-07-01

    We present the GITEWS approach to source modeling for the tsunami early warning in Indonesia. Near-field tsunamis impose special requirements on both warning time and the detail of source characterization. To meet these requirements, we employ geophysical and geological information to predefine a maximum number of rupture parameters. We discretize the tsunamigenic Sunda plate interface into an ordered grid of patches (150×25) and employ the concept of Green's functions for forward and inverse rupture modeling. Rupture Generator, a forward modeling tool, additionally employs different scaling laws and slip shape functions to construct physically reasonable source models using basic seismic information only (magnitude and epicenter location). GITEWS runs a library of semi- and fully-synthetic scenarios to be used extensively for system testing as well as for teaching and training of warning center personnel. Near real-time GPS observations are a very valuable complement to the local tsunami warning system. Their inversion provides a quick (within a few minutes of an event) estimation of the earthquake magnitude, rupture position and, in case of sufficient station coverage, details of the slip distribution.
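
    The inversion step described here reduces to a linear least-squares problem once Green's functions relate slip on each patch to GPS displacements. The following is a minimal, hedged sketch of that idea with entirely synthetic numbers; the patch size, rigidity, Green's functions, and magnitude bookkeeping are assumptions for illustration, not GITEWS values.

      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_patch = 60, 25                      # GPS displacement components, subfault patches (assumed sizes)
      G = rng.normal(size=(n_obs, n_patch))        # placeholder Green's functions (displacement per unit slip)
      true_slip = np.clip(rng.normal(2.0, 1.0, n_patch), 0.0, None)
      d = G @ true_slip + rng.normal(0.0, 0.01, n_obs)   # synthetic GPS offsets with noise (m)

      # Unregularized least-squares slip estimate (a real system would add smoothing/positivity).
      slip_hat, *_ = np.linalg.lstsq(G, d, rcond=None)

      # Quick magnitude estimate from the summed moment (rigidity and patch area are assumptions).
      mu, patch_area = 3.0e10, 20e3 * 20e3         # Pa, m^2
      M0 = mu * patch_area * np.clip(slip_hat, 0.0, None).sum()
      Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
      print(f"Mw ~ {Mw:.2f}")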

  14. Digital time stamping system based on open source technologies.

    Science.gov (United States)

    Miskinis, Rimantas; Smirnov, Dmitrij; Urba, Emilis; Burokas, Andrius; Malysko, Bogdan; Laud, Peeter; Zuliani, Francesco

    2010-03-01

    A digital time stamping system based on open source technologies (LINUX-UBUNTU, OpenTSA, OpenSSL, MySQL) is described in detail, including all important testing results. The system, called BALTICTIME, was developed under a project sponsored by the European Commission under the Program FP 6. It was designed to meet the requirements imposed on systems for legal and accountable time stamping and to be applicable to the hardware commonly used by the national time metrology laboratories. The BALTICTIME system is intended for use by governmental and other institutions as well as by private individuals. Testing results demonstrate that the time stamps issued to the user by BALTICTIME and saved in BALTICTIME's archives (which implies that the time stamps are accountable) meet all the regulatory requirements. Moreover, BALTICTIME in its present implementation is able to issue more than 10 digital time stamps per second. The system can be enhanced if needed. The test version of the BALTICTIME service is free and available at http://baltictime.pfi.lt:8080/btws/ and http://baltictime.lnmc.lv:8080/btws/.
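
    For orientation only: OpenSSL, one of the components listed above, exposes RFC 3161 time-stamp queries through its ts subcommand. The sketch below is an assumption-laden illustration of that generic workflow, not the BALTICTIME implementation; the file names are hypothetical and the flags assume a reasonably recent OpenSSL build.

      import subprocess

      # Build an RFC 3161 time-stamp query for a document (file names are hypothetical).
      subprocess.run(
          ["openssl", "ts", "-query", "-data", "report.pdf",
           "-sha256", "-cert", "-out", "report.tsq"],
          check=True,
      )

      # The .tsq request is then sent to the time-stamping service over HTTP; the returned
      # .tsr reply can be inspected (and later verified against the TSA certificate).
      subprocess.run(["openssl", "ts", "-reply", "-in", "report.tsr", "-text"], check=True)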

  15. Just in Time - Expecting Failure: Do JIT Principles Run Counter to DoD’s Business Nature?

    Science.gov (United States)

    2014-04-01

    Regiment. The last several years witnessed both commercial industry and the Department of Defense (DoD) logistics supply chains trending toward an...moving items through a production system only when needed. Equating inventory to an avoidable waste instead of adding value to a company directly...Louisiana plant for a week, Honda Motor Company to suspend orders for Japanese-built Honda and Acura models, and producers of Boeing’s 787 to run billions

  16. Running Linux

    CERN Document Server

    Dalheimer, Matthias Kalle

    2006-01-01

    The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.

  17. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2013-01-01

    Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...

  18. [Research and implementation of a real-time monitoring system for running status of medical monitors based on the internet of things].

    Science.gov (United States)

    Li, Yiming; Qian, Mingli; Li, Long; Li, Bin

    2014-07-01

    This paper proposes a real-time monitoring system for the running status of medical monitors based on the internet of things. On the hardware side, a solution combining ZigBee networks with 470 MHz networks is proposed. On the software side, a graphical monitoring interface and real-time equipment-failure alarms are implemented. The system supports remote equipment-failure detection and wireless localization, providing a practical and effective method for medical equipment management.

  19. Real-time control using open source RTOS

    Science.gov (United States)

    Irwin, Philip C.; Johnson, Richard L., Jr.

    2002-12-01

    Complex telescope systems such as interferometers tend to rely heavily on hard real-time operating systems (RTOS). It has been standard practice at NASA's Jet Propulsion Laboratory (JPL) and many other institutions to use costly commercial RTOSs and hardware. After developing a real-time toolkit for VxWorks on the PowerPC platform (dubbed RTC), the interferometry group at JPL is porting this code to the Real-Time Application Interface (RTAI), an open source RTOS that is essentially an extension to the Linux kernel. This port has the potential to reduce software and hardware costs for future projects, while increasing the level of performance. The goals of this paper are to briefly describe the RTC toolkit, highlight the successes and pitfalls of porting the toolkit from VxWorks to Linux-RTAI, and to discuss future enhancements that will be implemented as a direct result of this port. The first port of any body of code is always the most difficult since it uncovers the OS-specific calls and forces "red flags" into those portions of the code. For this reason, it has also been a huge benefit that the project chose a generic, platform-independent OS extension, ACE, and its CORBA counterpart, TAO. This port of RTC will pave the way for conversions to other environments, the most interesting of which is a non-real-time simulation environment, currently being considered by the Space Interferometry Mission (SIM) and the Terrestrial Planet Finder (TPF) Projects.

  20. TEMPS, 1-Group Time-Dependent Pulsed Source Neutron Transport

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    1 - Description of program or function: TEMPS numerically determines the scalar flux as given by the one-group neutron transport equation with a pulsed source in an infinite medium. Standard plane, point, and line sources are considered as well as a volume source in the negative half-space in plane geometry. The angular distribution of emitted neutrons can either be isotropic or mono-directional (beam) in plane geometry and isotropic in spherical and cylindrical geometry. A general anisotropic scattering kernel represented in terms of Legendre polynomials can be accommodated with a time-dependent number of secondaries given by c(t) = c_0 (t/t_0)^β, where β is greater than -1 and less than infinity. TEMPS is designed to provide the flux to a high degree of accuracy (4-5 digits) for use as a benchmark to which results from other numerical solutions or approximations can be compared. 2 - Method of solution: A semi-analytic method of solution is followed. The main feature of this approach is that no discretization of the transport or scattering operators is employed. The numerical solution involves the evaluation of an analytical representation of the solution by standard numerical techniques. The transport equation is first reformulated in terms of multiple collisions with the flux represented by an infinite series of collisional components. Each component is then represented by an orthogonal Legendre series expansion in the variable x/t where the distance x and time t are measured in terms of mean free path and mean free time, respectively. The moments in the Legendre reconstruction are found from an algebraic recursion relation obtained from Legendre expansion in the direction variable mu. The multiple collision series is evaluated first to a prescribed relative error determined by the number of digits desired in the scalar flux. If the Legendre series fails to converge in the plane or point source case, an accelerative transformation, based on removing the
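
    As a hedged illustration of the reconstruction step described above (not the TEMPS code): once the Legendre moments of a collided-flux component are known, the component is rebuilt as a truncated Legendre series in the variable x/t. The moments below are placeholders; TEMPS obtains them from its algebraic recursion relation.

      import numpy as np
      from numpy.polynomial import legendre

      # Placeholder Legendre moments a_n of one collided-flux component, with xi = x/t in [-1, 1]
      # (distance in mean free paths, time in mean free times).
      a = np.array([1.0, 0.0, -0.35, 0.0, 0.08])

      # Reconstruction: phi(xi) ~= sum_n (2n + 1)/2 * a_n * P_n(xi)
      coeffs = (2.0 * np.arange(a.size) + 1.0) / 2.0 * a
      xi = np.linspace(-1.0, 1.0, 201)
      phi = legendre.legval(xi, coeffs)
      print(phi[::50])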

  1. Hourly Comparison of GPM-IMERG-Final-Run and IMERG-Real-Time (V-03) over a Dense Surface Network in Northeastern Austria

    Science.gov (United States)

    Sharifi, Ehsan; Steinacker, Reinhold; Saghafian, Bahram

    2017-04-01

    Accurate quantitative daily precipitation estimation is key to meteorological and hydrological applications in hazard forecasting and management. In-situ observations over mountainous areas are mostly limited; however, currently available satellite precipitation products can potentially provide the precipitation estimates needed for meteorological and hydrological applications. Over the years, blended methods that use multiple satellites and sensors have been developed for estimating global precipitation. One of the latest satellite precipitation products is GPM-IMERG (Global Precipitation Measurement with 30-minute temporal and 0.1-degree spatial resolutions), which consists of three products: Final-Run (aimed at research), Real-Time early run, and Real-Time late run. The Integrated Multisatellite Retrievals for GPM (IMERG) products, built upon the success of TRMM's Multisatellite Precipitation Analysis (TMPA) products, continue to improve spatial and temporal resolution and snowfall estimates. Recently, researchers who evaluated IMERG-Final-Run V-03 and other precipitation products reported better performance for IMERG-Final-Run than for other similar products. In this study, two GPM-IMERG products, namely Final-Run and Real-Time late run, were evaluated against a dense network of synoptic stations (62 stations) over Northeastern Austria for the period mid-March 2015 to end of January 2016 at the hourly time-scale. Both products were examined against the reference data (stations) in capturing the occurrence of precipitation and the statistical characteristics of precipitation intensity. Both satellite precipitation products underestimated precipitation events of 0.1 mm/hr to 0.4 mm/hr in intensity. For intensities of 0.4 mm/hr and greater, the trend was reversed and both satellite products overestimated relative to the station-recorded data. IMERG-RT outperformed IMERG-FR for precipitation intensity in the range of 0.1 mm/hr to 0.4 mm/hr while in the range of 1.1 to 1.8 mm
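
    A minimal sketch of the kind of hourly comparison described here (a simplified, assumed workflow with placeholder values, not the authors' analysis): detection skill and bias of a satellite estimate against a station reference, using a 0.1 mm/hr wet/dry threshold.

      import numpy as np

      # Hypothetical hourly series (mm/hr): station reference and satellite estimate at one location.
      ref = np.array([0.0, 0.2, 0.3, 0.0, 1.5, 0.1, 0.6])
      sat = np.array([0.0, 0.1, 0.2, 0.1, 1.9, 0.0, 0.5])

      threshold = 0.1                                    # wet/dry threshold for detection skill
      hits = np.sum((ref >= threshold) & (sat >= threshold))
      misses = np.sum((ref >= threshold) & (sat < threshold))
      false_alarms = np.sum((ref < threshold) & (sat >= threshold))

      pod = hits / (hits + misses)                       # probability of detection
      far = false_alarms / (hits + false_alarms)         # false alarm ratio
      bias = np.mean(sat - ref)                          # additive bias (mm/hr)
      print(pod, far, bias)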

  2. The LHCb Run Control

    CERN Document Server

    Alessio, F; Callot, O; Duval, P-Y; Franek, B; Frank, M; Galli, D; Gaspar, C; v Herwijnen, E; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P

    2010-01-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provid...

  3. Multiple time-reversed guide-sources in shallow water

    Science.gov (United States)

    Gaumond, Charles F.; Fromm, David M.; Lingevitch, Joseph F.; Gauss, Roger C.; Menis, Richard

    2003-10-01

    Detection in a monostatic, broadband, active sonar system in shallow water is degraded by propagation-induced spreading. The detection improvement from multiple spatially separated guide sources (GSs) is presented as a method to mitigate this degradation. The improvement of detection by using information in a set of one-way transmissions from a variety of positions is shown using sea data. The experimental area is south of the Hudson Canyon off the coast of New Jersey. The data were taken using five elements of a time-reversing VLA. The five elements were contiguous and at midwater depth. The target and guide source was an echo repeater positioned at various ranges and at middepth. The transmitted signals were 3.0- to 3.5-kHz LFMs. The data are analyzed to show the amount of information present in the collection, a baseline probability of detection (PD) not using the collection of GS signals, the improvement in PD from the use of various sets of GS signals. The dependence of the improvement as a function of range is also shown. [The authors acknowledge support from Dr. Jeffrey Simmen, ONR321OS, and the chief scientist Dr. Charles Holland. Work supported by ONR.

  4. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei

    2012-06-15

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist in a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  5. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei; Fowler, Paul J.; Schuster, Gerard T.

    2012-01-01

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist in a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.
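
    For illustration only (not the authors' implementation): the supergathers described above are formed by encoding individual shot gathers, for example with random polarities and time shifts, and summing them. A toy NumPy sketch with placeholder data:

      import numpy as np

      rng = np.random.default_rng(1)
      n_shots, n_receivers, n_samples = 8, 64, 500
      shots = rng.normal(size=(n_shots, n_receivers, n_samples))   # placeholder individual shot gathers

      # Encode each shot (random polarity and time shift here) and stack into one supergather.
      supergather = np.zeros((n_receivers, n_samples))
      for s in range(n_shots):
          polarity = rng.choice([-1.0, 1.0])
          shift = int(rng.integers(0, 50))
          supergather += polarity * np.roll(shots[s], shift, axis=-1)

      print(supergather.shape)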

  6. Running Club

    CERN Multimedia

    Running Club

    2011-01-01

    The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the ‘Cross Interentreprises’, a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners, however the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Mens category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...

  7. RUN COORDINATION

    CERN Multimedia

    M. Chamizo

    2012-01-01

      On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...

  8. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    Kearney David

    2007-01-01

    Full Text Available We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.
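
    As a loose illustration of the migration policy sketched above (the rule base and inputs here are invented, not those of the prototype distributed operating system): a task is moved to whichever UAV scores best on a simple blend of remaining energy and free FPGA area.

      # Hypothetical swarm state: remaining energy fraction and free FPGA area fraction per UAV.
      swarm = {"uav1": {"energy": 0.20, "free_area": 0.10},
               "uav2": {"energy": 0.75, "free_area": 0.40},
               "uav3": {"energy": 0.60, "free_area": 0.65}}

      def suitability(state, w_energy=0.6, w_area=0.4):
          # Crisp stand-in for a fuzzy rule base: weighted blend of normalized inputs.
          return w_energy * state["energy"] + w_area * state["free_area"]

      def pick_migration_target(swarm, source, energy_threshold=0.25):
          if swarm[source]["energy"] >= energy_threshold:
              return None                       # no need to migrate yet
          candidates = {k: suitability(v) for k, v in swarm.items() if k != source}
          return max(candidates, key=candidates.get)

      print(pick_migration_target(swarm, "uav1"))   # -> 'uav3' with these numbers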

  9. Effects of cognitive stimulation with a self-modeling video on time to exhaustion while running at maximal aerobic velocity: a pilot study.

    Science.gov (United States)

    Hagin, Vincent; Gonzales, Benoît R; Groslambert, Alain

    2015-04-01

    This study assessed whether video self-modeling improves running performance and influences the rate of perceived exertion and heart rate response. Twelve men (M age = 26.8 yr., SD = 6; M body mass index = 22.1 kg·m^-2, SD = 1) performed a time to exhaustion running test at 100 percent maximal aerobic velocity while focusing on a video self-modeling loop to synchronize their stride. Compared to the control condition, there was a significant increase in time to exhaustion. Perceived exertion was also lower, but there was no significant change in mean heart rate. In conclusion, the video self-modeling used as a pacer apparently increased endurance by decreasing perceived exertion without affecting the heart rate.

  10. Passenger Sharing of the High-Speed Railway from Sensitivity Analysis Caused by Price and Run-time Based on the Multi-Agent System

    Directory of Open Access Journals (Sweden)

    Ma Ning

    2013-09-01

    Full Text Available Purpose: Nowadays, governments around the world are active in constructing high-speed railways. Therefore, it is important to conduct research on this increasingly prevalent mode of transport. Design/methodology/approach: In this paper, we simulate the process of the passenger's travel mode choice by adjusting the ticket fare and the run-time based on the multi-agent system (MAS). Findings: From the research we conclude that increasing the run-time appropriately and reducing the ticket fare to some extent are effective ways to enhance the passenger sharing of the high-speed railway. Originality/value: We hope it can provide policy recommendations for the railway sectors in developing the long-term plan on high-speed railway in the future.

  11. Symmetry in running.

    Science.gov (United States)

    Raibert, M H

    1986-03-14

    Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.

  12. Distance walked and run as improved metrics over time-based energy estimation in epidemiological studies and prevention; evidence from medication use.

    Directory of Open Access Journals (Sweden)

    Paul T Williams

    Full Text Available The guideline physical activity levels are prescribed in terms of time, frequency, and intensity (e.g., 30 minutes brisk walking, five days a week) or its energy equivalence, and assume that different activities may be combined to meet targeted goals (exchangeability premise). Habitual runners and walkers may quantify exercise in terms of distance (km/day), and for them, the relationship between activity dose and health benefits may be better assessed in terms of distance rather than time. Analyses were therefore performed to test: 1) whether time-based or distance-based estimates of energy expenditure provide the best metric for relating running and walking to hypertensive, high cholesterol, and diabetes medication use (conditions known to be diminished by exercise), and 2) the exchangeability premise. Logistic regression analyses of medication use (dependent variable) vs. metabolic equivalent hours per day (METhr/d) of running, walking and other exercise (independent variables), using cross-sectional data from the National Runners' (17,201 male, 16,173 female) and Walkers' Health Studies (3,434 male, 12,384 female). Estimated METhr/d of running and walking activity were 38% and 31% greater, respectively, when calculated from self-reported time than distance in men, and 43% and 37% greater in women, respectively. Percent reductions in the odds for hypertension and high cholesterol medication use per METhr/d run or per METhr/d walked were ≥ 2-fold greater when estimated from reported distance (km/wk) than from time (hr/wk). The per METhr/d odds reduction was significantly greater for the distance- than the time-based estimate for hypertension (runners: P<10^-5 for males and P=0.003 for females; walkers: P=0.03 for males and P<10^-4 for females), high cholesterol medication use in runners (P<10^-4 for males and P=0.02 for females) and male walkers (P=0.01 for males and P=0.08 for females), and for diabetes medication use in male runners (P<10^-3). Although causality

  13. Quantitative Real-Time PCR Fecal Source Identification in the ...

    Science.gov (United States)

    Rivers in the Tillamook Basin play a vital role in supporting a thriving dairy and cheese-making industry, as well as providing a safe water resource for local human and wildlife populations. Historical concentrations of fecal bacteria in these waters are at times too high to allow for safe use, leading to economic loss, endangerment of local wildlife, and poor conditions for recreational use. In this study, we employ host-associated qPCR methods for human (HF183/BacR287 and HumM2), ruminant (Rum2Bac), cattle (CowM2 and CowM3), canine (DG3 and DG37), and avian (GFD) fecal pollution combined with high-resolution geographic information system (GIS) land use data and general indicator bacteria measurements to elucidate water quality spatial and temporal trends. Water samples (n=584) were collected over a 1-year period at 29 sites along the Trask, Kilchis, and Tillamook rivers and tributaries (Tillamook Basin, OR). A total of 16.6% of samples (n=97) yielded E. coli levels considered impaired based on Oregon Department of Environmental Quality bacteria criteria (406 MPN/100mL). Host-associated genetic indicators were detected at frequencies of 39.2% (HF183/BacR287), 16.3% (HumM2), 74.6% (Rum2Bac), 13.0% (CowM2), 26.7% (CowM3), 19.8% (DG3), 3.2% (DG37), and 53.4% (GFD) across all water samples (n=584). Seasonal trends in avian, cattle, and human fecal pollution sources were evident over the study area. On a sample site basis, quantitative fecal source identification and

  14. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    Science.gov (United States)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
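
    A hedged, heavily simplified sketch of the separation idea described above (not the ITDESM algorithm itself, which works iteratively in the time domain): at one snapshot, equivalent source strengths for both sources are solved jointly from the mixed pressure, and the field of one source is then re-radiated from its strengths alone. All matrices below are placeholders.

      import numpy as np

      rng = np.random.default_rng(2)
      n_mics, n_eq1, n_eq2 = 40, 12, 12
      G1 = rng.normal(size=(n_mics, n_eq1))   # placeholder propagators: equivalent sources of source 1 -> mics
      G2 = rng.normal(size=(n_mics, n_eq2))   # same for source 2
      p_mixed = rng.normal(size=n_mics)       # placeholder mixed pressure at one time step

      # Solve jointly for all equivalent source strengths at this time step...
      q, *_ = np.linalg.lstsq(np.hstack([G1, G2]), p_mixed, rcond=None)

      # ...then re-radiate using only source 1's strengths to recover its individual pressure field.
      p_source1 = G1 @ q[:n_eq1]
      print(p_source1.shape)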

  15. RUN COORDINATION

    CERN Multimedia

    G. Rakness.

    2013-01-01

    After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall, the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb^-1 of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb^-1 at 7 TeV, and 5.5 pb^-1 at 2.76 TeV. They also delivered 170 μb^-1 lead-lead collisions at 2.76 TeV/nucleon and 32 nb^-1 proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that...

  16. Effect of foot orthoses on magnitude and timing of rearfoot and tibial motions, ground reaction force and knee moment during running.

    Science.gov (United States)

    Eslami, Mansour; Begon, Mickaël; Hinse, Sébastien; Sadeghi, Heydar; Popov, Peter; Allard, Paul

    2009-11-01

    Changes in magnitude and timing of rearfoot eversion and tibial internal rotation by foot orthoses and their contributions to vertical ground reaction force and knee joint moments are not well understood. The objectives of this study were to test whether orthoses modify the magnitude and time to peak rearfoot eversion, tibial internal rotation, active ground reaction force and knee adduction moment, and to determine whether rearfoot eversion and tibial internal rotation magnitudes are correlated to peak active ground reaction force and knee adduction moment during the first 60% stance phase of running. Eleven healthy men ran at 170 steps per minute under shod and shod-with-foot-orthoses conditions. Video and force-plate data were collected simultaneously to calculate foot joint angular displacement, ground reaction forces and knee adduction moments. Results showed that wearing semi-rigid foot orthoses significantly reduced rearfoot eversion by 40% (4.1 degrees; p=0.001) and peak active ground reaction force by 6% (0.96 N/kg; p=0.008). No significant time differences occurred among the peak rearfoot eversion, tibial internal rotation and peak active ground reaction force in both conditions. A positive and significant correlation was observed between peak knee adduction moment and the magnitude of rearfoot eversion during shod (r=0.59; p=0.04) and shod/orthoses running (r=0.65; p=0.02). In conclusion, foot orthoses can reduce rearfoot eversion, and this can be associated with a reduction of knee adduction moment during the first 60% stance phase of running. This finding implies that modifying rearfoot and tibial motions during running could not be related to a reduction of the ground reaction force.

  17. Solution to the monoenergetic time-dependent neutron transport equation with a time-varying source

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1986-01-01

    Even though fundamental time-dependent neutron transport problems have existed since the inception of neutron transport theory, it has only been recently that a reliable numerical solution to one of the basic problems has been obtained. Experience in generating numerical solutions to time-dependent transport equations has indicated that the multiple collision formulation is the most versatile numerical technique for model problems. The formulation coupled with a moment reconstruction of each collided flux component has led to benchmark-quality (four- to five-digit accuracy) numerical evaluation of the neutron flux in plane infinite geometry for any degree of scattering anisotropy and for both pulsed isotropic and beam sources. As will be shown in this presentation, this solution can serve as a Green's function, thus extending the previous results to more complicated source situations. Here we will be concerned with a time-varying source at the center of an infinite medium. If accurate, such solutions have both pedagogical and practical uses as benchmarks against which other more approximate solutions designed for a wider class of problems can be compared
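
    The Green's-function use described above can be illustrated with a toy numerical convolution (entirely placeholder functions, not the transport solution itself): the flux for a time-varying source is the pulsed-source response convolved with the source's time history.

      import numpy as np

      dt = 0.01
      t = np.arange(0.0, 10.0, dt)
      green = np.exp(-t) * (1.0 + 0.5 * t)                    # placeholder pulsed-source flux at a fixed point
      q = np.where(t < 2.0, np.sin(np.pi * t / 2.0), 0.0)     # placeholder time-varying source strength

      flux = np.convolve(q, green)[: t.size] * dt             # flux(t) = (q * green)(t), discretized
      print(flux[::200])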

  18. Decreasing Computational Time for VBBinaryLensing by Point Source Approximation

    Science.gov (United States)

    Tirrell, Bethany M.; Visgaitis, Tiffany A.; Bozza, Valerio

    2018-01-01

    The gravitational lens of a binary system produces a magnification map that is more intricate than that of a single-object lens. This map cannot be calculated analytically and one must rely on computational methods to resolve it. There are generally two methods of computing the microlensed flux of a source. One is based on ray-shooting maps (Kayser, Refsdal, & Stabell 1986), while the other method is based on an application of Green's theorem. This second method finds the area of an image by calculating a Riemann integral along the image contour. VBBinaryLensing is a C++ contour integration code developed by Valerio Bozza, which utilizes this method. The parameters at which the source object could be treated as a point source, in other words when the source is far enough from the caustic, were of interest in order to substantially decrease the computational time. The maximum and minimum values of the caustic curves produced were examined to determine the boundaries for which this simplification could be made. The code was then run for a number of different maps, with separation values and accuracies ranging from 10^-1 to 10^-3, to test the theoretical model and determine a safe buffer for which minimal error could be made for the approximation. The determined buffer was 1.5+5q, with q being the mass ratio. The theoretical model and the calculated points worked for all combinations of the separation values and different accuracies except the map with accuracy and separation equal to 10^-3 for y1 max. An alternative approach has to be found in order to accommodate a wider range of parameters.
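
    For context only (this is not the VBBinaryLensing API): the classic point-source, point-lens magnification is the kind of closed-form limit that far-from-caustic approximations fall back to in the single-lens case; u is the lens-source separation in Einstein radii.

      import numpy as np

      def paczynski_magnification(u):
          # Point-source, point-lens magnification; u = lens-source separation in Einstein radii.
          return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

      u = np.array([0.1, 0.5, 1.0, 3.0])
      print(paczynski_magnification(u))   # ~[10.04, 2.18, 1.34, 1.02]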

  19. Impact of mine and natural sources of mercury on water, sediment, and biota in Harley Gulch adjacent to the Abbott-Turkey Run mine, Lake County, California

    Science.gov (United States)

    Rytuba, James J.; Hothem, Roger L.; Brussee, Brianne E.; Goldstein, Daniel N.

    2011-01-01

    Executive Summary: Stable-isotope data indicate that there are three sources of water that affect the composition and Hg concentration of waters in Harley Gulch: (1) meteoric water that dominates water chemistry during the wet season; (2) thermal water effluent from the Turkey Run mine that affects the chemistry at sample site HG1; and (3) cold connate groundwater that dominates water chemistry during the dry season as it upwells and reaches the surface. The results from sampling executed for this study suggest four distinct areas in Harley Gulch: (1) the contaminated West Fork of Harley Gulch, consisting of the stream immediately downstream from the mine area and the wetlands upstream from Harley Gulch canyon (sample sites HG1-HG2), (2) the East Fork of Harley Gulch, where no mining has occurred (sample site HG3), (3) sample sites HG4-HG7, where a seasonal influx of saline groundwater alters stream chemistry, and (4) sample sites HG7-HG10, downstream in Harley Gulch towards the confluence with Cache Creek.

  20. Effects of selective breeding for increased wheel-running behavior on circadian timing of substrate oxidation and ingestive behavior.

    Science.gov (United States)

    Jónás, I; Vaanholt, L M; Doornbos, M; Garland, T; Scheurink, A J W; Nyakas, C; van Dijk, G

    2010-04-19

    Fluctuations in substrate preference and utilization across the circadian cycle may be influenced by the degree of physical activity and nutritional status. In the present study, we assessed these relationships in control mice and in mice from a line selectively bred for high voluntary wheel-running behavior, either when fed a carbohydrate-rich/low-fat (LF) or a high-fat (HF) diet. Housed without wheels, selected mice, and in particular the females, exhibited higher cage activity than their non-selected controls during the dark phase and at the onset of the light phase, irrespective of diet. This was associated with increases in energy expenditure in both sexes of the selection line. In selected males, carbohydrate oxidation appeared to be increased compared to controls. In contrast, selected females had profound increases in fat oxidation above the levels in control females to cover the increased energy expenditure during the dark phase. This is remarkable in light of the finding that the selected mice, and in particular the females, showed a higher preference for the LF diet relative to controls. It is likely that hormonal and/or metabolic signals increase carbohydrate preference in the selected females, which may serve optimal maintenance of cellular metabolism in the presence of augmented fat oxidation. (c) 2010 Elsevier Inc. All rights reserved.

  1.  Running speed during training and percent body fat predict race time in recreational male marathoners

    OpenAIRE

    Barandun U; Knechtle B; Knechtle P; Klipstein A; Rust CA; Rosemann T; Lepers R

    2012-01-01

    Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results...

  2. Pre-Exercise Hyperhydration-Induced Bodyweight Gain Does Not Alter Prolonged Treadmill Running Time-Trial Performance in Warm Ambient Conditions

    Directory of Open Access Journals (Sweden)

    Eric D. B. Goulet

    2012-08-01

    Full Text Available This study compared the effect of pre-exercise hyperhydration (PEH) and pre-exercise euhydration (PEE) upon treadmill running time-trial (TT) performance in the heat. Six highly trained runners or triathletes underwent two 18 km TT runs (~28 °C, 25%–30% RH) on a motorized treadmill, in a randomized, crossover fashion, while being euhydrated or after hyperhydration with 26 mL/kg bodyweight (BW) of a 130 mmol/L sodium solution. Subjects then ran four successive 4.5 km blocks alternating between 2.5 km at 1% and 2 km at 6% gradient, while drinking a total of 7 mL/kg BW of a 6% sports drink solution (Gatorade, USA). PEH increased BW by 1.00 ± 0.34 kg (P < 0.01) and, compared with PEE, reduced BW loss from 3.1% ± 0.3% (EUH) to 1.4% ± 0.4% (HYP) (P < 0.01) during exercise. Running TT time did not differ between groups (PEH: 85.6 ± 11.6 min; PEE: 85.3 ± 9.6 min; P = 0.82). Heart rate (5 ± 1 beats/min) and rectal (0.3 ± 0.1 °C) and body (0.2 ± 0.1 °C) temperatures of PEE were higher than those of PEH (P < 0.05). There was no significant difference in abdominal discomfort and perceived exertion or heat stress between groups. Our results suggest that pre-exercise sodium-induced hyperhydration of a magnitude of 1 L does not alter 80–90 min running TT performance under warm conditions in highly-trained runners drinking ~500 mL sports drink during exercise.

  3. Open Source Initiative Powers Real-Time Data Streams

    Science.gov (United States)

    2014-01-01

    Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.

  4. Predicting timing of foot strike during running, independent of striking technique, using principal component analysis of joint angles.

    Science.gov (United States)

    Osis, Sean T; Hettinga, Blayne A; Leitch, Jessica; Ferber, Reed

    2014-08-22

    As 3-dimensional (3D) motion-capture for clinical gait analysis continues to evolve, new methods must be developed to improve the detection of gait cycle events based on kinematic data. Recently, the application of principal component analysis (PCA) to gait data has shown promise in detecting important biomechanical features. Therefore, the purpose of this study was to define a new foot strike detection method for a continuum of striking techniques, by applying PCA to joint angle waveforms. In accordance with Newtonian mechanics, it was hypothesized that transient features in the sagittal-plane accelerations of the lower extremity would be linked with the impulsive application of force to the foot at foot strike. Kinematic and kinetic data from treadmill running were selected for 154 subjects, from a database of gait biomechanics. Ankle, knee and hip sagittal plane angular acceleration kinematic curves were chained together to form a row input to a PCA matrix. A linear polynomial was calculated based on PCA scores, and a 10-fold cross-validation was performed to evaluate prediction accuracy against gold-standard foot strike as determined by a 10 N rise in the vertical ground reaction force. Results show that 89-94% of all predicted foot strikes were within 4 frames (20 ms) of the gold standard, with the largest error being 28 ms. It is concluded that this new foot strike detection method is an improvement on existing methods and can be applied regardless of whether the runner exhibits a rearfoot, midfoot, or forefoot strike pattern. Copyright © 2014 Elsevier Ltd. All rights reserved.
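
    A minimal, assumed sketch of the pipeline described above (chained joint waveforms, PCA, linear model on the scores, 10-fold cross-validation). The data below are random placeholders; the study used measured kinematics and a force-plate-derived gold standard.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_runners, n_frames = 154, 101
      # Placeholder rows: ankle, knee and hip sagittal angular-acceleration waveforms chained per runner.
      X = rng.normal(size=(n_runners, 3 * n_frames))
      y = rng.uniform(0.18, 0.25, size=n_runners)          # placeholder foot-strike times (s)

      scores = PCA(n_components=10).fit_transform(X)       # project chained waveforms onto principal components
      cv_error = cross_val_score(LinearRegression(), scores, y, cv=10,
                                 scoring="neg_mean_absolute_error")
      print(-cv_error.mean())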

  5. Astrometric and Timing Effects of Gravitational Waves from Localized Sources

    OpenAIRE

    Kopeikin, Sergei M.; Schafer, Gerhard; Gwinn, Carl R.; Eubanks, T. Marshall

    1998-01-01

    A consistent approach for an exhaustive solution of the problem of propagation of light rays in the field of gravitational waves emitted by a localized source of gravitational radiation is developed in the first post-Minkowskian and quadrupole approximation of General Relativity. We demonstrate that the equations of light propagation in the retarded gravitational field of an arbitrary localized source emitting quadrupolar gravitational waves can be integrated exactly. The influence of the gra...

  6. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2012-01-01

    With the analysis of the first 5 fb^-1 culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c^2, the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb^-1. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6×10^33 cm^-2 s^-1, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...

  7. Click trains and the rate of information processing: does "speeding up" subjective time make other psychological processes run faster?

    Science.gov (United States)

    Jones, Luke A; Allely, Clare S; Wearden, John H

    2011-02-01

    A series of experiments demonstrated that a 5-s train of clicks that have been shown in previous studies to increase the subjective duration of tones they precede (in a manner consistent with "speeding up" timing processes) could also have an effect on information-processing rate. Experiments used studies of simple and choice reaction time (Experiment 1), or mental arithmetic (Experiment 2). In general, preceding trials by clicks made response times significantly shorter than those for trials without clicks, but white noise had no effects on response times. Experiments 3 and 4 investigated the effects of clicks on performance on memory tasks, using variants of two classic experiments of cognitive psychology: Sperling's (1960) iconic memory task and Loftus, Johnson, and Shimamura's (1985) iconic masking task. In both experiments participants were able to recall or recognize significantly more information from stimuli preceded by clicks than those preceded by silence.

  8. The LHCb Run Control

    Energy Technology Data Exchange (ETDEWEB)

    Alessio, F; Barandela, M C; Frank, M; Gaspar, C; Herwijnen, E v; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P [CERN, 1211 Geneva 23 (Switzerland); Callot, O [LAL, IN2P3/CNRS and Universite Paris 11, Orsay (France); Duval, P-Y [Centre de Physique des Particules de Marseille, Aix-Marseille Universite, CNRS/IN2P3, Marseille (France); Franek, B [Rutherford Appleton Laboratory, Chilton, Didcot, OX11 0QX (United Kingdom); Galli, D, E-mail: Clara.Gaspar@cern.c [Universita di Bologna and INFN, Bologna (Italy)

    2010-04-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented

  9. Subjective time runs faster under the influence of bright rather than dim light conditions during the forenoon.

    Science.gov (United States)

    Morita, Takeshi; Fukui, Tomoe; Morofushi, Masayo; Tokura, Hiromi

    2007-05-16

    The study investigated whether 6 h of morning bright light exposure, compared with dim light exposure, could influence time sense (range: 5-15 s). Eight women served as participants. The participant entered a bioclimatic chamber at 10:00 h on the day before the test day, where ambient temperature and relative humidity were controlled at 25 degrees C and 60% RH. She sat quietly on a sofa in 50 lx until 22:00 h, retired at 22:00 h and then slept in total darkness. She rose at 07:00 h the following morning and again sat quietly on a sofa till 13:00 h, either in bright (2500 lx) or dim light (50 lx), the order of light intensities between the two occasions being randomized. The time-estimation test was performed from 13:00 to 13:10 h in 200 lx. The participant estimated the time that had elapsed between two buzzers, ranging over 5-15 s, and input the estimate into a computer. The test was carried out separately upon each individual. Results showed that the participants estimated the given time intervals as longer after previous exposure to 6 h of bright rather than dim light. The finding is discussed in terms of different load errors (difference between the actual core temperature and its thermoregulatory set-point) following 6-h exposure to bright or dim light in the morning.

  10. Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  11. BIOSENSOR TECHNOLOGY EVALUATIONS FOR REAL-TIME/SOURCE WATER PROTECTION

    Science.gov (United States)

    Recent advances in electronics and computer technology have made great strides in the field of remote sensing and biomonitoring. The quality of drinking water sources has come under closer scrutiny in recent years. Issues ranging from ecological to public health and national se...

  12. Flash X-Ray (FXR) Accelerator Optimization Electronic Time-Resolved Measurement of X-Ray Source Size

    International Nuclear Information System (INIS)

    Jacob, J; Ong, M; Wargo, P

    2005-01-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating various approaches to minimize the x-ray source size on the Flash X-Ray (FXR) linear induction accelerator in order to improve x-ray flux and increase resolution for hydrodynamic radiography experiments. In order to effectively gauge improvements to final x-ray source size, a fast, robust, and accurate system for measuring the spot size is required. Timely feedback on x-ray source size allows new and improved accelerator tunes to be deployed and optimized within the limited run-time constraints of a production facility with a busy experimental schedule; in addition, time-resolved measurement capability allows the investigation of not only the time-averaged source size, but also the evolution of the source size, centroid position, and x-ray dose throughout the 70 ns beam pulse. Combined with time-resolved measurements of electron beam parameters such as emittance, energy, and current, key limiting factors can be identified, modeled, and optimized for the best possible spot size. Roll-bar techniques are a widely used method for x-ray source size measurement, and have been the method of choice at FXR for many years. A thick bar of tungsten or other dense metal with a sharp edge is inserted into the path of the x-ray beam so as to heavily attenuate the lower half of the beam, resulting in a half-light, half-dark image as seen downstream of the roll-bar; by measuring the width of the transition from light to dark across the edge of the roll-bar, the source size can be deduced. For many years, film has been the imaging medium of choice for roll-bar measurements thanks to its high resolution, linear response, and excellent contrast ratio. Film measurements, however, are fairly cumbersome and require considerable setup and analysis time; moreover, with the continuing trend towards all-electronic measurement systems, film is becoming increasingly difficult and expensive to procure. Here, we shall
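
    As a hedged illustration of how a roll-bar edge measurement is typically reduced (placeholder data, not the FXR analysis): the light-to-dark transition is fit with an error-function edge profile, and the fitted blur width is then converted to a source spot size using the imaging geometry and whichever spot-size convention the facility uses.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      def edge_profile(x, amplitude, x0, sigma, offset):
          # Error-function model of the light-to-dark transition behind the roll-bar edge.
          return offset + 0.5 * amplitude * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

      # Placeholder measured intensity across the edge (mm at the image plane).
      x = np.linspace(-5.0, 5.0, 201)
      data = edge_profile(x, 1.0, 0.2, 0.8, 0.05) + np.random.default_rng(4).normal(0.0, 0.01, x.size)

      popt, _ = curve_fit(edge_profile, x, data, p0=[1.0, 0.0, 1.0, 0.0])
      print("fitted edge blur sigma (mm):", popt[2])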

  13. Osmium Isotope Compositions of Komatiite Sources Through Time

    Science.gov (United States)

    Walker, R. J.

    2001-12-01

    Extending Os isotopic measurements to ancient plume sources may help to constrain how and when the well-documented isotopic heterogeneities in modern systems were created. Komatiites and picrites associated with plume-related volcanism are valuable tracers of the Os isotopic composition of plumes because of their typically high Os concentrations and relatively low Re/Os. Re-Os data are now available for a variety of Phanerozoic, Proterozoic and Archean komatiites and picrites. As with modern plumes, the sources of Archean and Proterozoic komatiites exhibit a large range of initial 187Os/188Os ratios. Most komatiites are dominated by sources with chondritic Os isotopic compositions (e.g. Song La; Norseman-Wiluna; Pyke Hill; Alexo), though some (e.g. Gorgona) derive from heterogeneous sources. Of note, however, two ca. 2.7 Ga systems, Kostomuksha (Russia) and Belingwe (Zimbabwe), have initial ratios enriched by 2-3% relative to the contemporary convecting upper mantle. These results suggest that if the 187Os enrichment was due to the incorporation of minor amounts of recycled crust into the mantle source of the rocks, the crust formed very early in Earth history. Thus, the Os results could reflect derivation of melt from hybrid mantle whose composition was modified by the addition of mafic crustal material that would most likely have formed between 4.2 and 4.5 Ga. Alternately, the mantle sources of these komatiites may have derived a portion of their Os from the putative 187Os - and 186Os -enriched outer core. For this hypothesis to be applicable to Archean rocks, an inner core of sufficient mass would have to have crystallized sufficiently early in Earth history to generate an outer core with 187Os enriched by at least 3% relative to the chondritic average. Using the Pt-Re-Os partition coefficients espoused by our earlier work, and assuming linear growth of the inner core started at 4.5 Ga and continued to present, would yield an outer core at 2.7 Ga with a gamma Os

  14. Transforming parts of a differential equations system to difference equations as a method for run-time savings in NONMEM.

    Science.gov (United States)

    Petersson, K J F; Friberg, L E; Karlsson, M O

    2010-10-01

    Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations and no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive, time-consuming, and can account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equations system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of or the whole differential equations system at given time intervals, outside of the differential equations solver. This approach was tested on nine models defined as differential equations with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equations solver, were less than 12% for all fixed effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
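
    A toy illustration of the general idea (not NONMEM code, and not the authors' models): a slowly varying, expensive term in the right-hand side is pre-evaluated on a coarse time grid and interpolated, instead of being recomputed at every internal solver step. The model, grid spacing, and rate constant below are assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.interpolate import interp1d

      def expensive_input(t):
          # Stand-in for a slowly varying, costly-to-evaluate term in the system.
          return np.exp(-0.1 * t) * (1.0 + 0.2 * np.sin(0.3 * t))

      coarse_t = np.linspace(0.0, 24.0, 25)                     # pre-evaluate on an hourly grid (assumed interval)
      cheap_input = interp1d(coarse_t, expensive_input(coarse_t))

      def rhs(t, y, k_el=0.3):
          # Toy one-compartment-style model driven by the interpolated term instead of the full evaluation.
          return cheap_input(t) - k_el * y

      sol = solve_ivp(rhs, (0.0, 24.0), [0.0], rtol=1e-8)
      print(sol.y[0, -1])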

  15. AE source location by neural networks with arrival time profiles

    Czech Academy of Sciences Publication Activity Database

    Chlada, Milan; Blaháček, Michal; Převorovský, Zdeněk

    2009-01-01

    Vol. 19, No. 2 (2009), p. 4. ISSN 1213-3825. [NDT in PROGRESS, 12.11.2009-14.11.2009, Praha] R&D Projects: GA ČR GA101/07/1518; GA ČR GA106/07/1393. Institutional research plan: CEZ:AV0Z20760514. Keywords: acoustic emission * source location * artificial neural networks. Subject RIV: BI - Acoustics. www.cndt.cz

  16. Sediment Budgets and Sources Inform a Novel Valley Bottom Restoration Practice Impacted by Legacy Sediment: The Big Spring Run, PA, Restoration Experiment

    Science.gov (United States)

    Walter, R. C.; Merritts, D.; Rahnis, M. A.; Gellis, A.; Hartranft, J.; Mayer, P. M.; Langland, M.; Forshay, K.; Weitzman, J. N.; Schwarz, E.; Bai, Y.; Blair, A.; Carter, A.; Daniels, S. S.; Lewis, E.; Ohlson, E.; Peck, E. K.; Schulte, K.; Smith, D.; Stein, Z.; Verna, D.; Wilson, E.

    2017-12-01

    Big Spring Run (BSR), a small agricultural watershed in southeastern Pennsylvania, is located in the Piedmont Physiographic Province, which has the highest nutrient and sediment yields in the Chesapeake Bay watershed. To effectively reduce nutrient and sediment loading it is important to monitor the effect of management practices on pollutant reduction. Here we present results of an ongoing study, begun in 2008, to understand the impact of a new valley bottom restoration strategy for reducing surface water sediment and nutrient loads. We test the hypotheses that removing legacy sediments will reduce sediment and phosphorus loads, and that restoring eco-hydrological functions of a buried Holocene wetland (Walter & Merritts 2008) will improve surface and groundwater quality by creating accommodation space to trap sediment and process nutrients. Comparisons of pre- and post-restoration gage data show that restoration lowered the annual sediment load by at least 118 t yr-1, or >75%, from the 1000 m-long restoration reach, with the entire reduction accounted for by legacy sediment removal. Repeat RTK-GPS surveys of pre-restoration stream banks verified that >90 t yr-1 of suspended sediment was from bank erosion within the restoration reach. Mass balance calculations of 137Cs data indicate 85-100% of both the pre-restoration and post-restoration suspended sediment storm load was from stream bank sources. This is consistent with trace element data which show that 80-90 % of the pre-restoration outgoing suspended sediment load at BSR was from bank erosion. Meanwhile, an inventory of fallout 137Cs activity from two hill slope transects adjacent to BSR yields average modern upland erosion rates of 2.7 t ha-1 yr-1 and 5.1 t ha-1 yr-1, showing modest erosion on slopes and deposition at toe of slopes. We conclude that upland farm slopes contribute little soil to the suspended sediment supply within this study area, and removal of historic valley bottom sediment effectively

  17. 26 CFR 301.6503(d)-1 - Suspension of running of period of limitation; extension of time for payment of estate tax.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Suspension of running of period of limitation... ADMINISTRATION Limitations Limitations on Assessment and Collection § 301.6503(d)-1 Suspension of running of... payment of any estate tax, the running of the period of limitations for collection of such tax is...

  18. Impact of data source on travel time reliability assessment.

    Science.gov (United States)

    2014-08-01

    Travel time reliability measures are becoming an increasingly important input to mobility and congestion management studies. In the case of the Maryland State Highway Administration, reliability measures are key elements in the agency's Annual ...

  19. Development of econometric models for cost and time over-runs: an empirical study of major road construction projects in pakistan

    International Nuclear Information System (INIS)

    Khan, A.; Chaudhary, M.A.

    2016-01-01

    The construction industry is flourishing worldwide and contributes about 10% of world GDP, i.e. to the tune of 4.6 trillion US dollars. It employs almost 7% of the total employed persons and consumes around 40% of the total energy. The Pakistani construction sector has displayed impressive growth in recent years. An efficient road network is a key part of the construction business and plays a significant role in the economic uplift of the country. Cost overruns and delays in the completion of projects are very common phenomena, and road construction projects in particular face problems of delays and cost overruns, especially in developing countries. This study considers the causes of cost overruns and delays in road projects undertaken by the premier road construction organization of Pakistan, the National Highway Authority (NHA). It does so specifically in the context of the impact of the cause(s) determined from the project reports of a total of one hundred and thirty-one (131) projects. The ten causative factors, which we recognize as Design, Planning and Scheduling Related problems, Financial Constraint Related reasons, Social Problem Related reasons, Technical Reasons, Administrative Reasons, Scope Increase, Specification Changes, Cost Escalation Related reasons, Non-Availability of Equipment or Material and Force Majeure, play a commanding role in the determination of cost and time overruns. It has also been observed that, among these identified causes, the factors of Administrative Reasons, Design, Planning and Scheduling Related problems, Technical Reasons and Force Majeure are the most significant in cost and time overruns, whereas the Cost Escalation Related reasons have the least impact on cost increases and delays. The NHA possesses a financial worth of around Rs. 36 billion and, with an annual turnover amounting to Rs. 22 billion, is responsible for performing road construction projects in the entire

  20. The Effects of Topography on Time Domain Controlled-Source Electromagnetic Data as it Applies to Impact Crater Sites

    Science.gov (United States)

    Hickey, M. S.

    2008-05-01

    Controlled-source electromagnetic geophysical methods provide a noninvasive means of characterizing subsurface structure. In order to properly model the geologic subsurface with a controlled-source time domain electromagnetic (TDEM) system in an extreme topographic environment, we must first see the effects of topography on the forward model data. I run simulations using the Texas A&M University (TAMU) finite element (FEM) code in which I include true 3D topography. From these models we see the limits of how much topography we can include before the forward model can no longer give accurate data output. The simulations are based on a model of a geologic half space with no cultural noise and focus on topography changes associated with impact crater sites, such as crater rims and central uplift. Several topographical variations of the model are run, but the main constant is that there is only a small conductivity change, on the order of 10^-1 S/m, between the host medium and the geologic body within. Asking the following questions will guide us through determining the limits of our code: What is the maximum step we can have before we see fringe effects in our data? At what location relative to the body does the topography cause the most effect? After we know the limits of the code, we can develop new methods to extend them that will allow us to better image the subsurface using TDEM in extreme topography.

  1. Timing reference generators and chopper controllers for neutron sources

    International Nuclear Information System (INIS)

    Nelson, R.; Merl, R.; Rose, C.

    2001-01-01

    Due to AC-power-grid frequency fluctuations, the designers of accelerator-based spallation-neutron facilities have worked to optimize the competing and contrasting demands of accelerator and neutron chopper performance. Powerful new simulation techniques have enabled the modeling of the timing systems that integrate chopper controllers and chopper hardware. For the first time, we are able to quantitatively assess the tradeoffs between these two constraints and design or upgrade a facility to optimize total system performance. Thus, at LANSCE, we now operate multiple chopper systems and the accelerator as simple slaves to a single master-timing-reference generator. For the SNS we recommend a similar system that is somewhat less tightly coupled to the power grid. (author)

  2. Human fecal source identification with real-time quantitative PCR

    Science.gov (United States)

    Waterborne diseases represent a significant public health risk worldwide, and can originate from contact with water contaminated with human fecal material. We describe a real-time quantitative PCR (qPCR) method that targets a Bacteroides dorei human-associated genetic marker for...

  3. Longitudinal dispersion with time-dependent source concentration ...

    Indian Academy of Sciences (India)

    industries, especially coal-based industries in the industrial states such as Jharkhand and its neighbouring states. These industries ..... of the concentration levels of contaminants with time and distance travelled, may help to rehabilitate the contaminated aquifer and may be useful for groundwater resource management.
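
    The record is truncated, but the model implied by the title is the one-dimensional advection-dispersion equation with a time-dependent source concentration imposed at the inlet boundary. A minimal generic statement (symbols assumed here, not taken from the paper) is

    \[
    \frac{\partial C}{\partial t} \;=\; D_L\,\frac{\partial^2 C}{\partial x^2} \;-\; u\,\frac{\partial C}{\partial x},
    \qquad C(0,t) = C_0\, f(t), \qquad C(x,0) = 0,
    \]

    where C(x,t) is the contaminant concentration, D_L the longitudinal dispersion coefficient, u the mean seepage velocity, and f(t) the prescribed temporal variation of the source concentration.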

  4. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    Science.gov (United States)

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
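
    As a rough illustration of the recursive least squares step mentioned above (a generic sketch in Python, not the authors' Simulink implementation; the model order, forgetting factor and test signal are invented for the example), an adaptive autoregressive update per EEG sample can be written as:

```python
# Generic recursive least squares (RLS) update for adaptive autoregressive
# parameters of a signal; a sketch, not the BCI system's actual code.
import numpy as np

def rls_ar(signal, order=6, lam=0.99, delta=100.0):
    """Track AR coefficients of `signal` sample by sample with RLS."""
    w = np.zeros(order)                 # current AR parameter estimate
    P = delta * np.eye(order)           # inverse correlation matrix
    coeffs = []
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]   # regressor: the previous `order` samples
        e = signal[n] - w @ x           # a-priori prediction error
        k = P @ x / (lam + x @ P @ x)   # gain vector
        w = w + k * e                   # parameter update
        P = (P - np.outer(k, x) @ P) / lam
        coeffs.append(w.copy())
    return np.array(coeffs)

# Toy usage with a synthetic signal standing in for one EEG channel.
rng = np.random.default_rng(0)
sig = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
trajectory = rls_ar(sig, order=2)
print(trajectory[-1])                   # final AR coefficient estimate
```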

  5. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    Directory of Open Access Journals (Sweden)

    Xuedong Yan

    2014-02-01

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed using ANOVA and structural equation modeling (SEM). The collision avoidance related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation in this study. It was found that the collision avoidance warning system can result in lower collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the auditory warning information in fact has both a direct and an indirect effect on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with the conflicting vehicles.

  6. JTSA: an open source framework for time series abstractions.

    Science.gov (United States)

    Sacchi, Lucia; Capozzi, Davide; Bellazzi, Riccardo; Larizza, Cristiana

    2015-10-01

    The evaluation of the clinical status of a patient is frequently based on the temporal evolution of some parameters, making the detection of temporal patterns a priority in data analysis. Temporal abstraction (TA) is a methodology widely used in medical reasoning for summarizing and abstracting longitudinal data. This paper describes JTSA (Java Time Series Abstractor), a framework including a library of algorithms for time series preprocessing and abstraction and an engine to execute a workflow for temporal data processing. The JTSA framework is grounded on a comprehensive ontology that models temporal data processing both from the data storage and the abstraction computation perspective. The JTSA framework is designed to allow users to build their own analysis workflows by combining different algorithms. Thanks to the modular structure of a workflow, simple to highly complex patterns can be detected. The JTSA framework has been developed in Java 1.7 and is distributed under GPL as a jar file. JTSA provides: a collection of algorithms to perform temporal abstraction and preprocessing of time series, a framework for defining and executing data analysis workflows based on these algorithms, and a GUI for workflow prototyping and testing. The whole JTSA project relies on a formal model of the data types and of the algorithms included in the library. This model is the basis for the design and implementation of the software application. Taking into account this formalized structure, the user can easily extend the JTSA framework by adding new algorithms. Results are shown in the context of the EU project MOSAIC to extract relevant patterns from data related to the long-term monitoring of diabetic patients. The proof that JTSA is a versatile tool to be adapted to different needs is given by its possible uses, both as a standalone tool for data summarization and as a module to be embedded into other architectures to select specific phenotypes based on TAs in a large

  7. Voluntary Wheel Running in Mice.

    Science.gov (United States)

    Goh, Jorming; Ladiges, Warren

    2015-12-02

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. Copyright © 2015 John Wiley & Sons, Inc.

  8. Species interactions and response time to climate change: ice-cover and terrestrial run-off shaping Arctic char and brown trout competitive asymmetries

    Science.gov (United States)

    Finstad, A. G.; Palm Helland, I.; Jonsson, B.; Forseth, T.; Foldvik, A.; Hessen, D. O.; Hendrichsen, D. K.; Berg, O. K.; Ulvan, E.; Ugedal, O.

    2011-12-01

    There has been a growing recognition that single-species responses to climate change are often driven mainly by interactions with other organisms, and that single-species studies are therefore not sufficient to recognize and project ecological climate change impacts. Here, we study how performance, relative abundance and the distribution of two common Arctic and sub-Arctic freshwater fishes (brown trout and Arctic char) are driven by competitive interactions. The interactions are modified both by direct climatic effects on temperature and ice-cover, and indirectly through climate forcing of terrestrial vegetation patterns and associated carbon and nutrient run-off. We first use laboratory studies to show that Arctic char, which is the world's northernmost distributed freshwater fish, outperform trout under low light levels and also have comparably higher growth efficiency. Corresponding to this, a combination of time-series and time-for-space analyses shows that ice-cover duration and carbon and nutrient load, mediated by catchment vegetation properties, strongly affected the outcome of the competition and likely drive the species distribution pattern through competitive exclusion. In brief, while a shorter ice-cover period and decreased carbon load favored brown trout, an increased ice-cover period and increased carbon load favored Arctic char. Length of the ice-covered period and export of allochthonous material from catchments are major, but contrasting, climatic drivers of the competitive interaction between these two freshwater lake top predators. While projected climate change leads to decreased ice-cover, the corresponding increase in forest and shrub cover amplifies carbon and nutrient run-off. Although a likely outcome of future Arctic and sub-Arctic climate scenarios is a retraction of the Arctic char distribution area caused by competitive exclusion, the main drivers will act on different time scales. While ice-cover will change instantaneously with increasing temperature

  9. Determinants of the abilities to jump higher and shorten the contact time in a running 1-legged vertical jump in basketball.

    Science.gov (United States)

    Miura, Ken; Yamamoto, Masayoshi; Tamaki, Hiroyuki; Zushi, Koji

    2010-01-01

    This study was conducted to obtain useful information for developing training techniques for the running 1-legged vertical jump in basketball (lay-up shot jump). The ability to perform the lay-up shot jump and various basic jumps was measured by testing 19 male basketball players. The basic jumps consisted of the 1-legged repeated rebound jump, the 2-legged repeated rebound jump, and the countermovement jump. Jumping height, contact time, and jumping index (jumping height/contact time) were measured and calculated using a contact mat/computer system that recorded the contact and air times. The jumping index indicates power. No significant correlation existed between the jumping height and contact time of the lay-up shot jump, the 2 components of the lay-up shot jump index. As a result, jumping height and contact time were found to be mutually independent abilities. The relationships in contact time between the lay-up shot jump to the 1-legged repeated rebound jump and the 2-legged repeated rebound jump were correlated on the same significance levels (p jumping height existed between the 1-legged repeated rebound jump and the lay-up shot jump (p jumping height between the lay-up shot jump and both the 2-legged repeated rebound jump and countermovement jump. The lay-up shot index correlated more strongly to the 1-legged repeated rebound jump index (p jump index (p jump is effective in improving both contact time and jumping height in the lay-up shot jump.

  10. Comparing Sources of Storm-Time Ring Current O+

    Science.gov (United States)

    Kistler, L. M.

    2015-12-01

    The first observations of the storm-time ring current composition using AMPTE/CCE data showed that the O+ contribution to the ring current increases significantly during storms. The ring current is predominantly formed from inward transport of the near-earth plasma sheet. Thus the increase of O+ in the ring current implies that the ionospheric contribution to the plasma sheet has increased. The ionospheric plasma that reaches the plasma sheet can come from both the cusp and the nightside aurora. The cusp outflow moves through the lobe and enters the plasma sheet through reconnection at the near-earth neutral line. The nightside auroral outflow has direct access to nightside plasma sheet. Using data from Cluster and the Van Allen Probes spacecraft, we compare the development of storms in cases where there is a clear input of nightside auroral outflow, and in cases where there is a significant cusp input. We find that the cusp input, which enters the tail at ~15-20 Re becomes isotropized when it crosses the neutral sheet, and becomes part of the hot (>1 keV) plasma sheet population as it convects inward. The auroral outflow, which enters the plasma sheet closer to the earth, where the radius of curvature of the field line is larger, does not isotropize or become significantly energized, but remains a predominantly field aligned low energy population in the inner magnetosphere. It is the hot plasma sheet population that gets accelerated to high enough energies in the inner magnetosphere to contribute strongly to the ring current pressure. Thus it appears that O+ that enters the plasma sheet further down the tail has a greater impact on the storm-time ring current than ions that enter closer to the earth.

  11. Changes in Running Mechanics During a 6-Hour Running Race.

    Science.gov (United States)

    Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano

    2017-05-01

    To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (tc), and aerial time (ta) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (Fmax), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (kvert), and leg stiffness (kleg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%, P running, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, kvert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); ta and Fmax decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively, P running, suggesting a possible time threshold that could affect performance regardless of absolute running speed.
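
    For orientation, the spring-mass quantities listed above are conventionally related as follows (a standard formulation; the study's exact estimation constants are not reproduced here):

    \[
    k_{\mathrm{vert}} \;=\; \frac{F_{\max}}{\Delta z},
    \qquad
    k_{\mathrm{leg}} \;=\; \frac{F_{\max}}{\Delta L},
    \]

    where F_max is the modeled peak vertical ground-reaction force, Δz the downward displacement of the center of mass during contact, and ΔL the compression of the stance leg; in field methods of this kind F_max, Δz and ΔL are themselves estimated from running speed, contact time tc and aerial time ta.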

  12. Blind Time-Frequency Analysis for Source Discrimination in Multisensor Array Processing

    National Research Council Canada - National Science Library

    Amin, Moeness

    1999-01-01

    We have clearly demonstrated, through analysis and simulations, the offerings of time-frequency distributions in solving key problems in sensor array processing, including direction finding, source...

  13. Effects of the airwave in time-domain marine controlled-source electromagnetics

    NARCIS (Netherlands)

    Hunziker, J.W.; Slob, E.C.; Mulder, W.

    2011-01-01

    In marine time-domain controlled-source electromagnetics (CSEM), there are two different acquisition methods: with horizontal sources for fast and simple data acquisition or with vertical sources for minimizing the effects of the airwave. Illustrations of the electric field as a function of space

  14. 3D Multi‐source Least‐squares Reverse Time Migration

    KAUST Repository

    Dai, Wei; Boonyasiriwat, Chaiwoot; Schuster, Gerard T.

    2010-01-01

    : random time shift, random source polarity and random source location selected from a pre‐designed table. Numerical tests for the 3D SEG/EAGE Overthrust model show that multi‐source LSRTM can suppress migration artifacts in the migration image and remove

  15. About the Modeling of Radio Source Time Series as Linear Splines

    Science.gov (United States)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.
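
    As a minimal statement of the linear-spline parameterization (notation assumed here, not taken from the paper), each source coordinate α(t) between consecutive node epochs t_i and t_{i+1} is written as

    \[
    \alpha(t) \;=\; \alpha_i \;+\; \frac{t - t_i}{t_{i+1} - t_i}\,\bigl(\alpha_{i+1} - \alpha_i\bigr),
    \qquad t_i \le t < t_{i+1},
    \]

    so that only the node values α_i enter the least squares adjustment and continuity of the coordinate time series is enforced by construction.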

  16. Effects of running time of a cattle-cooling system on core body temperature of cows on dairy farms in an arid environment.

    Science.gov (United States)

    Ortiz, X A; Smith, J F; Bradford, B J; Harner, J P; Oddy, A

    2010-10-01

    Two experiments were conducted on a commercial dairy farm to describe the effects of a reduction in Korral Kool (KK; Korral Kool Inc., Mesa, AZ) system operating time on core body temperature (CBT) of primiparous and multiparous cows. In the first experiment, KK systems were operated for 18, 21, or 24 h/d while CBT of 63 multiparous Holstein dairy cows was monitored. All treatments started at 0600 h, and KK systems were turned off at 0000 h and 0300 h for the 18-h and 21-h treatments, respectively. Animals were housed in 9 pens and assigned randomly to treatment sequences in a 3 × 3 Latin square design. In the second experiment, 21 multiparous and 21 primiparous cows were housed in 6 pens and assigned randomly to treatment sequences (KK operated for 21 or 24 h/d) in a switchback design. All treatments started at 0600 h, and KK systems were turned off at 0300 h for the 21-h treatments. In experiment 1, cows in the 24-h treatment had a lower mean CBT than cows in the 18- and 21-h treatments (38.97, 39.08, and 39.03±0.04°C, respectively). The significant treatment by time interaction showed that the greatest treatment effects occurred at 0600 h; treatment means at this time were 39.43, 39.37, and 38.88±0.18°C for 18-, 21-, and 24-h treatments, respectively. These results demonstrate that a reduction in KK system running time of ≥3 h/d will increase CBT. In experiment 2, a significant parity by treatment interaction was found. Multiparous cows on the 24-h treatment had lower mean CBT than cows on the 21-h treatment (39.23 and 39.45±0.17°C, respectively), but treatment had no effect on mean CBT of primiparous cows (39.50 and 39.63±0.20°C for 21- and 24-h treatments, respectively). A significant treatment by time interaction was observed, with the greatest treatment effects occurring at 0500 h; treatment means at this time were 39.57, 39.23, 39.89, and 39.04±0.24°C for 21-h primiparous, 24-h primiparous, 21-h multiparous, and 24-h multiparous cows

  17. High-resolution and super stacking of time-reversal mirrors in locating seismic sources

    KAUST Repository

    Cao, Weiping; Hanafy, Sherif M.; Schuster, Gerard T.; Zhan, Ge; Boonyasiriwat, Chaiwoot

    2011-01-01

    Time reversal mirrors can be used to backpropagate and refocus incident wavefields to their actual source location, with the subsequent benefits of imaging with high-resolution and super-stacking properties. These benefits of time reversal mirrors

  18. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the "fixed-source" term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)
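
    In the notation usual for discrete ordinates codes (a generic statement, not TORT-TD's specific input form), the anisotropic external source is expanded in a truncated spherical harmonics series,

    \[
    Q(\mathbf{r},\boldsymbol{\Omega},t) \;\approx\; \sum_{l=0}^{L}\,\sum_{m=-l}^{l} Q_{lm}(\mathbf{r},t)\, Y_{lm}(\boldsymbol{\Omega}),
    \]

    and the moments Q_{lm} enter the fixed-source term of the implicitly time-discretised transport equation in the same way as the scattering-source moments.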

  19. Space-time dependence between energy sources and climate related energy production

    Science.gov (United States)

    Engeland, Kolbjorn; Borga, Marco; Creutin, Jean-Dominique; Ramos, Maria-Helena; Tøfte, Lena; Warland, Geir

    2014-05-01

    and solar power production and their co-fluctuation at small time scales. The multi-scale nature of the variability is less studied; i.e., the potential adverse or favorable co-fluctuation at intermediate time scales, involving water scarcity or abundance, is less present in the literature. Our review points out that it could be especially interesting to promote research on how the pronounced large-scale fluctuations in inflow to hydropower (intra-annual run-off) and smaller-scale fluctuations in wind and solar power interact in an energy system. There is a need to better represent the profound difference between wind-, solar- and hydro-energy sources. On the one hand, they are all directly linked to the 2-D horizontal dynamics of meteorology. On the other hand, the branching structure of hydrological systems transforms this variability and governs the complex combination of natural inflows and reservoir storage. Finally, we note that the CRE production is, in addition to weather, also influenced by the energy system and market, i.e., the energy transport and demand across scales as well as changes of market regulation. The CRE production system thus lies in this nexus between climate, energy systems and market regulations. The work presented is part of the FP7 project COMPLEX (Knowledge based climate mitigation systems for a low carbon economy; http://www.complex.ac.uk)

  20. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. Completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
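
    For the area sources, the time-independent frequency-magnitude relationship referred to above is, in its standard Gutenberg-Richter form (generic notation, not the paper's fitted values),

    \[
    \log_{10} N(\ge M) \;=\; a \;-\; b\,M,
    \]

    where N(≥M) is the annual number of events with magnitude at least M and a, b are estimated from the declustered catalog of each source zone; the fault sources instead use a renewal recurrence model conditioned on the time elapsed since the last characteristic earthquake.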

  1. CDF run II run control and online monitor

    International Nuclear Information System (INIS)

    Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.

    2001-01-01

    The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs

  2. Single sources in the low-frequency gravitational wave sky: properties and time to detection by pulsar timing arrays

    Science.gov (United States)

    Kelley, Luke Zoltan; Blecha, Laura; Hernquist, Lars; Sesana, Alberto; Taylor, Stephen R.

    2018-06-01

    We calculate the properties, occurrence rates and detection prospects of individually resolvable 'single sources' in the low-frequency gravitational wave (GW) spectrum. Our simulations use the population of galaxies and massive black hole binaries from the Illustris cosmological hydrodynamic simulations, coupled to comprehensive semi-analytic models of the binary merger process. Using mock pulsar timing arrays (PTAs) with, for the first time, varying red-noise models, we calculate plausible detection prospects for GW single sources and the stochastic GW background (GWB). Contrary to previous results, we find that single sources are at least as detectable as the GW background. Using mock PTAs, we find that these 'foreground' sources (also 'deterministic'/'continuous') are likely to be detected with ˜20 yr total observing baselines. Detection prospects, and indeed the overall properties of single sources, are only moderately sensitive to binary evolution parameters - namely eccentricity and environmental coupling, which can lead to differences of ˜5 yr in times to detection. Red noise has a stronger effect, roughly doubling the time to detection of the foreground between a white-noise only model (˜10-15 yr) and severe red noise (˜20-30 yr). The effect of red noise on the GWB is even stronger, suggesting that single source detections may be more robust. We find that typical signal-to-noise ratios for the foreground peak near f = 0.1 yr-1, and are much less sensitive to the continued addition of new pulsars to PTAs.

  3. LORD-Q: a long-run real-time PCR-based DNA-damage quantification method for nuclear and mitochondrial genome analysis

    Science.gov (United States)

    Lehle, Simon; Hildebrand, Dominic G.; Merz, Britta; Malak, Peter N.; Becker, Michael S.; Schmezer, Peter; Essmann, Frank; Schulze-Osthoff, Klaus; Rothfuss, Oliver

    2014-01-01

    DNA damage is tightly associated with various biological and pathological processes, such as aging and tumorigenesis. Although detection of DNA damage is attracting increasing attention, only a limited number of methods are available to quantify DNA lesions, and these techniques are tedious or only detect global DNA damage. In this study, we present a high-sensitivity long-run real-time PCR technique for DNA-damage quantification (LORD-Q) in both the mitochondrial and nuclear genome. While most conventional methods are of low-sensitivity or restricted to abundant mitochondrial DNA samples, we established a protocol that enables the accurate sequence-specific quantification of DNA damage in >3-kb probes for any mitochondrial or nuclear DNA sequence. In order to validate the sensitivity of this method, we compared LORD-Q with a previously published qPCR-based method and the standard single-cell gel electrophoresis assay, demonstrating a superior performance of LORD-Q. Exemplarily, we monitored induction of DNA damage and repair processes in human induced pluripotent stem cells and isogenic fibroblasts. Our results suggest that LORD-Q provides a sequence-specific and precise method to quantify DNA damage, thereby allowing the high-throughput assessment of DNA repair, genotoxicity screening and various other processes for a wide range of life science applications. PMID:24371283
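
    Long-amplicon qPCR damage assays of this kind typically convert the relative amplification of a damaged sample versus an undamaged control into a lesion frequency through a Poisson assumption. A commonly used form (given here for orientation only; the exact LORD-Q formula should be taken from the paper itself) is

    \[
    \lambda \;=\; -\ln\!\left(\frac{A_{\mathrm{damaged}}}{A_{\mathrm{control}}}\right),
    \qquad
    \text{lesions per 10 kb} \;=\; \lambda \times \frac{10^{4}}{\text{amplicon length (bp)}},
    \]

    where A denotes the amplification of the long target normalised to a short reference amplicon and λ is the mean number of polymerase-blocking lesions per long amplicon.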

  4. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  5. The immediate effect of long-distance running on T2 and T2* relaxation times of articular cartilage of the knee in young healthy adults at 3.0 T MR imaging.

    Science.gov (United States)

    Behzadi, Cyrus; Welsch, Goetz H; Laqmani, Azien; Henes, Frank O; Kaul, Michael G; Schoen, Gerhard; Adam, Gerhard; Regier, Marc

    2016-08-01

    To quantitatively assess the immediate effect of long-distance running on T2 and T2* relaxation times of the articular cartilage of the knee at 3.0 T in young healthy adults. 30 healthy male adults (18-31 years) who perform sports at an amateur level underwent an initial MRI at 3.0 T with T2 weighted [16 echo times (TEs): 9.7-154.6 ms] and T2* weighted (24 TEs: 4.6-53.6 ms) relaxation measurements. Thereafter, all participants performed a 45-min run. After the run, all individuals were immediately re-examined. Data sets were post-processed using dedicated software (ImageJ; National Institute of Health, Bethesda, MD). 22 regions of interest were manually drawn in segmented areas of the femoral, tibial and patellar cartilage. For statistical evaluation, Pearson product-moment correlation coefficients and confidence intervals were computed. Mean initial values were 35.7 ms for T2 and 25.1 ms for T2*. After the run, a significant decrease in the mean T2 and T2* relaxation times was observed for all segments in all participants. A mean decrease of relaxation time was observed for T2 with 4.6 ms (±3.6 ms) and for T2* with 3.6 ms (±5.1 ms) after running. A significant decrease could be observed in all cartilage segments for both biomarkers. Both quantitative techniques, T2 and T2*, seem to be valuable parameters in the evaluation of immediate changes in the cartilage ultrastructure after running. This is the first direct comparison of immediate changes in T2 and T2* relaxation times after running in healthy adults.

  6. Dr. Sheehan on Running.

    Science.gov (United States)

    Sheehan, George A.

    This book is both a personal and technical account of the experience of running by a heart specialist who began a running program at the age of 45. In its seventeen chapters, there is information presented on the spiritual, psychological, and physiological results of running; treatment of athletic injuries resulting from running; effects of diet…

  7. Relationship between running kinematic changes and time limit at vVO2max. DOI: http://dx.doi.org/10.5007/1980-0037.2012v14n4p428

    Directory of Open Access Journals (Sweden)

    Sebastião Iberes Lopes Melo

    2012-07-01

    Exhaustive running at maximal oxygen uptake velocity (vVO2max) can alter running kinematic parameters and increase energy cost over time. The aims of the present study were to compare characteristics of ankle and knee kinematics during running at vVO2max and to verify the relationship between changes in kinematic variables and time limit (Tlim). Eleven male volunteers, recreational players of team sports, performed an incremental running test until volitional exhaustion to determine vVO2max and a constant velocity test at vVO2max. Subjects were filmed continuously from the left sagittal plane at 210 Hz for further kinematic analysis. The maximal plantar flexion during swing (p<0.01) was the only variable that increased significantly from beginning to end of the run. Increase in ankle angle at contact was the only variable related to Tlim (r=0.64; p=0.035) and explained 34% of the performance in the test. These findings suggest that the individuals under study maintained a stable running style at vVO2max and that the increase in plantar flexion explained the performance in this test when it was applied in non-runners.

  8. Non-uniform dwell times in line source high dose rate brachytherapy: physical and radiobiological considerations

    International Nuclear Information System (INIS)

    Jones, B.; Tan, L.T.; Freestone, G.; Bleasdale, C.; Myint, S.; Littler, J.

    1994-01-01

    The ability to vary source dwell times in high dose rate (HDR) brachytherapy allows for the use of non-uniform dwell times along a line source. This may have advantages in the radical treatment of tumours depending on individual tumour geometry. This study investigates the potential improvements in local tumour control relative to adjacent normal tissue isoeffects when intratumour source dwell times are increased along the central portion of a line source (technique A) in radiotherapy schedules which include a relatively small component of HDR brachytherapy. Such a technique is predicted to increase the local control for tumours of diameters ranging between 2 cm and 4 cm by up to 11% compared with a technique in which there are uniform dwell times along the line source (technique B). There is no difference in the local control rates for the two techniques when used to treat smaller tumours. Normal tissue doses are also modified by the technique used. Technique A produces higher normal tissue doses at points perpendicular to the centre of the line source and lower dose at points nearer the ends of the line source if the prescription point is not in the central plane of the line source. Alternatively, if the dose is prescribed at a point in the central plane of the line source, the dose at all the normal tissue points are lower when technique A is used. (author)

  9. Three-dimensional localization of low activity gamma-ray sources in real-time scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Manish K., E-mail: mksrkf@mst.edu; Alajo, Ayodeji B.; Lee, Hyoung K.

    2016-03-21

    Radioactive source localization plays an important role in tracking radiation threats in homeland security tasks. Its real-time application requires computationally efficient and reasonably accurate algorithms even with limited data to support detection with minimum uncertainty. This paper describes a statistic-based grid-refinement method for backtracing the position of a gamma-ray source in a three-dimensional domain in real-time. The developed algorithm used measurements from various known detector positions to localize the source. This algorithm is based on an inverse-square relationship between source intensity at a detector and the distance from the source to the detector. The domain discretization was developed and implemented in MATLAB. The algorithm was tested and verified from simulation results of an ideal case of a point source in a non-attenuating medium. Subsequently, an experimental validation of the algorithm was performed to determine the suitability of deploying this scheme in real-time scenarios. Using the measurements from five known detector positions and for a measurement time of 3 min, the source position was estimated with an accuracy of approximately 53 cm. The accuracy improved and stabilized to approximately 25 cm for higher measurement times. It was concluded that the error in source localization was primarily due to detection uncertainties. In verification and experimental validation of the algorithm, the distance between the 137Cs source and any detector position was between 0.84 m and 1.77 m. The results were also compared with the least squares method. Since the discretization algorithm was validated with a weak source, it is expected that it can localize a source of higher activity in real-time. It is believed that for the same physical placement of source and detectors, a source of approximate activity 0.61–0.92 mCi can be localized in real-time with 1 s of measurement time and the same accuracy. The accuracy and computational
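
    The description above suggests the following illustrative implementation (a Python sketch under assumed geometry and noise-free counts, not the authors' MATLAB code): evaluate an inverse-square forward model on a candidate grid, score each node against the measured count rates, and repeatedly refine the grid around the best node.

```python
# Sketch of inverse-square grid-refinement localization of a gamma-ray point
# source from count rates at known detector positions. Illustrative only:
# geometry, units, grid size and the refinement rule are all assumptions.
import numpy as np

detectors = np.array([[0.0, 0.0, 1.0],          # known detector positions (m)
                      [1.5, 0.0, 1.0],
                      [0.0, 1.5, 1.0],
                      [1.5, 1.5, 1.0],
                      [0.75, 0.75, 0.0]])
true_src, true_strength = np.array([0.9, 0.4, 0.6]), 5.0e4
rates = true_strength / np.sum((detectors - true_src) ** 2, axis=1)

def misfit(candidate):
    """Least-squares misfit of the inverse-square model at one grid node."""
    d2 = np.maximum(np.sum((detectors - candidate) ** 2, axis=1), 1e-6)
    s = np.sum(rates / d2) / np.sum(1.0 / d2 ** 2)   # best-fit source strength
    return np.sum((rates - s / d2) ** 2)

# Coarse-to-fine refinement: shrink the search box around the best node.
center, half = np.array([1.0, 1.0, 1.0]), 1.0
for _ in range(6):
    axes = [np.linspace(c - half, c + half, 11) for c in center]
    grid = np.array(np.meshgrid(*axes)).reshape(3, -1).T
    center = grid[np.argmin([misfit(p) for p in grid])]
    half /= 2.0

print("estimated source position (m):", np.round(center, 3))
```

    With real data the rates would carry counting noise and background, and the misfit could be weighted accordingly; the point of the sketch is only the inverse-square forward model combined with grid refinement.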

  10. Pulsar timing residuals due to individual non-evolving gravitational wave sources

    International Nuclear Information System (INIS)

    Tong Ming-Lei; Zhao Cheng-Shi; Yan Bao-Rong; Yang Ting-Gao; Gao Yu-Ping

    2014-01-01

    The pulsar timing residuals induced by gravitational waves from non-evolving single binary sources are affected by many parameters related to the relative positions of the pulsar and the gravitational wave sources. We will analyze the various effects due to different parameters. The standard deviations of the timing residuals will be calculated with a variable parameter fixing a set of other parameters. The orbits of the binary sources will be generally assumed to be elliptical. The influences of different eccentricities on the pulsar timing residuals will also be studied in detail. We find that the effects of the related parameters are quite different, and some of them display certain regularities

  11. Run Clever - No difference in risk of injury when comparing progression in running volume and running intensity in recreational runners

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik

    2018-01-01

    Background/aim: The Run Clever trial investigated if there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression...... in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed...... participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning a total of 80 runners sustained an RRI (Sch-I n=36/Sch-V n=44). The cumulative incidence proportion (CIP) in Sch-V (reference...

  12. Overcomplete Blind Source Separation by Combining ICA and Binary Time-Frequency Masking

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2005-01-01

    a novel method for over-complete blind source separation. Two powerful source separation techniques have been combined, independent component analysis and binary time-frequency masking. Hereby, it is possible to iteratively extract each speech signal from the mixture. By using merely two microphones we...
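
    As a rough sketch of the binary time-frequency masking step described above (generic Python, not the authors' implementation; it assumes two source estimates have already been obtained, e.g. from ICA applied to the two-microphone mixture), the dominant-source mask and its application to one mixture channel can be written as:

```python
# Sketch: binary time-frequency masking given two (ICA-derived) source
# estimates and one mixture channel. Generic illustration, not the paper's code.
import numpy as np
from scipy.signal import stft, istft

fs = 16000

def binary_mask_separate(mixture, est1, est2, nperseg=512):
    _, _, X = stft(mixture, fs=fs, nperseg=nperseg)   # mixture spectrogram
    _, _, S1 = stft(est1, fs=fs, nperseg=nperseg)     # source-1 estimate
    _, _, S2 = stft(est2, fs=fs, nperseg=nperseg)     # source-2 estimate
    mask = np.abs(S1) > np.abs(S2)      # 1 where source 1 dominates a T-F bin
    _, y1 = istft(X * mask, fs=fs, nperseg=nperseg)   # keep source-1 bins
    _, y2 = istft(X * ~mask, fs=fs, nperseg=nperseg)  # keep source-2 bins
    return y1, y2

# Toy usage with synthetic signals standing in for the ICA outputs.
t = np.arange(0, 1.0, 1.0 / fs)
s1, s2 = np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 1000 * t)
y1, y2 = binary_mask_separate(s1 + s2, s1, s2)
```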

  13. Time-domain single-source integral equations for analyzing scattering from homogeneous penetrable objects

    KAUST Repository

    Valdé s, Felipe; Andriulli, Francesco P.; Bagci, Hakan; Michielssen, Eric

    2013-01-01

    Single-source time-domain electric-and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis

  14. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    Science.gov (United States)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce an interactive coastal wave simulation and visualization software, called Celeris. Celeris is an open source software which needs minimum preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real-time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.

  15. Invited Article: Characterization of background sources in space-based time-of-flight mass spectrometers

    International Nuclear Information System (INIS)

    Gilbert, J. A.; Gershman, D. J.; Gloeckler, G.; Lundgren, R. A.; Zurbuchen, T. H.; Orlando, T. M.; McLain, J.; Steiger, R. von

    2014-01-01

    For instruments that use time-of-flight techniques to measure space plasma, there are common sources of background signals that evidence themselves in the data. The background from these sources may increase the complexity of data analysis and reduce the signal-to-noise response of the instrument, thereby diminishing the science value or usefulness of the data. This paper reviews several sources of background commonly found in time-of-flight mass spectrometers and illustrates their effect in actual data using examples from ACE-SWICS and MESSENGER-FIPS. Sources include penetrating particles and radiation, UV photons, energy straggling and angular scattering, electron stimulated desorption of ions, ion-induced electron emission, accidental coincidence events, and noise signatures from instrument electronics. Data signatures of these sources are shown, as well as mitigation strategies and design considerations for future instruments

  16. Time-of-flight diffraction at pulsed neutron sources: An introduction to the symposium

    International Nuclear Information System (INIS)

    Jorgensen, J.D.

    1994-01-01

    In the 25 years since the first low-power demonstration experiments, pulsed neutron sources have become as productive as reactor sources for many types of diffraction experiments. The pulsed neutron sources presently operating in the United States, England, and Japan offer state of the art instruments for powder and single crystal diffraction, small angle scattering, and such specialized techniques as grazing-incidence neutron reflection, as well as quasielastic and inelastic scattering. In this symposium, speakers review the latest advances in diffraction instrumentation for pulsed neutron sources and give examples of some of the important science presently being done. In this introduction to the symposium, I briefly define the basic principles of pulsed neutron sources, review their development, comment in general terms on the development of time-of-flight diffraction instrumentation for these sources, and project how this field will develop in the next ten years
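
    The working relation behind time-of-flight diffraction (standard textbook form, included here only for orientation) links the measured flight time t over a total path length L to the lattice spacing d via the de Broglie wavelength and Bragg's law:

    \[
    \lambda \;=\; \frac{h\,t}{m_n\,L}, \qquad \lambda \;=\; 2\,d\,\sin\theta
    \quad\Longrightarrow\quad
    t \;=\; \frac{2\,m_n\,L}{h}\, d\,\sin\theta,
    \]

    so that, at a fixed scattering angle, each arrival time within a source pulse maps to a different d-spacing, which is what allows a complete diffraction pattern to be recorded from a pulsed (white) neutron beam.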

  17. Influence of the Heel-to-Toe Drop of Standard Cushioned Running Shoes on Injury Risk in Leisure-Time Runners: A Randomized Controlled Trial With 6-Month Follow-up.

    Science.gov (United States)

    Malisoux, Laurent; Chambon, Nicolas; Urhausen, Axel; Theisen, Daniel

    2016-11-01

    Modern running shoes are available in a wide range of heel-to-toe drops (ie, the height difference between the forward and rear parts of the inside of the shoe). While shoe drop has been shown to influence strike pattern, its effect on injury risk has never been investigated. Therefore, the reasons for such variety in this parameter are unclear. The first aim of this study was to determine whether the drop of standard cushioned running shoes influences running injury risk. The secondary aim was to investigate whether recent running regularity modifies the relationship between shoe drop and injury risk. Randomized controlled trial; Level of evidence, 1. Leisure-time runners (N = 553) were observed for 6 months after having received a pair of shoes with a heel-to-toe drop of 10 mm (D10), 6 mm (D6), or 0 mm (D0). All participants reported their running activities and injuries (time-loss definition, at least 1 day) in an electronic system. Cox regression analyses were used to compare injury risk between the 3 groups based on hazard rate ratios (HRs) and their 95% CIs. A stratified analysis was conducted to evaluate the effect of shoe drop in occasional runners (running regularity, low-drop shoes (D6 and D0) were found to be associated with a lower injury risk in occasional runners (HR, 0.48; 95% CI, 0.23-0.98), whereas these shoes were associated with a higher injury risk in regular runners (HR, 1.67; 95% CI, 1.07-2.62). Overall, injury risk was not modified by the drop of standard cushioned running shoes. However, low-drop shoes could be more hazardous for regular runners, while these shoes seem to be preferable for occasional runners to limit injury risk. © 2016 The Author(s).

  18. Photodetection-induced relative timing jitter in synchronized time-lens source for coherent Raman scattering microscopy

    Directory of Open Access Journals (Sweden)

    Jiaqi Wang

    2017-09-01

    Synchronized time-lens sources are a novel means of generating optical pulses synchronized to mode-locked lasers, and have found widespread application in coherent Raman scattering microscopy. Relative timing jitter between the mode-locked laser and the synchronized time-lens source is a key parameter for evaluating the synchronization performance of such laser systems. However, the origins of the relative timing jitter in such systems are not fully determined, which in turn hampers experimental efforts to optimize the synchronization performance. Here, we demonstrate, through theoretical modeling and numerical simulation, that photodetection can be one physical origin of the relative timing jitter. A comparison with the relative timing jitter due to the intrinsic timing jitter of the mode-locked laser is also presented, revealing different qualitative and quantitative behaviors. Based on the nature of this photodetection-induced timing jitter, we further propose several strategies to reduce the relative timing jitter. Our theoretical results will provide guidelines for optimizing synchronization performance in experiments.

  19. Effect of Minimalist Footwear on Running Efficiency

    Science.gov (United States)

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  20. Proposed real-time data processing system to control source and special nuclear material (SS) at Mound Laboratory

    International Nuclear Information System (INIS)

    DeVer, E.A.; Baston, M.; Bishop, T.C.

    1976-01-01

    The SS Accountability System was designed to provide accountability of all SS materials by unit identification and grams. The existing system is a gram-accountable system. The new system was designed to incorporate unit identification into an ADP (Automated Data Processing) System. It also records all transactions performed against a particular unit of accountable material. The high volume of data is input via CRT terminals. Input data will consist of the following: source of the material (its unit identification), amount of material being moved, isotopic content, type of material, Health Physics number of the person moving the material, account number from which the material is being moved, unit identification of the material being moved (if all material is not moved), Health Physics number of the person receiving the material, account number to which material is being moved, and acceptance of the material by the receiver. A running inventory of all material is kept. At the end of the month the physical inventory will be compared to the data base and all discrepancies reported. Since a complete history of transactions has been kept, the source and cause for any discrepancies should be easily located. Discrepancies are held to a minimum since errors are detected before entrance into the data base. The system will also furnish all reports necessary to control SS Accountability. These reports may be requested at any time via an accountability master terminal.

  1. Time course of effects of emotion on item memory and source memory for Chinese words.

    Science.gov (United States)

    Wang, Bo; Fu, Xiaolan

    2011-05-01

    Although many studies have investigated the effect of emotion on memory, it is unclear whether the effect of emotion extends to all aspects of an event. In addition, it is poorly understood how effects of emotion on item memory and source memory change over time. This study examined the time course of effects of emotion on item memory and source memory. Participants intentionally learned a list of neutral, positive, and negative Chinese words, which were presented twice, and then took a free recall test, followed by recognition and source memory tests, at one of eight delayed points of time. The main findings are (within the time frame of 2 weeks): (1) Negative emotion enhances free recall, whereas there is only a trend that positive emotion enhances free recall. In addition, negative and positive emotions have different points of time at which their effects on free recall reach the greatest magnitude. (2) Negative emotion reduces recognition, whereas positive emotion has no effect on recognition. (3) Neither positive nor negative emotion has any effect on source memory. The above findings indicate that the effect of emotion does not necessarily extend to all aspects of an event and that valence is a critical modulating factor in the effect of emotion on item memory. Furthermore, emotion does not affect the time course of item memory and source memory, at least within a time frame of 2 weeks. This study has implications for establishing the theoretical model regarding the effect of emotion on memory. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Time-resolved X-ray studies using third generation synchrotron radiation sources

    International Nuclear Information System (INIS)

    Mills, D.M.

    1991-10-01

    The third generation, high-brilliance, hard x-ray, synchrotron radiation (SR) sources currently under construction (ESRF at Grenoble, France; APS at Argonne, Illinois; and SPring-8 at Harima, Japan) will usher in a new era of x-ray experimentation for both physical and biological sciences. One of the most exciting areas of experimentation will be the extension of x-ray scattering and diffraction techniques to the study of transient or time-evolving systems. The high repetition rate, short-pulse duration, high brilliance, and variable spectral bandwidth of these sources make them ideal for x-ray time-resolved studies. The temporal properties (bunch length, interpulse period, etc.) of these new sources will be summarized. Finally, the scientific potential and the technological challenges of time-resolved x-ray scattering from these new sources will be described. 13 refs., 4 figs

  3. Time-resolved far-infrared experiments at the National Synchrotron Light Source. Final report

    International Nuclear Information System (INIS)

    Tanner, D.B.; Reitze, D.H.; Carr, G.L.

    1999-01-01

    A facility for time-resolved infrared and far-infrared spectroscopy has been built and commissioned at the National Synchrotron Light Source. This facility permits the study of time-dependent phenomena over a frequency range from 2-8000 cm⁻¹ (0.25 meV-1 eV). Temporal resolution is approximately 200 psec, and time-dependent phenomena in the time range out to 100 nsec can be investigated.

  4. Time domain localization technique with sparsity constraint for imaging acoustic sources

    Science.gov (United States)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risk. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem, the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performances of the technique. High resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
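    As a rough illustration of the first step described above, the sketch below estimates the time delay between two microphone signals from the peak of their cross-correlation, which is the basic ingredient of a generalized cross-correlation source map; the signals, sampling rate, and delay are synthetic assumptions, not data from the paper.

```python
# Sketch: time-delay estimation between two microphones via cross-correlation,
# the building block of generalized cross-correlation (GCC) source maps.
# The signals, sampling rate, and delay below are synthetic assumptions.
import numpy as np

fs = 48_000                      # sampling rate [Hz] (assumed)
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)    # broadband source signal
true_delay = 25                  # delay of mic 2 relative to mic 1 [samples]

x1 = s + 0.05 * rng.standard_normal(s.size)
x2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(s.size)

corr = np.correlate(x2, x1, mode="full")
lag = np.argmax(corr) - (s.size - 1)     # lag of the correlation peak
print(f"estimated delay: {lag} samples = {lag / fs * 1e6:.1f} us")
```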

  5. OpenPSTD : The open source implementation of the pseudospectral time-domain method

    NARCIS (Netherlands)

    Krijnen, T.; Hornikx, M.C.J.; Borkowski, B.

    2014-01-01

    An open source implementation of the pseudospectral time-domain method for the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory

  6. Optimization of NANOGrav's time allocation for maximum sensitivity to single sources

    International Nuclear Information System (INIS)

    Christy, Brian; Anella, Ryan; Lommen, Andrea; Camuccio, Richard; Handzo, Emma; Finn, Lee Samuel

    2014-01-01

    Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.

  7. Time collimation for elastic neutron scattering instrument at a pulsed source

    International Nuclear Information System (INIS)

    Aksenov, V.L.; Nikitenko, Yu.V.

    1996-01-01

    Conditions for carrying out elastic neutron scattering experiments using the time-of-flight technique are considered. It is shown that the employment of time dependent neutron beam collimation in the source-sample flight path increases the luminosity of the spectrometer under certain resolution restrictions. 3 refs., 8 figs

  8. Moving source localization with a single hydrophone using multipath time delays in the deep ocean.

    Science.gov (United States)

    Duan, Rui; Yang, Kunde; Ma, Yuanliang; Yang, Qiulong; Li, Hui

    2014-08-01

    Localizing a source of radial movement at moderate range using a single hydrophone can be achieved in the reliable acoustic path by tracking the time delays between the direct and surface-reflected arrivals (D-SR time delays). The problem is defined as a joint estimation of the depth, initial range, and speed of the source, which are the state parameters for the extended Kalman filter (EKF). The D-SR time delays extracted from the autocorrelation functions are the measurements for the EKF. Experimental results using pseudorandom signals show that accurate localization results are achieved by offline iteration of the EKF.
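    The key measurement in the approach above is the direct/surface-reflected (D-SR) delay read off the autocorrelation function. Below is a minimal sketch of how such a delay appears as a secondary autocorrelation peak; the signal, delay, and reflection amplitude are entirely synthetic assumptions.

```python
# Sketch: extracting a direct/surface-reflected (D-SR) time delay as the lag of
# a secondary peak in the autocorrelation of the received signal.
# Signal, delay, and reflection amplitude are synthetic assumptions.
import numpy as np

fs = 8_000                              # sampling rate [Hz] (assumed)
rng = np.random.default_rng(1)
src = rng.standard_normal(8192)         # pseudorandom source waveform
dsr_delay = 120                         # D-SR delay in samples (assumed)

# Received signal = direct arrival + weaker, delayed surface reflection.
rx = src.copy()
rx[dsr_delay:] += -0.6 * src[:-dsr_delay]

ac = np.correlate(rx, rx, mode="full")[rx.size - 1:]   # keep lags >= 0
ac[:10] = 0.0                           # ignore the zero-lag main lobe
est = np.argmax(np.abs(ac[:1000]))      # search within the first 1000 lags
print(f"estimated D-SR delay: {est} samples = {est / fs * 1e3:.1f} ms")
```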

  9. Running and osteoarthritis.

    Science.gov (United States)

    Willick, Stuart E; Hansen, Pamela A

    2010-07-01

    The overall health benefits of cardiovascular exercise, such as running, are well established. However, it is also well established that in certain circumstances running can lead to overload injuries of muscle, tendon, and bone. In contrast, it has not been established that running leads to degeneration of articular cartilage, which is the hallmark of osteoarthritis. This article reviews the available literature on the association between running and osteoarthritis, with a focus on clinical epidemiologic studies. The preponderance of clinical reports refutes an association between running and osteoarthritis. Copyright 2010 Elsevier Inc. All rights reserved.

  10. X-LUNA: Extending Free/Open Source Real Time Executive for On-Board Space Applications

    Science.gov (United States)

    Braga, P.; Henriques, L.; Zulianello, M.

    2008-08-01

    In this paper we present xLuna, a system based on the RTEMS [1] Real-Time Operating System that is able to run on demand a GNU/Linux Operating System [2] as RTEMS' lowest priority task. Linux runs in user mode and in a different memory partition. This allows running hard real-time tasks and Linux applications on the same system, sharing the hardware resources while keeping safe isolation and the real-time characteristics of RTEMS. Communication between the two systems is possible through a loosely coupled mechanism based on message queues. Currently, only the SPARC LEON2 processor with a Memory Management Unit (MMU) is supported. The advantage of having two isolated systems is that non-critical components can be quickly developed or simply ported, reducing time-to-market and budget.

  11. Impact source identification in finite isotropic plates using a time-reversal method: theoretical study

    International Nuclear Information System (INIS)

    Chen, Chunlin; Yuan, Fuh-Gwo

    2010-01-01

    This paper aims to identify impact sources on plate-like structures based on the synthetic time-reversal (T-R) concept using an array of sensors. The impact source characteristics, namely, impact location and impact loading time history, are reconstructed using the invariance of time-reversal concept, reciprocal theory, and signal processing algorithms. Numerical verification for two finite isotropic plates under low and high velocity impacts is performed to demonstrate the versatility of the synthetic T-R method for impact source identification. The results show that the impact location and time history of the impact force with various shapes and frequency bands can be readily obtained with only four sensors distributed around the impact location. The effects of time duration and the inaccuracy in the estimated impact location on the accuracy of the time history of the impact force using the T-R method are investigated. Since the T-R technique retraces all the multi-paths of reflected waves from the geometrical boundaries back to the impact location, it is well suited for quantifying the impact characteristics for complex structures. In addition, this method is robust against noise and it is suggested that a small number of sensors is sufficient to quantify the impact source characteristics through simple computation; thus it holds promise for the development of passive structural health monitoring (SHM) systems for impact monitoring in near real-time

  12. Finite element approximation for time-dependent diffusion with measure-valued source

    Czech Academy of Sciences Publication Activity Database

    Seidman, T.; Gobbert, M.; Trott, D.; Kružík, Martin

    2012-01-01

    Roč. 122, č. 4 (2012), s. 709-723 ISSN 0029-599X R&D Projects: GA AV ČR IAA100750802 Institutional support: RVO:67985556 Keywords : measure-valued source * diffusion equation Subject RIV: BA - General Mathematics Impact factor: 1.329, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/kruzik-finite element approximation for time - dependent diffusion with measure-valued source.pdf

  13. Real-time tunability of chip-based light source enabled by microfluidic mixing

    DEFF Research Database (Denmark)

    Olsen, Brian Bilenberg; Rasmussen, Torben; Balslev, Søren

    2006-01-01

    We demonstrate real-time tunability of a chip-based liquid light source enabled by microfluidic mixing. The mixer and light source are fabricated in SU-8, which is suitable for integration in SU-8-based laboratory-on-a-chip microsystems. The tunability of the light source is achieved by changing the concentration of rhodamine 6G dye inside two integrated vertical resonators, since both the refractive index and the gain profile are influenced by the dye concentration. The effect on the refractive index and the gain profile of rhodamine 6G in ethanol is investigated and the continuous tuning of the laser...

  14. Highly coherent free-running dual-comb chip platform.

    Science.gov (United States)

    Hébert, Nicolas Bourbeau; Lancaster, David G; Michaud-Belleau, Vincent; Chen, George Y; Genest, Jérôme

    2018-04-15

    We characterize the frequency noise performance of a free-running dual-comb source based on an erbium-doped glass chip running two adjacent mode-locked waveguide lasers. This compact laser platform, contained only in a 1.2 L volume, rejects common-mode environmental noise by 20 dB thanks to the proximity of the two laser cavities. Furthermore, it displays a remarkably low mutual frequency noise floor around 10 Hz²/Hz, which is enabled by its large-mode-area waveguides and low Kerr nonlinearity. As a result, it reaches a free-running mutual coherence time of 1 s since mode-resolved dual-comb spectra are generated even on this time scale. This design greatly simplifies dual-comb interferometers by enabling mode-resolved measurements without any phase lock.

  15. Time-resolved materials science opportunities using synchrotron x-ray sources

    International Nuclear Information System (INIS)

    Larson, B.C.; Tischler, J.Z.

    1995-06-01

    The high brightness, high intensity, and pulsed time-structure of synchrotron sources provide new opportunities for time-resolved x-ray diffraction investigations. With third generation synchrotron sources coming on line, high brilliance and high brightness are now available in x-ray beams with the highest flux. In addition to the high average flux, the instantaneous flux available in synchrotron beams is greatly enhanced by the pulsed time structure, which consists of short bursts of x-rays that are separated by ∼tens to hundreds of nanoseconds. Time-resolved one- and two-dimensional position sensitive detection techniques that take advantage of synchrotron radiation for materials science x-ray diffraction investigations are presented, and time resolved materials science applications are discussed in terms of recent diffraction and spectroscopy results and materials research opportunities

  16. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  17. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  18. Advances in high-order harmonic generation sources for time-resolved investigations

    Energy Technology Data Exchange (ETDEWEB)

    Reduzzi, Maurizio [Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Institute of Photonics and Nanotechnologies, CNR-IFN, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Carpeggiani, Paolo [Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Kühn, Sergei [ELI-ALPS, ELI-Hu Kft., Dugonics ter 13, H-6720 Szeged (Hungary); Calegari, Francesca [Institute of Photonics and Nanotechnologies, CNR-IFN, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Nisoli, Mauro; Stagira, Salvatore [Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Institute of Photonics and Nanotechnologies, CNR-IFN, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Vozzi, Caterina [Institute of Photonics and Nanotechnologies, CNR-IFN, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Dombi, Peter [ELI-ALPS, ELI-Hu Kft., Dugonics ter 13, H-6720 Szeged (Hungary); Wigner Research Center for Physics, 1121 Budapest (Hungary); Kahaly, Subhendu [ELI-ALPS, ELI-Hu Kft., Dugonics ter 13, H-6720 Szeged (Hungary); Tzallas, Paris; Charalambidis, Dimitris [ELI-ALPS, ELI-Hu Kft., Dugonics ter 13, H-6720 Szeged (Hungary); Foundation for Research and Technology – Hellas, Institute of Electronic Structure and Lasers, P.O. Box 1527, GR-711 10 Heraklion, Crete (Greece); Varju, Katalin [ELI-ALPS, ELI-Hu Kft., Dugonics ter 13, H-6720 Szeged (Hungary); Department of Optics and Quantum Electronics, University of Szeged, Dóm tér 9, 6720 Szeged (Hungary); Osvay, Karoly [ELI-ALPS, ELI-Hu Kft., Dugonics ter 13, H-6720 Szeged (Hungary); and others

    2015-10-15

    We review the main research directions ongoing in the development of extreme ultraviolet sources based on high-harmonic generation for the synthesization and application of trains and isolated attosecond pulses to time-resolved spectroscopy. A few experimental and theoretical works will be discussed in connection to well-established attosecond techniques. In this context, we present the unique possibilities offered for time-resolved investigations on the attosecond timescale by the new Extreme Light Infrastructure Attosecond Light Pulse Source, which is currently under construction.

  19. Advances in high-order harmonic generation sources for time-resolved investigations

    International Nuclear Information System (INIS)

    Reduzzi, Maurizio; Carpeggiani, Paolo; Kühn, Sergei; Calegari, Francesca; Nisoli, Mauro; Stagira, Salvatore; Vozzi, Caterina; Dombi, Peter; Kahaly, Subhendu; Tzallas, Paris; Charalambidis, Dimitris; Varju, Katalin; Osvay, Karoly

    2015-01-01

    We review the main research directions ongoing in the development of extreme ultraviolet sources based on high-harmonic generation for the synthesization and application of trains and isolated attosecond pulses to time-resolved spectroscopy. A few experimental and theoretical works will be discussed in connection to well-established attosecond techniques. In this context, we present the unique possibilities offered for time-resolved investigations on the attosecond timescale by the new Extreme Light Infrastructure Attosecond Light Pulse Source, which is currently under construction.

  20. Evaluating four-dimensional time-lapse electrical resistivity tomography for monitoring DNAPL source zone remediation.

    Science.gov (United States)

    Power, Christopher; Gerhard, Jason I; Karaoulis, Marios; Tsourlos, Panagiotis; Giannopoulos, Antonios

    2014-07-01

    Practical, non-invasive tools do not currently exist for mapping the remediation of dense non-aqueous phase liquids (DNAPLs). Electrical resistivity tomography (ERT) exhibits significant potential but has not yet become a practitioner's tool due to challenges in interpreting the survey results at real sites. This study explores the effectiveness of recently developed four-dimensional (4D, i.e., 3D space plus time) time-lapse surface ERT to monitor DNAPL source zone remediation. A laboratory experiment demonstrated the approach for mapping a changing NAPL distribution over time. A recently developed DNAPL-ERT numerical model was then employed to independently simulate the experiment, providing confidence that the DNAPL-ERT model is a reliable tool for simulating real systems. The numerical model was then used to evaluate the potential for this approach at the field scale. Four DNAPL source zones, exhibiting a range of complexity, were initially simulated, followed by modeled time-lapse ERT monitoring of complete DNAPL remediation by enhanced dissolution. 4D ERT inversion provided estimates of the regions of the source zone experiencing mass reduction with time. Results show that 4D time-lapse ERT has significant potential to map both the outline and the center of mass of the evolving treated portion of the source zone to within a few meters in each direction. In addition, the technique can provide a reasonable, albeit conservative, estimate of the DNAPL volume remediated with time: 25% underestimation in the upper 2m and up to 50% underestimation at late time between 2 and 4m depth. The technique is less reliable for identifying cleanup of DNAPL stringers outside the main DNAPL body. Overall, this study demonstrates that 4D time-lapse ERT has potential for mapping where and how quickly DNAPL mass changes in real time during site remediation. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Time-domain single-source integral equations for analyzing scattering from homogeneous penetrable objects

    KAUST Repository

    Valdés, Felipe

    2013-03-01

    Single-source time-domain electric- and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis functions and a collocation testing procedure, thus allowing for a marching-on-in-time (MOT) solution scheme. Unlike dual-source formulations, single-source equations involve space-time domain operator products, for which spatial discretization techniques developed for standalone operators do not apply. Here, the spatial discretization of the single-source time-domain integral equations is achieved by using the high-order divergence-conforming basis functions developed by Graglia alongside the high-order divergence- and quasi curl-conforming (DQCC) basis functions of Valdés. The combination of these two sets allows for a well-conditioned mapping from div- to curl-conforming function spaces that fully respects the space-mapping properties of the space-time operators involved. Numerical results corroborate the fact that the proposed procedure guarantees accuracy and stability of the MOT scheme. © 2012 IEEE.

  2. Locating the source of diffusion in complex networks by time-reversal backward spreading

    Science.gov (United States)

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H. Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
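    A toy version of the backward-spreading idea can be written down directly: given arrival times at a few observer nodes, each candidate source is scored by how consistent the implied emission times are when the observed times are propagated backwards along shortest paths. The graph, observer set, and unit per-hop delay below are illustrative assumptions, not the authors' algorithm in full.

```python
# Toy sketch of time-reversal backward spreading on a network: each candidate
# source is scored by the variance of the emission times implied by the
# observers (observed arrival time minus shortest-path distance).
# Graph, observers, and unit per-hop delay are illustrative assumptions.
from collections import deque

edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (2, 6)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def hop_distances(start):
    """Breadth-first shortest-path (hop) distances from `start`."""
    dist, queue = {start: 0}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

# Observed arrival times at a subset of nodes (true source: node 1, t0 = 0).
observed = {0: 1.0, 3: 2.0, 5: 2.0}

def score(candidate):
    d = hop_distances(candidate)
    implied = [t - d[n] for n, t in observed.items()]   # implied emission times
    mean = sum(implied) / len(implied)
    return sum((x - mean) ** 2 for x in implied)        # low variance = consistent

best = min(adj, key=score)
print("estimated source node:", best)   # -> 1 for this toy example
```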

  3. Electron run-away

    International Nuclear Information System (INIS)

    Levinson, I.B.

    1975-01-01

    The run-away effect of electrons under Coulomb scattering has been studied by Dreicer, but the question has not yet been studied for other scattering mechanisms. Meanwhile, if the scattering is quasielastic, a general criterion for the run-away may be formulated; in this case the run-away influence on the distribution function may also be studied in a somewhat general and qualitative manner. (Auth.)

  4. An ion source for radiofrequency-pulsed glow discharge time-of-flight mass spectrometry

    International Nuclear Information System (INIS)

    González Gago, C.; Lobo, L.; Pisonero, J.; Bordel, N.; Pereiro, R.; Sanz-Medel, A.

    2012-01-01

    A Grimm-type glow discharge (GD) has been designed and constructed as an ion source for pulsed radiofrequency GD spectrometry when coupled to an orthogonal time-of-flight mass spectrometer. Pulse shapes of argon species and analytes were studied as a function of the discharge conditions using a new in-house ion source (UNIOVI GD) and results have been compared with a previous design (PROTOTYPE GD). Different behavior and shapes of the pulse profiles have been observed for the two sources evaluated, particularly for the plasma gas ionic species detected. In the more analytically relevant region (afterglow), signals for ⁴⁰Ar⁺ with this new design were negligible, while maximum intensity was reached earlier in time for ⁴¹(ArH)⁺ than when using the PROTOTYPE GD. Moreover, while maximum ⁴⁰Ar⁺ signals measured along the pulse period were similar in both sources, ⁴¹(ArH)⁺ and ⁸⁰(Ar₂)⁺ signals tend to be noticeably higher using the PROTOTYPE chamber. The UNIOVI GD design was shown to be adequate for sensitive direct analysis of solid samples, offering linear calibration graphs and good crater shapes. Limits of detection (LODs) are of the same order of magnitude for both sources, although the UNIOVI source provides slightly better LODs for those analytes with masses slightly higher than ⁴¹(ArH)⁺. - Highlights: ► A new RF-pulsed GD ion source (UNIOVI GD) coupled to TOFMS has been characterized. ► Linear calibration graphs and LODs in the low ppm range are achieved. ► Craters with flat bottoms and vertical walls are obtained. ► UNIOVI source can be easily cleaned as it does not require a flow tube. ► UNIOVI GD has a simple design and thus its manufacture is easy and cheap.

  5. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman to speak without bias and prejudice for the public good, technical jargon with unclear definitions within the radioactive nomenclature, and a scientific community that keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man, the amount of plutonium required to make a bomb, talk of transuranic waste containing plutonium and its health effects, TMI-2 and Chernobyl being described as Siamese twins, inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61, and enhanced engineered waste disposal not being presented to the public accurately. Finally, there are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the Federal and State Health Agencies' resources to address comparative risk, and regulatory agencies speaking out without the support of the scientific community

  6. In vivo time-gated diffuse correlation spectroscopy at quasi-null source-detector separation.

    Science.gov (United States)

    Pagliazzi, M; Sekar, S Konugolu Venkata; Di Sieno, L; Colombo, L; Durduran, T; Contini, D; Torricelli, A; Pifferi, A; Mora, A Dalla

    2018-06-01

    We demonstrate time domain diffuse correlation spectroscopy at quasi-null source-detector separation by using a fast time-gated single-photon avalanche diode without the need of time-tagging electronics. This approach allows for increased photon collection, simplified real-time instrumentation, and reduced probe dimensions. Depth discriminating, quasi-null distance measurement of blood flow in a human subject is presented. We envision the miniaturization and integration of matrices of optical sensors of increased spatial resolution and the enhancement of the contrast of local blood flow changes.

  7. Space-time structure of neutron and X-ray sources in a plasma focus

    International Nuclear Information System (INIS)

    Bostick, W.H.; Nardi, V.; Prior, W.

    1977-01-01

    Systematic measurements with paraffin collimators of the neutron emission intensity have been completed on a plasma focus with a 15-20 kV capacitor bank (hollow centre electrode; discharge period T approximately 8 μs; D 2 filling at 4-8 torr). The space resolution was 1 cm or better. These data indicate that at least 70% of the total neutron yield originates within hot-plasma regions where electron beams and high-energy D beams (approximately > 0.1-1 MeV) are produced. The neutron source is composed of several (approximately > 1-10) space-localized sources of different intensity, each with a duration approximately less than 5 ns (FWHM). Localized neutron sources and hard (approximately > 100 keV) X-ray sources have the same time multiplicity and are usually distributed in two groups over a time interval 40-400 ns long. By the mode of operation used by the authors one group of localized sources (Burst II) is observed 200-400 ns after the other group (Burst I) and its space distribution is broader than for Burst I. The maximum intensity of a localized source of neutrons in Burst I is much higher than the maximum intensity in Burst II. Secondary reactions T(D,n) 4 He (from the tritium produced only by primary reactions in the same discharge; no tritium was used in filling the discharge chamber) are observed in a time coincidence with the strongest D-D neutron pulse of Burst I. The neutron signal from a localized source with high intensity has a relatively long tail of small amplitude (area tail approximately less than 0.2 X area peak). This tail can be generated by the D-D reactions of the unconfined part of an ion beam in the cold plasma. Complete elimination of scattered neutrons on the detector was achieved in these measurements. (author)

  8. Rietveld refinement with time-of-flight powder diffraction data from pulsed neutron sources

    International Nuclear Information System (INIS)

    David, W.I.F.; Jorgensen, J.D.

    1990-10-01

    The recent development of accelerator-based pulsed neutron sources has led to the widespread use of the time-of-flight technique for neutron powder diffraction. The properties of the pulsed source make possible unusually high resolution over a wide range of d spacings, high count rates, and the ability to collect complete data at fixed scattering angles. The peak shape and other instrument characteristics can be accurately modelled, which make Rietveld refinement possible for complex structures. In this paper we briefly review the development of the Rietveld method for time-of-flight diffraction data from pulsed neutron sources and discuss the latest developments in high resolution instrumentation and advanced Rietveld analysis methods. 50 refs., 12 figs., 14 tabs

  9. Measurement and simulation of the time-dependent behavior of the UMER source

    International Nuclear Information System (INIS)

    Haber, I.; Feldman, D.; Fiorito, R.; Friedman, A.; Grote, D.P.; Kishek, R.A.; Quinn, B.; Reiser, M.; Rodgers, J.; O'Shea, P.G.; Stratakis, D.; Tian, K.; Vay, J.-L.; Walter, M.

    2007-01-01

    Control of the time-dependent characteristics of the beam pulse, beginning when it is born from the source, is important for obtaining adequate beam intensity on a target. Recent experimental measurements combined with the new mesh-refinement capability in WARP have improved the understanding of time-dependent beam characteristics beginning at the source, as well as the predictive ability of the simulation codes. The University of Maryland Electron Ring (UMER), because of its ease of operation and flexible diagnostics has proved particularly useful for benchmarking WARP by comparing simulation to measurement. One source of significant agreement has been in the ability of three-dimensional WARP simulations to predict the onset of virtual cathode oscillations in the vicinity of the cathode grid in the UMER gun, and the subsequent measurement of the predicted oscillations

  10. Probing Motion of Fast Radio Burst Sources by Timing Strongly Lensed Repeaters

    Science.gov (United States)

    Dai, Liang; Lu, Wenbin

    2017-09-01

    Given the possible repetitive nature of fast radio bursts (FRBs), their cosmological origin, and their high occurrence, detection of strongly lensed sources due to intervening galaxy lenses is possible with forthcoming radio surveys. We show that if multiple images of a repeating source are resolved with VLBI, using a method independent of lens modeling, accurate timing could reveal non-uniform motion, either physical or apparent, of the emission spot. This can probe the physical nature of FRBs and their surrounding environments, constraining scenarios including orbital motion around a stellar companion if FRBs require a compact star in a special system, and jet-medium interactions for which the location of the emission spot may randomly vary. The high timing precision possible for FRBs (~ms) compared with the typical time delays between images in galaxy lensing (≳10 days) enables the measurement of tiny fractional changes in the delays (~10⁻⁹) and hence the detection of time-delay variations induced by relative motions between the source, the lens, and the Earth. We show that uniform cosmic peculiar velocities only cause the delay time to drift linearly, and that the effect from the Earth’s orbital motion can be accurately subtracted, thus enabling a search for non-trivial source motion. For a timing accuracy of ~1 ms and a repetition rate (of detected bursts) of ~0.05 per day of a single FRB source, non-uniform displacement ≳0.1-1 au of the emission spot perpendicular to the line of sight is detectable if repetitions are seen over a period of hundreds of days.
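    The quoted fractional sensitivity follows from simple arithmetic: a millisecond timing uncertainty measured against a lens delay of order ten days is a part in roughly a billion. A quick check, using only the order-of-magnitude values from the abstract:

```python
# Back-of-the-envelope check of the fractional delay precision quoted above:
# ~1 ms timing accuracy against a typical galaxy-lens delay of >~ 10 days.
timing_accuracy_s = 1e-3          # ~ms FRB timing precision
lens_delay_s = 10 * 86_400        # >~ 10 day image-to-image delay [s]
print(f"fractional precision ~ {timing_accuracy_s / lens_delay_s:.1e}")  # ~1e-9
```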

  11. Synchronous Databus Network in ITER: Open source real-time network for the next nuclear fusion experiment

    International Nuclear Information System (INIS)

    Boncagni, L.; Centioli, C.; Iannone, F.; Neri, C.; Panella, M.; Pangione, L.; Riva, M.; Scappaticci, M.; Vitale, V.; Zaccarian, L.

    2008-01-01

    The next nuclear fusion experiment, ITER, is providing the infrastructure for the optimal operation of a burning plasma, requiring feedback control of discharge parameters and on-line evaluation of computationally intensive models running in a cluster of controller nodes. Thus, the synchronization of the available information on the plasma and plant state variables among the controller nodes is a key issue for ITER. The ITER conceptual design aims to perform feedback control on a cluster of distributed controllers connected by a Synchronous Databus Network (SDN). Therefore it is mandatory to achieve a deterministic data exchange among the controller nodes with a refresh rate of at least 1 kHz and a jitter of no more than 50 μs. Thus, a conservative estimate of the data flow within the controller network can be 3 kSample/ms. In this paper the open source RTnet project is evaluated to meet the requirements of the SDN of ITER. A testbed involving a cluster of eight nodes connected over a standard ethernet network has been set up to simulate a distributed real-time control system. The main goal of the test is to verify the compliance of the performance with the ITER SDN requirements

  12. Nitrogen Fertilizer Source, Rates, and Timing for a Cover Crop and Subsequent Cotton Crop

    Science.gov (United States)

    The objectives were to compare N fertilizer sources, rates, and time of application for a rye winter cover crop to determine optimal biomass production for conservation tillage production, compare recommended and no additional N fertilizer rates across different biomass levels for cotton, and determ...

  13. Influence of starch source in the required hydrolysis time for the ...

    African Journals Online (AJOL)

    Influence of starch source in the required hydrolysis time for the production of maltodextrins with different dextrose equivalent. José Luis Montañez Soto, Luis Medina García, José Venegas González, Aurea Bernardino Nicanor, Leopoldo González Cruz ...

  14. The Space-, Time-, and Energy-distribution of Neutrons from a Pulsed Plane Source

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, Arne

    1962-05-15

    The space-, time- and energy-distribution of neutrons from a pulsed, plane, high energy source in an infinite medium is determined in a diffusion approximation. For simplicity the moderator is first assumed to be hydrogen gas but it is also shown that the method can be used for a moderator of arbitrary mass.

  15. The effect of interaural-time-difference fluctuations on apparent source width

    DEFF Research Database (Denmark)

    Käsbach, Johannes; May, Tobias; Oskarsdottir, Gudrun

    2014-01-01

    For the perception of spaciousness, the temporal fluctuations of the interaural time differences (ITDs) and interaural level differences (ILDs) provide important binaural cues. One major characteristic of spatial perception is apparent source width (ASW), which describes the perceived width of a ...

  16. Impacts of Reverberation Time, Absorption Location and Background Noise on Listening Conditions in Multi Source Environment

    DEFF Research Database (Denmark)

    Saher, Konca; Rindel, Jens Holger; Nijs, Lau

    2005-01-01

    index (STI) needs to be improved. The impact of the reverberation time (RT), the distribution of the absorptive materials and the introduction of a screen on STI are discussed briefly .However, these objective parameters have to be assessed through subjective judgement. Auralizations of the multi source...

  17. Mitigation of Cognitive Bias with a Serious Game: Two Experiments Testing Feedback Timing and Source

    Science.gov (United States)

    Dunbar, Norah E.; Jensen, Matthew L.; Miller, Claude H.; Bessarabova, Elena; Lee, Yu-Hao; Wilson, Scott N.; Elizondo, Javier; Adame, Bradley J.; Valacich, Joseph; Straub, Sara; Burgoon, Judee K.; Lane, Brianna; Piercy, Cameron W.; Wilson, David; King, Shawn; Vincent, Cindy; Schuetzler, Ryan M.

    2017-01-01

    One of the benefits of using digital games for education is that games can provide feedback for learners to assess their situation and correct their mistakes. We conducted two studies to examine the effectiveness of different feedback design (timing, duration, repeats, and feedback source) in a serious game designed to teach learners about…

  18. Sources and Timing of Sex Education: Relations with American Adolescent Sexual Attitudes and Behavior

    Science.gov (United States)

    Somers, Cheryl L.; Surmann, Amy T.

    2005-01-01

    The purpose of this study was to explore the comparative contribution that (a) multiple sources of education about sexual topics (peers, media, school and other adults), and (b) the timing of this sex education, make on American adolescent sexual attitudes and behavior. Participants were 672 ethnically and economically diverse male and female,…

  19. OpenPSTD : The open source pseudospectral time-domain method for acoustic propagation

    NARCIS (Netherlands)

    Hornikx, M.C.J.; Krijnen, T.F.; van Harten, L.

    2016-01-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in

  20. High-resolution and super stacking of time-reversal mirrors in locating seismic sources

    KAUST Repository

    Cao, Weiping

    2011-07-08

    Time reversal mirrors can be used to backpropagate and refocus incident wavefields to their actual source location, with the subsequent benefits of imaging with high-resolution and super-stacking properties. These benefits of time reversal mirrors have been previously verified with computer simulations and laboratory experiments but not with exploration-scale seismic data. We now demonstrate the high-resolution and the super-stacking properties in locating seismic sources with field seismic data that include multiple scattering. Tests on both synthetic data and field data show that a time reversal mirror has the potential to exceed the Rayleigh resolution limit by factors of 4 or more. Results also show that a time reversal mirror has a significant resilience to strong Gaussian noise and that accurate imaging of source locations from passive seismic data can be accomplished with traces having signal-to-noise ratios as low as 0.001. Synthetic tests also demonstrate that time reversal mirrors can sometimes enhance the signal by a factor proportional to the square root of the product of the number of traces, denoted as N, and the number of events in the traces. This enhancement property is denoted as super-stacking and greatly exceeds the classical signal-to-noise enhancement factor of √N. High-resolution and super-stacking are properties also enjoyed by seismic interferometry and reverse-time migration with the exact velocity model. © 2011 European Association of Geoscientists & Engineers.
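    The claimed super-stacking gain can be made concrete by comparing √(N·M) against the classical √N for assumed trace and event counts; the values of N and M below are illustrative assumptions, not figures from the paper.

```python
# Quick comparison of the super-stacking gain ~ sqrt(N * M) quoted above with
# the classical stacking gain ~ sqrt(N); N and M below are assumed values.
import math

N = 1000   # number of traces (assumed)
M = 25     # number of events per trace (assumed)
print(f"classical stacking gain : {math.sqrt(N):7.1f}")
print(f"super-stacking gain     : {math.sqrt(N * M):7.1f}")
```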

  1. When the facts are just not enough: credibly communicating about risk is riskier when emotions run high and time is short.

    Science.gov (United States)

    Reynolds, Barbara J

    2011-07-15

    When discussing risk with people, commonly subject matter experts believe that conveying the facts will be enough to allow people to assess a risk and respond rationally to that risk. Because of this expectation, experts often become exasperated by the seemingly illogical way people assess personal risk and choose to manage that risk. In crisis situations when the risk information is less defined and choices must be made within impossible time constraints, the thought processes may be even more susceptible to faulty heuristics. Understanding the perception of risk is essential to understanding why the public becomes more or less upset by events. This article explores the psychological underpinnings of risk assessment within emotionally laden events and the risk communication practices that may facilitate subject matter experts to provide the facts in a manner so they can be more certain those facts are being heard. Source credibility is foundational to risk communication practices. The public meeting is one example in which these best practices can be exercised. Risks are risky because risk perceptions differ and the psychosocial environment in which risk is discussed complicates making risk decisions. Experts who want to influence the actions of the public related to a threat or risk should understand that decisions often involve emotional as well as logical components. The media and other social entities will also influence the risk context. The Centers for Disease Control and Prevention's crisis and emergency-risk communication (CERC) principles are intended to increase credibility and recognize emotional components of an event. During a risk event, CERC works to calm emotions and increase trust, which can help people apply the expertise being offered by response officials. Copyright © 2011. Published by Elsevier Inc.

  2. When the facts are just not enough: Credibly communicating about risk is riskier when emotions run high and time is short

    International Nuclear Information System (INIS)

    Reynolds, Barbara J.

    2011-01-01

    When discussing risk with people, commonly subject matter experts believe that conveying the facts will be enough to allow people to assess a risk and respond rationally to that risk. Because of this expectation, experts often become exasperated by the seemingly illogical way people assess personal risk and choose to manage that risk. In crisis situations when the risk information is less defined and choices must be made within impossible time constraints, the thought processes may be even more susceptible to faulty heuristics. Understanding the perception of risk is essential to understanding why the public becomes more or less upset by events. This article explores the psychological underpinnings of risk assessment within emotionally laden events and the risk communication practices that may facilitate subject matter experts to provide the facts in a manner so they can be more certain those facts are being heard. Source credibility is foundational to risk communication practices. The public meeting is one example in which these best practices can be exercised. Risks are risky because risk perceptions differ and the psychosocial environment in which risk is discussed complicates making risk decisions. Experts who want to influence the actions of the public related to a threat or risk should understand that decisions often involve emotional as well as logical components. The media and other social entities will also influence the risk context. The Centers for Disease Control and Prevention's crisis and emergency-risk communication (CERC) principles are intended to increase credibility and recognize emotional components of an event. During a risk event, CERC works to calm emotions and increase trust, which can help people apply the expertise being offered by response officials.

  3. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    Science.gov (United States)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time of arrivals (TOAs) and time difference of arrivals (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimation of the location of transmitters using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational efficiency challenges that result in high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
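    As an illustration of the hyperbolic (TDOA) formulation discussed above, the sketch below solves the nonlinear TDOA equations with a generic least-squares routine; the receiver layout, propagation speed, and noise level are assumptions for illustration, not values from the review.

```python
# Sketch: 2-D TDOA source localization by nonlinear least squares.
# Receiver positions, propagation speed, and noise are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

c = 1500.0                                   # propagation speed [m/s] (assumed)
rx = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)  # receivers [m]
src_true = np.array([30.0, 70.0])            # true transmitter position [m]

# TDOAs relative to receiver 0 (with a little measurement noise).
ranges = np.linalg.norm(rx - src_true, axis=1)
tdoa = (ranges - ranges[0]) / c + np.random.default_rng(2).normal(0, 1e-5, 4)

def residuals(p):
    r = np.linalg.norm(rx - p, axis=1)
    return (r - r[0]) / c - tdoa             # predicted minus measured TDOAs

sol = least_squares(residuals, x0=np.array([50.0, 50.0]))
print("estimated source position:", sol.x)    # close to (30, 70)
```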

  4. 3D Multi‐source Least‐squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2010-10-17

    We present the theory and numerical results for least‐squares reverse time migration (LSRTM) of phase‐encoded supergathers, where each supergather is the superposition of phase‐encoded shots. Three types of encoding functions are used in this study: random time shift, random source polarity and random source location selected from a pre‐designed table. Numerical tests for the 3D SEG/EAGE Overthrust model show that multi‐source LSRTM can suppress migration artifacts in the migration image and remove most of the crosstalk noise from multi‐source data. Empirical results suggest that multi‐source LSRTM can provide a noticeable increase in computational efficiency compared to standard RTM, when the CSGs in a supergather are modeled and migrated together with a finite‐difference simulator. If the phase‐encoding functions are dynamically changed after each iteration of LSRTM, the best images are obtained. The potential drawback is that the final results are very sensitive to the accuracy of the starting model.
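    The encoding step described above (a random time shift plus a random polarity flip per shot) can be sketched in a few lines; the shot gathers below are random placeholders and the encoding table is an assumption, so this is only an illustration of the idea rather than the authors' implementation.

```python
# Sketch: assembling a phase-encoded supergather by summing shot gathers after
# a random time shift and random polarity flip per shot (one entry of an
# assumed encoding table). All data below are random placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_shots, n_receivers, n_time = 8, 64, 512
shots = rng.standard_normal((n_shots, n_receivers, n_time))   # CSGs (placeholder)

time_shifts = rng.integers(0, 100, size=n_shots)    # random shift [samples]
polarities = rng.choice([-1.0, 1.0], size=n_shots)  # random source polarity

supergather = np.zeros((n_receivers, n_time))
for s in range(n_shots):
    # np.roll wraps around; a real implementation would zero-pad instead.
    shifted = np.roll(shots[s], time_shifts[s], axis=-1)  # encode: time shift
    supergather += polarities[s] * shifted                # encode: polarity

print("supergather shape:", supergather.shape)
```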

  5. Using recruitment source timing and diagnosticity to enhance applicants' occupation-specific human capital.

    Science.gov (United States)

    Campion, Michael C; Ployhart, Robert E; Campion, Michael A

    2017-05-01

    [Correction Notice: An Erratum for this article was reported in Vol 102(5) of Journal of Applied Psychology (see record 2017-14296-001). In the article, the following headings were inadvertently set at the wrong level: Method, Participants and Procedure, Measures, Occupation specific human capital, Symbolic jobs, Relevant majors, Occupation-specific capital hotspots, Source timing, Source diagnosticity, Results, and Discussion. All versions of this article have been corrected.] This study proposes that reaching applicants through more diagnostic recruitment sources earlier in their educational development (e.g., in high school) can lead them to invest more in their occupation-specific human capital (OSHC), thereby making them higher quality candidates. Using a sample of 78,157 applicants applying for jobs within a desirable professional occupation in the public sector, results indicate that applicants who report hearing about the occupation earlier, and applicants who report hearing about the occupation through more diagnostic sources, have higher levels of OSHC upon application. Additionally, source timing and diagnosticity affect the likelihood of candidates applying for jobs symbolic of the occupation, selecting relevant majors, and attending educational institutions with top programs related to the occupation. These findings suggest a firm's recruiting efforts may influence applicants' OSHC investment strategies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    Science.gov (United States)

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up the research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies the development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
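    The beamforming stage at the front of such a software pipeline reduces, in its simplest delay-and-sum form, to computing per-element propagation delays and summing the delayed channel signals. The sketch below uses synthetic channel data and an assumed array geometry, and is far simpler than SUPRA's actual processing chain.

```python
# Minimal delay-and-sum beamforming sketch for one image line from raw channel
# data. Array geometry, sampling rate, and channel data are synthetic
# assumptions, much simpler than a full software ultrasound pipeline.
import numpy as np

fs, c = 40e6, 1540.0                      # sampling rate [Hz], sound speed [m/s]
n_elem, n_samp = 64, 2048
pitch = 0.3e-3                            # element pitch [m] (assumed)
x_elem = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch

rng = np.random.default_rng(4)
channels = rng.standard_normal((n_elem, n_samp))   # placeholder RF channel data

depths = np.arange(1, n_samp // 2) * c / (2 * fs)  # imaging depths for the line
line = np.zeros(depths.size)
for i, z in enumerate(depths):
    # Two-way delay: transmit straight down to depth z, receive back to element.
    delays = (z + np.sqrt(z**2 + x_elem**2)) / c
    idx = np.round(delays * fs).astype(int)
    valid = idx < n_samp
    line[i] = channels[valid, idx[valid]].sum()     # delay-and-sum

envelope = np.abs(line)                             # crude B-mode amplitude
print("beamformed line samples:", envelope.size)
```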

  7. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    Science.gov (United States)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km from the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show consistent travel times and amplitude behavior as those predicted by the 1D model. Due to limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies, however, are consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a way for the new-model development when more data are collected.

  8. Overcoming the "Run" Response

    Science.gov (United States)

    Swanson, Patricia E.

    2013-01-01

    Recent research suggests that it is not simply experiencing anxiety that affects mathematics performance but also how one responds to and regulates that anxiety (Lyons and Beilock 2011). Most people have faced mathematics problems that have triggered their "run response." The issue is not whether one wants to run, but rather…

  9. Overuse injuries in running

    DEFF Research Database (Denmark)

    Larsen, Lars Henrik; Rasmussen, Sten; Jørgensen, Jens Erik

    2016-01-01

    What is an overuse injury in running? This question is a corner stone of clinical documentation and research based evidence.

  10. PRECIS Runs at IITM

    Indian Academy of Sciences (India)

    PRECIS Runs at IITM. Evaluation experiment using LBCs derived from ERA-15 (1979-93). Runs (3 ensembles in each experiment) already completed with LBCs having a length of 30 years each, for: Baseline (1961-90); A2 scenario (2071-2100); B2 scenario ...

  11. Evaluating scintillator performance in time-resolved hard X-ray studies at synchrotron light sources

    International Nuclear Information System (INIS)

    Rutherford, Michael E.; Chapman, David J.; White, Thomas G.; Drakopoulos, Michael; Rack, Alexander; Eakins, Daniel E.

    2016-01-01

    Scintillator performance in time-resolved, hard, indirect detection X-ray studies on the sub-microsecond timescale at synchrotron light sources is reviewed, modelled and examined experimentally. LYSO:Ce is found to be the only commercially available crystal suitable for these experiments. The short pulse duration, small effective source size and high flux of synchrotron radiation are ideally suited for probing a wide range of transient deformation processes in materials under extreme conditions. In this paper, the challenges of high-resolution time-resolved indirect X-ray detection are reviewed in the context of dynamic synchrotron experiments. In particular, the discussion is targeted at two-dimensional integrating detector methods, such as those focused on dynamic radiography and diffraction experiments. The response of a scintillator to periodic synchrotron X-ray excitation is modelled and validated against experimental data collected at the Diamond Light Source (DLS) and European Synchrotron Radiation Facility (ESRF). An upper bound on the dynamic range accessible in a time-resolved experiment for a given bunch separation is calculated for a range of scintillators. New bunch structures are suggested for DLS and ESRF using the highest-performing commercially available crystal LYSO:Ce, allowing time-resolved experiments with an interframe time of 189 ns and a maximum dynamic range of 98 (6.6 bits).

  12. Evaluating scintillator performance in time-resolved hard X-ray studies at synchrotron light sources

    Energy Technology Data Exchange (ETDEWEB)

    Rutherford, Michael E.; Chapman, David J.; White, Thomas G. [Imperial College London, London (United Kingdom); Drakopoulos, Michael [Diamond Light Source, I12 Joint Engineering, Environmental, Processing (JEEP) Beamline, Didcot, Oxfordshire (United Kingdom); Rack, Alexander [European Synchrotron Radiation Facility, Grenoble (France); Eakins, Daniel E., E-mail: d.eakins@imperial.ac.uk [Imperial College London, London (United Kingdom)

    2016-03-24

    Scintillator performance in time-resolved, hard, indirect detection X-ray studies on the sub-microsecond timescale at synchrotron light sources is reviewed, modelled and examined experimentally. LYSO:Ce is found to be the only commercially available crystal suitable for these experiments. The short pulse duration, small effective source size and high flux of synchrotron radiation are ideally suited for probing a wide range of transient deformation processes in materials under extreme conditions. In this paper, the challenges of high-resolution time-resolved indirect X-ray detection are reviewed in the context of dynamic synchrotron experiments. In particular, the discussion is targeted at two-dimensional integrating detector methods, such as those focused on dynamic radiography and diffraction experiments. The response of a scintillator to periodic synchrotron X-ray excitation is modelled and validated against experimental data collected at the Diamond Light Source (DLS) and European Synchrotron Radiation Facility (ESRF). An upper bound on the dynamic range accessible in a time-resolved experiment for a given bunch separation is calculated for a range of scintillators. New bunch structures are suggested for DLS and ESRF using the highest-performing commercially available crystal LYSO:Ce, allowing time-resolved experiments with an interframe time of 189 ns and a maximum dynamic range of 98 (6.6 bits).

  13. Blind Separation of Nonstationary Sources Based on Spatial Time-Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Zhang Yimin

    2006-01-01

    Full Text Available Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over blind source separation methods based on second-order statistics, when dealing with signals that are localized in the time-frequency (t-f) domain. In this paper, we propose the use of STFD matrices for both whitening and recovery of the mixing matrix, which are two stages commonly required in many BSS methods, to provide robust BSS performance to noise. In addition, a simple method is proposed to select the auto- and cross-term regions of the time-frequency distribution (TFD). To further improve the BSS performance, t-f grouping techniques are introduced to reduce the number of signals under consideration, and to allow the receiver array to separate more sources than the number of array sensors, provided that the sources have disjoint t-f signatures.  With the use of one or more techniques proposed in this paper, improved performance of blind separation of nonstationary signals can be achieved.
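
    As a toy illustration of two ingredients of this approach, the sketch below forms a whitening matrix from the sample covariance of the sensor mixtures and builds a spatial time-frequency matrix by averaging STFT outer products over high-energy (auto-term) points. The full method joint-diagonalizes several such matrices to recover the mixing matrix; that step, the auto-/cross-term selection rule and all sizes used here are simplifications, not the authors' algorithm.

```python
# Toy sketch: whitening from the sample covariance plus one spatial
# time-frequency matrix built from STFT coefficients at high-energy points.
# Joint diagonalization of several such matrices (the actual BSS step) is omitted.
import numpy as np
from scipy.signal import stft

def whitening_matrix(x):
    """x: (num_sensors, num_samples) mixture; returns a whitening matrix."""
    r = np.cov(x)
    eigval, eigvec = np.linalg.eigh(r)
    return eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T

def spatial_tf_matrix(x, fs=1.0):
    """Average outer products of STFT vectors over crude auto-term points."""
    _, _, z = stft(x, fs=fs, nperseg=128)      # shape: (sensors, freqs, times)
    energy = np.sum(np.abs(z) ** 2, axis=0)
    mask = energy > 0.5 * energy.max()         # crude auto-term selection
    vecs = z[:, mask]                          # (sensors, selected t-f points)
    return (vecs @ vecs.conj().T) / vecs.shape[1]

x = np.random.randn(4, 4096)                   # 4-sensor example mixture
d = spatial_tf_matrix(whitening_matrix(x) @ x)
```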

  14. Winter Annual Weed Response to Nitrogen Sources and Application Timings prior to a Burndown Corn Herbicide

    Directory of Open Access Journals (Sweden)

    Kelly A. Nelson

    2015-01-01

    Full Text Available Autumn and early preplant N applications, sources, and placement may affect winter annual weed growth. Field research evaluated (1) the effect of different nitrogen sources in autumn and early preplant on total winter annual weed growth (2006–2010), and (2) strip-till and broadcast no-till N applied in autumn and early preplant on henbit (Lamium amplexicaule L.) growth (2008–2010) prior to a burndown herbicide application. Total winter annual weed biomass was greater than the nontreated control when applying certain N sources in autumn or early preplant for no-till corn. Anhydrous ammonia had the lowest average weed density (95 weeds m−2), though results were inconsistent over the years. Winter annual weed biomass was lowest (43 g m−2) when applying 32% urea ammonium nitrate in autumn and was similar to applying anhydrous ammonia in autumn or early preplant and the nontreated control. Henbit biomass was 28% greater when applying N in the autumn compared to an early preplant application timing. Nitrogen placement along with associated tillage with strip-till placement was important in reducing henbit biomass. Nitrogen source selection, application timing, and placement affected the impact of N on winter annual weed growth and should be considered when recommending a burndown herbicide application timing.

  15. The space-time outside a source of gravitational radiation: the axially symmetric null fluid

    Energy Technology Data Exchange (ETDEWEB)

    Herrera, L. [Universidad Central de Venezuela, Escuela de Fisica, Facultad de Ciencias, Caracas (Venezuela, Bolivarian Republic of); Universidad de Salamanca, Instituto Universitario de Fisica Fundamental y Matematicas, Salamanca (Spain); Di Prisco, A. [Universidad Central de Venezuela, Escuela de Fisica, Facultad de Ciencias, Caracas (Venezuela, Bolivarian Republic of); Ospino, J. [Universidad de Salamanca, Departamento de Matematica Aplicada and Instituto Universitario de Fisica Fundamental y Matematicas, Salamanca (Spain)

    2016-11-15

    We carry out a study of the exterior of an axially and reflection symmetric source of gravitational radiation. The exterior of such a source is filled with a null fluid produced by the dissipative processes inherent to the emission of gravitational radiation, thereby representing a generalization of the Vaidya metric for axially and reflection symmetric space-times. The role of the vorticity, and its relationship with the presence of gravitational radiation, is brought out. The spherically symmetric case (Vaidya) is, asymptotically, recovered within the context of the 1 + 3 formalism. (orig.)

  16. Real-time implementation of logo detection on open source BeagleBoard

    Science.gov (United States)

    George, M.; Kehtarnavaz, N.; Estevez, L.

    2011-03-01

    This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.
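
    The descriptor stage of such a SIFT-plus-clustering detector can be sketched as below: extract SIFT descriptors from a reference logo image and cluster them into a small visual vocabulary that can later be matched against video frames. This is a generic sketch, not the authors' optimized BeagleBoard code; SIFT availability depends on the OpenCV build (cv2.SIFT_create in recent versions), and the vocabulary size and k-means settings are arbitrary choices.

```python
# Generic sketch of the SIFT + k-means vocabulary step of a logo detector.
# Assumes an OpenCV build that provides cv2.SIFT_create (OpenCV >= 4.4).
import cv2
import numpy as np

def logo_vocabulary(logo_bgr, num_clusters=16):
    gray = cv2.cvtColor(logo_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)   # SIFT descriptors of the logo
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(np.float32(descriptors), num_clusters, None,
                               criteria, 5, cv2.KMEANS_PP_CENTERS)
    return centers   # cluster centres summarise the logo's local appearance

# Usage (illustrative): vocab = logo_vocabulary(cv2.imread("logo.png"))
```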

  17. The Dynamic Method for Time-of-Flight Measurement of Thermal Neutron Spectra from Pulsed Sources

    International Nuclear Information System (INIS)

    Pepelyshev, Yu.N.; Tulaev, A.B.; Bobrakov, V.F.

    1994-01-01

    The time-of-flight method for a measurement of thermal neutron spectra in pulsed neutron sources with a high efficiency of neutron registration, more than 10^5 times higher than that of the traditional method, is described. The main problems connected with the electric current technique for time-of-flight spectra measurement are examined. The methodical errors, problems of a special neutron detector design and other questions are discussed. Some experimental results, spectra from surfaces of the water and solid methane moderators, obtained in the pulsed reactor IBR-2 (Dubna, Russia) are presented. 4 refs., 5 figs

  18. The dynamic method for time-of-flight measurement of thermal neutron spectra from pulsed sources

    International Nuclear Information System (INIS)

    Pepyolyshev, Yu.N.; Chuklyaev, S.V.; Tulaev, A.B.; Bobrakov, V.F.

    1995-01-01

    A time-of-flight method for measurement of thermal neutron spectra in pulsed neutron sources with an efficiency more than 10^5 times higher than the standard method is described. The main problems associated with the electric current technique for time-of-flight spectra measurement are examined. The methodical errors, problems of special neutron detector design and other questions are discussed. Some experimental results for spectra from the surfaces of water and solid methane moderators obtained at the IBR-2 pulsed reactor (Dubna, Russia) are presented. (orig.)

  19. Of faeces and sweat. How much a mouse is willing to run: having a hard time measuring spontaneous physical activity in different mouse sub-strains

    Directory of Open Access Journals (Sweden)

    Dario Coletti

    2017-03-01

    Full Text Available Physical activity has multiple beneficial effects in the physiology and pathology of the organism. In particular, we and other groups have shown that running counteracts cancer cachexia in both humans and rodents. The latter are prone to exercise in wheel-equipped cages even at advanced stages of cachexia. However, when we wanted to replicate the experimental model routinely used at the University of Rome in a different laboratory (i.e. at Paris 6 University), we had to struggle with puzzling results due to unpredicted mouse behavior. Here we report the experience and offer the explanation underlying these apparently irreproducible results. The original data are currently used for teaching purposes in undergraduate student classes of biological sciences.

  20. IMPLEMENTING FISCAL OR MONETARY POLICY IN TIME OF CRISIS? RUNNING GRANGER CAUSALITY TO TEST THE PHILLIPS CURVE IN SOME EURO ZONE COUNTRIES

    Directory of Open Access Journals (Sweden)

    Nico Gianluigi

    2014-12-01

    Full Text Available This paper aims to provide empirical evidence about the theoretical relationship between inflation and unemployment in 9 European countries. Based on two major goals for economic policymakers, namely to keep both inflation and unemployment low, we use the ingredients of the Phillips curve to orient fiscal and monetary policies. These policies are prerogative for the achievement of a desirable combination of unemployment and inflation. In more detail, we attempt to address two basic issues. One strand of the study examines the size and sign of the impact of the unemployment rate on percentage changes in inflation. In our preferred econometric model, we have made explicit the evidence according to which a one unit (%) increase in unemployment reduces inflation by roughly 0.73 percent, on average. Next, we turn to the question concerning the causal link between inflation and unemployment and we derive a policy framework that can orient European policymakers in the implementation of either fiscal or monetary policy. In this context, by means of the Granger causality test, we mainly find evidence of a directional causality which runs from inflation to unemployment in 4 out of 9 European countries under analysis. This result implies that political authorities of Austria, Belgium, Germany and Italy should implement monetary policy in order to achieve pre-established targets of unemployment and inflation. In the same context, a directional causality running from unemployment to inflation has been found in France and Cyprus suggesting that a reduction in the unemployment level can be achieved through controlling fiscal policy. However, succeeding in this goal may lead to an increasing demand for goods and services which, in turn, might cause a higher inflation than expected. Finally, while there is no statistical evidence of a causal link between unemployment and inflation in Finland and Greece, a bidirectional causality has been found in Estonia. This
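
    The direction-of-causality test described above can be run in a few lines with standard time-series tooling. The sketch below is illustrative only: it assumes two already-aligned series for inflation and unemployment are available as pandas Series, and the lag order is a placeholder rather than the value used in the study.

```python
# Illustrative Granger causality check: does inflation help predict unemployment?
# Data source, alignment and lag order are placeholder assumptions.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def granger_p_value(inflation, unemployment, maxlag=4):
    # Column order matters: the test asks whether the SECOND column
    # Granger-causes the FIRST one.
    data = pd.concat([unemployment, inflation], axis=1).dropna()
    results = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    # p-value of the F-test at the chosen maximum lag.
    return results[maxlag][0]["ssr_ftest"][1]

# A small p-value would be read as "inflation Granger-causes unemployment";
# swapping the arguments tests the opposite direction.
```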

  1. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study the simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed by the different output service schemes.

  2. The first synchrotron infrared beamlines at the Advanced Light Source: Microspectroscopy and fast timing

    International Nuclear Information System (INIS)

    Martin, M.C.; McKinney, W.R.

    1998-05-01

    A set of new infrared (IR) beamlines on the 1.4 bending magnet port at the Advanced Light Source, LBNL, is described. Using a synchrotron as an IR source provides considerable brightness advantages, which manifest themselves most beneficially when performing spectroscopy on a microscopic length scale. Beamline (BL) 1.4.3 is a dedicated microspectroscopy beamline, where the much smaller focused spot size using the synchrotron source is utilized. This enables an entirely new set of experiments to be performed where spectroscopy on a truly microscopic scale is now possible. BL 1.4.2 consists of a vacuum FTIR bench with a wide spectral range and step-scan capabilities. The fast timing is demonstrated by observing the synchrotron electron storage pattern at the ALS

  3. A new time-space accounting scheme to predict stream water residence time and hydrograph source components at the watershed scale

    Science.gov (United States)

    Takahiro Sayama; Jeffrey J. McDonnell

    2009-01-01

    Hydrograph source components and stream water residence time are fundamental behavioral descriptors of watersheds but, as yet, are poorly represented in most rainfall-runoff models. We present a new time-space accounting scheme (T-SAS) to simulate the pre-event and event water fractions, mean residence time, and spatial source of streamflow at the watershed scale. We...

  4. SLStudio: Open-source framework for real-time structured light

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    An open-source framework for real-time structured light is presented. It is called “SLStudio”, and enables real-time capture of metric depth images. The framework is modular, and extensible to support new algorithms for scene encoding/decoding, triangulation, and acquisition hardware. It is the aim that this software makes real-time 3D scene capture more widely accessible and serves as a foundation for new structured light scanners operating in real-time, e.g. 20 depth images per second and more. The use cases for such scanners are plentiful, however due to the computational constraints, all public implementations so far are limited to offline processing. With “SLStudio”, we are making a platform available which enables researchers from many different fields to build application specific real-time 3D scanners. The software is hosted at http://compute.dtu.dk/~jakw/slstudio.

  5. Source apportionment of the summer time carbonaceous aerosol at Nordic rural background sites

    Directory of Open Access Journals (Sweden)

    K. E. Yttri

    2011-12-01

    Full Text Available In the present study, natural and anthropogenic sources of particulate organic carbon (OCp) and elemental carbon (EC) have been quantified based on weekly filter samples of PM10 (particles with aerodynamic diameter <10 μm) collected at four Nordic rural background sites [Birkenes (Norway), Hyytiälä (Finland), Vavihill (Sweden), Lille Valby (Denmark)] during late summer (5 August–2 September 2009). Levels of source specific tracers, i.e. cellulose, levoglucosan, mannitol and the 14C/12C ratio of total carbon (TC), have been used as input for source apportionment of the carbonaceous aerosol, whereas Latin Hypercube Sampling (LHS) was used to statistically treat the multitude of possible combinations resulting from this approach. The carbonaceous aerosol (here: TCp; i.e. particulate TC) was totally dominated by natural sources (69–86%), with biogenic secondary organic aerosol (BSOA) being the single most important source (48–57%). Interestingly, primary biological aerosol particles (PBAP) were the second most important source (20–32%). The anthropogenic contribution was mainly attributed to fossil fuel sources (OCff and ECff) (10–24%), whereas no more than 3–7% was explained by combustion of biomass (OCbb and ECbb) in this late summer campaign, i.e. emissions from residential wood burning and/or wild/agricultural fires. Fossil fuel sources totally dominated the ambient EC loading, which accounted for 4–12% of TCp, whereas <1.5% of EC was attributed to combustion of biomass. The carbonaceous aerosol source apportionment showed only minor variation between the four selected sites. However, Hyytiälä and Birkenes showed greater resemblance to each other, as did Lille Valby and Vavihill, the two latter being somewhat more influenced by anthropogenic sources. Ambient levels of organosulphates and nitrooxy-organosulphates in the Nordic rural

  6. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause error in the source location. To overcome such a problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving in both space and time axis to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which reveals that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.
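
    The maximum-variance imaging condition described above can be sketched directly: after back-propagating the recorded traces, compute the amplitude variance in a small window that moves over space and time, and keep the largest value at each image point. The window sizes, array shapes and function name below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a maximum-variance imaging condition for passive-source
# (microseismic) imaging. Window sizes are illustrative only.
import numpy as np

def max_variance_image(wavefield, twin=11, zwin=5, xwin=5):
    """wavefield: back-propagated field with shape (nt, nz, nx)."""
    nt, nz, nx = wavefield.shape
    image = np.zeros((nz, nx))
    for iz in range(nz - zwin):
        for ix in range(nx - xwin):
            patch = wavefield[:, iz:iz + zwin, ix:ix + xwin]
            # Variance over sliding time windows; keep the largest one.
            variances = [patch[it:it + twin].var() for it in range(0, nt - twin, twin)]
            image[iz, ix] = max(variances)
    return image   # the passive source maps to the peak of this image
```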

  7. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    Science.gov (United States)

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
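
    The search phase can be illustrated with a bare-bones particle swarm update in which the fitness of a candidate position is read from an estimated odor-source probability map, standing in for the paper's Bayesian/fuzzy estimation stage. All constants, the random map and the grid size below are generic placeholders, not values from the paper.

```python
# Bare-bones PSO over an estimated odor-source probability map (placeholder data).
import numpy as np

rng = np.random.default_rng(0)
prob_map = rng.random((50, 50))             # placeholder estimated probability map
pos = rng.uniform(0, 49, size=(10, 2))      # 10 robots / particles
vel = np.zeros_like(pos)
pbest = pos.copy()
gbest = pos[0].copy()

def fitness(p):
    i, j = np.clip(p.astype(int), 0, 49)
    return prob_map[i, j]                   # estimated source probability at p

for _ in range(100):
    for k in range(len(pos)):
        r1, r2 = rng.random(2)
        vel[k] = 0.7 * vel[k] + 1.5 * r1 * (pbest[k] - pos[k]) + 1.5 * r2 * (gbest - pos[k])
        pos[k] = np.clip(pos[k] + vel[k], 0, 49)
        if fitness(pos[k]) > fitness(pbest[k]):
            pbest[k] = pos[k].copy()
        if fitness(pos[k]) > fitness(gbest):
            gbest = pos[k].copy()
# gbest now approximates the most probable odor-source location in the map.
```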

  8. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.

    2017-05-26

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause error in the source location. To overcome such a problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving in both space and time axis to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which reveals that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.

  9. Real time measurements of submicrometer aerosols in Seoul, Korea: Sources, characteristics, and processing of organic aerosols during winter time.

    Science.gov (United States)

    Kim, H.; Zhang, Q.

    2016-12-01

    Highly time-resolved chemical characterization of non-refractory submicrometer particulate matter (NR-PM1) was conducted in Seoul, the capital of Korea, using an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS). The measurements were performed during winter when persistent air quality problems associated with elevated PM concentrations were observed. The average NR-PM1 concentration was 27.5 µg m-3 and the average mass was dominated by organics (44%), followed by nitrate (24%) and sulfate (10%). Five distinct sources of organic aerosol (OA) were identified from positive matrix factorization (PMF) analysis of the AMS data: vehicle emissions represented by a hydrocarbon-like OA factor (HOA), cooking represented by a cooking OA factor (COA), wood combustion represented by a biomass burning OA factor (BBOA), and secondary aerosol formation in the atmosphere that is represented by a semi-volatile oxygenated OA factor (SVOOA) and a low volatile oxygenated OA factor (LVOOA). These factors, on average, contributed 16, 20, 23, 15 and 26% to the total OA mass, respectively, with primary organic aerosol (POA = HOA + COA + BBOA) accounting for 59% of the OA mass. On average, both primary emissions and secondary aerosol formation are important factors affecting air quality in Seoul during winter, contributing approximately equally. However, differences in the fractions of PM sources and their properties were observed between high and low PM loading periods. For example, during stagnant periods with low wind speed (WS) (0.99 ± 0.7 m/s) and high RH (71%), high PM loadings (43.6 ± 12.4 µg m-3) with enhanced fractions of nitrate (27%) and SVOOA (8%) were observed, indicating a strong influence from locally generated secondary aerosol. On the other hand, when low PM loadings (12.6 ± 7.1 µg m-3), which were commonly associated with high WS (1.8 ± 1.1 m/s) and low RH (50 %), were observed, the fraction of regional sources, such as sulfate (12%) and LVOOA (21
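
    The factor-analytic step can be illustrated with a simple non-negative matrix factorization of an AMS-like organic spectra matrix. Note this is only a stand-in for PMF: PMF additionally weights residuals by measurement uncertainties, which plain NMF does not, and the matrix shapes and factor count below are arbitrary assumptions.

```python
# NMF as a simplified stand-in for PMF source apportionment of AMS organic spectra.
# Shapes, factor count and the random data are illustrative only.
import numpy as np
from sklearn.decomposition import NMF

spectra = np.abs(np.random.randn(1000, 120))    # time steps x m/z channels (placeholder)
model = NMF(n_components=5, init="nndsvda", max_iter=500)
contributions = model.fit_transform(spectra)    # factor time series (e.g. HOA, COA, ...)
profiles = model.components_                    # factor mass spectral profiles
```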

  10. Validation of the direct analysis in real time source for use in forensic drug screening.

    Science.gov (United States)

    Steiner, Robert R; Larson, Robyn L

    2009-05-01

    The Direct Analysis in Real Time (DART) ion source is a relatively new mass spectrometry technique that is seeing widespread use in chemical analyses world-wide. DART studies include such diverse topics as analysis of flavors and fragrances, melamine in contaminated dog food, differentiation of writing inks, characterization of solid counterfeit drugs, and as a detector for planar chromatography. Validation of this new technique for the rapid screening of forensic evidence for drugs of abuse, utilizing the DART source coupled to an accurate mass time-of-flight mass spectrometer, was conducted. The study consisted of the determination of the lower limit of detection for the method, determination of selectivity and a comparison of this technique to established analytical protocols. Examples of DART spectra are included. The results of this study have allowed the Virginia Department of Forensic Science to incorporate this new technique into their analysis scheme for the screening of solid dosage forms of drugs of abuse.

  11. A compact time-of-flight mass spectrometer for ion source characterization

    International Nuclear Information System (INIS)

    Chen, L.; Wan, X.; Jin, D. Z.; Tan, X. H.; Huang, Z. X.; Tan, G. B.

    2015-01-01

    A compact time-of-flight mass spectrometer with overall dimensions of about 413 × 250 × 414 mm, based on orthogonal injection and angle reflection, has been developed for ion source characterization. The configuration and principle of the time-of-flight mass spectrometer are introduced in this paper. The mass resolution is optimized to be about 1690 (FWHM), and the ion energy detection range is tested to be between about 3 and 163 eV with the help of an electron impact ion source. The high mass resolution and compact configuration make this spectrometer a valuable diagnostic for fundamental ion spectra research and for studying the mass-to-charge composition of plasmas over a wide range of parameters

  12. Triple GEM gas detectors as real time fast neutron beam monitors for spallation neutron sources

    International Nuclear Information System (INIS)

    Murtas, F; Claps, G; Croci, G; Tardocchi, M; Pietropaolo, A; Cippo, E Perelli; Rebai, M; Gorini, G; Frost, C D; Raspino, D; Rhodes, N J; Schooneveld, E M

    2012-01-01

    A fast neutron beam monitor based on a triple Gas Electron Multiplier (GEM) detector was developed and tested for the ISIS spallation neutron source in the UK. The beam test was performed at the VESUVIO beam line operating at ISIS. The 2D fast neutron beam footprint was recorded in real time with a spatial resolution of a few millimeters thanks to the patterned detector readout.

  13. Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure

    Science.gov (United States)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-off in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating STF, but it requires a strict recording condition. Waveforms from pairs of events that are similar in focal mechanism, but different in magnitude must be on-scale recorded on the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, if having a sufficiently broad frequency band, can be used to estimate the STF of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real time event-screening process.
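
    The "sdc" spikiness measure described above translates almost directly into code: the peak of the deconvolution trace divided by the mean absolute value of the trace, excluding 10 s around the source time function. The sketch below assumes the peak index is already known and uses an illustrative sampling rate.

```python
# Direct transcription of the "sdc" measure: peak amplitude divided by the mean
# absolute background, excluding 10 s around the source time function.
import numpy as np

def sdc(deconvolution, peak_index, fs=20.0, exclude_s=10.0):
    half = int(exclude_s / 2 * fs)                       # samples excluded on each side
    mask = np.ones(len(deconvolution), dtype=bool)
    mask[max(0, peak_index - half):peak_index + half] = False
    background = np.mean(np.abs(deconvolution[mask]))
    return np.abs(deconvolution[peak_index]) / background

# Following the abstract, a deconvolution with sdc of about 10 or higher would be
# treated as sufficiently pulse-like to estimate the source time function.
```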

  14. eBooking of beam-time over internet for beamlines of Indus synchrotron radiation sources

    International Nuclear Information System (INIS)

    Jain, Alok; Verma, Rajesh; Rajan, Alpana; Modi, M.H.; Rawat, Anil

    2015-01-01

    Users from various research labs and academic institutes carry out experiments on beamlines of the two Synchrotron Radiation Sources Indus-1 and Indus-2 available at RRCAT, Indore. To carry out experimental work on beamlines of both synchrotron radiation sources, beam-time is booked over the Internet by beamline users through a user portal designed, developed and deployed over the Internet. This portal has made the process of beam-time booking fast, hassle-free and paperless, as manual booking of beam-time for carrying out an experiment on a particular beamline is cumbersome. The portal enables the in-charges of the Indus-1 and Indus-2 beamlines to keep track of users' records, work progress and other activities linked to experiments carried out on the beamlines. It is important to keep records and provide statistics about the usage of the beamlines from time to time. The user portal for e-booking of beam-time has been developed in-house using open source software development tools. Multi-step activities of users and beamline administrators are workflow based, with seamless flow of information across various modules and fully authenticated using a role based mechanism for the different roles of software usage. The software has been in regular use since November 2013 and has helped beamline in-charges in efficiently managing various activities related to user registration, booking of beam-time, booking of the Guest House, generation of security permits, user feedback, etc. The design concept, role based authentication mechanism and features provided by the web portal are discussed in detail in this paper. (author)

  15. openPSTD: The open source pseudospectral time-domain method for acoustic propagation

    Science.gov (United States)

    Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis

    2016-06-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage as it allows spatial sampling close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modelled as a composition of rectangular two-dimensional subdomains, initially restricting the implementation to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to Object-Oriented Programming best practices and allows room for further computational parallelization. The software is built using the open source components Blender, Numpy and Python, and has itself been published under an open source license. To accelerate the software, an option has been included to perform part of the calculations on the Graphics Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
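
    The core idea behind the Fourier pseudospectral approach is that spatial derivatives are evaluated by multiplying with ik in the wavenumber domain, which is what permits sampling close to the Nyquist limit. The one-dimensional sketch below illustrates this on a simplified, unit-density, periodic acoustic system; the grid size, time step and update scheme are illustrative and much simpler than what openPSTD implements (no subdomains, boundaries or attenuation).

```python
# 1D illustration of the Fourier pseudospectral idea: spectral spatial derivatives
# driving a simple first-order acoustic update on a periodic domain.
import numpy as np

n, dx, c, dt = 256, 0.1, 343.0, 1e-4
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)          # wavenumber axis

def ddx(field):
    """Spectral first derivative of a periodic 1D field."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(field)))

x = np.arange(n) * dx
p = np.exp(-((x - n * dx / 2) ** 2))               # initial pressure pulse
v = np.zeros(n)                                    # particle velocity
for _ in range(500):                               # simple staggered update
    v -= dt * ddx(p)
    p -= dt * c ** 2 * ddx(v)
```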

  16. Set up and programming of an ALICE Time-Of-Flight trigger facility and software implementation for its Quality Assurance (QA) during LHC Run 2

    CERN Document Server

    Toschi, Francesco

    2016-01-01

    The Cosmic and Topology Trigger Module (CTTM) is the main component of a trigger based on the ALICE TOF detector. Taking advantage of the TOF fast response, this VME board implements the trigger logic and delivers several L0 trigger outputs, used since Run 1, to provide cosmic triggers and rare triggers in pp, p+Pb and Pb+Pb data taking. Due to a TOF DCS architectural change of the PCs controlling the CTTM (from 32 bit to 64 bit), it is mandatory to upgrade the software related to the CTTM, including the code programming the FPGA firmware. A dedicated CTTM board will be installed in a CERN lab (Meyrin site), with the aim of recreating the electronics chain of the TOF trigger, to allow a comfortable porting of the code to the 64 bit environment. The project proposed to the summer student is the setting up of the CTTM and the porting of the software. Moreover, in order to monitor the CTTM trigger board during real data taking, the implementation of a new Quality Assurance (QA) code is also crucial, together wit...

  17. Source detection at 100 meter standoff with a time-encoded imaging system

    International Nuclear Information System (INIS)

    Brennan, J.; Brubaker, E.; Gerling, M.; Marleau, P.; Monterial, M.

    2017-01-01

    Here, we present the design, characterization, and testing of a laboratory prototype radiological search and localization system. The system, based on time-encoded imaging, uses the attenuation signature of neutrons in time, induced by the geometrical layout and motion of the system. We have demonstrated the ability to detect a ~1 mCi 252Cf radiological source at 100 m standoff with 90% detection efficiency and 10% false positives against background in 12 min. As a result, this same detection efficiency is met at 15 s for a 40 m standoff, and 1.2 s for a 20 m standoff.

  18. Implications on 1 + 1 D Tsunami Runup Modeling due to Time Features of the Earthquake Source

    Science.gov (United States)

    Fuentes, M.; Riquelme, S.; Ruiz, J.; Campos, J.

    2018-04-01

    The time characteristics of the seismic source are usually neglected in tsunami modeling, due to the difference in the time scale of both processes. Nonetheless, there are just a few analytical studies that have attempted to explain separately the role of the rise time and the rupture velocity. In this work, we extend an analytical 1 + 1 D solution for the shoreline motion time series, from the static case to the kinematic case, by including both rise time and rupture velocity. Our results show that the static case corresponds to a limit case of null rise time and infinite rupture velocity. Both parameters contribute to shifting the arrival time, but maximum runup may be affected by very slow ruptures and long rise time. Parametric analysis reveals that runup is strictly decreasing with the rise time, while it is highly amplified in a certain range of slow rupture velocities. For even lower rupture velocities the tsunami excitation vanishes, while for larger, quicker ruptures it approaches the instantaneous case.

  19. Implications on 1 + 1 D Tsunami Runup Modeling due to Time Features of the Earthquake Source

    Science.gov (United States)

    Fuentes, M.; Riquelme, S.; Ruiz, J.; Campos, J.

    2018-02-01

    The time characteristics of the seismic source are usually neglected in tsunami modeling, due to the difference in the time scale of both processes. Nonetheless, there are just a few analytical studies that have attempted to explain separately the role of the rise time and the rupture velocity. In this work, we extend an analytical 1 + 1 D solution for the shoreline motion time series, from the static case to the kinematic case, by including both rise time and rupture velocity. Our results show that the static case corresponds to a limit case of null rise time and infinite rupture velocity. Both parameters contribute to shifting the arrival time, but maximum runup may be affected by very slow ruptures and long rise time. Parametric analysis reveals that runup is strictly decreasing with the rise time, while it is highly amplified in a certain range of slow rupture velocities. For even lower rupture velocities the tsunami excitation vanishes, while for larger, quicker ruptures it approaches the instantaneous case.

  20. RUNNING INJURY DEVELOPMENT

    DEFF Research Database (Denmark)

    Johansen, Karen Krogh; Hulme, Adam; Damsted, Camma

    2017-01-01

    BACKGROUND: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrants formal investigation. PURPOSE: To investigate the attitudes of middle- and long-distance runners...... able to compete in national championships and their coaches about factors associated with running injury development. METHODS: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which...... factors do you believe influence the risk of running injuries?". In response to this question, the athletes and coaches had to click "Yes" or "No" to 19 predefined factors. In addition, they had the possibility to submit a free-text response. RESULTS: A total of 68 athletes and 19 coaches were included...

  1. Running Injury Development

    DEFF Research Database (Denmark)

    Krogh Johansen, Karen; Hulme, Adam; Damsted, Camma

    2017-01-01

    Background: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrants formal investigation. Purpose: To investigate the attitudes of middle- and long-distance runners...... able to compete in national championships and their coaches about factors associated with running injury development. Methods: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: “Which...... factors do you believe influence the risk of running injuries?”. In response to this question, the athletes and coaches had to click “Yes” or “No” to 19 predefined factors. In addition, they had the possibility to submit a free-text response. Results: A total of 68 athletes and 19 coaches were included...

  2. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    Science.gov (United States)

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
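
    Once a sparse room impulse response has been estimated for each microphone (the orthogonal-clustering reconstruction itself is not shown), the final step is simple: locate the direct-path arrival in each RIR and take the difference of the arrival times. The sketch below assumes the direct path is the first significant peak; the threshold and function names are illustrative assumptions.

```python
# Sketch of the final TDE step: delay = difference of direct-path arrival times
# picked from two estimated room impulse responses.
import numpy as np

def direct_path_index(rir, threshold_ratio=0.5):
    """Index of the first sample exceeding a fraction of the RIR's peak amplitude."""
    return int(np.argmax(np.abs(rir) >= threshold_ratio * np.max(np.abs(rir))))

def time_delay(rir_mic1, rir_mic2, fs):
    """Time delay (seconds) of the direct path between the two microphones."""
    return (direct_path_index(rir_mic1) - direct_path_index(rir_mic2)) / fs
```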

  4. Effects of detector–source distance and detector bias voltage variations on time resolution of general purpose plastic scintillation detectors

    International Nuclear Information System (INIS)

    Ermis, E.E.; Celiktas, C.

    2012-01-01

    Effects of source-detector distance and the detector bias voltage variations on the time resolution of a general purpose plastic scintillation detector such as BC400 were investigated. 133Ba and 207Bi calibration sources with and without collimator were used in the present work. Optimum source-detector distance and bias voltage values were determined for the best time resolution by using the leading edge timing method. The effect of collimator usage on time resolution was also investigated. - Highlights: ► Effect of the source-detector distance on time spectra was investigated. ► Effect of the detector bias voltage variations on time spectra was examined. ► Optimum detector–source distance was determined for the best time resolution. ► Optimum detector bias voltage was determined for the best time resolution. ► 133Ba and 207Bi radioisotopes were used.

  5. Atmospheric Nitrogen Deposition in the Western United States: Sources, Sinks and Changes over Time

    Science.gov (United States)

    Anderson, Sarah Marie

    Anthropogenic activities have greatly modified the way nitrogen moves through the atmosphere and terrestrial and aquatic environments. Excess reactive nitrogen generated through fossil fuel combustion, industrial fixation, and intensification of agriculture is not confined to anthropogenic systems but leaks into natural ecosystems with consequences including acidification, eutrophication, and biodiversity loss. A better understanding of where excess nitrogen originates and how that changes over time is crucial to identifying when, where, and to what degree environmental impacts occur. A major route into ecosystems for excess nitrogen is through atmospheric deposition. Excess nitrogen is emitted to the atmosphere where it can be transported great distances before being deposited back to the Earth's surface. Analyzing the composition of atmospheric nitrogen deposition and biological indicators that reflect deposition can provide insight into the emission sources as well as processes and atmospheric chemistry that occur during transport and what drives variation in these sources and processes. Chapter 1 provides a review and proof of concept of lichens to act as biological indicators and how their elemental and stable isotope composition can elucidate variation in amounts and emission sources of nitrogen over space and time. Information on amounts and emission sources of nitrogen deposition helps inform natural resources and land management decisions by helping to identify potentially impacted areas and causes of those impacts. Chapter 2 demonstrates that herbaria lichen specimens and field lichen samples reflect historical changes in atmospheric nitrogen deposition from urban and agricultural sources across the western United States. Nitrogen deposition increases throughout most of the 20 th century because of multiple types of emission sources until the implementation of the Clean Air Act Amendments of 1990 eventually decrease nitrogen deposition around the turn of

  6. Miniaturized pulsed laser source for time-domain diffuse optics routes to wearable devices.

    Science.gov (United States)

    Di Sieno, Laura; Nissinen, Jan; Hallman, Lauri; Martinenghi, Edoardo; Contini, Davide; Pifferi, Antonio; Kostamovaara, Juha; Mora, Alberto Dalla

    2017-08-01

    We validate a miniaturized pulsed laser source for use in time-domain (TD) diffuse optics, following rigorous and shared protocols for performance assessment of this class of devices. This compact source (12×6  mm2) has been previously developed for range finding applications and is able to provide short, high energy (∼100  ps, ∼0.5  nJ) optical pulses at up to 1 MHz repetition rate. Here, we start with a basic level laser characterization with an analysis of suitability of this laser for the diffuse optics application. Then, we present a TD optical system using this source and its performances in both recovering optical properties of tissue-mimicking homogeneous phantoms and in detecting localized absorption perturbations. Finally, as a proof of concept of in vivo application, we demonstrate that the system is able to detect hemodynamic changes occurring in the arm of healthy volunteers during a venous occlusion. Squeezing the laser source in a small footprint removes a key technological bottleneck that has hampered so far the realization of a miniaturized TD diffuse optics system, able to compete with already assessed continuous-wave devices in terms of size and cost, but with wider performance potentialities, as demonstrated by research over the last two decades. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  7. The Effect of Training in Minimalist Running Shoes on Running Economy.

    Science.gov (United States)

    Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain

    2015-09-01

    The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3 minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. Key points: Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear

  8. Optimized configuration and running mode of solar energy and ground-source heat pump hybrid systems

    Institute of Scientific and Technical Information of China (English)

    冯晓梅; 张昕宇; 邹瑜; 郑瑞澄

    2011-01-01

    Taking an actual project as an example, the optimized configuration and running mode of the combined solar energy and ground-source heat pump system are simulated and analysed. The conclusions are that the solar energy system should be given priority in operation, that the solar energy resource should be used in cascade according to its energy grade, that the solar collector area should be made as large as possible to increase the share of direct solar utilization, and that a solar collector cost of about 250 RMB per square meter is appropriate.

  9. Time-resolved hard x-ray studies using third-generation synchrotron radiation sources (abstract)

    International Nuclear Information System (INIS)

    Mills, D.M.

    1992-01-01

    The third-generation, high-brilliance, synchrotron radiation sources currently under construction will usher in a new era of x-ray research in the physical, chemical, and biological sciences. One of the most exciting areas of experimentation will be the extension of static x-ray scattering and diffraction techniques to the study of transient or time-evolving systems. The high repetition rate, short-pulse duration, high-brilliance, variable spectral bandwidth, and large particle beam energies of these sources make them ideal for hard x-ray, time-resolved studies. The primary focus of this presentation will be on the novel instrumentation required for time-resolved studies such as optics which can increase the flux on the sample or disperse the x-ray beam, detectors and electronics for parallel data collection, and methods for altering the natural time structure of the radiation. This work is supported by the U.S. Department of Energy, BES-Materials Science, under Contract No. W-31-109-ENG-38

  10. Evaluating scintillator performance in time-resolved hard X-ray studies at synchrotron light sources.

    Science.gov (United States)

    Rutherford, Michael E; Chapman, David J; White, Thomas G; Drakopoulos, Michael; Rack, Alexander; Eakins, Daniel E

    2016-05-01

    The short pulse duration, small effective source size and high flux of synchrotron radiation are ideally suited for probing a wide range of transient deformation processes in materials under extreme conditions. In this paper, the challenges of high-resolution time-resolved indirect X-ray detection are reviewed in the context of dynamic synchrotron experiments. In particular, the discussion is targeted at two-dimensional integrating detector methods, such as those focused on dynamic radiography and diffraction experiments. The response of a scintillator to periodic synchrotron X-ray excitation is modelled and validated against experimental data collected at the Diamond Light Source (DLS) and European Synchrotron Radiation Facility (ESRF). An upper bound on the dynamic range accessible in a time-resolved experiment for a given bunch separation is calculated for a range of scintillators. New bunch structures are suggested for DLS and ESRF using the highest-performing commercially available crystal LYSO:Ce, allowing time-resolved experiments with an interframe time of 189 ns and a maximum dynamic range of 98 (6.6 bits).

  11. Acoustic emission source location in plates using wavelet analysis and cross time frequency spectrum.

    Science.gov (United States)

    Mostafapour, A; Davoodi, S; Ghareaghaji, M

    2014-12-01

    In this study, the theories of the wavelet transform and the cross time-frequency spectrum (CTFS) are used to locate an AE source with frequency-varying wave velocity in plate-type structures. A rectangular array of four sensors is installed on the plate. When an impact is generated by an artificial AE source such as the Hsu-Nielsen method of pencil lead breaking (PLB) at any position of the plate, the AE signals are detected by the four sensors at different times. By wavelet packet decomposition, a packet of signals with a frequency range of 0.125-0.25 MHz is selected. The CTFS is calculated by the short-time Fourier transform of the cross-correlation between the considered packets captured by the AE sensors. The time delay is taken where the CTFS reaches its maximum value, and the corresponding frequency is extracted at this maximum. The resulting frequency is used, in combination with the dispersion curve, to calculate the group velocity. The resulting location error shows the high precision of the proposed algorithm. Copyright © 2014 Elsevier B.V. All rights reserved.
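
    The cross time-frequency step lends itself to a short sketch: cross-correlate the wavelet-filtered signals from a sensor pair, take a short-time Fourier transform of the correlation, and read off the lag and frequency at its maximum. The wavelet-packet filtering to 0.125-0.25 MHz is assumed to have been done already, and the STFT parameters and mapping back onto the lag axis are illustrative simplifications.

```python
# Sketch of a CTFS-style delay/frequency estimate from a pair of filtered AE signals.
import numpy as np
from scipy.signal import correlate, stft

def ctfs_delay(sig_a, sig_b, fs):
    xcorr = correlate(sig_a, sig_b, mode="full")
    lags = np.arange(-len(sig_b) + 1, len(sig_a)) / fs      # lag axis in seconds
    f, t, z = stft(xcorr, fs=fs, nperseg=256)                # time-frequency map of the correlation
    i_f, i_t = np.unravel_index(np.argmax(np.abs(z)), z.shape)
    # Map the STFT time bin back onto the lag axis of the correlation.
    lag_index = min(int(t[i_t] * fs), len(lags) - 1)
    return lags[lag_index], f[i_f]                           # (time delay, dominant frequency)
```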

  12. [Regional pilot study to evaluate the laboratory turnaround time according to the client source].

    Science.gov (United States)

    Salinas, M; López-Garrigós, M; Yago, M; Ortuño, M; Díaz, J; Marcaida, G; Chinchilla, V; Carratala, A; Aguado, C; Rodríguez-Borja, E; Laíz, B; Guaita, M; Esteban, A; Lorente, M A; Uris, J

    2011-01-01

    To show turnaround time by client source in eight laboratories covering eight Health Areas (2,014,475 inhabitants) of the Valencian Community (Spain). Internal Laboratory Information System (LIS) registers (test register and verification date and time) and daily LIS registers were used to design the indicators. These indicators showed the percentage of key tests requested (full blood count and serum glucose and thyrotropin) that were validated on the same day the blood was taken (inpatients and Primary Care) and/or at 12 a.m. (inpatients). Urgent (stat) tests were also registered as key tests (serum troponin and potassium) and were recorded in minutes. Registers were collected and indicators calculated automatically through a Data Warehouse application and OLAP cube software. Long turnaround time differences were observed at 12 a.m. for inpatients, and on the day of sample extraction for primary care patients. The variability in turnaround of stat tests is related to hospital size, activity and validation by the laboratory physician. The study results show the large turnaround time disparity in the eight Health Care Areas of the Valencian Community. The various requesting sources covered by the laboratories create the need for continuous process mapping, redesign and benchmarking studies to achieve customer satisfaction. Copyright © 2010 SECA. Published by Elsevier Espana. All rights reserved.

  13. Source size and time dependence of multifragmentation induced by GeV 3He beams

    International Nuclear Information System (INIS)

    Wang, G.; Kwiatkowski, K.; Bracken, D.S.; Renshaw Foxford, E.; Hsi, W.; Morley, K.B.; Viola, V.E.; Yoder, N.R.; Volant, C.; Legrain, R.; Pollacco, E.C.; Korteling, R.G.; Botvina, A.; Brzychczyk, J.; Breuer, H.

    1999-01-01

    To investigate the source size and time dependence of multifragmentation reactions, small- and large-angle relative velocity correlations between coincident complex fragments have been measured for the 1.8-4.8 GeV 3He + natAg, 197Au systems. The results support an evolutionary scenario for the fragment emission process in which lighter IMFs (Z ≲ 6) are emitted from a hot, more dense source prior to breakup of an expanded residue. For the most highly excited residues, for which there is a significant yield of fragments with very soft energy spectra (E/A ≤ 3 MeV), comparisons with an N-body simulation suggest a breakup time of τ∼50 fm/c for the expanded residue. Comparison of these data with both the evolutionary expanding emitting source model and the Copenhagen statistical multifragmentation model shows good agreement for heavier IMFs formed in the final breakup stage, but only the evolutionary model is successful in accounting for the lighter IMFs. copyright 1999 The American Physical Society

  14. HYSPEC : A CRYSTAL TIME OF FLIGHT HYBRID SPECTROMETER FOR THE SPALLATION NEUTRON SOURCE

    International Nuclear Information System (INIS)

    SHAPIRO, S.M.; ZALIZNYAK, I.A.

    2002-01-01

    This document lays out a proposal by the Instrument Development Team (IDT) composed of scientists from leading Universities and National Laboratories to design and build a conceptually new high-flux inelastic neutron spectrometer at the pulsed Spallation Neutron Source (SNS) at Oak Ridge. This instrument is intended to supply users of the SNS and scientific community, of which the IDT is an integral part, with a platform for ground-breaking investigations of the low-energy atomic-scale dynamical properties of crystalline solids. It is also planned that the proposed instrument will be equipped with a polarization analysis capability, therefore becoming the first polarized beam inelastic spectrometer in the SNS instrument suite, and the first successful polarized beam inelastic instrument at a pulsed spallation source worldwide. The proposed instrument is designed primarily for inelastic and elastic neutron spectroscopy of single crystals. In fact, the most informative neutron scattering studies of the dynamical properties of solids nearly always require single crystal samples, and they are almost invariably flux-limited. In addition, in measurements with polarization analysis the available flux is reduced through selection of the particular neutron polarization, which puts even more stringent limits on the feasibility of a particular experiment. To date, these investigations have mostly been carried out on crystal spectrometers at high-flux reactors, which usually employ focusing Bragg optics to concentrate the neutron beam on a typically small sample. Construction at Oak Ridge of the high-luminosity spallation neutron source, which will provide intense pulsed neutron beams with time-averaged fluxes equal to those at medium-flux reactors, opens entirely new opportunities for single crystal neutron spectroscopy. Drawing upon experience acquired during decades of studies with both crystal and time-of-flight (TOF) spectrometers, the IDT has developed a conceptual

  15. HYSPEC : A CRYSTAL TIME OF FLIGHT HYBRID SPECTROMETER FOR THE SPALLATION NEUTRON SOURCE.

    Energy Technology Data Exchange (ETDEWEB)

    SHAPIRO,S.M.; ZALIZNYAK,I.A.

    2002-12-30

    This document lays out a proposal by the Instrument Development Team (IDT) composed of scientists from leading Universities and National Laboratories to design and build a conceptually new high-flux inelastic neutron spectrometer at the pulsed Spallation Neutron Source (SNS) at Oak Ridge. This instrument is intended to supply users of the SNS and scientific community, of which the IDT is an integral part, with a platform for ground-breaking investigations of the low-energy atomic-scale dynamical properties of crystalline solids. It is also planned that the proposed instrument will be equipped with a polarization analysis capability, therefore becoming the first polarized beam inelastic spectrometer in the SNS instrument suite, and the first successful polarized beam inelastic instrument at a pulsed spallation source worldwide. The proposed instrument is designed primarily for inelastic and elastic neutron spectroscopy of single crystals. In fact, the most informative neutron scattering studies of the dynamical properties of solids nearly always require single crystal samples, and they are almost invariably flux-limited. In addition, in measurements with polarization analysis the available flux is reduced through selection of the particular neutron polarization, which puts even more stringent limits on the feasibility of a particular experiment. To date, these investigations have mostly been carried out on crystal spectrometers at high-flux reactors, which usually employ focusing Bragg optics to concentrate the neutron beam on a typically small sample. Construction at Oak Ridge of the high-luminosity spallation neutron source, which will provide intense pulsed neutron beams with time-averaged fluxes equal to those at medium-flux reactors, opens entirely new opportunities for single crystal neutron spectroscopy. Drawing upon experience acquired during decades of studies with both crystal and time-of-flight (TOF) spectrometers, the IDT has developed a conceptual

  16. Time-Dependent S_N Calculations Describing Pulsed Source Experiments at the FRO Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Bergstrom, A.; Kockum, J.; Soderberg, S. [Research Institute of National Defence, Stockholm (Sweden)

    1968-04-15

    In view of the difficulties in describing pulsed source experiments quantitatively in assemblies consisting of a fast core and a light reflector, a time-dependent S_N code has been applied to this type of assembly. The code, written for the IBM 7090 computer, divides time into short intervals and computes the flux in spherical geometry for each interval using the Carlson S_N scheme. The source term is obtained by extrapolation from two earlier time-intervals. Several problems in connection with the discretization of the time, space and energy dimensions are discussed. For the sub-critical assembly studied the treatment of the lower energy-groups is decisive for the numerical stability. A 22-group cross-section set with a low energy cut-off at 0.04 eV obtained with the SPENG programme has been used. The time intervals are varied continuously and are set proportional to the inverse of the maximum logarithmic time-derivative of the space and energy-dependent flux with the further restriction that they are not allowed to increase above a predetermined value. In a typical case, the intervals vary between 10^-9 and 10^-8 s. The memory of the computer is fully exploited when 22 energy groups and 46 radial points are used. The computing time for each time-interval is about 6 sec. The code has been applied to a 3.5% sub-critical assembly consisting of a 20% enriched, spherical uranium metal core with a thick copper reflector and the calculations have been compared to experiments with good agreement. The calculations show that spectral equilibrium below 10 keV is not reached until times long compared to the usual measuring times and that the exponential decay finally reached is entirely determined by reflector properties at almost thermal energies. It is also shown that the simple one- and two-region models are inadequate in this case and that no time-independent prompt neutron life-time can be obtained from the measurements. (author)
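    The time-step rule described above (intervals proportional to the inverse of the maximum logarithmic time-derivative of the flux, capped at a predetermined maximum) can be sketched as follows. This is an illustrative reconstruction, not the original IBM 7090 code; the safety factor and cap are assumed parameters.

```python
import numpy as np

def next_time_step(flux_new, flux_old, dt_old, safety=0.1, dt_max=1e-8):
    """Choose the next time interval proportional to the inverse of the maximum
    logarithmic time-derivative of the space/energy-dependent flux, capped at
    dt_max. Assumes strictly positive flux arrays (illustrative sketch)."""
    log_deriv = np.abs(np.log(flux_new) - np.log(flux_old)) / dt_old
    rate = np.max(log_deriv)            # fastest relative change anywhere in phase space
    dt_new = safety / rate if rate > 0 else dt_max
    return min(dt_new, dt_max)
```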

  17. Real-time source deformation modeling through GNSS permanent stations at Merapi volcano (Indonesia)

    Science.gov (United States)

    Beauducel, F.; Nurnaning, A.; Iguchi, M.; Fahmi, A. A.; Nandaka, M. A.; Sumarti, S.; Subandriyo, S.; Metaxian, J. P.

    2014-12-01

    Mt. Merapi (Java, Indonesia) is one of the most active and dangerous volcanoes in the world. A first GPS repetition network was set up and measured periodically from 1993, allowing detection of a deep magma reservoir, quantification of the magma flux in the conduit, and identification of shallow discontinuities around the former crater (Beauducel and Cornet, 1999; Beauducel et al., 2000, 2006). After the 2010 centennial eruption, when this network was almost completely destroyed, Indonesian and Japanese teams installed a new continuous GPS network for monitoring purposes (Iguchi et al., 2011), consisting of 3 stations located on the volcano flanks, plus a reference station at the Yogyakarta Observatory (BPPTKG). In the framework of the DOMERAPI project (2013-2016) we have completed this network with 5 additional stations, located in the summit area and around the volcano. The new stations are 1-Hz-sampling GNSS (GPS + GLONASS) receivers with near real-time data streaming to the Observatory. Automatic processing has been developed and included in the WEBOBS system (Beauducel et al., 2010), based on the GIPSY software, computing precise daily moving solutions every hour and, for different time scales (2 months, 1 and 5 years), time series and velocity vectors. A real-time source modeling estimation has also been implemented. It uses the depth-varying point source solution (Mogi, 1958; Williams and Wadge, 1998) in a systematic inverse-problem model exploration that displays the location, volume variation and a 3-D probability map. The operational system should be able to better detect and estimate the location and volume variations of possible magma sources, and to follow magma transfer towards the surface. This should help monitoring and contribute to decision making during future unrest or eruptions.
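    The depth-varying point source (Mogi) forward model mentioned above predicts surface displacements from a volume change at depth in an elastic half-space. The sketch below uses the standard textbook form of the Mogi solution with illustrative parameter values; it is not the observatory's WEBOBS implementation.

```python
import numpy as np

def mogi_displacement(r, depth, dV, nu=0.25):
    """Surface displacement of a Mogi point source in an elastic half-space.
    r: horizontal distance from the source axis (m), depth: source depth (m),
    dV: volume change (m^3), nu: Poisson's ratio. Returns (u_r, u_z) in metres.
    Standard point-source form, shown here only as an illustrative sketch."""
    R3 = (r**2 + depth**2) ** 1.5
    coeff = (1.0 - nu) * dV / np.pi
    u_r = coeff * r / R3        # radial (horizontal) displacement
    u_z = coeff * depth / R3    # vertical displacement (uplift for dV > 0)
    return u_r, u_z

# e.g. 1e6 m^3 of inflation at 5 km depth, observed 3 km from the axis
print(mogi_displacement(3000.0, 5000.0, 1e6))   # a few millimetres of motion
```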

  18. EMFs run aground

    International Nuclear Information System (INIS)

    Raloff, J.

    1993-01-01

    Presently no one knows whether electromagnetic fields (EMFs) play a role in human cancer or other ailments, though epidemiological studies over the years have suggested that possibility. A study by the Electric Power Research Institute attempted to quantify everything it could about the magnetic environment of a home, identifying not only the major sources of magnetic fields, but also their frequencies, strengths, and how they fall off with distance. Sources of a home's magnetic environment include appliances, overhead power lines, and grounding connections to metallic water pipes. Fields will vary over time, depending on how much current is passing through the electrically conductive sources. Additional contributors to a home's magnetic background may include unusual wiring in the walls, underground power lines, and nearby high-voltage transmission lines. This paper summarizes the study results, indicating that weak, persistent EMFs may dominate; small magnetic fields associated with ground currents can end up contributing more to the overall EMF background than appliances that produce far larger fields, which fall off more quickly with distance. 2 figs

  19. Real-time software for multi-isotopic source term estimation

    International Nuclear Information System (INIS)

    Goloubenkov, A.; Borodin, R.; Sohier, A.

    1996-01-01

    Consideration is given to the development of software for one of the crucial components of RODOS: assessment of the source rate (SR) from indirect measurements. Four components of the software are described in the paper. The first component is a GRID system, which allows stochastic meteorological and radioactivity fields to be prepared from measured data. The second part is a model of atmospheric transport which can be adapted to emulate practically any gamma dose/spectrum detector. The third is a method which allows space-time and quantitative discrepancies between measured and modelled data to be taken into account simultaneously. It is based on a preference scheme selected by an expert. The last component is a special optimization method for the calculation of the multi-isotopic SR and its uncertainties. Results of a validation of the software using tracer experiment data, and a Chernobyl source estimation for the main dose-forming isotopes, are included in the paper

  20. Time-of-flight small-angle scattering spectrometers on pulsed neutron sources

    International Nuclear Information System (INIS)

    Ostanevich, Yu.M.

    1987-01-01

    The operating principles, construction, advantages and shortcomings of known time-of-flight small-angle neutron scattering (TOF SANS) spectrometers built at pulsed neutron sources are reviewed. The most important characteristics of TOF SANS instruments are a rather high luminosity and the possibility of measuring an extremely wide range of scattering vector in a single exposure. This is achieved by the simultaneous use of a white beam, the TOF technique for the wavelength scan, and the commonly known angle scan. However, the electronic equipment, data-matching programs, and measurement procedures necessary for accurate normalization of the experimental data and their transformation onto an absolute cross-section scale all become more complex compared with those for SANS instruments operating at steady-state neutron sources, where only the angle scan is used

  1. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    Science.gov (United States)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, such as those measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that make it possible to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies, since PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem. Recovering and separating the different sources that generate the observed ground deformation is a fundamental task in order to give a physical meaning to the possible sources. PCA fails in the BSS problem because it looks for a new Euclidean space where the projected data are uncorrelated. Usually the uncorrelatedness condition is not strong enough, and it has been shown that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources
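    The PCA-versus-ICA contrast discussed above can be illustrated on synthetic mixtures. The vbICA method itself is not a standard library routine, so this sketch uses the ordinary FastICA implementation from scikit-learn as a stand-in; the signals, mixing matrix and noise level are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)

# Two synthetic "sources": a seasonal-like sinusoid and a transient ramp
sources = np.c_[np.sin(2 * np.pi * t), np.clip(t - 5, 0, None)]
mixing = np.array([[1.0, 0.5], [0.4, 1.2], [0.8, -0.7]])    # three "stations"
data = sources @ mixing.T + 0.05 * rng.standard_normal((len(t), 3))

pca_comps = PCA(n_components=2).fit_transform(data)    # uncorrelated components
ica_comps = FastICA(n_components=2, random_state=0).fit_transform(data)  # independent components
# In this toy case the ICA components recover the sinusoid and the ramp (up to
# sign and scale), while the PCA components remain mixtures of the two sources.
```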

  2. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xinya [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Deng, Zhiqun Daniel [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Rauchenstein, Lynn T. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Carlson, Thomas J. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements received at sensors is an important research area that is attracting much interest. In this paper, we present localization algorithms using time of arrival (TOA) and time difference of arrival (TDOA) measurements to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and the tracked objects. These nonlinear equations pose accuracy challenges in the presence of measurement errors, and efficiency challenges arising from the high computational burden. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
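    A minimal sketch of a hyperbolic (TDOA) solver of the least-squares family mentioned above: given sensor positions and arrival-time differences relative to a reference sensor, minimize the misfit between observed and predicted range differences. The geometry, propagation speed and solver choice are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_tdoa(sensors, tdoa, c, x0=None):
    """Estimate a 2-D source position from time differences of arrival.
    sensors: (N, 2) sensor coordinates; tdoa: (N-1,) arrival-time differences
    relative to sensors[0]; c: propagation speed. Illustrative sketch only."""
    sensors = np.asarray(sensors, dtype=float)

    def residuals(x):
        ranges = np.linalg.norm(sensors - x, axis=1)
        return (ranges[1:] - ranges[0]) - c * np.asarray(tdoa)

    if x0 is None:
        x0 = sensors.mean(axis=0)       # start the search at the array centroid
    return least_squares(residuals, x0).x

# Example: four sensors at the corners of a 100 m square, source at (30, 70)
sensors = [(0, 0), (100, 0), (100, 100), (0, 100)]
true_src, c = np.array([30.0, 70.0]), 343.0
ranges = np.linalg.norm(np.asarray(sensors) - true_src, axis=1)
print(locate_tdoa(sensors, (ranges[1:] - ranges[0]) / c, c))   # ~[30, 70]
```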

  3. An elementary solution of the Maxwell equations for a time-dependent source

    International Nuclear Information System (INIS)

    Rivera, R; Villarroel, D

    2002-01-01

    We present an elementary solution of the Maxwell equations for a time-dependent source consisting of an infinite solenoid with a current density that increases linearly with time. The geometrical symmetries and the time dependence of the current density make possible a mathematical treatment that does not involve the usual technical difficulties, thus making this presentation suitable for students that are taking a first course in electromagnetism. We also show that the electric field generated by the solenoid can be used to construct an exact solution of the relativistic equation of motion of the electron that takes into account the effect of the radiation. In particular, we derive, in an almost trivial way, the formula for the radiation rate of an electron in circular motion
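    For orientation, the standard result for this configuration (an ideal infinite solenoid of radius R with n turns per unit length carrying a current I(t) = αt) follows from Ampère's and Faraday's laws; the expressions below are the textbook fields, quoted here as a sketch rather than the paper's own derivation.

```latex
% Ideal infinite solenoid, radius R, n turns per unit length, I(t) = \alpha t
\mathbf{B} = \mu_0 n \alpha t\,\hat{z} \quad (r < R), \qquad \mathbf{B} = 0 \quad (r > R)
\\[4pt]
E_\phi = -\frac{\mu_0 n \alpha}{2}\, r \quad (r < R), \qquad
E_\phi = -\frac{\mu_0 n \alpha}{2}\, \frac{R^{2}}{r} \quad (r > R)
```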

  4. Estimates of Imaging Times for Conventional and Synchrotron X-Ray Sources

    CERN Document Server

    Kinney, J

    2003-01-01

    The following notes are to be taken as estimates of the time requirements for imaging NIF targets in three dimensions with absorption contrast. The estimates ignore target geometry and detector inefficiency, and focus only on the statistical question of detecting compositional (structural) differences between adjacent volume elements in the presence of noise. The basic equations, from the classic reference by Grodzins, express imaging times in terms of the number of photons required to provide an image with a given resolution and noise. The time estimates, therefore, have been based on the calculated x-ray fluxes from the proposed Advanced Light Source (ALS) imaging beamline, and on the calculated flux for a tungsten anode x-ray generator operated in point-focus mode.

  5. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    Science.gov (United States)

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences for the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited significant accumulation during 2010-2014. The spatiotemporal Kriging (STK) technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious increasing tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part towards the central part of the region. Then, spatial overlay analysis (SOA) was used to obtain absolute and relative concentration increments for adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with multiple linear regression (PCA-MLR) was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural sources. In summary, STK and SOA were used to obtain the spatial distribution of heavy metal concentration increments in soils, and PCA-MLR was used to quantify their source apportionment. Copyright © 2017

  6. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    Science.gov (United States)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on the rapid estimate of the P-wave magnitude, which generally carries large uncertainties and suffers from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, the JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. Subsequent updates of the magnitude even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. By using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would have been theoretically possible to image the complex rupture process of the 2011 Tohoku earthquake automatically, soon after or even during the rupture. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique is helpful for reducing false and missing warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  7. A rapid estimation of near field tsunami run-up

    Science.gov (United States)

    Riqueime, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jamie

    2015-01-01

    Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify the knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for the 1992 Mw 7.7 Nicaragua earthquake, the 2001 Mw 8.4 Perú earthquake, the 2003 Mw 8.3 Hokkaido earthquake, the 2007 Mw 8.1 Perú earthquake, the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake and the recent 2014 Mw 8.2 Iquique earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Thus, such calculations will provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.

  8. The importance of source and cue type in time-based everyday prospective memory.

    Science.gov (United States)

    Oates, Joyce M; Peynircioğlu, Zehra F

    2014-01-01

    We examined the effects of the source of a prospective memory task (provided or generated) and the type of cue (specific or general) triggering that task in everyday settings. Participants were asked to complete both generated and experimenter-provided tasks and to send a text message when each task was completed. The cue/context for the to-be-completed tasks was either a specific time or a general deadline (time-based cue), and the cue/context for the texting task was the completion of the task itself (activity-based cue). Although generated tasks were completed more often, generated cues/contexts were no more effective than provided ones in triggering the intention. Furthermore, generated tasks were completed more often when the cue/context comprised a specific time, whereas provided tasks were completed more often when the cue/context comprised a general deadline. However, texting was unaffected by the source of the cue/context. Finally, emotion modulated the effects. Results are discussed within a process-driven framework.

  9. Travel-time source-specific station correction improves location accuracy

    Science.gov (United States)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance to the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events in the context of the CTBT verification is particularly critical in order to trigger a possible On Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2, and its larger linear dimension cannot be larger than 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel times corrections based on a set of well located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network and using the standard IASPEI91 travel times can be effectively removed by applying source-specific station corrections.

  10. A phantom for verification of dwell position and time of a high dose rate brachytherapy source

    International Nuclear Information System (INIS)

    Madebo, M.; Kron, T.; Pillainayagam, J.; Franich, R.

    2012-01-01

    Accuracy of dwell position and reproducibility of dwell time are critical in high dose rate (HDR) brachytherapy. A phantom was designed to verify dwell position and dwell time reproducibility for an Ir-192 HDR stepping source using Computed Radiography (CR). The central part of the phantom, incorporating thin alternating strips of lead and acrylic, was used to measure dwell positions. The outer part of the phantom features recesses containing different absorber materials (lead, aluminium, acrylic and polystyrene foam), and was used for determining reproducibility of dwell times. Dwell position errors of <1 mm were easily detectable using the phantom. The effect of bending a transfer tube was studied with this phantom and no change of clinical significance was observed when varying the curvature of the transfer tube in typical clinical scenarios. Changes of dwell time as low as 0.1 s, the minimum dwell time of the treatment unit, could be detected by choosing dwell times over the four materials that produce identical exposure at the CR detector.

  11. Discrete-Time Domain Modelling of Voltage Source Inverters in Standalone Applications

    DEFF Research Database (Denmark)

    Federico, de Bosio; de Sousa Ribeiro, Luiz Antonio; Freijedo Fernandez, Francisco Daniel

    2017-01-01

    The decoupling of the capacitor voltage and inductor current has been shown to improve significantly the dynamic performance of voltage source inverters in standalone applications. However, the computation and PWM delays still limit the achievable bandwidth. In this paper a discrete-time domain modelling of the LC plant, with consideration of delay and sample-and-hold effects on the state-feedback cross-coupling decoupling, is derived. From this plant formulation, current controllers with wide bandwidth and good relative stability properties are developed. Two controllers based on lead compensation...

  12. Calibration of time of flight detectors using laser-driven neutron source

    Energy Technology Data Exchange (ETDEWEB)

    Mirfayzi, S. R.; Kar, S., E-mail: s.kar@qub.ac.uk; Ahmed, H.; Green, A.; Alejo, A.; Jung, D. [Centre for Plasma Physics, School of Mathematics and Physics, Queen’s University Belfast, Belfast BT7 1NN (United Kingdom); Krygier, A. G.; Freeman, R. R. [Department of Physics, The Ohio State University, Columbus, Ohio 43210 (United States); Clarke, R. [Central Laser Facility, Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0QX (United Kingdom); Fuchs, J.; Vassura, L. [LULI, Ecole Polytechnique, CNRS, Route de Saclay, 91128 Palaiseau Cedex (France); Kleinschmidt, A.; Roth, M. [Institut für Kernphysik, Technische Universität Darmstadt, Schloßgartenstrasse 9, D-64289 Darmstadt,Germany (Germany); Morrison, J. T. [Propulsion Systems Directorate, Air Force Research Lab, Wright Patterson Air Force Base, Ohio 45433 (United States); Najmudin, Z.; Nakamura, H. [Blackett Laboratory, Department of Physics, Imperial College, London SW7 2AZ (United Kingdom); Norreys, P. [Central Laser Facility, Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0QX (United Kingdom); Department of Physics, University of Oxford, Oxford OX1 3PU (United Kingdom); Oliver, M. [Department of Physics, University of Oxford, Oxford OX1 3PU (United Kingdom); Zepf, M. [Centre for Plasma Physics, School of Mathematics and Physics, Queen’s University Belfast, Belfast BT7 1NN (United Kingdom); Helmholtz Institut Jena, D-07743 Jena (Germany); Borghesi, M. [Centre for Plasma Physics, School of Mathematics and Physics, Queen’s University Belfast, Belfast BT7 1NN (United Kingdom); Institute of Physics of the ASCR, ELI-Beamlines Project, Na Slovance 2, 18221 Prague (Czech Republic)

    2015-07-15

    Calibration of three scintillators (EJ232Q, BC422Q, and EJ410) in a time-of-flight arrangement using a laser-driven neutron source is presented. The three plastic scintillator detectors were calibrated against gamma-insensitive bubble detector spectrometers, which were absolutely calibrated over a wide range of neutron energies ranging from sub-MeV to 20 MeV. A typical set of data obtained simultaneously by the detectors is shown, measuring the neutron spectrum emitted from a petawatt-laser-irradiated thin foil.

  13. Calibration of time of flight detectors using laser-driven neutron source

    Science.gov (United States)

    Mirfayzi, S. R.; Kar, S.; Ahmed, H.; Krygier, A. G.; Green, A.; Alejo, A.; Clarke, R.; Freeman, R. R.; Fuchs, J.; Jung, D.; Kleinschmidt, A.; Morrison, J. T.; Najmudin, Z.; Nakamura, H.; Norreys, P.; Oliver, M.; Roth, M.; Vassura, L.; Zepf, M.; Borghesi, M.

    2015-07-01

    Calibration of three scintillators (EJ232Q, BC422Q, and EJ410) in a time-of-flight arrangement using a laser-driven neutron source is presented. The three plastic scintillator detectors were calibrated against gamma-insensitive bubble detector spectrometers, which were absolutely calibrated over a wide range of neutron energies ranging from sub-MeV to 20 MeV. A typical set of data obtained simultaneously by the detectors is shown, measuring the neutron spectrum emitted from a petawatt-laser-irradiated thin foil.

  14. Calibration of time of flight detectors using laser-driven neutron source

    International Nuclear Information System (INIS)

    Mirfayzi, S. R.; Kar, S.; Ahmed, H.; Green, A.; Alejo, A.; Jung, D.; Krygier, A. G.; Freeman, R. R.; Clarke, R.; Fuchs, J.; Vassura, L.; Kleinschmidt, A.; Roth, M.; Morrison, J. T.; Najmudin, Z.; Nakamura, H.; Norreys, P.; Oliver, M.; Zepf, M.; Borghesi, M.

    2015-01-01

    Calibration of three scintillators (EJ232Q, BC422Q, and EJ410) in a time-of-flight arrangement using a laser-driven neutron source is presented. The three plastic scintillator detectors were calibrated against gamma-insensitive bubble detector spectrometers, which were absolutely calibrated over a wide range of neutron energies ranging from sub-MeV to 20 MeV. A typical set of data obtained simultaneously by the detectors is shown, measuring the neutron spectrum emitted from a petawatt-laser-irradiated thin foil

  15. Running Boot Camp

    CERN Document Server

    Toporek, Chuck

    2008-01-01

    When Steve Jobs jumped on stage at Macworld San Francisco 2006 and announced the new Intel-based Macs, the question wasn't if, but when someone would figure out a hack to get Windows XP running on these new "Mactels." Enter Boot Camp, a new system utility that helps you partition and install Windows XP on your Intel Mac. Boot Camp does all the heavy lifting for you. You won't need to open the Terminal and hack on system files or wave a chicken bone over your iMac to get XP running. This free program makes it easy for anyone to turn their Mac into a dual-boot Windows/OS X machine. Running Bo

  16. Time-limited effects of emotional arousal on item and source memory.

    Science.gov (United States)

    Wang, Bo; Sun, Bukuan

    2015-01-01

    Two experiments investigated the time-limited effects of emotional arousal on consolidation of item and source memory. In Experiment 1, participants memorized words (items) and the corresponding speakers (sources) and then took an immediate free recall test. Then they watched a neutral, positive, or negative video 5, 35, or 50 min after learning, and 24 hours later they took surprise memory tests. Experiment 2 was similar to Experiment 1 except that (a) a reality monitoring task was used; (b) elicitation delays of 5, 30, and 45 min were used; and (c) delayed memory tests were given 60 min after learning. Both experiments showed that, regardless of elicitation delay, emotional arousal did not enhance item recall memory. Second, both experiments showed that negative arousal enhanced delayed item recognition memory only at the medium elicitation delay, but not in the shorter or longer delays. Positive arousal enhanced performance only in Experiment 1. Third, regardless of elicitation delay, emotional arousal had little effect on source memory. These findings have implications for theories of emotion and memory, suggesting that emotion effects are contingent upon the nature of the memory task and elicitation delay.

  17. Real-time analysis, visualization, and steering of microtomography experiments at photon sources

    International Nuclear Information System (INIS)

    Laszeski, G. von; Insley, J.A.; Foster, I.; Bresnahan, J.; Kesselman, C.; Su, M.; Thiebaux, M.; Rivers, M.L.; Wang, S.; Tieman, B.; McNulty, I.

    2000-01-01

    A new generation of specialized scientific instruments called synchrotron light sources allows the imaging of materials at very fine scales. However, in contrast to a traditional microscope, interactive use has not previously been possible because of the large amounts of data generated and the considerable computation required to translate these data into a useful image. The authors describe a new software architecture that uses high-speed networks and supercomputers to enable quasi-real-time, and hence interactive, analysis of synchrotron light source data. This architecture uses technologies provided by the Globus computational grid toolkit to allow dynamic creation of a reconstruction pipeline that transfers data from a synchrotron source beamline to a preprocessing station, then to a parallel reconstruction system, and then to multiple visualization stations. Collaborative analysis tools allow multiple users to control the data visualization. As a result, local and remote scientists can see and discuss preliminary results just minutes after data collection starts. The implications for more efficient use of this scarce resource and for more effective science appear tremendous

  18. The first synchrotron infrared beamlines at the Advanced Light Source: Spectromicroscopy and fast timing

    International Nuclear Information System (INIS)

    Martin, Michael C.; McKinney, Wayne R.

    1999-01-01

    Two recently commissioned infrared beamlines on the 1.4 bending magnet port at the Advanced Light Source, LBNL, are described. Using a synchrotron as an IR source provides three primary advantages: increased brightness, very fast light pulses, and enhanced far-IR flux. The considerable brightness advantage manifests itself most beneficially when performing spectroscopy on a microscopic length scale. Beamline (BL) 1.4.3 is a dedicated FTIR spectromicroscopy beamline, where a diffraction-limited spot size using the synchrotron source is utilized. BL 1.4.2 consists of a vacuum FTIR bench with a wide spectral range and step-scan capability. This BL makes use of the pulsed nature of the synchrotron light as well as the far-IR flux. Fast timing is demonstrated by observing the pulses from the electron bunch storage pattern at the ALS. Results from several experiments from both IR beamlines will be presented as an overview of the IR research currently being done at the ALS

  19. Studying Regional Wave Source Time Functions Using the Empirical Green's Function Method: Application to Central Asia

    Science.gov (United States)

    Xie, J.; Schaff, D. P.; Chen, Y.; Schult, F.

    2013-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection and discrimination, and the minimization of parameter trade-offs in attenuation studies. We have searched for candidate pairs of larger and smaller earthquakes in and around China that share the same focal mechanism but differ significantly in magnitude, so that the empirical Green's function (EGF) method can be applied to study the STFs of the larger events. We conducted about a million deconvolutions using waveforms from 925 earthquakes, and screened the deconvolved traces to exclude those from event pairs involving different mechanisms. Only 2,700 traces passed this screening and could be further analyzed using the EGF method. We have developed a series of codes for speeding up the final EGF analysis by implementing automation and graphical user interface procedures. The codes have been fully tested with a subset of the screened data and we are currently applying them to all the screened data. We will present a large number of deconvolved STFs retrieved using various phases (Lg, Pn, Sn, Pg and coda), with information on any directivity, on the possible dependence of pulse durations on wave type, on scaling relations between pulse durations and event sizes, and on the estimated static stress drops of the sources.
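    Frequency-domain deconvolution of a small-event (EGF) waveform from a co-located large-event waveform, stabilized with a water level, is a common way to retrieve a relative source time function. The sketch below is a generic illustration of that step, not the authors' codes; the water-level fraction is an assumed parameter.

```python
import numpy as np

def egf_deconvolve(big_event, small_event, water_level=0.01):
    """Retrieve a relative source time function by spectral division of a
    large-event waveform by a co-located small-event (EGF) waveform.
    Water-level regularization keeps the division stable at spectral notches.
    Illustrative sketch; real processing adds windowing, filtering and QC."""
    n = len(big_event)
    big_spec = np.fft.rfft(big_event, n)
    egf_spec = np.fft.rfft(small_event, n)
    power = np.abs(egf_spec) ** 2
    floor = water_level * power.max()            # water level relative to peak power
    stf_spec = big_spec * np.conj(egf_spec) / np.maximum(power, floor)
    return np.fft.irfft(stf_spec, n)             # relative source time function
```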

  20. Fermilab DART run control

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1996-01-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. The authors discuss the unique and interesting concepts of the run control and some of the experiences in developing it. They also give a brief update and status of the whole DART system

  1. Fermilab DART run control

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-05-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. We discuss the unique and interesting concepts of the run control and some of our experiences in developing it. We also give a brief update and status of the whole DART system

  2. Estimating the contributions of mobile sources of PAH to urban air using real-time PAH monitoring.

    Science.gov (United States)

    Dunbar, J C; Lin, C I; Vergucht, I; Wong, J; Duran, J L

    2001-11-12

    Motor vehicles are a significant source of airborne polycyclic aromatic hydrocarbons (PAH) in many urban areas. Traditional approaches to determining the relative contributions of individual vehicle types to the total amount of PAH in air have been based on the analysis of integrated samples of airborne particles and gases for the presence of chemical tracers indicative of the vehicles from which they derived. As an alternative, we have used a photoelectric aerosol sensor (PAS) capable of measuring PAH levels in real time in the emission plumes from motor vehicles. We placed the PAS near a traffic light in Kenmore Square, a busy crossroads in downtown Boston (MA, USA). A video camera co-located at the site recorded the vehicles passing the sensor, and this record was correlated with the PAS data. During a 5-day monitoring period (approximately 59 h) in the summer of 1998, over 34,000 motor vehicles were counted and classified and over 24,000 PAS readings were recorded (one reading every 8.6 s). The composition of the vehicle population was 94% passenger vehicles, 1.4% buses, 2.6% small trucks, 1.3% medium trucks, 0.35% large trucks, and 0.45% garbage and construction trucks. In analyzing the PAS data, it was assumed that the highest PAS measurements (those that exceeded the 95% critical level of the 5-min moving average of all the PAS measurements) were indicative of primary vehicular emissions. We found that approximately 46% of the mass of particle-bound PAH (i.e. approximately 46% of the integrated area under the PAS signal vs. time plots) was attributable to primary emissions from motor vehicles passing the sensor. Of this, 35-61% was attributable to passenger vehicles (cars, pickup trucks, and sports utility vehicles) and 39-65% was attributable to non-passenger vehicles [buses (14-23%), small trucks (12-20%), medium trucks (8.4-14%), large trucks (2.9-4.8%) and garbage and construction trucks (1.9-3.2%)]. Our results suggest that on a per vehicle
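    The screening rule described above (keep PAS readings that exceed a critical level of the 5-min moving average and attribute them to primary vehicle emissions) can be sketched roughly as follows. The exact critical-level statistic is not spelled out in the abstract, so it is approximated here by a user-supplied multiplicative factor; the window length corresponds to about 35 samples at one reading every 8.6 s.

```python
import numpy as np

def primary_emission_fraction(pah, window=35, critical_factor=1.0):
    """Fraction of the integrated PAH signal carried by readings that exceed a
    critical level of the moving average (rough sketch of the screening rule;
    the critical level is approximated by a multiplicative factor)."""
    kernel = np.ones(window) / window
    moving_avg = np.convolve(pah, kernel, mode="same")   # ~5-min rolling mean
    exceed = pah > critical_factor * moving_avg
    return pah[exceed].sum() / pah.sum()
```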

  3. Laser plasma x-ray source for ultrafast time-resolved x-ray absorption spectroscopy

    Directory of Open Access Journals (Sweden)

    L. Miaja-Avila

    2015-03-01

    Full Text Available We describe a laser-driven x-ray plasma source designed for ultrafast x-ray absorption spectroscopy. The source is comprised of a 1 kHz, 20 W, femtosecond pulsed infrared laser and a water target. We present the x-ray spectra as a function of laser energy and pulse duration. Additionally, we investigate the plasma temperature and photon flux as we vary the laser energy. We obtain a 75 μm FWHM x-ray spot size, containing ∼10^6 photons/s, by focusing the produced x-rays with a polycapillary optic. Since the acquisition of x-ray absorption spectra requires the averaging of measurements from >10^7 laser pulses, we also present data on the source stability, including single pulse measurements of the x-ray yield and the x-ray spectral shape. In single pulse measurements, the x-ray flux has a measured standard deviation of 8%, where the laser pointing is the main cause of variability. Further, we show that the variability in x-ray spectral shape from single pulses is low, thus justifying the combining of x-rays obtained from different laser pulses into a single spectrum. Finally, we show a static x-ray absorption spectrum of a ferrioxalate solution as detected by a microcalorimeter array. Altogether, our results demonstrate that this water-jet based plasma source is a suitable candidate for laboratory-based time-resolved x-ray absorption spectroscopy experiments.

  4. A time resolved microfocus XEOL facility at the Diamond Light Source

    International Nuclear Information System (INIS)

    Mosselmans, J F W; Taylor, R P; Quinn, P D; Cibin, G; Gianolio, D; Finch, A A; Sapelkin, A V

    2013-01-01

    We have constructed a Time-Resolved X-ray Excited Optical Luminescence (TR-XEOL) detection system at the Microfocus Spectroscopy beamline I18 at the Diamond Light Source. Using the synchrotron in "hybrid bunch mode", the data collection is triggered by the RF clock, and we are able to record XEOL photons with a time resolution of 6.1 ps during the 230 ns gap between the hybrid bunch and the main train of electron bunches. We can detect photons over the range 180-850 nm using a bespoke optical fibre, with X-ray excitation energies between 2 and 20 keV. We have used the system to study a range of feldspars. The detector is portable and has also been used on beamline B18 to collect Optically Determined X-ray Absorption Spectroscopy (OD-XAS) in QEXAFS mode.

  5. A time resolved microfocus XEOL facility at the Diamond Light Source

    Science.gov (United States)

    Mosselmans, J. F. W.; Taylor, R. P.; Quinn, P. D.; Finch, A. A.; Cibin, G.; Gianolio, D.; Sapelkin, A. V.

    2013-03-01

    We have constructed a Time-Resolved X-ray Excited Optical Luminescence (TR-XEOL) detection system at the Microfocus Spectroscopy beamline I18 at the Diamond Light Source. Using the synchrotron in "hybrid bunch mode", the data collection is triggered by the RF clock, and we are able to record XEOL photons with a time resolution of 6.1 ps during the 230 ns gap between the hybrid bunch and the main train of electron bunches. We can detect photons over the range 180-850 nm using a bespoke optical fibre, with X-ray excitation energies between 2 and 20 keV. We have used the system to study a range of feldspars. The detector is portable and has also been used on beamline B18 to collect Optically Determined X-ray Absorption Spectroscopy (OD-XAS) in QEXAFS mode.

  6. Estimation of the Plant Time Constant of Current-Controlled Voltage Source Converters

    DEFF Research Database (Denmark)

    Vidal, Ana; Yepes, Alejandro G.; Malvar, Jano

    2014-01-01

    Precise knowledge of the plant time constant is essential to perform a thorough analysis of the current control loop in voltage source converters (VSCs). As the loop behavior can be significantly influenced by the VSC working conditions, the effects associated to converter losses should be included in the model, through an equivalent series resistance. In a recent work, an algorithm to identify this parameter was developed, considering the inductance value as known and practically constant. Nevertheless, the plant inductance can also present important uncertainties with respect to the inductance of the VSC interface filter measured at rated conditions. This paper extends that method so that both parameters of the plant time constant (resistance and inductance) are estimated. Such enhancement is achieved through the evaluation of the closed-loop transient responses of both axes of the synchronous...

  7. Time-resolved X-ray scattering program at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Rodricks, B.

    1994-01-01

    The Time-Resolved Scattering Program's goal is the development of instruments and techniques for time-resolved studies. This entails the development of wide bandpass and focusing optics, high-speed detectors, mechanical choppers, and components for the measurement and creation of changes in samples. Techniques being developed are pump-probe experiments, single-bunch scattering experiments, high-speed white and pink beam Laue scattering, and nanosecond to microsecond synchronization of instruments. This program will be carried out primarily from a white-beam, bend-magnet source, experimental station, 1-BM-B, that immediately follows the first optics enclosure (1-BM-A). This paper will describe the experimental station and instruments under development to carry out the program

  8. Tandem Terminal Ion Source

    International Nuclear Information System (INIS)

    Harper, G.C.; Lindner, C.E.; Myers, A.W.; Wechel, T.D. van

    2000-01-01

    OAK-B135 Tandem Terminal Ion Source. The terminal ion source (TIS) was used in several experiments during this reporting period, all for the 7Be(γ)8B experiment. Most of the runs used 1H+ at terminal voltages from 0.3 MV to 1.5 MV. One of the runs used 2H+ at a terminal voltage of 1.4 MV. The other run used 4He+ at a terminal voltage of 1.37 MV. The list of experiments run with the TIS to date is given in table 1 below. The tank was opened four times for unscheduled source repairs. On one occasion the tank was opened to replace the einzel lens power supply which had failed. The 10 kV unit was replaced with a 15 kV unit. The second time the tank was opened to repair the extractor supply which was damaged by a tank spark. On the next occasion the tank was opened to replace a source canal which had sputtered away. Finally, the tank was opened to replace the discharge bottle which had been coated with aluminum sputtered from the exit canal

  9. Tandem Terminal Ion Source

    International Nuclear Information System (INIS)

    None

    2000-01-01

    OAK-B135 Tandem Terminal Ion Source. The terminal ion source (TIS) was used in several experiments during this reporting period, all for the 7Be(γ)8B experiment. Most of the runs used 1H+ at terminal voltages from 0.3 MV to 1.5 MV. One of the runs used 2H+ at a terminal voltage of 1.4 MV. The other run used 4He+ at a terminal voltage of 1.37 MV. The list of experiments run with the TIS to date is given in table 1 below. The tank was opened four times for unscheduled source repairs. On one occasion the tank was opened to replace the einzel lens power supply which had failed. The 10 kV unit was replaced with a 15 kV unit. The second time the tank was opened to repair the extractor supply which was damaged by a tank spark. On the next occasion the tank was opened to replace a source canal which had sputtered away. Finally, the tank was opened to replace the discharge bottle which had been coated with aluminum sputtered from the exit canal

  10. Measurement of Neutron Energy Spectrum Emitted by Cf-252 Source Using Time-of-Flight Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Cheol Ho; Son, Jaebum; Kim, Tae Hoon; Lee, Sangmin; Kim, Yong-Kyun [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    The techniques proposed to detect neutrons usually require the detection of a secondary recoiling nucleus in a scintillator (or other type of detector) to indicate the rare collision of a neutron with a nucleus. This is the same basic technique, in this case detection of a recoil proton, that was used by Chadwick in the 1930s to discover and identify the neutron and determine its mass. It remains the primary technique used today for the detection of fast neutrons, which typically involves the use of a hydrogen-based organic plastic or liquid scintillator coupled to a photomultiplier tube. The light output from such scintillators is a function of the cross section and nuclear kinematics of the n + nucleus collision. With the exception of deuterated scintillators, the scintillator signal does not necessarily produce a distinct peak in the scintillator spectrum directly related to the incident neutron energy. Instead, neutron time-of-flight (TOF) must often be utilized to determine the neutron energy, which requires the generation of a prompt start signal from the nuclear source emitting the neutrons. This method takes advantage of the high number of prompt gamma rays. The time-of-flight method was used to measure the neutron energy spectrum emitted by the Cf-252 neutron source. A plastic scintillator, which has superior neutron/gamma-ray discrimination ability, was used as the stop-signal detector, and a liquid scintillator was used as the start-signal detector. In the experiment, the neutron and gamma-ray spectra were first measured and discriminated using the TOF method. Second, the neutron energy spectrum was obtained through spectrum analysis. Finally, an equation for the neutron energy spectrum emitted by the Cf-252 source was obtained using Gaussian fitting.
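    Converting a measured flight time over a known path length into neutron energy uses the non-relativistic kinetic-energy relation E = (1/2) m (L/t)^2. The sketch below is a generic illustration with made-up numbers, not the experimental geometry of this study.

```python
NEUTRON_MASS_MEV = 939.565   # neutron rest-mass energy, MeV
C_M_PER_S = 2.998e8          # speed of light, m/s

def neutron_energy_mev(flight_path_m, tof_s):
    """Non-relativistic neutron kinetic energy from time of flight:
    E = (1/2) m v^2 with v = L / t. Adequate for fission-spectrum neutrons
    of a few MeV; illustrative sketch only."""
    beta = (flight_path_m / tof_s) / C_M_PER_S
    return 0.5 * NEUTRON_MASS_MEV * beta ** 2

# e.g. a 2 m flight path and a 100 ns flight time -> roughly 2 MeV
print(neutron_energy_mev(2.0, 100e-9))
```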

  11. An Open Source-Based Real-Time Data Processing Architecture Framework for Manufacturing Sustainability

    Directory of Open Access Journals (Sweden)

    Muhammad Syafrudin

    2017-11-01

    Full Text Available Currently, the manufacturing industry is experiencing a data-driven revolution. There are multiple processes in the manufacturing industry, and they eventually generate a large amount of data. Collecting, analyzing and storing large amounts of data is one of the key elements of the smart manufacturing industry. To ensure that all processes within the manufacturing industry function smoothly, big data processing is needed. Thus, in this study an open source-based real-time data processing (OSRDP) architecture framework is proposed. The OSRDP architecture framework consists of several open source technologies, including Apache Kafka, Apache Storm and NoSQL MongoDB, which are effective and cost-efficient for real-time data processing. Several experiments and an impact analysis for manufacturing sustainability are provided. The results showed that the proposed system is capable of processing massive sensor data efficiently as the number of sensor data streams and devices increases. In addition, data mining based on Random Forest is presented to predict product quality given the sensor data as input. The Random Forest model successfully classifies defect and non-defect products, and achieves high accuracy compared to other data mining algorithms. This study is expected to support management in their decision-making for product quality inspection and to support manufacturing sustainability.
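    The quality-prediction step described above (a Random Forest classifier separating defect from non-defect products on the basis of sensor readings) can be sketched with scikit-learn. The data here are synthetic placeholders; feature sizes and the defect rule are invented for illustration, and the streaming side (Kafka/Storm/MongoDB) is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for streamed sensor readings: rows = products, cols = sensors
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)  # 1 = defect

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```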

  12. Recent innovation in microbial source tracking using bacterial real-time PCR markers in shellfish

    International Nuclear Information System (INIS)

    Mauffret, A.; Mieszkin, S.; Morizur, M.; Alfiansah, Y.; Lozach, S.; Gourmelon, M.

    2013-01-01

    Highlights: ► DNA extraction from intravalvular liquid is promising for microbial source tracking in oysters. ► Host-associated bacterial markers in shellfish digestive tissues were difficult to assess with real-time PCR. ► DNA extracts from shellfish flesh appeared to have low inhibitor levels but low marker levels. ► Protocol transfer from one shellfish species to another does not appear possible. -- Abstract: We assessed the capacity of real-time PCR markers to identify the origin of contamination in shellfish. Oysters, cockles or clams were contaminated with fecal materials, and host-associated markers designed from Bacteroidales or Catellicoccus marimammalium 16S rRNA genes were quantified in extracts from their intravalvular liquid, digestive tissues or shellfish flesh. Extraction of bacterial DNA from the oyster intravalvular liquid with the FastDNA spin kit for soil enabled the selected markers to be quantified in 100% of artificially contaminated samples, and the source of contamination to be identified in 13 out of 38 naturally contaminated batches from European Class B and Class C areas. However, this protocol did not enable the origin of the contamination to be identified in cockle or clam samples. Although results are promising for extracts from intravalvular liquid in oysters, it is unlikely that a single protocol could be the best across all bacterial markers and types of shellfish

  13. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
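
    A minimal sketch of the inter-frame speckle-variance computation mentioned in this record, written here on the CPU with NumPy rather than on a GPU; the array shapes follow the gate size quoted above (n = 4 frames of 512 × 512 pixels), but the data are synthetic placeholders.

        # Sketch: per-pixel variance across N repeated structural frames (the SV image).
        import numpy as np

        def speckle_variance(frames):
            """frames: array of shape (N, rows, cols) of OCT structural intensities."""
            return frames.var(axis=0)

        rng = np.random.default_rng(1)
        frames = rng.random((4, 512, 512))   # N = 4 repeated B-scans, 512 x 512 pixels
        sv_image = speckle_variance(frames)
        print(sv_image.shape)                # (512, 512)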

  14. Non-performing loans (NPLs) in a crisis economy: Long-run equilibrium analysis with a real-time VEC model for Greece (2001-2015)

    Science.gov (United States)

    Konstantakis, Konstantinos N.; Michaelides, Panayotis G.; Vouldis, Angelos T.

    2016-06-01

    As a result of domestic and international factors, the Greek economy faced a severe crisis that is directly comparable only to the Great Recession. A prominent victim of this situation was the country's banking system. This paper attempts to shed light on the determining factors of non-performing loans in the Greek banking sector. The analysis presents empirical evidence from the Greek economy, using aggregate data on a quarterly basis over the period 2001-2015, fully capturing the recent recession. In this work, we use an econometric framework based on a real-time Vector Autoregressive (VAR)-Vector Error Correction (VEC) model, which captures the dynamic interdependencies among the variables used. Consistent with international evidence, the empirical findings show that both macroeconomic and financial factors have a significant impact on non-performing loans in the country. Meanwhile, the deteriorating credit quality feeds back into the economy, leading to a self-reinforcing negative loop.
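
    A minimal sketch of estimating a VEC model on quarterly macro-financial series with statsmodels, in the spirit of the framework described above; the variable names, synthetic data and lag/rank choices are illustrative assumptions, not the Greek dataset or the paper's specification.

        # Sketch: VECM estimation on synthetic cointegrated quarterly series.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.vector_ar.vecm import VECM

        rng = np.random.default_rng(0)
        n = 60                                        # ~15 years of quarterly observations
        common = rng.normal(size=n).cumsum()          # shared stochastic trend
        data = pd.DataFrame({
            "npl_ratio":   common + rng.normal(scale=0.5, size=n),
            "gdp_growth": -common + rng.normal(scale=0.5, size=n),
            "lending_rate": 0.5 * common + rng.normal(scale=0.5, size=n),
        })

        model = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci")
        results = model.fit()
        print(results.alpha)   # adjustment coefficients toward the long-run equilibrium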

  15. Influence of Advanced Injection Timing and Fuel Additive on Combustion, Performance, and Emission Characteristics of a DI Diesel Engine Running on Plastic Pyrolysis Oil

    Directory of Open Access Journals (Sweden)

    Ioannis Kalargaris

    2017-01-01

    Full Text Available This paper presents an investigation of engine optimisation when plastic pyrolysis oil (PPO) is used as the primary fuel of a direct injection diesel engine. Our previous investigation revealed that PPO is a promising fuel; however, the results suggested that control parameters should be optimised in order to obtain better engine performance. In the present work, the injection timing was advanced, and fuel additives were utilised to overcome the issues experienced in the previous work. In addition, the spray characteristics of PPO were investigated in comparison with diesel to provide an in-depth understanding of the engine behaviour. The experimental results on advanced injection timing (AIT) showed reduced brake thermal efficiency and increased carbon monoxide, unburned hydrocarbon, and nitrogen oxide emissions in comparison to standard injection timing. On the other hand, the addition of the fuel additive resulted in higher engine efficiency and lower exhaust emissions. Finally, the spray tests revealed that the spray tip penetration for PPO is faster than for diesel. The results suggest that AIT is not a preferable option, while the fuel additive is a promising solution for the long-term use of PPO in diesel engines.

  16. Back-trajectory modeling of high time-resolution air measurement data to separate nearby sources

    Science.gov (United States)

    Strategies to isolate air pollution contributions from individual sources are of interest as voluntary or regulatory measures are undertaken to reduce air pollution. When different sources are located in close proximity to one another and have similar emissions, separating source emissions ...

  17. 'Outrunning' the running ear

    African Journals Online (AJOL)

    Chantel

    In even the most experienced hands, an adequate physical examination of the ears can be difficult to perform because of common problems such as cerumen blockage of the auditory canal, an uncooperative toddler or an exasperated parent. The most common cause for a running ear in a child is acute purulent otitis.

  18. Interactive, open source, travel time scenario modelling: tools to facilitate participation in health service access analysis.

    Science.gov (United States)

    Fisher, Rohan; Lassa, Jonatan

    2017-04-18

    Modelling travel time to services has become a common public health tool for planning service provision, but the usefulness of these analyses is constrained by the availability of accurate input data and by limitations inherent in the assumptions and parameterisation. This is particularly an issue in the developing world, where access to basic data is limited and travel is often complex and multi-modal. Improving accuracy and relevance in this context requires greater accessibility to, and flexibility in, travel time modelling tools, to facilitate the incorporation of local knowledge and the rapid exploration of multiple travel scenarios. The aim of this work was to develop simple, open source, adaptable, interactive travel time modelling tools to allow greater access to and participation in service access analysis. Described are three interconnected applications designed to reduce some of the barriers to the more widespread use of GIS analysis of service access and to allow for complex spatial and temporal variations in service availability. These applications are an open source GIS tool-kit and two geo-simulation models. The development of these tools was guided by health service issues from a developing world context, but they present a general approach to enabling greater access to and flexibility in health access modelling. The tools demonstrate a method that substantially simplifies the process of conducting travel time assessments and demonstrate a dynamic, interactive approach in an open source GIS format. In addition, this paper provides examples from empirical experience where these tools have informed better policy and planning. Travel and health service access is complex and cannot be reduced to a few static modelled outputs. The approaches described in this paper use a unique set of tools to explore this complexity, promote discussion and build understanding, with the goal of producing better planning outcomes. The accessible, flexible, interactive and
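
    One building block of the travel-time modelling described in this record is a shortest-time query over a multi-modal network; the sketch below illustrates that idea with NetworkX. The nodes, edge times and travel modes are invented for illustration and are not part of the described tool-kit.

        # Sketch: shortest travel time over a small multi-modal network.
        import networkx as nx

        G = nx.Graph()
        # edge attribute "minutes" = travel time for that segment
        G.add_edge("village", "road_junction", minutes=45, mode="walk")
        G.add_edge("road_junction", "district_town", minutes=30, mode="bus")
        G.add_edge("village", "river_port", minutes=20, mode="walk")
        G.add_edge("river_port", "district_town", minutes=90, mode="boat")

        minutes = nx.shortest_path_length(G, "village", "district_town", weight="minutes")
        route = nx.shortest_path(G, "village", "district_town", weight="minutes")
        print(minutes, route)   # 75 ['village', 'road_junction', 'district_town']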

  19. Measurement of Dα sources for particle confinement time determination in TEXTOR

    International Nuclear Information System (INIS)

    Gray, D.S.; Boedo, J.A.; Conn, R.W.; Finken, K.H.; Mank, G.; Pospieszczyk, A.; Samm, U.

    1993-01-01

    An important quantity in the study of tokamak discharges is the global particle confinement time, defined for each ionic species i by τ_pi = N_i/(S_i − dN_i/dt), where N_i is the total population of the species in the plasma and S_i is the source rate (ionization rate) of the species. Of particular significance is the confinement time of the main plasma component, deuterium; here, in most cases of interest, the time derivative is negligible and the confinement time is given by N/S. The deuterium content N can be estimated from the electron content, measured by interferometry, if Z_eff is known. A common method of estimating the fueling rate S is to measure the emission of D_α light from recycling neutrals in the plasma boundary, since collisional-radiative modeling has shown that, for plasma conditions typical in the tokamak edge, the rate of ionization of D atoms and the rate of emission of D_α photons are related by a factor that varies only weakly with electron density and temperature. This paper describes the use of a CCD video camera at TEXTOR for the purpose of spatially resolving the D_α light in order to measure the total emission more accurately, so that τ_p can be determined reliably. (author) 5 refs., 5 figs
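
    A minimal sketch of the confinement-time estimate described above, τ_p = N/S, with the fueling rate S inferred from the D_α photon emission rate through an ionizations-per-photon factor; the value of that factor and the example numbers are illustrative assumptions, not values from the paper.

        # Sketch: tau_p = N / S, with S derived from the D-alpha emission rate.
        def particle_confinement_time(deuterium_content, dalpha_photon_rate,
                                      ionizations_per_photon=20.0):
            """tau_p = N / S, with S = (assumed ionizations per D-alpha photon) * photon rate."""
            source_rate = ionizations_per_photon * dalpha_photon_rate   # ionizations / s
            return deuterium_content / source_rate                      # seconds

        # e.g. N ~ 1e20 deuterons and 1e20 D-alpha photons/s over the boundary -> 0.05 s
        print(particle_confinement_time(1.0e20, 1.0e20))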

  20. Effects of detector-source distance and detector bias voltage variations on time resolution of general purpose plastic scintillation detectors.

    Science.gov (United States)

    Ermis, E E; Celiktas, C

    2012-12-01

    The effects of source-detector distance and detector bias voltage variations on the time resolution of a general purpose plastic scintillation detector such as BC400 were investigated. (133)Ba and (207)Bi calibration sources, with and without a collimator, were used in the present work. Optimum source-detector distance and bias voltage values were determined for the best time resolution using the leading-edge timing method. The effect of collimator usage on time resolution was also investigated. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    Science.gov (United States)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under the fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examining differences in event size and in the frequency content of the seismograms, it often lacks rigorous justification. In practice, a small event may have a finite duration, so the retrieved RSTF is a biased estimate of the large event's STF. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of the smaller events are finite, the RSTFs are a series of narrow, non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply Tikhonov smoothing to obtain a single-pulse RSTF, but its duration depends on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which tends to make the problem rank-deficient. Because the kernel matrix depends on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the
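
    A minimal sketch of time-domain deconvolution with Tikhonov (ridge) regularization, in the spirit of the smoothing step discussed above; the synthetic wavelet, the boxcar "true" RSTF and the regularization weight are illustrative assumptions, not the study's configuration.

        # Sketch: estimate an RSTF x such that small_event * x ~ large_event.
        import numpy as np
        from scipy.linalg import toeplitz

        def tikhonov_deconvolve(small_event, large_event, lam=1e-2):
            """Solve (G^T G + lam I) x = G^T d, where G is the convolution matrix."""
            n = len(large_event)
            col = np.r_[small_event, np.zeros(n - len(small_event))]
            G = toeplitz(col, np.r_[col[0], np.zeros(n - 1)])   # lower-triangular convolution matrix
            lhs = G.T @ G + lam * np.eye(n)
            return np.linalg.solve(lhs, G.T @ large_event)

        # synthetic example: the "large" record is the "small" one convolved with a boxcar
        rng = np.random.default_rng(0)
        small = np.exp(-np.arange(50) / 5.0) * np.sin(np.arange(50))
        rstf_true = np.zeros(200); rstf_true[20:40] = 1.0
        large = np.convolve(small, rstf_true)[:200] + 0.01 * rng.normal(size=200)
        rstf_est = tikhonov_deconvolve(small, large, lam=0.1)
        print(rstf_est[15:45].round(2))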

  2. Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method

    Science.gov (United States)

    Li, Jiahang; Shen, Yang; Zhang, Wei

    2018-02-01

    At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear waves, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatterers. To date, PS-RTM has been implemented only in 2D Cartesian coordinates for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in spherical coordinates, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving the accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures and that this method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.

  3. THE STATISTICS OF RADIO ASTRONOMICAL POLARIMETRY: BRIGHT SOURCES AND HIGH TIME RESOLUTION

    International Nuclear Information System (INIS)

    Van Straten, W.

    2009-01-01

    A four-dimensional statistical description of electromagnetic radiation is developed and applied to the analysis of radio pulsar polarization. The new formalism provides an elementary statistical explanation of the modal-broadening phenomenon in single-pulse observations. It is also used to argue that the degree of polarization of giant pulses has been poorly defined in past studies. Single- and giant-pulse polarimetry typically involves sources with large flux-densities and observations with high time-resolution, factors that necessitate consideration of source-intrinsic noise and small-number statistics. Self-noise is shown to fully explain the excess polarization dispersion previously noted in single-pulse observations of bright pulsars, obviating the need for additional randomly polarized radiation. Rather, these observations are more simply interpreted as an incoherent sum of covariant, orthogonal, partially polarized modes. Based on this premise, the four-dimensional covariance matrix of the Stokes parameters may be used to derive mode-separated pulse profiles without any assumptions about the intrinsic degrees of mode polarization. Finally, utilizing the small-number statistics of the Stokes parameters, it is established that the degree of polarization of an unresolved pulse is fundamentally undefined; therefore, previous claims of highly polarized giant pulses are unsubstantiated.

  4. In-situ hydrogen in metal determination using a minimum neutron source strength and exposure time.

    Science.gov (United States)

    Hatem, M; Agamy, S; Khalil, M Y

    2013-08-01

    Water is frequently present in the environment and is a source of hydrogen that can interact with many materials. Because of its small atomic size, a hydrogen atom can easily diffuse into a host metal, and though the metal may appear unchanged for a time, it will eventually and abruptly lose its strength and ductility. Thus, measuring the hydrogen content in metals is important in many fields, such as the nuclear industry, automotive and aircraft fabrication, and particularly offshore oil and gas fields. It has been demonstrated that nuclear methods for measuring the hydrogen content in metals can achieve sensitivity levels on the order of parts per million. However, nuclear methods have not been used in the field, for two reasons: exposure limitations, and the sophisticated instruments required for good accuracy. In this work, a new method using a low-strength portable neutron source is explored in conjunction with detectors based on plastic nuclear detection films. The in-situ requirements are simplicity of setup, high reliability, minimal exposure dose, and acceptable accuracy at an acceptable cost. A computer model of the experimental setup is used to reproduce the results of a proof-of-concept experiment and to predict the sensitivity levels under optimised experimental conditions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Assessment of In Situ Time Resolved Shock Experiments at Synchrotron Light Sources*

    Science.gov (United States)

    Belak, J.; Ilavsky, J.; Hessler, J. P.

    2005-07-01

    Prior to fielding in situ time-resolved experiments of shock wave loading at the Advanced Photon Source, we have performed feasibility experiments assessing a single photon bunch. Using single- and poly-crystal Al, Ti, V and Cu shocked to incipient spallation on the gas gun, samples were prepared from slices normal to the spall plane, 100-500 microns thick. In addition, single-crystal Al of 500 microns thickness was shocked to incipient spallation and soft recovered using the LLNL e-gun mini-flyer system. The e-gun mini-flyer impacts the sample target, producing a flat-top shock transient lasting a few tens of ns. Here, we present results for imaging, small-angle scattering (SAS), and diffraction. In particular, there is little SAS away from the spall plane and significant SAS at the spall plane, demonstrating the presence of sub-micron voids. * Use of the Advanced Photon Source was supported by the U. S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. W-31-109-Eng-38 and work performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  6. NEAR REAL-TIME DETERMINATION OF EARTHQUAKE SOURCE PARAMETERS FOR TSUNAMI EARLY WARNING FROM GEODETIC OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    S. Manneela

    2016-06-01

    Full Text Available Characterizing the tsunami source immediately after an earthquake is the most critical component of tsunami early warning, as not every earthquake generates a tsunami. After a major undersea earthquake, it is very important to determine whether or not it has actually triggered the deadly wave. Near real-time observations from near-field networks such as strong motion and Global Positioning System (GPS) networks allow rapid determination of the fault geometry. Here we present the complete processing chain of the Indian Tsunami Early Warning System (ITEWS), starting from the acquisition of raw geodetic data, through processing and inversion, to simulating the situation as it would be at the warning center during any major earthquake. We determine the earthquake moment magnitude and generate the centroid moment tensor solution using a novel approach; these are the key elements for tsunami early warning. Though the well-established seismic monitoring network, numerical modeling and dissemination system are currently capable of providing tsunami warnings to most of the countries in and around the Indian Ocean, the study highlights the critical role of geodetic observations in determining the tsunami source for high-quality forecasting.
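
    The record mentions determining the earthquake moment magnitude; the sketch below shows the standard Hanks-Kanamori conversion from a seismic moment M0 to Mw. The example moment is illustrative and is not a value produced by the warning system described above.

        # Sketch: moment magnitude Mw from seismic moment M0 (in N·m).
        import math

        def moment_magnitude(m0_newton_metre):
            """Mw = (2/3) * (log10(M0) - 9.1), with M0 in N·m (Hanks-Kanamori)."""
            return (2.0 / 3.0) * (math.log10(m0_newton_metre) - 9.1)

        print(round(moment_magnitude(1.3e21), 2))   # ~8.0, i.e. a great earthquake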

  7. Null stream analysis of Pulsar Timing Array data: localisation of resolvable gravitational wave sources

    Science.gov (United States)

    Goldstein, Janna; Veitch, John; Sesana, Alberto; Vecchio, Alberto

    2018-04-01

    Super-massive black hole binaries are expected to produce a gravitational wave (GW) signal in the nano-Hertz frequency band which may be detected by pulsar timing arrays (PTAs) in the coming years. The signal is composed of both stochastic and individually resolvable components. Here we develop a generic Bayesian method for the analysis of resolvable sources based on the construction of `null-streams' which cancel the part of the signal held in common for each pulsar (the Earth-term). For an array of N pulsars there are N - 2 independent null-streams that cancel the GW signal from a particular sky location. This method is applied to the localisation of quasi-circular binaries undergoing adiabatic inspiral. We carry out a systematic investigation of the scaling of the localisation accuracy with signal strength and with the number of pulsars in the PTA. Additionally, we find that source sky localisation with the International PTA first data release is vastly superior to what is achieved by its constituent regional PTAs.
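
    A minimal sketch of the null-stream idea: for a given sky location, each pulsar's Earth-term response to the two GW polarizations gives a row of an N × 2 matrix F, and the N − 2 vectors spanning the null space of F^T combine the pulsar data so that the common Earth-term signal cancels. The responses and data below are random placeholders, not a real PTA antenna-pattern computation.

        # Sketch: build N - 2 null streams from an assumed N x 2 response matrix F.
        import numpy as np
        from scipy.linalg import null_space

        rng = np.random.default_rng(0)
        n_pulsars, n_samples = 6, 500
        F = rng.normal(size=(n_pulsars, 2))            # responses to plus/cross polarizations
        h = rng.normal(size=(2, n_samples))            # the two GW polarization time series
        data = F @ h + 0.01 * rng.normal(size=(n_pulsars, n_samples))  # timing residuals

        W = null_space(F.T)                            # shape (n_pulsars, n_pulsars - 2)
        null_streams = W.T @ data                      # N - 2 streams with the Earth-term cancelled
        print(null_streams.shape, np.abs(W.T @ F).max())   # (4, 500), ~0 (signal cancelled)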

  8. Real-Time Processing Library for Open-Source Hardware Biomedical Sensors.

    Science.gov (United States)

    Molina-Cantero, Alberto J; Castro-García, Juan A; Lebrato-Vázquez, Clara; Gómez-González, Isabel M; Merino-Monge, Manuel

    2018-03-29

    Applications involving data acquisition from sensors need samples at a preset frequency rate, the filtering out of noise and/or analysis of certain frequency components. We propose a novel software architecture based on open-source hardware platforms which allows programmers to create data streams from input channels and easily implement filters and frequency analysis objects. The performance of the different classes, in terms of allocated memory and execution time (number of clock cycles), was analyzed on the low-cost Arduino Genuino platform. In addition, 11 people took part in an experiment in which they had to implement several exercises and complete a usability test. Sampling rates under 250 Hz (typical for many biomedical applications) make it feasible to implement filters, sliding windows and Fourier analysis operating in real time. Participants rated software usability at 70.2 out of 100, and the ease of use when implementing several signal processing applications was rated at just over 4.4 out of 5. Participants showed their intention of using this software because it was perceived as useful and very easy to use. The performance of the library showed that it may be appropriate for implementing small biomedical real-time applications or for human movement monitoring, even on a simple open-source hardware device like the Arduino Genuino. The general perception of this library is that it is easy to use and intuitive.
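
    The library described in this record targets Arduino C++; the sketch below is only a language-agnostic illustration, written in Python for brevity, of the core idea of a sliding-window filter applied to fixed-rate samples. The window size and the stand-in data are assumptions, not the library's API.

        # Sketch: sliding-window moving-average filter over incoming samples.
        from collections import deque

        class MovingAverage:
            """Moving-average filter over the last `size` samples."""
            def __init__(self, size):
                self.window = deque(maxlen=size)

            def update(self, sample):
                self.window.append(sample)
                return sum(self.window) / len(self.window)

        filt = MovingAverage(size=8)
        raw = [0, 0, 10, 10, 10, 0, 0, 0, 10, 10]      # stand-in for 250 Hz ADC readings
        smoothed = [filt.update(x) for x in raw]
        print([round(v, 2) for v in smoothed])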

  9. Target lifetime of laser ion source for low charge state ion production

    Energy Technology Data Exchange (ETDEWEB)

    Kanesue,T.; Tamura, J.; Okamura, M.

    2008-06-23

    A laser ion source (LIS) produces ions by irradiating a solid target with pulsed, high-power laser shots. For low charge state ion production, the laser spot diameter on the target can be over several millimeters when using a high-power laser such as a Nd:YAG laser. In this case, damage to the target surface is small, whereas a best-focused laser shot for high charge state ion production (spot diameter of several tens of micrometers) leaves a visible crater. Therefore, for low charge state ion production there is no need to displace the target after each laser shot to expose a fresh surface and stabilize the plasma. We tested the target lifetime using a Nd:YAG laser at a 5 Hz repetition rate; the target temperature and vacuum conditions were also recorded during the experiment. The feasibility of long-term operation was verified.

  10. An open source/real-time atomic force microscope architecture to perform customizable force spectroscopy experiments.

    Science.gov (United States)

    Materassi, Donatello; Baschieri, Paolo; Tiribilli, Bruno; Zuccheri, Giampaolo; Samorì, Bruno

    2009-08-01

    We describe the realization of an atomic force microscope architecture designed to perform customizable experiments in a flexible and automatic way. Novel technological contributions are given by the software implementation platform (RTAI-LINUX), which is free and open source, and, from a functional point of view, by the implementation of hard real-time control algorithms. Some other technical solutions, such as a new way to estimate the optical lever constant, are described as well. The adoption of this architecture provides many degrees of freedom in the device behavior and, furthermore, allows one to obtain a flexible experimental instrument at a relatively low cost. In particular, we show how such a system has been employed to obtain measurements in sophisticated single-molecule force spectroscopy experiments [Fernandez and Li, Science 303, 1674 (2004)]. Experimental results on proteins already studied using the same methodologies are provided in order to show the reliability of the measurement system.

  11. 'Ready to hit the ground running': Alumni and employer accounts of a unique part-time distance learning pre-registration nurse education programme.

    Science.gov (United States)

    Draper, Jan; Beretta, Ruth; Kenward, Linda; McDonagh, Lin; Messenger, Julie; Rounce, Jill

    2014-10-01

    This study explored the impact of The Open University's (OU) preregistration nursing programme on students' employability, career progression and its contribution to developing the nursing workforce across the United Kingdom. Designed for healthcare support workers who are sponsored by their employers, the programme is the only part-time supported open/distance learning programme in the UK leading to registration as a nurse. The international literature reveals that relatively little is known about the impact of previous experience as a healthcare support worker on the experience of transition, employability skills and career progression. To identify alumni and employer views of the perceived impact of the programme on employability, career progression and workforce development. A qualitative design using telephone interviews which were digitally recorded, and transcribed verbatim prior to content analysis to identify recurrent themes. Three geographical areas across the UK. Alumni (n=17) and employers (n=7). Inclusion criterion for alumni was a minimum of two years' post-qualifying experience. Inclusion criteria for employers were those that had responsibility for sponsoring students on the programme and employing them as newly qualified nurses. Four overarching themes were identified: transition, expectations, learning for and in practice, and flexibility. Alumni and employers were of the view that the programme equipped them well to meet the competencies and expectations of being a newly qualified nurse. It provided employers with a flexible route to growing their own workforce and alumni the opportunity to achieve their ambition of becoming a qualified nurse when other more conventional routes would not have been open to them. Some of them had already demonstrated career progression. Generalising results requires caution due to the small, self-selecting sample but findings suggest that a widening participation model of pre-registration nurse education for

  12. Conceptual design of the time-of-flight backscattering spectrometer, MIRACLES, at the European Spallation Source

    International Nuclear Information System (INIS)

    Tsapatsaris, N.; Bordallo, H. N.; Lechner, R. E.; Markó, M.

    2016-01-01

    In this work, we present the conceptual design of the backscattering time-of-flight spectrometer MIRACLES approved for construction at the long-pulse European Spallation Source (ESS). MIRACLES’s unparalleled combination of variable resolution, high flux, extended energy, and momentum transfer (0.2–6 Å−1) ranges will open new avenues for neutron backscattering spectroscopy. Its remarkable flexibility can be attributed to 3 key elements: the long-pulse time structure and low repetition rate of the ESS neutron source, the chopper cascade that tailors the moderator pulse in the primary part of the spectrometer, and the bent Si(111) analyzer crystals arranged in a near-backscattering geometry in the secondary part of the spectrometer. Analytical calculations combined with instrument Monte-Carlo simulations show that the instrument will provide a variable elastic energy resolution, δ(ħ ω), between 2 and 32 μeV, when using a wavelength of λ ≈ 6.267 Å (Si(111)-reflection), with an energy transfer range, ħ ω, centered at the elastic line from −600 to +600 μeV. In addition, when selecting λ ≈ 2.08 Å (i.e., the Si(333)-reflection), δ(ħ ω) can be relaxed to 300 μeV and ħ ω from about 10 meV in energy gain to ca −40 meV in energy loss. Finally, the dynamic wavelength range of MIRACLES, approximately 1.8 Å, can be shifted within the interval of 2–20 Å to allow the measurement of low-energy inelastic excitations.

  13. Conceptual design of the time-of-flight backscattering spectrometer, MIRACLES, at the European Spallation Source

    Energy Technology Data Exchange (ETDEWEB)

    Tsapatsaris, N., E-mail: nikolaos.tsapatsaris@esss.se; Bordallo, H. N., E-mail: bordallo@nbi.ku.dk [Niels Bohr Institute, The University of Copenhagen, Copenhagen 2100 (Denmark); European Spallation Source ERIC, Tunavägen 24, 22100 Lund (Sweden)]; Lechner, R. E., E-mail: ruep.lechner@gmail.com [European Spallation Source ERIC, Tunavägen 24, 22100 Lund (Sweden)]; Markó, M. [Neutron Spectroscopy Department, Wigner Research Centre for Physics, H-1525 Budapest (Hungary)]

    2016-08-15

    In this work, we present the conceptual design of the backscattering time-of-flight spectrometer MIRACLES approved for construction at the long-pulse European Spallation Source (ESS). MIRACLES’s unparalleled combination of variable resolution, high flux, extended energy, and momentum transfer (0.2–6 Å−1) ranges will open new avenues for neutron backscattering spectroscopy. Its remarkable flexibility can be attributed to 3 key elements: the long-pulse time structure and low repetition rate of the ESS neutron source, the chopper cascade that tailors the moderator pulse in the primary part of the spectrometer, and the bent Si(111) analyzer crystals arranged in a near-backscattering geometry in the secondary part of the spectrometer. Analytical calculations combined with instrument Monte-Carlo simulations show that the instrument will provide a variable elastic energy resolution, δ(ħ ω), between 2 and 32 μeV, when using a wavelength of λ ≈ 6.267 Å (Si(111)-reflection), with an energy transfer range, ħ ω, centered at the elastic line from −600 to +600 μeV. In addition, when selecting λ ≈ 2.08 Å (i.e., the Si(333)-reflection), δ(ħ ω) can be relaxed to 300 μeV and ħ ω from about 10 meV in energy gain to ca −40 meV in energy loss. Finally, the dynamic wavelength range of MIRACLES, approximately 1.8 Å, can be shifted within the interval of 2–20 Å to allow the measurement of low-energy inelastic excitations.

  14. Demographics and run timing of adult Lost River (Deltistes luxatus) and shortnose (Chasmistes brevirostris) suckers in Upper Klamath Lake, Oregon, 2012

    Science.gov (United States)

    Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Harris, Alta C.

    2014-01-01

    Data from a long-term capture-recapture program were used to assess the status and dynamics of populations of two long-lived, federally endangered catostomids in Upper Klamath Lake, Oregon. Lost River suckers (Deltistes luxatus) and shortnose suckers (Chasmistes brevirostris) have been captured and tagged with passive integrated transponder (PIT) tags during their spawning migrations in each year since 1995. In addition, beginning in 2005, individuals that had been previously PIT-tagged were re-encountered on remote underwater antennas deployed throughout sucker spawning areas. Captures and remote encounters during spring 2012 were used to describe the spawning migrations in that year and also were incorporated into capture-recapture analyses of population dynamics. Cormack-Jolly-Seber (CJS) open population capture-recapture models were used to estimate annual survival probabilities, and a reverse-time analog of the CJS model was used to estimate recruitment of new individuals into the spawning populations. In addition, data on the size composition of captured fish were examined to provide corroborating evidence of recruitment. Model estimates of survival and recruitment were used to derive estimates of changes in population size over time and to determine the status of the populations in 2011. Separate analyses were conducted for each species and also for each subpopulation of Lost River suckers (LRS). Shortnose suckers (SNS) and one subpopulation of LRS migrate into tributary rivers to spawn, whereas the other LRS subpopulation spawns at groundwater upwelling areas along the eastern shoreline of the lake. In 2012, we captured, tagged, and released 749 LRS at four lakeshore spawning areas and recaptured an additional 969 individuals that had been tagged in previous years. Across all four areas, the remote antennas detected 6,578 individual LRS during the spawning season. Spawning activity peaked in April and most individuals were encountered at Cinder Flats and

  15. The recovery of a time-dependent point source in a linear transport equation: application to surface water pollution

    International Nuclear Information System (INIS)

    Hamdi, Adel

    2009-01-01

    The aim of this paper is to localize the position of a point source and recover the history of its time-dependent intensity function that is both unknown and constitutes the right-hand side of a 1D linear transport equation. Assuming that the source intensity function vanishes before reaching the final control time, we prove that recording the state with respect to the time at two observation points framing the source region leads to the identification of the source position and the recovery of its intensity function in a unique manner. Note that at least one of the two observation points should be strategic. We establish an identification method that determines quasi-explicitly the source position and transforms the task of recovering its intensity function into solving directly a well-conditioned linear system. Some numerical experiments done on a variant of the water pollution BOD model are presented

  16. Running economy and energy cost of running with backpacks.

    Science.gov (United States)

    Scheer, Volker; Cramer, Leoni; Heitkamp, Hans-Christian

    2018-05-02

    Running is a popular recreational activity, and additional weight is often carried in backpacks on longer runs. Our aim was to examine running economy and other physiological parameters while running with 1 kg and 3 kg backpacks at different submaximal running velocities. Ten male recreational runners (age 25 ± 4.2 years, VO2peak 60.5 ± 3.1 ml·kg-1·min-1) performed 5-minute runs on a motorized treadmill at three different submaximal speeds of 70, 80 and 90% of the speed at the anaerobic lactate threshold (LT), without additional weight and carrying 1 kg and 3 kg backpacks. Oxygen consumption, heart rate, lactate and RPE were measured and analysed. Oxygen consumption, energy cost of running and heart rate increased significantly while running with the 3 kg backpack compared with running without additional weight at 80% of the speed at lactate threshold (sLT) (p=0.026, p=0.009 and p=0.003) and at 90% sLT (p<0.001, p=0.001 and p=0.001). Running with the 1 kg backpack showed a significant increase in heart rate at 80% sLT (p=0.008) and significant increases in oxygen consumption and heart rate at 90% sLT (p=0.045 and p=0.007) compared with running without additional weight. At 70% sLT, running economy and cardiovascular effort also increased with weighted backpack running compared with running without additional weight; however, these increases did not reach statistical significance. Running economy deteriorates and cardiovascular effort increases while running with additional backpack weight, especially at higher submaximal running speeds. Backpack weight should therefore be kept to a minimum.

  17. Lead-acid batteries life time prolongation in renewable energy source plants

    Directory of Open Access Journals (Sweden)

    Костянтин Ігорович Ткаченко

    2015-11-01

    Full Text Available Charge controllers with microprocessor control are recognized as almost optimal process control devices for collecting and storing energy in batteries in power systems with renewable energy sources such as solar photovoltaic panels, wind generators and others. The task of the controller is to control the charging process, that is, to charge and discharge the batteries while providing maximum charging speed and keeping the parameters that characterize the state of the battery within certain limits, preventing overcharging, overheating and deep discharge of the batteries. The possibility of archiving data that records the time dependence of the battery parameters is also important. Thus, the concept of a charge controller based on the Texas Instruments MSP430G2553 microcontroller was introduced in this study. The program stored in the microcontroller ROM provides: a charge regime (with a particular algorithm); a control and training cycle followed by charging; and a continuous charge-discharge regime to restore the battery or to study the influence of charge-regime algorithms on the effectiveness of battery recovery. The device can perform its functions without being connected to a personal computer, but this connection makes it possible to observe a number of discharge and charge regime parameters in real time, as well as to read the stored data from the microcontroller flash memory and store them on the PC hard disk for further analysis. A four-stage charging algorithm with a reverse charging regime was proposed by the author, and the correctness of the algorithm was demonstrated.
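
    Purely as an illustration of the kind of state-machine logic a charge-controller firmware implements, the sketch below steps through a generic multi-stage charging regime. The stage names, voltage/current thresholds and transitions are generic assumptions and are not the four-stage algorithm proposed in the paper, which is not specified in the abstract.

        # Sketch: generic multi-stage lead-acid charging state machine (illustrative only).
        def next_stage(stage, battery_voltage, charge_current):
            if stage == "BULK" and battery_voltage >= 14.4:      # volts, illustrative threshold
                return "ABSORPTION"
            if stage == "ABSORPTION" and charge_current <= 0.5:  # amps, illustrative threshold
                return "FLOAT"
            if stage == "FLOAT" and battery_voltage < 12.6:      # battery discharged again
                return "BULK"
            return stage

        stage = "BULK"
        for v, i in [(13.0, 5.0), (14.5, 3.0), (14.4, 0.4), (13.6, 0.1), (12.4, 0.0)]:
            stage = next_stage(stage, v, i)
            print(stage)   # BULK, ABSORPTION, FLOAT, FLOAT, BULK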

  18. Aurorasaurus Database of Real-Time, Soft-Sensor Sourced Aurora Data for Space Weather Research

    Science.gov (United States)

    Kosar, B.; MacDonald, E.; Heavner, M.

    2017-12-01

    Aurorasaurus is an innovative citizen science project focused on two fundamental objectives i.e., collecting real-time, ground-based signals of auroral visibility from citizen scientists (soft-sensors) and incorporating this new type of data into scientific investigations pertaining to aurora. The project has been live since the Fall of 2014, and as of Summer 2017, the database compiled approximately 12,000 observations (5295 direct reports and 6413 verified tweets). In this presentation, we will focus on demonstrating the utility of this robust science quality data for space weather research needs. These data scale with the size of the event and are well-suited to capture the largest, rarest events. Emerging state-of-the-art computational methods based on statistical inference such as machine learning frameworks and data-model integration methods can offer new insights that could potentially lead to better real-time assessment and space weather prediction when citizen science data are combined with traditional sources.

  19. Initial time-resolved particle beam profile measurements at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Yang, B.X.; Lumpkin, A.H.

    1995-01-01

    The commissioning of the 7-GeV Advanced Photon Source (APS) storage ring began in early 1995. Characterization of the stored particle beam properties involved time-resolved transverse and longitudinal profile measurements using optical synchrotron radiation (OSR) monitors. Early results include the observation of the beam on a single turn, measurements of the transverse beam sizes after damping using a 100 μs integration time (σ_x ∼ 150 ± 25 μm, σ_y ∼ 65 ± 25 μm, depending on vertical coupling), and measurement of the bunch length (σ_τ ∼ 25 to 55 ps, depending on the charge per bunch). The results are consistent with specifications and predictions based on the 8.2 nm-rad natural emittance, the calculated lattice parameters, and vertical coupling less than 10%. The novel, single-element focusing mirror for the photon transport line and the dual-sweep streak camera techniques which allow turn-by-turn measurements will also be presented. The latter measurements are believed to be the first of their kind on a storage ring in the USA

  20. HTML 5 up and running

    CERN Document Server

    Pilgrim, Mark

    2010-01-01

    If you don't know about the new features available in HTML5, now's the time to find out. This book provides practical information about how and why the latest version of this markup language will significantly change the way you develop for the Web. HTML5 is still evolving, yet browsers such as Safari, Mozilla, Opera, and Chrome already support many of its features -- and mobile browsers are even farther ahead. HTML5: Up & Running carefully guides you through the important changes in this version with lots of hands-on examples, including markup, graphics, and screenshots. You'll learn how to

  1. Sources of atmospheric aerosols controlling PM10 levels in Heraklion, Crete during winter time

    Science.gov (United States)

    Kalivitis, Nikolaos; Kouvarakis, Giorgos; Stavroulas, Iasonas; Kandilogiannaki, Maria; Vavadaki, Katerina; Mihalopoulos, Nikolaos

    2016-04-01

    High concentrations of Particulate Matter (PM) in the atmosphere have a negative impact on human health. Thresholds for ambient concentrations defined by Directive 2008/50/EC are frequently exceeded even under background conditions in the Mediterranean region, as shown in earlier studies. The sources of atmospheric particles in the urban environment of a medium-sized city in the eastern Mediterranean are studied in the present work in order to better understand the causes and characteristics of exceedances of the daily mean PM10 limit value of 50 μg m-3. Measurements were performed at the atmospheric quality measurement station of the Region of Crete, in the Heraklion city center on Crete island, during the winter/spring periods of 2014-2015 and 2015-2016. Special emphasis was given to the study of the contribution of Black Carbon (BC) to the levels of PM10. Continuous measurements were performed using a beta-attenuation PM10 monitor and a 7-wavelength Aethalometer, with time resolutions of 30 and 5 minutes, respectively. For direct comparison with regional background conditions, concurrent routine measurements at the atmospheric research station of the University of Crete at Finokalia were used as a background reference. Analysis of exceedances of the daily PM10 mass concentration showed that the exceedances were related to long-range transport of Saharan dust rather than to local sources. However, compared with the Finokalia station, there were 20% more exceedances in Heraklion; the addition of transported dust to the local pollution was the reason for the additional exceedance days. Excluding dust events, the PM10 variability depended on the BC abundance; traffic during the morning rush hours and biomass burning for domestic heating in the evening contributed significantly to PM10 levels in Heraklion.

  2. Time dependence of the field energy densities surrounding sources: Application to scalar mesons near point sources and to electromagnetic fields near molecules

    International Nuclear Information System (INIS)

    Persico, F.; Power, E.A.

    1987-01-01

    The time dependence of the dressing-undressing process, i.e., the acquiring or losing by a source of a boson field intensity and hence of a field energy density in its neighborhood, is considered by examining some simple soluble models. First, the loss of the virtual field is followed in time when a point source is suddenly decoupled from a neutral scalar meson field. Second, an initially bare point source acquires a virtual meson cloud as the coupling is switched on. The third example is that of an initially bare molecule interacting with the vacuum of the electromagnetic field to acquire a virtual photon cloud. In all three cases the dressing-undressing is shown to take place within an expanding sphere of radius r = ct centered at the source. At each point in space the energy density tends, for large times, to that of the ground state of the total system. Differences in the time dependence of the dressing between the massive scalar field and the massless electromagnetic field are discussed. The results are also briefly discussed in the light of Feinberg's ideas on the nature of half-dressed states in quantum field theory

  3. Dual Source Time-of-flight Mass Spectrometer and Sample Handling System

    Science.gov (United States)

    Brinckerhoff, W.; Mahaffy, P.; Cornish, T.; Cheng, A.; Gorevan, S.; Niemann, H.; Harpold, D.; Rafeek, S.; Yucht, D.

    We present details of an instrument under development for potential NASA missions to planets and small bodies. The instrument comprises a dual ionization source (laser and electron impact) time-of-flight mass spectrometer (TOF-MS) and a carousel sample handling system for in situ analysis of solid materials acquired by, e.g., a coring drill. This DSTOF instrument could be deployed on a fixed lander or a rover, and has an open design that would accommodate measurements by additional instruments. The sample handling system (SHS) is based on a multi-well carousel, originally designed for Champollion/DS4. Solid samples, in the form of drill cores or as loose chips or fines, are inserted through an access port, sealed in vacuum, and transported around the carousel to a pyrolysis cell and/or directly to the TOF-MS inlet. Samples at the TOF-MS inlet are xy-addressable for the laser or optical microprobe. Cups may be ejected from their holders for analyzing multiple samples or caching them for return. Samples are analyzed with laser desorption and evolved-gas/electron-impact sources. The dual ion source permits studies of elemental, isotopic, and molecular composition of unprepared samples with a single mass spectrometer. Pulsed laser desorption permits the measurement of abundance and isotope ratios of refractory elements, as well as the detection of high-mass organic molecules in solid samples. Evolved gas analysis permits similar measurements of the more volatile species in solids and aerosols. The TOF-MS is based on previous miniature prototypes at JHU/APL that feature high sensitivity and a wide mass range. The laser mode, in which the sample cup is directly below the TOF-MS inlet, permits both ablation and desorption measurements, to cover elemental and molecular species, respectively. In the evolved gas mode, sample cups are raised into a small pyrolysis cell and heated, producing a neutral gas that is electron ionized and pulsed into the TOF-MS. (Any imaging

  4. Source-independent time-domain waveform inversion using convolved wavefields: Application to the encoded multisource waveform inversion

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2011-01-01

    Full waveform inversion requires a good estimation of the source wavelet to improve our chances of a successful inversion. This is especially true for an encoded multisource time-domain implementation, which, conventionally, requires separate

  5. On run-time exploitation of concurrency

    NARCIS (Netherlands)

    Holzenspies, P.K.F.

    2010-01-01

    The `free' speed-up stemming from ever increasing processor speed is over. Performance increase in computer systems can now only be achieved through parallelism. One of the biggest challenges in computer science is how to map applications onto parallel computers. Concurrency, seen as the set of

  6. Ubuntu Up and Running

    CERN Document Server

    Nixon, Robin

    2010-01-01

    Ubuntu for everyone! This popular Linux-based operating system is perfect for people with little technical background. It's simple to install, and easy to use -- with a strong focus on security. Ubuntu: Up and Running shows you the ins and outs of this system with a complete hands-on tour. You'll learn how Ubuntu works, how to quickly configure and maintain Ubuntu 10.04, and how to use this unique operating system for networking, business, and home entertainment. This book includes a DVD with the complete Ubuntu system and several specialized editions -- including the Mythbuntu multimedia re

  7. Causal Analysis of Railway Running Delays

    DEFF Research Database (Denmark)

    Cerreto, Fabrizio; Nielsen, Otto Anker; Harrod, Steven

    Operating delays and network propagation are inherent characteristics of railway operations. These are traditionally reduced by provision of time supplements or “slack” in railway timetables and operating plans. Supplement allocation policies must trade off reliability in the service commitments...... Denmark (the Danish infrastructure manager). The statistical analysis of the data identifies the minimum running times and the scheduled running time supplements and investigates the evolution of train delays along given train paths. An improved allocation of time supplements would result in smaller...

  8. Injecting Artificial Memory Errors Into a Running Computer Program

    Science.gov (United States)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
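
    A minimal sketch of the fault-injection idea behind this record: flipping a randomly chosen bit in a program's data to emulate an SEU. This only stands in for what BITFLIPS does at the process level via Valgrind; the target buffer and the single-flip policy are illustrative assumptions.

        # Sketch: flip one random bit in a buffer to emulate a single-event upset.
        import random

        def inject_seu(buffer: bytearray, rng=random):
            """Flip one random bit in-place and return (byte_index, bit_index)."""
            byte_index = rng.randrange(len(buffer))
            bit_index = rng.randrange(8)
            buffer[byte_index] ^= (1 << bit_index)
            return byte_index, bit_index

        data = bytearray(b"sensor telemetry frame")
        print(inject_seu(data))   # e.g. (7, 3): bit 3 of byte 7 was flipped
        print(data)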

  9. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  10. Underwater running device

    International Nuclear Information System (INIS)

    Kogure, Sumio; Matsuo, Takashiro; Yoshida, Yoji

    1996-01-01

    An underwater running device for an underwater inspection device that inspects the inner surfaces of a reactor or a water vessel has an outer frame and an inner frame; the two are connected slidably by an air cylinder and rotatably by a shaft. The outer frame has four outer frame legs, each equipped with a sucker at the top end. The inner frame has four inner frame legs, each equipped with a sucker at the top end. The outer frame legs and the inner frame legs are each connected to the outer frame and the inner frame by an air cylinder. The outer and inner frame legs can be elevated or lowered (extended or contracted) by the air cylinder. The sucker is connected to a jet pump-type negative pressure generator. The device can run and move by alternately repeating attraction and release of the outer frame legs and the inner frame legs while stably maintaining the posture of the inspection device. (I.N.)

  11. System identification through nonstationary data using Time-Frequency Blind Source Separation

    Science.gov (United States)

    Guo, Yanlin; Kareem, Ahsan

    2016-06-01

    Classical output-only system identification (SI) methods are based on the assumption of stationarity of the system response. However, the measured response of buildings and bridges is usually non-stationary due to strong winds (e.g. typhoons and thunderstorms), earthquakes and time-varying vehicle motions. Accordingly, the response data may have time-varying frequency content and/or overlapping of modal frequencies due to non-stationary colored excitation. This renders traditional methods problematic for modal separation and identification. To address these challenges, a new SI technique based on Time-Frequency Blind Source Separation (TFBSS) is proposed. By selectively utilizing "effective" information in local regions of the time-frequency plane, where only one mode contributes to the energy, the proposed technique can successfully identify mode shapes and recover modal responses from non-stationary response data where traditional SI methods often encounter difficulties. This technique can also handle responses with closely spaced modes, which are a well-known challenge for the identification of large-scale structures. Based on the separated modal responses, frequency and damping can easily be identified using SI methods based on a single degree of freedom (SDOF) system. In addition to the exclusive advantage of handling non-stationary data and closely spaced modes, the proposed technique also benefits from the absence of end effects and low sensitivity to noise in modal separation. The efficacy of the proposed technique is demonstrated in several simulation-based studies and compared to the popular Second-Order Blind Identification (SOBI) scheme. It is also noted that even some non-stationary response data can be analyzed by the stationary method SOBI. This paper also delineates non-stationary cases where SOBI and the proposed scheme perform comparably and highlights cases where the proposed approach is more advantageous. Finally, the performance of the

  12. Soil Monitor: an open source web application for real-time soil sealing monitoring and assessment

    Science.gov (United States)

    Langella, Giuliano; Basile, Angelo; Giannecchini, Simone; Iamarino, Michela; Munafò, Michele; Terribile, Fabio

    2016-04-01

    Soil sealing is one of the most important causes of land degradation and desertification. In Europe, the area of soil covered by impermeable materials has increased by about 80% since the Second World War, while the population has grown by only one third. There is increasing concern at high political levels about the need to limit imperviousness itself and its effects on soil functions. The European Commission promulgated a roadmap (COM(2011) 571) under which net land take should reach zero by 2050, and in 2011 it also published a report providing best practices and guidelines for limiting soil sealing and imperviousness. In this scenario, we developed an open-source Soil Sealing Geospatial Cyber Infrastructure (SS-GCI) named "Soil Monitor". This tool merges a webGIS with parallel geospatial computation in a fast and dynamic fashion in order to provide real-time assessments of soil sealing at high spatial resolution (20 meters and below) over the whole of Italy. Common open-source webGIS packages, such as GeoServer and MapStore, are used to implement both the data management and visualization infrastructures. High-speed geospatial computation is ensured by GPU parallelism using NVIDIA's CUDA (Compute Unified Device Architecture) framework. This kind of parallelism required writing from scratch all the code needed for the geospatial computation behind the soil sealing toolbox. The combination of GPU computing with webGIS infrastructures is relatively novel and required particular attention at the Java-CUDA programming interface. As a result, Soil Monitor can perform very time-consuming calculations (for instance, querying an entire Italian administrative region as the area of interest) in less than one minute. The web application runs in a web browser and nothing must be installed before using it. Potentially everybody can use it, but the main targets are the

  13. Mean platelet volume (MPV) predicts middle distance running performance.

    Science.gov (United States)

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Skafidas, Spyros; Tarperi, Cantor; Guidi, Gian Cesare; Schena, Federico

    2014-01-01

    Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully completed a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as the dependent variable and age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running
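
    For readers who want to reproduce this kind of multivariate analysis on their own data, the snippet below fits an ordinary least-squares model with running time as the dependent variable and the covariates listed above as predictors. It is only an illustration: the file name and column names are hypothetical, and the study's actual statistical procedure may have differed in detail.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data set: one row per runner, with running time and pre-/post-run blood values.
df = pd.read_csv("half_marathon_runners.csv")

model = smf.ols(
    "running_time_min ~ age + C(sex) + blood_lactate + bmi + vo2max"
    " + weekly_training_h + mpv_pre + mpv_post + platelets_pre",
    data=df,
).fit()
print(model.summary())   # inspect which predictors remain significantly associated
```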

  14. Mean platelet volume (MPV) predicts middle distance running performance.

    Directory of Open Access Journals (Sweden)

    Giuseppe Lippi

    Full Text Available Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully completed a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as the dependent variable and age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running

  15. 7 Questions to Ask Open Source Vendors

    Science.gov (United States)

    Raths, David

    2012-01-01

    With their budgets under increasing pressure, many campus IT directors are considering open source projects for the first time. On the face of it, the savings can be significant. Commercial emergency-planning software can cost upward of six figures, for example, whereas the open source Kuali Ready might run as little as $15,000 per year when…

  16. AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.

    Science.gov (United States)

    Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld

    2016-08-01

    There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code under GPL license is available at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
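
    As an illustration of how such a packaged algorithm might be driven from a script via its RESTful interface, the snippet below posts an input file to a locally running AlgoRun container and prints the returned output. The endpoint path, port and payload layout are assumptions made for this sketch; the actual interface is documented at http://algorun.org.

```python
import requests

# Assumed endpoint of a locally running AlgoRun container (port and path are illustrative).
ALGORITHM_URL = "http://localhost:8765/v1/run"

with open("input_data.txt") as fh:
    payload = {"input": fh.read()}

resp = requests.post(ALGORITHM_URL, data=payload, timeout=300)
resp.raise_for_status()
print(resp.text)   # output produced by the packaged algorithm
```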

  17. The 2017 North Korea M6 seismic sequence: moment tensor, source time function, and aftershocks

    Science.gov (United States)

    Ni, S.; Zhan, Z.; Chu, R.; He, X.

    2017-12-01

    On September 3rd, 2017, an M6 seismic event occurred in North Korea, located near previous nuclear test sites. The event features strong P waves and short-period Rayleigh waves in contrast to weak S waves, suggesting a mostly explosive mechanism. We performed a joint inversion for moment tensor and depth with both local and teleseismic waveforms, and find that the event is shallow, with a mostly isotropic yet substantial non-isotropic component. Deconvolution of the seismic waveforms of this event with respect to previous nuclear test events shows clues of complexity in the source time function. The event was followed by smaller earthquakes, beginning as early as 8.5 minutes afterwards and continuing at least into October. These later events occurred in a compact region and show clear S waves, suggesting a double-couple focal mechanism. By analyzing Rayleigh wave spectra, these smaller events are also found to be shallow. Relative locations and differences in the waveforms of the events are used to infer their possible links and generation mechanism.

  18. Measurement system of correlation functions of microwave single photon source in real time

    Science.gov (United States)

    Korenkov, A.; Dmitriev, A.; Astafiev, O.

    2018-02-01

    Several quantum setups, such as quantum key distribution networks [1] and quantum simulators (e.g. boson sampling), rely by design on single photon sources (SPSs). These quantum setups were demonstrated to operate in the optical frequency domain. However, following steady advances in circuit quantum electrodynamics, a proposal has been made recently [2] to demonstrate boson sampling with microwave photons. This in turn requires the development of a reliable microwave SPS. Among its most important characteristics are the first-order and second-order correlation functions g1 and g2. The measurement technique for g1 and g2 is significantly different from that in the optical domain [3],[4] because of the current unavailability of microwave single-photon detectors. In particular, due to the high levels of noise present in the system, a substantial amount of statistics needs to be acquired. This work presents a platform for the measurement of g1 and g2 that processes the incoming data in real time, maximizing the efficiency of data acquisition. The use of field-programmable gate array (FPGA) electronics, common in similar experiments [3] but complex to program, is avoided; instead, the calculations are performed on a standard desktop computer. The platform is used to measure the first-order and second-order correlation functions of the microwave SPS.
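
    A minimal sketch of the kind of correlation estimate involved is given below: it computes a normalized second-order correlation g2(tau) from repeated digitized quadrature records on an ordinary desktop CPU. It is purely illustrative; in a real microwave experiment the amplifier noise contribution must be measured and subtracted, which is omitted here.

```python
import numpy as np

def g2_estimate(records, max_lag):
    """records: (n_repetitions, n_samples) complex I+iQ samples of the source output."""
    fields = records - records.mean(axis=0)          # crude offset removal (assumption)
    power = np.abs(fields) ** 2                      # instantaneous power proxy
    mean_p = power.mean(axis=0)
    g2 = np.empty(max_lag)
    for lag in range(max_lag):
        p0 = power[:, : power.shape[1] - lag]
        p1 = power[:, lag:]
        num = (p0 * p1).mean()                       # <P(t) P(t+tau)> over repetitions and t
        den = (mean_p[: mean_p.size - lag] * mean_p[lag:]).mean()
        g2[lag] = num / den
    return g2
```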

  19. A Comparison between Predicted and Observed Atmospheric States and their Effects on Infrasonic Source Time Function Inversion at Source Physics Experiment 6

    Science.gov (United States)

    Aur, K. A.; Poppeliers, C.; Preston, L. A.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear
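
    The frequency-domain step described above can be illustrated schematically: given observed traces and Green's functions computed through an atmospheric model, a damped least-squares division at each frequency yields the source spectrum, and an inverse FFT gives the source time function. The sketch below is a generic formulation of that idea, not the SPE-6 processing code, and the damping scheme is an assumption.

```python
import numpy as np

def invert_source_time_function(d, g, damping=1e-3):
    """d, g: (n_stations, n_samples) observed traces and matching Green's functions."""
    D = np.fft.rfft(d, axis=1)
    G = np.fft.rfft(g, axis=1)
    num = np.sum(np.conj(G) * D, axis=0)             # stack over stations
    den = np.sum(np.abs(G) ** 2, axis=0)
    S = num / (den + damping * den.max())            # damped least squares per frequency
    return np.fft.irfft(S, n=d.shape[1])             # source time function
```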

  20. The design of the Run Clever randomized trial: running volume, -intensity and running-related injuries.

    Science.gov (United States)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten

    2016-04-23

    Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors, and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume, running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, the methods used to quantify the running performed and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate whether a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years of age, with an average of 1-3 running sessions per week over the past 6 months, are included. Participants are randomized into two intervention groups: running Schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume by increasing the weekly overall volume. Data on the running performed are collected by GPS. Participants who sustain running-related injuries are diagnosed by a diagnostic team of physiotherapists using standardized diagnostic criteria; the members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing

  1. Using Integration and Autonomy to Teach an Elementary Running Unit

    Science.gov (United States)

    Sluder, J. Brandon; Howard-Shaughnessy, Candice

    2015-01-01

    Cardiovascular fitness is an important aspect of overall fitness, health, and wellness, and running can be an excellent lifetime physical activity. One of the most simple and effective means of exercise, running raises heart rate in a short amount of time and can be done with little to no cost for equipment. There are many benefits to running,…

  2. The running pattern and its importance in long-distance running

    Directory of Open Access Journals (Sweden)

    Jarosław Hoffman

    2017-07-01

    Full Text Available The running pattern is individual for each runner, regardless of distance. We can characterize it as the sum of the runner's data (age, height, training time, etc.) and the parameters of his run. Building proper technique should focus first and foremost on work on movement coordination and the runner's power. In training the correct running step we can use tools similar to those used when working on deep (proprioceptive) sensation. The aim of this paper was to define what we can call a running pattern, what its influence is in long-distance running, and the relationship between technique training and the running pattern. The importance of a running pattern in long-distance racing is immense: the more it deviates from the norm, the greater the harm its repetition over a long run will cause to the body. Including training exercises that shape technique is therefore very important and affects the running pattern significantly.

  3. Real-time particle monitor calibration factors and PM2.5 emission factors for multiple indoor sources.

    Science.gov (United States)

    Dacunto, Philip J; Cheng, Kai-Chung; Acevedo-Bolton, Viviana; Jiang, Ruo-Ting; Klepeis, Neil E; Repace, James L; Ott, Wayne R; Hildemann, Lynn M

    2013-08-01

    Indoor sources can greatly contribute to personal exposure to particulate matter less than 2.5 μm in diameter (PM2.5). To accurately assess PM2.5 mass emission factors and concentrations, real-time particle monitors must be calibrated for individual sources. Sixty-six experiments were conducted with a common, real-time laser photometer (TSI SidePak™ Model AM510 Personal Aerosol Monitor) and a filter-based PM2.5 gravimetric sampler to quantify the monitor calibration factors (CFs), and to estimate emission factors for common indoor sources including cigarettes, incense, cooking, candles, and fireplaces. Calibration factors for these indoor sources were all significantly less than the factory-set CF of 1.0, ranging from 0.32 (cigarette smoke) to 0.70 (hamburger). Stick incense had a CF of 0.35, while fireplace emissions ranged from 0.44-0.47. Cooking source CFs ranged from 0.41 (fried bacon) to 0.65-0.70 (fried pork chops, salmon, and hamburger). The CFs of combined sources (e.g., cooking and cigarette emissions mixed) were linear combinations of the CFs of the component sources. The highest PM2.5 emission factors per time period were from burned foods and fireplaces (15-16 mg min(-1)), and the lowest from cooking foods such as pizza and ground beef (0.1-0.2 mg min(-1)).
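
    The practical use of these calibration factors is simple arithmetic, illustrated below with the CF values quoted above: a raw SidePak reading is multiplied by the source-specific CF, and the CF of a mixed source is taken as a linear combination of the component CFs (here weighted by assumed mass fractions; the raw reading is also hypothetical).

```python
# Calibration factors reported above (unitless); readings and mass fractions are illustrative.
cf = {"cigarette": 0.32, "stick_incense": 0.35, "hamburger": 0.70}

raw_reading_mg_m3 = 0.250                              # hypothetical uncorrected SidePak value
calibrated_cig = raw_reading_mg_m3 * cf["cigarette"]   # ~0.080 mg/m3 after calibration

# Two-source mixture: cigarette smoke and hamburger cooking at assumed 60%/40% mass shares.
shares = {"cigarette": 0.6, "hamburger": 0.4}
cf_mixture = sum(shares[s] * cf[s] for s in shares)    # ~0.47

print(calibrated_cig, cf_mixture)
```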

  4. Rankine models for time-dependent gravity spreading of terrestrial source flows over subplanar slopes

    NARCIS (Netherlands)

    Wijermars, R.; Dooley, T.P.; Jackson, M.P.A.; Hudec, M.R.

    2014-01-01

    Geological mass flows extruding from a point source include mud, lava, and salt issued from subsurface reservoirs and ice from surface feeders. The delivery of the material may occur via a salt stock, a volcanic pipe (for magma and mud flows), or a valley glacier (for ice). All these source flows

  5. Barefoot running: biomechanics and implications for running injuries.

    Science.gov (United States)

    Altman, Allison R; Davis, Irene S

    2012-01-01

    Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.

  6. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    Science.gov (United States)

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at the different time scales, and the degree of spatial differentiation of nitrogen loss was in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use type (such as farmland and forest) was always the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm flood processes occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  7. A rapid estimation of tsunami run-up based on finite fault models

    Science.gov (United States)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was derived specifically for zones with a very well defined strike, e.g. Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for speed. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  8. Open source and healthcare in Europe - time to put leading edge ideas into practice.

    Science.gov (United States)

    Murray, Peter J; Wright, Graham; Karopka, Thomas; Betts, Helen; Orel, Andrej

    2009-01-01

    Free/Libre and Open Source Software (FLOSS) is a process of software development, a method of licensing and a philosophy. Although FLOSS plays a significant role in several market areas, the impact in the health care arena is still limited. FLOSS is promoted as one of the most effective means for overcoming fragmentation in the health care sector and providing a basis for more efficient, timely and cost effective health care provision. The 2008 European Federation for Medical Informatics (EFMI) Special Topic Conference (STC) explored a range of current and future issues related to FLOSS in healthcare (FLOSS-HC). In particular, there was a focus on health records, ubiquitous computing, knowledge sharing, and current and future applications. Discussions resulted in a list of main barriers and challenges for use of FLOSS-HC. Based on the outputs of this event, the 2004 Open Steps events and subsequent workshops at OSEHC2009 and Med-e-Tel 2009, a four-step strategy has been proposed for FLOSS-HC: 1) a FLOSS-HC inventory; 2) a FLOSS-HC collaboration platform, use case database and knowledge base; 3) a worldwide FLOSS-HC network; and 4) FLOSS-HC dissemination activities. The workshop will further refine this strategy and elaborate avenues for FLOSS-HC from scientific, business and end-user perspectives. To gain acceptance by different stakeholders in the health care industry, different activities have to be conducted in collaboration. The workshop will focus on the scientific challenges in developing methodologies and criteria to support FLOSS-HC in becoming a viable alternative to commercial and proprietary software development and deployment.

  9. X-ray time and spectral variability as probes of ultraluminous x-ray sources

    Science.gov (United States)

    Pasham, Dheeraj Ranga Reddy

    A long-standing debate in the field of ultraluminous X-ray sources (ULXs: luminosities > 3×10^39 erg s^-1) is whether these objects are powered by stellar-mass black holes (mass range of 3-25 solar masses) undergoing hyper-accretion/emission or whether they host the long-sought class of intermediate-mass black holes (mass range of a few 100-1000 solar masses) accreting material at sub-Eddington rates. We present X-ray timing and energy spectral variability studies of ULXs in order to understand their physical environments and accurately weigh their compact objects. A sample of ULXs exhibit quasi-periodic oscillations (QPOs) with centroid frequencies in the range of 10-200 mHz. The nature of the power density spectra (PDS) of these sources is qualitatively similar to that of stellar-mass black holes when they exhibit the so-called type-C low-frequency QPOs (frequency range of 0.2-15 Hz). However, the crucial difference is that the characteristic frequencies within the PDS of ULXs, viz., the break frequencies and the centroid frequencies of the QPOs, are scaled down by a factor of approximately 10-100 compared to stellar-mass black holes. It has thus been argued that the ULX mHz QPOs are the type-C low-frequency QPO analogs of stellar-mass black holes and that the observed difference in the frequencies (a few × 0.01 Hz compared with a few Hz) is due to the presence of intermediate-mass black holes (M_ULX = (ν_QPO,stellar-mass BH / ν_QPO,ULX) × M_stellar-mass BH, where M and ν_QPO are the mass and the QPO frequency, respectively) within these ULXs. We analyzed all the archival XMM-Newton X-ray data of the ULXs NGC 5408 X-1 and M82 X-1 in order to test the hypothesis that the ULX mHz QPOs are the type-C analogs, by searching for a correlation between the mHz QPO frequency and the energy spectral power-law index, as type-C QPOs show such a dependence. From our multi-epoch timing and spectral analysis of ULXs NGC 5408 X-1 and M82 X-1, we found that the mHz QPOs of these sources vary
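
    The frequency-scaling argument quoted in the abstract amounts to a one-line calculation, shown below with illustrative reference values (the 5 Hz / 10 solar-mass anchor point is an assumption chosen for the example, not a value from the study).

```python
# Mass estimate from the QPO frequency-scaling relation M_ULX = (nu_stellar / nu_ULX) * M_stellar.
nu_stellar_hz, m_stellar_msun = 5.0, 10.0   # assumed type-C QPO frequency and black-hole mass
nu_ulx_hz = 0.05                            # a representative ULX mHz QPO frequency

m_ulx_msun = (nu_stellar_hz / nu_ulx_hz) * m_stellar_msun
print(m_ulx_msun)                           # -> 1000 solar masses under these assumptions
```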

  10. Source Apportionment of the Summer Time Carbonaceous Aerosol at Nordic Rural Background Sites

    Science.gov (United States)

    In the present study, natural and anthropogenic sources of particulate organic carbon (OCp) and elemental carbon (EC) have been quantified based on weekly filter samples of PM10 (particles with aerodynamic diameter < 10 μm) collected at Nordic rural backgro...

  11. Impact of the diagnostic process on the accuracy of source identification and time to antibiotics in septic emergency department patients.

    Science.gov (United States)

    Uittenbogaard, Annemieke J M; de Deckere, Ernie R J T; Sandel, Maro H; Vis, Alice; Houser, Christine M; de Groot, Bas

    2014-06-01

    Timely administration of effective antibiotics is important in sepsis management. Source-targeted antibiotics are believed to be most effective, but source identification could cause time delays. The aims were, first, to describe the accuracy of and time delays in the diagnostic work-up and their association with time to antibiotics in septic emergency department (ED) patients, and second, to assess the fraction of patients in whom source-targeted antibiotics could have been administered solely on the basis of patient history and physical examination. A secondary analysis of a prospective observational study of septic ED patients was carried out. The time to test result availability was associated with time to antibiotics. The accuracy of the suspected source of infection in the ED was assessed. For patients with pneumosepsis, urosepsis and abdominal sepsis, combinations of signs and symptoms were assessed to achieve a maximal positive predictive value for the sepsis source, identifying a subset of patients in whom source-targeted antibiotics could be administered without waiting for diagnostic test results. The time to antibiotics increased by 18 min (95% confidence interval: 12-24) per hour of delay in test result availability (n=323). In 38-79% of patients, antibiotics were administered after additional tests, whereas the ED diagnosis was correct in 68-85% of patients. The maximal positive predictive value of signs and symptoms was 0.87 for patients with pneumosepsis and urosepsis and 0.75 for those with abdominal sepsis. Use of signs and symptoms alone would have led to a correct ED diagnosis in 33% of patients. Diagnostic tests are associated with delayed administration of antibiotics to septic ED patients while increasing the diagnostic accuracy to only 68-85%. In one-third of septic ED patients, the choice of antibiotics could have been accurately determined solely on the basis of patient history and physical examination.

  12. The psychological benefits of recreational running: a field study.

    Science.gov (United States)

    Szabo, Attila; Abrahám, Júlia

    2013-01-01

    Running yields positive changes in affect, but the external validity of controlled studies has received little attention in the literature. In this inquiry, 50 recreational runners completed the Exercise-Induced Feeling Inventory (Gauvin & Rejeski, 1993) before and after a bout of self-planned running on an urban running path. Positive changes were seen in all four measures of affect. Exercise characteristics, including weekly running time, weekly running distance, and running experience, were related to the observed changes in affect. The results revealed that exercise characteristics accounted for only 14-30% of the variance in the recreational runners' affect, in both directions. It is concluded that the psychological benefits of recreational running may be linked to placebo (conditioning and/or expectancy) effects.

  13. Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame

    Science.gov (United States)

    Le Bail, Karine; Gordon, David

    2010-01-01

    Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they represent a good compromise between different criteria, such as statistical stability and sky distribution, as well as having a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that differ in up to 20% of the sources. Improvements in observing, recording and networks are some of the causes, and the CRF shows better stability over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily valid for some sources.
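
    For concreteness, the statistic used to rank positional stability can be sketched as below: the (non-overlapping) Allan variance of a source coordinate time series, evaluated for a set of averaging times. The sketch assumes an evenly sampled series for simplicity, which real VLBI position series generally are not, so it is only an illustration of the idea.

```python
import numpy as np

def allan_variance(x, dt, taus):
    """x: evenly sampled coordinate series; dt: sampling interval; taus: averaging times."""
    result = []
    for tau in taus:
        m = max(1, int(round(tau / dt)))                   # samples per averaging window
        n_bins = len(x) // m
        if n_bins < 2:
            result.append(np.nan)
            continue
        means = x[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        result.append(0.5 * np.mean(np.diff(means) ** 2))  # Allan variance at this tau
    return np.array(result)
```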

  14. Validation and augmentation of Inrix arterial travel time data using independent sources : [research summary].

    Science.gov (United States)

    2015-02-01

    Although the freeway travel time data has been validated extensively in recent years, the quality of arterial travel time data is not well known. This project presents a comprehensive validation scheme for arterial travel time data based on GPS...

  15. Darlington up and running

    International Nuclear Information System (INIS)

    Show, Don

    1993-01-01

    We've built some of the largest and most successful generating stations in the world. Nonetheless, we cannot take our knowledge and understanding of the technology for granted. I do believe, though, that we are getting better, building safer, more efficient plants, and introducing significant improvements to our existing stations. Ontario Hydro is a large and technically rich organization. Even so, we realize that partnerships with others in the industry are absolutely vital. I am thinking particularly of Atomic Energy of Canada Limited. We enjoy a very close relationship with AECL, and their support was never more important than during the N/A Investigations. In recent years, we've strengthened our relationship with AECL considerably. For example, we recently signed an agreement with AECL, making available all of the Darlington 900 MWe design. Much of the cooperation between Ontario Hydro and AECL occurs through the CANDU Engineering Authority and the CANDU Owners Group (COG). These organizations are helping both of us to greatly improve cooperation and efficiency, and they are helping ensure we get the biggest return on our CANDU investments. COG also provides an important information network which links CANDU operators in Canada, here in Korea, Argentina, India, Pakistan and Romania. In many respects, it is helping to develop the strong partnerships needed to support CANDU technology worldwide. We all benefit in the long run from sharing information and resources

  16. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large-scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web-based interface called 'runBrowser' makes these Conditions Metadata available as a Run-based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections is complete, runBrowser produces a human-readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  17. Multi-Sensor Constrained Time Varying Emissions Estimation of Black Carbon: Attributing Urban and Fire Sources Globally

    Science.gov (United States)

    Cohen, J. B.

    2015-12-01

    The short lifetime and heterogeneous distribution of Black Carbon (BC) in the atmosphere lead to complex impacts on radiative forcing, climate and health, and complicate analysis of its atmospheric processing and emissions. Two recent papers have estimated the global and regional emissions of BC using advanced statistical and computational methods. One used a Kalman Filter, including data from AERONET, NOAA and other ground-based sources, to estimate global emissions of 17.8+/-5.6 Tg BC/year (with the increase attributable to East Asia, South Asia, Southeast Asia and Eastern Europe - all regions which have had rapid urban, industrial and economic expansion). The second additionally used remotely sensed measurements from MISR and a variance-maximizing technique, uniquely quantifying fire and urban sources in Southeast Asia, as well as their large year-to-year variability over the past 12 years, with increases ranging from 10% to 150%. These new emissions products, when run through our state-of-the-art modelling system of chemistry, physics, transport, removal, radiation and climate, match 140 ground stations and satellites better in both an absolute and a temporal sense. New work now further includes trace-species measurements from OMI, which are used with the variance-maximizing technique to constrain the types of emissions sources. Furthermore, land-use change and fire estimation products from MODIS are also included, which provide additional constraints on the temporal and spatial nature of the variations of intermittent sources like fires and of new permanent sources like expanded urbanization. This talk will introduce a new, top-down constrained, weekly varying BC emissions dataset, show that it produces a better fit with observations, and draw conclusions about the sources and impacts of urbanization on the one hand and fires on the other. Results specific to Southeast and East Asia will demonstrate inter- and intra-annual variations, such as the function of
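
    The Kalman-filter step mentioned above can be sketched generically: a state vector of regional emission scaling factors is corrected with station observations through a linear observation operator. The update below is the textbook form; the dimensions, operators and error covariances used in the cited work are of course specific to that study.

```python
import numpy as np

def kalman_update(x, P, y, H, R):
    """x: prior emission state; P: its covariance; y: observations; H: observation operator; R: observation error covariance."""
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_post = x + K @ (y - H @ x)                # corrected emissions
    P_post = (np.eye(len(x)) - K @ H) @ P       # updated uncertainty
    return x_post, P_post
```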

  18. Backward running or absence of running from Creutz ratios

    International Nuclear Information System (INIS)

    Giedt, Joel; Weinberg, Evan

    2011-01-01

    We extract the running coupling based on Creutz ratios in SU(2) lattice gauge theory with two Dirac fermions in the adjoint representation. Depending on how the extrapolation to zero fermion mass is performed, either backward running or an absence of running is observed at strong bare coupling. This behavior is consistent with other findings which indicate that this theory has an infrared fixed point.
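
    For reference, the Creutz ratio is built from Wilson loop expectation values W(R,T); a standard definition (normalization conventions in the cited work may differ) is

```latex
\chi(R,T) \;=\; -\ln\!\left[\frac{W(R,T)\,W(R-1,T-1)}{W(R,T-1)\,W(R-1,T)}\right],
```

    which probes the force between static charges at the scale of the loop size and can therefore be used to define a running coupling at that scale.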

  19. Physiological demands of running during long distance runs and triathlons.

    Science.gov (United States)

    Hausswirth, C; Lehénaff, D

    2001-01-01

    The aim of this review article is to identify the main metabolic factors which have an influence on the energy cost of running (Cr) during prolonged exercise runs and triathlons. This article proposes a physiological comparison of these 2 exercises and the relationship between running economy and performance. Many terms are used as the equivalent of 'running economy' such as 'oxygen cost', 'metabolic cost', 'energy cost of running', and 'oxygen consumption'. It has been suggested that these expressions may be defined by the rate of oxygen uptake (VO2) at a steady state (i.e. between 60 to 90% of maximal VO2) at a submaximal running speed. Endurance events such as triathlon or marathon running are known to modify biological constants of athletes and should have an influence on their running efficiency. The Cr appears to contribute to the variation found in distance running performance among runners of homogeneous level. This has been shown to be important in sports performance, especially in events like long distance running. In addition, many factors are known or hypothesised to influence Cr such as environmental conditions, participant specificity, and metabolic modifications (e.g. training status, fatigue). The decrease in running economy during a triathlon and/or a marathon could be largely linked to physiological factors such as the enhancement of core temperature and a lack of fluid balance. Moreover, the increase in circulating free fatty acids and glycerol at the end of these long exercise durations bear witness to the decrease in Cr values. The combination of these factors alters the Cr during exercise and hence could modify the athlete's performance in triathlons or a prolonged run.

  20. The Source of Time-Correlated Photons at 1.064 μm and its Applications

    Directory of Open Access Journals (Sweden)

    Gostev P.P.

    2015-01-01

    Full Text Available A source of time-correlated photon pairs at 1064 nm is described. The source consists of a spontaneous parametric down-conversion (SPDC) generator, pumped by a cw laser operating at 532 nm, together with the measuring and control appliances. One of the main parts of the electronic system is the "time-to-digital converter", designed and built by our group. The system allows correlations of photon pairs to be created and detected with a resolution better than 1 ns. We present the results of quantum key distribution through open air. The key length was about 5000 bits and the accuracy ~0.1%.

  1. Leisure Time in Modern Societies: A New Source of Boredom and Stress?

    Science.gov (United States)

    Haller, Max; Hadler, Markus; Kaup, Gerd

    2013-01-01

    The increase in leisure time over the last century is well documented. We know much less, however, about the quality of the use of leisure time. Quite divergent predictions exist in this regard: Some authors have argued that the new, extensive free time will lead to new forms of time pressure and stress; others have foreseen an expansion of…

  2. Calculating method for confinement time and charge distribution of ions in electron cyclotron resonance sources

    International Nuclear Information System (INIS)

    Dougar-Jabon, V.D.; Umnov, A.M.; Kutner, V.B.

    1996-01-01

    It is common knowledge that the electrostatic well in the core plasma of electron cyclotron resonance sources exerts strict control over the generation of ions in high charge states. This work aims at finding the dependence of the lifetime of ions on their charge states in the core region and at elaborating a numerical model of the ion charge dispersion, not only for the core plasma but for the extracted beams as well. The calculated data are in good agreement with the experimental results on charge distributions and on the magnitudes of the currents of beams extracted from the 14 GHz DECRIS source. copyright 1996 American Institute of Physics

  3. Running a reliable messaging infrastructure for CERN's control system

    International Nuclear Information System (INIS)

    Ehm, F.

    2012-01-01

    The current middleware for CERN's Control System is based on two implementations: the CORBA-based Controls Middle-Ware (CMW) and the Java Message Service (JMS). The JMS service is realized using the open source messaging product ActiveMQ and has become an increasingly vital part of beam operations, as data need to be transported reliably for various areas such as the beam protection system, post mortem analysis, beam commissioning and the alarm system. The current JMS service is made up of 18 brokers running either in clusters or as single nodes. The main service is deployed as a 2-node cluster providing fail-over and load-balancing capabilities for high availability. Non-critical applications running on virtual machines or desktop machines read data via a third broker to decouple their load from the operational main cluster. This scenario was introduced last year, and the statistics show an up-time of 99.998% and an average data serving rate of 1.6 GByte per minute, corresponding to around 150 messages per second. Deploying, running, maintaining and protecting such a messaging infrastructure is not trivial and includes setting up careful monitoring and failure pre-recognition. Naturally, lessons have been learnt and their outcome is very important for the current and future operation of such a service. (author)
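
    As a small illustration of how a non-critical client might talk to such an ActiveMQ-based service, the snippet below uses the STOMP protocol (which ActiveMQ can expose alongside JMS) via the stomp.py library. The broker host, port, credentials and destination name are placeholders, not CERN's actual configuration.

```python
import stomp

# Placeholder connection details; ActiveMQ's STOMP connector listens on port 61613 by default.
conn = stomp.Connection([("jms-broker.example.org", 61613)])
conn.connect("client_user", "client_password", wait=True)

# Publish a small status message to an illustrative topic.
conn.send(destination="/topic/example.beam.status", body='{"mode": "STABLE BEAMS"}')

conn.disconnect()
```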

  4. Investigation of the ion beam of the Titan source by the time-of-flight mass spectrometer

    International Nuclear Information System (INIS)

    Bugaev, A.S.; Gushenets, V.V.; Nikolaev, A.G.; Yushkov, G.Yu.

    2000-01-01

    The Titan ion source generates wide-aperture beams of both gaseous ions and metal ions of various materials. This capability is realized by combining two types of cold-cathode arc discharge in the source discharge system. A vacuum arc, initiated between a cathode made of the ion-forming material and a hollow anode, is used for obtaining the metal ions. A pinch-effect low-pressure arc discharge, ignited on the same hollow anode, is used for obtaining the gaseous ions. The composition of the ion beams generated by the Titan source is studied with a specially designed time-of-flight spectrometer. The spectrometer design and principle of operation are presented. The physical peculiarities of the source operation that influence the ion beam composition are discussed

  5. SU-E-T-459: Impact of Source Position and Traveling Time On HDR Skin Surface Applicator Dosimetry

    International Nuclear Information System (INIS)

    Jeong, J; Barker, C; Zaider, M; Cohen, G

    2015-01-01

    Purpose: A dosimetric discrepancy between measured and treatment planning system (TPS) predicted values, observed during applicator commissioning, was traced to source position uncertainty in the applicator. We quantify the dosimetric impact of this geometric uncertainty, and of the source traveling time inside the applicator, and propose corrections for clinical use. Methods: We measured the dose profiles from the Varian Leipzig-style (horizontal) HDR skin applicator using EBT3 film, a photon diode and an optically stimulated luminescence dosimeter (OSLD), with three different GammaMed HDR afterloaders. The dose profiles and depth dose of each aperture were measured at several depths (up to about 10 mm, depending on the dosimeter). The measured dose profiles were compared with Acuros-calculated profiles in the BrachyVision TPS. To assess the impact of source position, EBT3 film measurements were performed with the applicator in facing-down and facing-up orientations. The dose with and without the source traveling was measured with the diode detector using the HDR timer and the electrometer timer, respectively. Results: Depth doses measured using the three dosimeters were in good agreement, but were consistently higher than the Acuros dose calculations. Measurements with the applicator facing up were significantly lower than those in the facing-down position, with a maximum difference of about 18% at the surface, due to source sag inside the applicator. Based on the inverse-square law, the effective source sag was evaluated to be about 0.5 mm from the planned position. The additional dose from the source traveling was about 2.8% for 30 seconds with a 10 Ci source, decreasing with increased dwell time and decreased source activity. Conclusion: Due to the short source-to-surface distance of the applicator, the small source sag inside the applicator has a significant dosimetric impact, which should be considered before clinical use of the applicator. Investigation of the effect for other applicators

  6. Conservation as an alternative energy source

    Science.gov (United States)

    Allen, D. E.

    1978-01-01

    A speech is given outlining the energy situation in the United States. It warns that the existing energy situation cannot persist and that time is fast running out for continued growth, or even for maintenance of present consumption levels. Energy conservation measures are presented as a way to decrease U.S. energy consumption, which would allow more time to develop alternative sources of energy.

  7. A Universe of Information, One Citation at a Time: How Students Engage with Scholarly Sources

    Science.gov (United States)

    Ludovico, Carrie; Wittig, Carol

    2015-01-01

    We spend hours teaching students where to go to find resources, but how do students really use those scholarly resources--and other resources--in their papers? Inspired by the Citation Project, University of Richmond liaison librarians examined First-Year Seminar papers to see what types of sources students used in their writing, how they…

  8. On using peak amplitude and rise time for AE source characterization

    Indian Academy of Sciences (India)

    The major objective of signal analysis is to study the characteristics of the sources of emissions. ... When AE is used as a non-destructive evaluation tool, this information is extracted using a ... Hence, the frequency response H(f) of the transducer.

  9. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    International Nuclear Information System (INIS)

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-01-01

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binary Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically
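
    To illustrate the flavor of this approach, the sketch below fits a simple AR(2) model by least squares to a (hypothetical) per-cycle binned source series and inspects the moduli of the autoregressive roots, the largest of which serves as a crude proxy for the dominance ratio. This deliberately replaces the paper's ARMA(p, p-1) fitting with a plainer autoregression, so it illustrates the idea rather than reproduces the method.

```python
import numpy as np

cycles = np.loadtxt("binned_source_fraction.txt")    # hypothetical per-cycle source fraction
x = cycles - cycles.mean()

# Least-squares AR(2) fit: x[n] ~ phi1 * x[n-1] + phi2 * x[n-2]
A = np.column_stack([x[1:-1], x[:-2]])
phi1, phi2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Moduli of the roots of z^2 - phi1*z - phi2; the largest is a rough dominance-ratio proxy.
roots = np.roots([1.0, -phi1, -phi2])
print(np.abs(roots))
```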

  10. Application of just-in-time manufacturing techniques in radioactive source in well logging industry

    Directory of Open Access Journals (Sweden)

    Atma Yudha Prawira

    2017-03-01

    Full Text Available Nuclear logging is one of the major areas of logging development. This paper presents an empirical investigation of bringing the drilling and completion of wells from an ill-defined art to a refined science by using radioactive sources to "look and measure" properties such as formation type, formation dip, porosity, fluid type and numerous other important factors. The initial nuclear logging tools recorded the radiation emitted by formations as they were crossed by boreholes. Gamma radiation is used in well logging because it is powerful enough to penetrate the formation and the steel casing. The radioactive source is reusable, so after the engineer finishes the job the source is sent back to the bunker. In this case the inventory level of radioactive sources is relatively high compared with the monthly movement, and the company must spend a large amount on inventory alone. After calculating and averaging the monthly movements in 2014 and 2015, we identified a substantial opportunity to reduce the inventory level and thereby reduce the inventory cost.

  11. A Dozen Years after Open Source's 1998 Birth, It's Time for "OpenTechComm"

    Science.gov (United States)

    Still, Brian

    2010-01-01

    2008 marked the 10-year Anniversary of the Open Source movement, which has had a substantial impact on not only software production and adoption, but also on the sharing and distribution of information. Technical communication as a discipline has taken some advantage of the movement or its derivative software, but this article argues not as much…

  12. Time dependence of energy spectra of brachytherapy sources and its impact on the half and the tenth value layers

    International Nuclear Information System (INIS)

    Yue, Ning J.; Chen Zhe; Hearn, Robert A.; Rodgers, Joseph J.; Nath, Ravinder

    2009-01-01

    Purpose: Several factors, including radionuclide purity, influence the photon energy spectra from sealed brachytherapy sources. The existence of impurities and trace elements in the radioactive materials, as well as in the substrate and encapsulation, may not only alter the spectrum at a given time but also cause the spectrum to change as a function of time. The purpose of this study is to utilize a semiempirical formalism that quantitatively incorporates this time dependence to calculate and evaluate the resulting impacts on shielding requirements for a 103Pd source. Methods: The formalism was used to calculate the NthVL thicknesses in lead for a 103Pd model 200 seed. Prior to 2005, the 103Pd in this source was purified to a level better than 0.006% of the total 103Pd activity, the key trace impurity being 65Zn. Because 65Zn emits higher-energy photons and has a much longer half-life (244 days) than 103Pd, its presence in 103Pd seeds led to a time dependence of the photon spectrum and of other related physical quantities. This study focuses on the time dependence of the NthVL and the analysis of the corresponding shielding requirements. Results: The first HVL and the first TVL in lead steadily increased with time for about 200 days and then reached a plateau. The increases at the plateau were more than 1000 times the corresponding values on the zeroth day. The second and third TVLs in lead reached their plateaus in about 100 and 60 days, respectively, and the increases were about 19 and 2.33 times the corresponding values on the zeroth day, respectively. All the TVLs demonstrated a similar time dependence pattern, with substantial increases and an eventual approach to a plateau. Conclusions: The authors conclude that the time dependence of the emitted photon spectra from brachytherapy sources can introduce substantial variations in the values of the NthVL with time if certain impurities are present
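
    A back-of-envelope calculation shows why the shielding values grow for roughly the first 200 days: the 65Zn impurity (244-day half-life, as quoted above) decays far more slowly than 103Pd itself (about 17 days, a standard value assumed here), so its share of the total activity rises steeply before the spectrum stabilizes. The initial impurity fraction below is taken at the quoted 0.006% limit.

```python
import numpy as np

t_days = np.array([0, 30, 60, 100, 200])              # time after calibration
frac0 = 6e-5                                           # Zn-65 activity fraction at t = 0 (0.006%)

pd103 = np.exp(-np.log(2) * t_days / 17.0)             # relative Pd-103 activity (T1/2 ~ 17 d, assumed)
zn65 = frac0 * np.exp(-np.log(2) * t_days / 244.0)     # relative Zn-65 activity (T1/2 = 244 d)

print(zn65 / pd103)                                    # impurity-to-Pd activity ratio vs. time
```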

  13. 1987 DOE review: First collider run operation

    International Nuclear Information System (INIS)

    Childress, S.; Crawford, J.; Dugan, G.

    1987-05-01

    This review covers the operations of the first run of the 1.8 TeV superconducting Tevatron collider. The papers enclosed cover: PBAR source status, fixed target operation, Tevatron cryogenic reliability and capacity upgrade, Tevatron energy upgrade progress and plans, status of the D0 low beta insertion, 1.8 K and 4.7 K refrigeration for low-β quadrupoles, and progress and plans for the LINAC and booster, including near-term and long-term performance improvements

  14. Real-time Inversion of Tsunami Source from GNSS Ground Deformation Observations and Tide Gauges.

    Science.gov (United States)

    Arcas, D.; Wei, Y.

    2017-12-01

    Over the last decade, the NOAA Center for Tsunami Research (NCTR) has developed an inversion technique to constrain tsunami sources based on the use of Green's functions in combination with data reported by NOAA's Deep-ocean Assessment and Reporting of Tsunamis (DART®) systems. The system has consistently proven effective in providing highly accurate tsunami forecasts of wave amplitude throughout an entire basin. However, improvement is necessary in two critical areas: reduction of data latency for near-field tsunami predictions and reduction of the maintenance cost of the network. Two types of sensors have been proposed as supplements to the existing network of DART® systems: Global Navigation Satellite System (GNSS) stations and coastal tide gauges. The use of GNSS stations to provide autonomous geospatial positioning at specific sites during an earthquake has been proposed in recent years to supplement the DART® array in tsunami source inversion. GNSS technology has the potential to provide substantial contributions in the two critical areas of DART® technology where improvement is most necessary. The present study uses GNSS ground displacement observations of the 2011 Tohoku-Oki earthquake in combination with NCTR's operational database of Green's functions to produce a rapid estimate of the tsunami source based on GNSS observations alone. The solution is then compared with that obtained via DART® data inversion, and the difficulties in obtaining an accurate GNSS-based solution are underlined. The study also identifies the set of conditions required for source inversion from coastal tide gauges, using the degree of nonlinearity of the signal as the primary criterion. We then proceed to identify the conditions and scenarios under which a particular gauge could be used to invert a tsunami source.
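
    The Green's-function inversion mentioned above is, at its core, a linear least-squares problem: the observed signals are modeled as a weighted sum of precomputed unit-source responses, and the fitted weights constitute the inverted source. The Python sketch below shows that generic idea with synthetic numbers; it is not NCTR's operational code, and the matrix G, the weights, and the noise level are all invented for illustration.

```python
import numpy as np

# Toy Green's-function source inversion. Columns of G are precomputed unit-source
# responses sampled at the observation points/times; d is the observed data vector.
rng = np.random.default_rng(0)
n_obs, n_unit_sources = 200, 5
G = rng.normal(size=(n_obs, n_unit_sources))          # stand-in for unit-source Green's functions
true_weights = np.array([0.0, 1.2, 0.0, 0.4, 0.0])    # hypothetical slip on each unit source
d = G @ true_weights + 0.05 * rng.normal(size=n_obs)  # synthetic "observations" with noise

# Least-squares estimate of the unit-source weights (the inverted source).
weights, residuals, rank, _ = np.linalg.lstsq(G, d, rcond=None)
print("recovered weights:", np.round(weights, 3))
```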

  15. ALICE HLT Run 2 performance overview.

    Science.gov (United States)

    Krzewicki, Mikolaj; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    For LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using the online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline, and the HLT framework was extended to support that. The performance of this scheme is important for Run 3-related developments. An additional data transport approach was developed using the ZeroMQ library, which at the same time forms a test bed for the new data-flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop, augmenting the existing, graph-oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster runs continuously, while the production cluster contributes resources opportunistically during periods of LHC inactivity.
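
    The ZeroMQ-based transport mentioned above follows standard messaging patterns, and a calibration feedback loop maps naturally onto publish/subscribe. The sketch below is a minimal pyzmq illustration of that pattern, assuming an in-process endpoint, a made-up topic name, and a made-up payload; it does not reproduce the actual ALICE HLT framework, its components, or its message formats.

```python
import time
import zmq

# Minimal PUB/SUB sketch of a calibration feedback loop over ZeroMQ (illustrative only).
ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)          # e.g. the component producing calibration updates
pub.bind("inproc://calibration")

sub = ctx.socket(zmq.SUB)          # e.g. a reconstruction component consuming updates
sub.connect("inproc://calibration")
sub.setsockopt_string(zmq.SUBSCRIBE, "tpc-calib")

time.sleep(0.1)                    # give the subscription time to propagate (slow-joiner)

pub.send_string("tpc-calib drift-velocity=2.58 cm/us")  # hypothetical payload
print("received:", sub.recv_string())
```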

  16. Effective action and brane running

    International Nuclear Information System (INIS)

    Brevik, Iver; Ghoroku, Kazuo; Yahiro, Masanobu

    2004-01-01

    We address the renormalized effective action for a Randall-Sundrum brane running in the 5D bulk space. The running behavior of the brane action is obtained by shifting the brane position without changing the background and fluctuations. After an appropriate renormalization, we obtain an effective, low-energy brane-world action in which the effective 4D Planck mass is independent of the running position. We discuss some implications of this effective action.

  17. Asymmetric information and bank runs

    OpenAIRE

    Gu, Chao

    2007-01-01

    It is known that sunspots can trigger panic-based bank runs and that the optimal banking contract can tolerate panic-based runs. The existing literature assumes that these sunspots are based on a publicly observed extrinsic randomizing device. In this paper, I extend the analysis of panic-based runs to include an asymmetric-information, extrinsic randomizing device. Depositors observe different, but correlated, signals on the stability of the bank. I find that if the signals that depositors o...

  18. Validation and augmentation of Inrix arterial travel time data using independent sources.

    Science.gov (United States)

    2015-02-01

    Travel time data is a key input to Intelligent Transportation Systems (ITS) applications. Advancement in vehicle : tracking and identification technologies and proliferation of location-aware and connected devices has made network-wide travel time da...

  19. Making consultations run smoothly

    DEFF Research Database (Denmark)

    Jespersen, Astrid Pernille; Elgaard Jensen, Torben

    2012-01-01

    This article investigates the skilful use of time in general practice consultations. It argues that consultation work involves social and material interactions, which are only partially conceptualized in existing medical practice literatures. As an alternative, this article employs ideas from the......-inspired analysis opens up a wider discussion of time as a complex resource and problem in general practice....

  20. Modeling Soak-Time Distribution of Trips for Mobile Source Emissions Forecasting: Techniques and Applications

    Science.gov (United States)

    2000-08-01

    The soak time of a vehicle trip start is defined as the duration during which the vehicle's engine is not operating immediately preceding a successful vehicle start. The temporal distribution of soak times in an area is an important determinant of ...
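
    Given the definition above, soak times fall out directly from a vehicle's trip log: each trip start's soak time is the gap since the previous engine-off. The short Python sketch below computes those gaps for a hypothetical one-day trip log; the timestamps are invented for illustration, and binning the resulting values would yield the soak-time distribution discussed in the report.

```python
from datetime import datetime

# Hypothetical one-day trip log as (engine-on, engine-off) timestamps.
trips = [
    (datetime(2000, 8, 1, 7, 30), datetime(2000, 8, 1, 8, 5)),
    (datetime(2000, 8, 1, 12, 10), datetime(2000, 8, 1, 12, 25)),
    (datetime(2000, 8, 1, 17, 45), datetime(2000, 8, 1, 18, 20)),
]

# Soak time for each trip start = elapsed time since the previous trip's engine-off.
soak_minutes = [
    (start - prev_end).total_seconds() / 60.0
    for (_, prev_end), (start, _) in zip(trips, trips[1:])
]
print(soak_minutes)  # [245.0, 320.0] minutes for the two later starts
```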