WorldWideScience

Sample records for method run times

  1. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  2. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types; methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice.

  3. A free-running, time-based readout method for particle detectors

    International Nuclear Information System (INIS)

    Goerres, A; Ritman, J; Stockmanns, T; Bugalho, R; Francesco, A Di; Gastón, C; Gonçalves, F; Rolo, M D; Silva, J C da; Silva, R; Varela, J; Veckalns, V; Mazza, G; Mignone, M; Pietro, V Di; Riccardi, A; Rivetti, A; Wheadon, R

    2014-01-01

    For the EndoTOFPET-US experiment, the TOFPET ASIC has been developed as a front-end chip to read out data from silicon photomultipliers (SiPM) [1]. It introduces time-of-flight information into the measurement of a PET scanner and hence reduces radiation exposure of the patient [2]. The chip is designed to work with a high event rate up to 100 kHz and a time resolution of 50 ps LSB. Using two threshold levels, it can measure the leading edge of the event pulse precisely while successfully suppressing dark counts from the SiPM. This also enables a time-over-threshold determination, leading to a charge measurement of the signal's pulse. The same time-based concept is chosen for the PASTA chip used in the PANDA experiment. This high-energy particle detector contains sub-systems for specific measurement goals. The innermost of these is the Micro Vertex Detector, a silicon-based tracking system. The PASTA chip's approach is much like the TOFPET ASIC, with some differences. The most significant ones are a changed amplifying part for different input signals as well as protection against radiation effects in the high-radiation environment. Apart from that, the simple and general concept combined with a small area and low power consumption supports the choice of this approach.

  4. A free-running, time-based readout method for particle detectors

    Science.gov (United States)

    Goerres, A.; Bugalho, R.; Di Francesco, A.; Gastón, C.; Gonçalves, F.; Mazza, G.; Mignone, M.; Di Pietro, V.; Riccardi, A.; Ritman, J.; Rivetti, A.; Rolo, M. D.; da Silva, J. C.; Silva, R.; Stockmanns, T.; Varela, J.; Veckalns, V.; Wheadon, R.

    2014-03-01

    For the EndoTOFPET-US experiment, the TOFPET ASIC has been developed as a front-end chip to read out data from silicon photomultipliers (SiPM) [1]. It introduces time-of-flight information into the measurement of a PET scanner and hence reduces radiation exposure of the patient [2]. The chip is designed to work with a high event rate up to 100 kHz and a time resolution of 50 ps LSB. Using two threshold levels, it can measure the leading edge of the event pulse precisely while successfully suppressing dark counts from the SiPM. This also enables a time-over-threshold determination, leading to a charge measurement of the signal's pulse. The same time-based concept is chosen for the PASTA chip used in the PANDA experiment. This high-energy particle detector contains sub-systems for specific measurement goals. The innermost of these is the Micro Vertex Detector, a silicon-based tracking system. The PASTA chip's approach is much like the TOFPET ASIC, with some differences. The most significant ones are a changed amplifying part for different input signals as well as protection against radiation effects in the high-radiation environment. Apart from that, the simple and general concept combined with a small area and low power consumption supports the choice of this approach.

  5. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components at sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from several perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  6. Transforming parts of a differential equations system to difference equations as a method for run-time savings in NONMEM.

    Science.gov (United States)

    Petersson, K J F; Friberg, L E; Karlsson, M O

    2010-10-01

    Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive and time-consuming, and can account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equations system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equations system at given time intervals, outside of the differential equations solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equations solver, were less than 12% for all fixed-effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
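
    The core idea, evaluating a slowly varying part of the system as a difference equation at fixed intervals instead of handing the whole system to the ODE solver, can be illustrated outside NONMEM. The sketch below is a minimal Python illustration of the general principle under assumed parameters (a first-order absorption model with invented rate constants and dose); it is not the authors' NONMEM implementation.

        # Minimal sketch (assumed model and parameters): a first-order absorption,
        # one-compartment model.  The depot amount has a closed-form solution, so it
        # can be advanced as a difference equation between output times instead of
        # being integrated together with the central compartment.
        import numpy as np
        from scipy.integrate import solve_ivp

        ka, ke, dose = 1.2, 0.3, 100.0          # absorption/elimination rates, dose (assumed)

        def full_ode(t, y):
            """Both states handled by the ODE solver."""
            depot, central = y
            return [-ka * depot, ka * depot - ke * central]

        def hybrid(t_grid):
            """Depot advanced analytically per interval; only central is integrated."""
            depot, central = dose, 0.0
            out = [central]
            for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
                # difference-equation update of the depot (exact for first-order decay)
                depot_next = depot * np.exp(-ka * (t1 - t0))
                # integrate only the central compartment, using the analytical depot profile
                sol = solve_ivp(lambda t, y: [ka * depot * np.exp(-ka * (t - t0)) - ke * y[0]],
                                (t0, t1), [central], rtol=1e-8, atol=1e-10)
                central = sol.y[0, -1]
                depot = depot_next
                out.append(central)
            return np.array(out)

        t_grid = np.linspace(0.0, 24.0, 25)
        ref = solve_ivp(full_ode, (0, 24), [dose, 0.0], t_eval=t_grid, rtol=1e-8, atol=1e-10)
        print("max abs difference:", np.max(np.abs(ref.y[1] - hybrid(t_grid))))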

  7. LORD-Q: a long-run real-time PCR-based DNA-damage quantification method for nuclear and mitochondrial genome analysis

    Science.gov (United States)

    Lehle, Simon; Hildebrand, Dominic G.; Merz, Britta; Malak, Peter N.; Becker, Michael S.; Schmezer, Peter; Essmann, Frank; Schulze-Osthoff, Klaus; Rothfuss, Oliver

    2014-01-01

    DNA damage is tightly associated with various biological and pathological processes, such as aging and tumorigenesis. Although detection of DNA damage is attracting increasing attention, only a limited number of methods are available to quantify DNA lesions, and these techniques are tedious or only detect global DNA damage. In this study, we present a high-sensitivity long-run real-time PCR technique for DNA-damage quantification (LORD-Q) in both the mitochondrial and nuclear genome. While most conventional methods are of low sensitivity or restricted to abundant mitochondrial DNA samples, we established a protocol that enables the accurate sequence-specific quantification of DNA damage in >3-kb probes for any mitochondrial or nuclear DNA sequence. In order to validate the sensitivity of this method, we compared LORD-Q with a previously published qPCR-based method and the standard single-cell gel electrophoresis assay, demonstrating a superior performance of LORD-Q. As an example, we monitored induction of DNA damage and repair processes in human induced pluripotent stem cells and isogenic fibroblasts. Our results suggest that LORD-Q provides a sequence-specific and precise method to quantify DNA damage, thereby allowing the high-throughput assessment of DNA repair, genotoxicity screening and various other processes for a wide range of life science applications. PMID:24371283
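
    Long-range qPCR damage assays commonly convert the relative amplification of a damaged sample into a lesion frequency using a Poisson model (lesions block the polymerase, so only lesion-free templates amplify). The sketch below shows that generic calculation; the Ct values, amplification efficiency and amplicon length are invented, and the formula is assumed to be analogous to, not necessarily identical with, the exact LORD-Q computation.

        # Generic Poisson-based lesion-frequency calculation for long-run qPCR assays.
        # All numbers below are made up for illustration.
        import math

        def lesions_per_10kb(ct_treated, ct_control, amplicon_bp, efficiency=2.0):
            """Relative amplification of the damaged sample vs. control, converted to a
            lesion frequency assuming lesions are Poisson-distributed and block the polymerase."""
            rel_amplification = efficiency ** (ct_control - ct_treated)   # <= 1 if damaged
            lesions_per_amplicon = -math.log(rel_amplification)           # Poisson zero class
            return lesions_per_amplicon * 10_000 / amplicon_bp

        # Example: a 3.2 kb amplicon whose Ct shifts by +0.8 cycles after treatment.
        print(round(lesions_per_10kb(ct_treated=24.8, ct_control=24.0, amplicon_bp=3200), 3))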

  8. Implementing Run-Time Evaluation of Distributed Timing Constraints in a Real-Time Environment

    DEFF Research Database (Denmark)

    Kristensen, C. H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing run-time evaluation of timing constraints in distributed real-time environments.

  9. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high-quality compile-time analysis with low-cost run-time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler's automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run-time testing, analysis of control flow, or some combination of the two. We present a new compile-time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed not only to improve the results of compile-time parallelization, but also to produce low-cost, directed run-time tests that allow the system to defer binding of parallelization until run time when safety cannot be proven statically. We call this approach predicated array data-flow analysis. We augment array data-flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data-flow values. Predicated array data-flow analysis allows the compiler to derive "optimistic" data-flow values guarded by predicates; these predicates can be used to derive a run-time test guaranteeing the safety of parallelization.
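
    The general pattern of deferring the parallelization decision to run time can be sketched very simply: a cheap run-time test guards a parallel execution path, with a serial fallback when the test fails. The Python sketch below uses an invented overlap check on accessed index ranges; it only illustrates the idea and is not SUIF's predicated array data-flow analysis.

        # A cheap run-time test (disjointness of read and written index ranges) guarding
        # a parallel path, with a serial fallback.  Illustration only.
        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        def loop_body(a, lo, hi, offset):
            # a[i] = a[i + offset] * 2  for i in [lo, hi)
            a[lo:hi] = a[lo + offset:hi + offset] * 2

        def run_loop(a, n, offset, workers=4):
            reads = (offset, n + offset)          # index range read
            writes = (0, n)                       # index range written
            independent = reads[0] >= writes[1] or reads[1] <= writes[0]
            if independent:                       # run-time test passed: safe to parallelize
                chunks = np.array_split(np.arange(n), workers)
                with ThreadPoolExecutor(max_workers=workers) as pool:
                    for c in chunks:
                        pool.submit(loop_body, a, int(c[0]), int(c[-1]) + 1, offset)
            else:                                 # possible cross-iteration dependence
                for i in range(n):
                    a[i] = a[i + offset] * 2
            return a

        a = np.arange(20.0)
        print(run_loop(a.copy(), n=10, offset=10))   # disjoint ranges -> parallel path
        print(run_loop(a.copy(), n=10, offset=1))    # overlapping     -> serial path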

  10. 16 CFR 803.10 - Running of time.

    Science.gov (United States)

    2010-01-01

    16 CFR 803.10 - Running of time. Commercial Practices; FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976; TRANSMITTAL RULES; § 803.10 Running of time. (a...

  11. Aspects for Run-time Component Integration

    DEFF Research Database (Denmark)

    Truyen, Eddy; Jørgensen, Bo Nørregaard; Joosen, Wouter

    2000-01-01

    Component framework technology has become the cornerstone of building a family of systems and applications. A component framework defines a generic architecture into which specialized components can be plugged. As such, the component framework leverages the glue that connects the different inserted components... The paper addresses how to dynamically integrate into the architecture of middleware systems new services that support non-functional aspects such as security, transactions, and real-time.

  12. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    Science.gov (United States)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We will show that our method performs at least as well as any static scheduling method. It also reduces the total amount of dynamic pre-emptions compared with run-time methods like deadline-monotonic scheduling.
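
    A compile-time guarantee of the kind mentioned here is classically obtained with fixed-priority response-time analysis. The sketch below implements the standard Joseph/Pandya recurrence on an invented task set; it resembles, but is not claimed to be, the exact analysis used by the authors.

        # Response-time analysis for fixed-priority preemptive tasks (classical recurrence).
        # Task set is invented for illustration.
        import math

        def response_time(tasks):
            """tasks: list of (C, T) = (worst-case execution time, period), ordered from
            highest to lowest priority.  Returns response times, or None if a task can
            miss its implicit deadline D = T."""
            result = []
            for i, (ci, ti) in enumerate(tasks):
                r = ci
                while True:
                    interference = sum(math.ceil(r / tj) * cj for cj, tj in tasks[:i])
                    r_next = ci + interference
                    if r_next > ti:          # deadline miss -> not schedulable
                        return None
                    if r_next == r:
                        break
                    r = r_next
                result.append(r)
            return result

        tasks = [(1, 4), (2, 8), (4, 16)]    # (C, T) per task, highest priority first
        print(response_time(tasks))          # [1, 3, 8] -> all within their periods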

  13. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

    This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes...

  14. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    Science.gov (United States)

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
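
    An empirical derivation of this kind can be reproduced by counting the algorithm's basic operations over several input sizes and fitting a growth curve. The sketch below is an illustrative Python version (sizes and fitting choices are ours, not the article's): for bubble sort the comparison count is exactly n(n-1)/2, so the fitted quadratic coefficient comes out near 0.5.

        # Count bubble-sort comparisons for several input sizes and fit a quadratic.
        import random
        import numpy as np

        def bubble_sort_ops(a):
            a, ops = list(a), 0
            n = len(a)
            for i in range(n - 1):
                for j in range(n - 1 - i):
                    ops += 1                      # one comparison
                    if a[j] > a[j + 1]:
                        a[j], a[j + 1] = a[j + 1], a[j]
            return ops

        sizes = [50, 100, 200, 400, 800]
        counts = [bubble_sort_ops(random.sample(range(10 * n), n)) for n in sizes]

        coeffs = np.polyfit(sizes, counts, deg=2)   # ops ~ c2*n^2 + c1*n + c0
        print("fitted c2 =", round(coeffs[0], 3))   # close to 0.5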

  15. Optimal Infinite Runs in One-Clock Priced Timed Automata

    DEFF Research Database (Denmark)

    David, Alexandre; Ejsing-Duun, Daniel; Fontani, Lisa

    We address the problem of finding an infinite run with the optimal cost-time ratio in a one-clock priced timed automaton and provide an algorithmic solution. Through refinements of the quotient graph obtained by strong time-abstracting bisimulation partitioning, we construct a graph with time...

  16. Accuracy versus run time in an adiabatic quantum search

    International Nuclear Information System (INIS)

    Rezakhani, A. T.; Pimachev, A. K.; Lidar, D. A.

    2010-01-01

    Adiabatic quantum algorithms are characterized by their run time and accuracy. The relation between the two is essential for quantifying adiabatic algorithmic performance, yet is often poorly understood. We study the dynamics of a continuous-time adiabatic quantum search algorithm and find rigorous results relating the accuracy and the run time. Proceeding with estimates, we show that under fairly general circumstances the adiabatic algorithmic error exhibits a behavior with two discernible regimes: the error decreases exponentially for short times and then decreases polynomially for longer times. We show that the well-known quadratic speedup over classical search is associated only with the exponential error regime. We illustrate the results through examples of evolution paths derived by minimization of the adiabatic error. We also discuss specific strategies for controlling the adiabatic error and run time.

  17. Stroller running: Energetic and kinematic changes across pushing methods.

    Science.gov (United States)

    Alcantara, Ryan S; Wall-Scheffler, Cara M

    2017-01-01

    Running with a stroller provides an opportunity for parents to exercise near their child and counteract health declines experienced during early parenthood. Understanding the biomechanical and physiological changes that occur during stroller running is needed to evaluate its health impact, yet the effects of stroller running have not been clearly presented. Here, three commonly used stroller pushing methods were investigated to detect potential changes in energetic cost and lower-limb kinematics. Sixteen individuals (M/F: 10/6) ran at self-selected speeds for 800 m under three stroller conditions (2-Hands, 1-Hand, and Push/Chase) and an independent running control. A significant decrease in speed (p = 0.001) and stride length was observed when running with a stroller, and pushing method had a significant effect on speed (p = 0.001) and stride length, indicating that pushing technique influences stroller running speed and kinematics. These findings suggest specific fitness effects may be achieved through the implementation of different pushing methods.

  18. Combining monitoring with run-time assertion checking

    NARCIS (Netherlands)

    Gouw, Stijn de

    2013-01-01

    We develop a new technique for Run-time Checking for two object-oriented languages: Java and the Abstract Behavioral Specification language ABS. In object-oriented languages, objects communicate by sending each other messages. Assuming encapsulation, the behavior of objects is completely...

  19. LHCb's Real-Time Alignment in Run 2

    CERN Multimedia

    Batozskaya, Varvara

    2015-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure will improve the quality of the online alignment. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configur...

  20. Visualization of synchronization of the uterine contraction signals: running cross-correlation and wavelet running cross-correlation methods.

    Science.gov (United States)

    Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr

    2006-01-01

    In physiological research, we often study multivariate data sets containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and the wavelet cross-correlation methods to assess synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions. As a result, the running cross-correlation function was obtained. The propagation% parameter assessed from this function allows quantitative description of synchronization in bivariate time series. In general, the uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of the time series at various frequencies (scales). To show the changes of the propagation% parameter along scales, a wavelet running cross-correlation was used. First, the continuous wavelet transforms of the uterine contraction signals were obtained, and afterwards a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
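
    The running cross-correlation itself is straightforward to sketch: compute the correlation between the two signals within a moving window and take the lag of the maximum as the local time delay. The Python sketch below uses synthetic signals and an invented window length, and it does not reproduce the paper's propagation% parameter.

        # Running cross-correlation between two simultaneously recorded signals:
        # for each window position, find the non-negative lag that maximises the
        # Pearson correlation.  Signals and parameters are synthetic.
        import numpy as np

        def running_xcorr_delay(x, y, win, max_lag):
            delays = []
            for start in range(len(x) - win - max_lag):
                xw = x[start:start + win]
                rs = [np.corrcoef(xw, y[start + lag:start + lag + win])[0, 1]
                      for lag in range(max_lag + 1)]
                delays.append(int(np.argmax(rs)))
            return np.array(delays)

        # Synthetic "contraction-like" signals: y is x delayed by 15 samples plus noise.
        rng = np.random.default_rng(0)
        t = np.arange(1200)
        x = np.sin(2 * np.pi * t / 200) + 0.1 * rng.standard_normal(t.size)
        y = np.roll(x, 15) + 0.1 * rng.standard_normal(t.size)

        print(np.median(running_xcorr_delay(x, y, win=300, max_lag=40)))   # ~ 15 samples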

  1. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...
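
    A minimal, assumption-laden sketch of the family of models discussed here: a linear power model driven by performance-counter rates plus a temperature-dependent static term, fitted by ordinary least squares. The counter names, coefficients and synthetic data below are invented and do not represent the cited paper's model or methodology.

        # PMC-based CPU power model with a temperature term, fitted by least squares
        # on synthetic data (all numbers invented for illustration).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        instr_rate = rng.uniform(0.2, 2.0, n)        # instructions per cycle (assumed counter)
        l2_miss_rate = rng.uniform(0.0, 0.1, n)      # L2 misses per cycle (assumed counter)
        temp_c = rng.uniform(40.0, 85.0, n)          # core temperature in deg C

        # "Ground truth" used only to generate training data: dynamic part driven by the
        # counters, static (leakage-like) part growing with temperature.
        power = (0.9 + 1.4 * instr_rate + 6.0 * l2_miss_rate + 0.02 * temp_c
                 + 0.05 * rng.standard_normal(n))

        X = np.column_stack([np.ones(n), instr_rate, l2_miss_rate, temp_c])
        coef, *_ = np.linalg.lstsq(X, power, rcond=None)
        pred = X @ coef
        print("fitted coefficients:", np.round(coef, 3))
        print("mean abs error (W):", round(float(np.mean(np.abs(pred - power))), 4))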

  2. LHCb's Real-Time Alignment in Run2

    CERN Multimedia

    Batozskaya, Varvara

    2015-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performances. During Run2, LHCb will have a new real-time detector alignment and calibration to reach equivalent performances in the online and offline reconstruction. This offers the opportunity to optimise the event selection by applying stronger constraints as well as hadronic particle identification at the trigger level. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger.

  3. Change in skeletal muscle stiffness after running competition is dependent on both running distance and recovery time: a pilot study.

    Science.gov (United States)

    Sadeghi, Seyedali; Newman, Cassidy; Cortes, Daniel H

    2018-01-01

    Long-distance running competitions impose a large amount of mechanical loading and strain, leading to muscle edema and delayed onset muscle soreness (DOMS). Damage to various muscle fibers, metabolic impairments and fatigue have been linked to explain how DOMS impairs muscle function. Disruptions of muscle fiber during DOMS exacerbated by exercise have been shown to change muscle mechanical properties. The objective of this study is to quantify changes in mechanical properties of different muscles in the thigh and lower leg as a function of running distance and time after competition. A custom implementation of the Focused Comb-Push Ultrasound Shear Elastography (F-CUSE) method was used to evaluate shear modulus in runners before and after a race. Twenty-two healthy individuals (age: 23 ± 5 years) were recruited using convenience sampling and split into three race categories: short distance (nine subjects, 3-5 miles), middle distance (10 subjects, 10-13 miles), and long distance (three subjects, 26+ miles). Shear Wave Elastography (SWE) measurements were taken on both legs of each subject on the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), soleus, lateral gastrocnemius (LG), medial gastrocnemius (MG), biceps femoris (BF) and semitendinosus (ST) muscles. For statistical analyses, a linear mixed model was used, with recovery time and running distance as fixed variables, while shear modulus was used as the dependent variable. Recovery time had a significant effect on the soleus (p = 0.05), while running distance had considerable effect on the biceps femoris (p = 0.02), vastus lateralis (p < 0.01) and semitendinosus muscles (p = 0.02). Sixty-seven percent of muscles exhibited a decreasing stiffness trend from before competition to immediately after competition. The preliminary results suggest that SWE could potentially be used to quantify changes of muscle mechanical properties as a way for measuring recovery procedures for runners.

  4. Change in skeletal muscle stiffness after running competition is dependent on both running distance and recovery time: a pilot study

    Directory of Open Access Journals (Sweden)

    Seyedali Sadeghi

    2018-03-01

    Long-distance running competitions impose a large amount of mechanical loading and strain leading to muscle edema and delayed onset muscle soreness (DOMS). Damage to various muscle fibers, metabolic impairments and fatigue have been linked to explain how DOMS impairs muscle function. Disruptions of muscle fiber during DOMS exacerbated by exercise have been shown to change muscle mechanical properties. The objective of this study is to quantify changes in mechanical properties of different muscles in the thigh and lower leg as function of running distance and time after competition. A custom implementation of Focused Comb-Push Ultrasound Shear Elastography (F-CUSE) method was used to evaluate shear modulus in runners before and after a race. Twenty-two healthy individuals (age: 23 ± 5 years) were recruited using convenience sampling and split into three race categories: short distance (nine subjects, 3–5 miles), middle distance (10 subjects, 10–13 miles), and long distance (three subjects, 26+ miles). Shear Wave Elastography (SWE) measurements were taken on both legs of each subject on the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), soleus, lateral gastrocnemius (LG), medial gastrocnemius (MG), biceps femoris (BF) and semitendinosus (ST) muscles. For statistical analyses, a linear mixed model was used, with recovery time and running distance as fixed variables, while shear modulus was used as the dependent variable. Recovery time had a significant effect on the soleus (p = 0.05), while running distance had considerable effect on the biceps femoris (p = 0.02), vastus lateralis (p < 0.01) and semitendinosus muscles (p = 0.02). Sixty-seven percent of muscles exhibited a decreasing stiffness trend from before competition to immediately after competition. The preliminary results suggest that SWE could potentially be used to quantify changes of muscle mechanical properties as a way for measuring recovery procedures for runners.

  5. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution. Revision 3

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1994-06-01

    The purpose is to provide the technical bases for the evaluation of Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs (Ref. 7) and Quiet Time Runs Program (described in Section 3.6). The Filter/Stripper Test Runs and Quiet Time Runs program involves a 12,000 gallon feed tank containing an agitator, a 4,000 gallon flush tank, a variable speed pump, associated piping and controls, and equipment within both the Filter and the Stripper Building

  6. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    Energy Technology Data Exchange (ETDEWEB)

    Neswold, Richard [Fermilab

    2018-01-01

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.

  7. SASD and the CERN/SPS run-time coordinator

    International Nuclear Information System (INIS)

    Morpurgo, G.

    1990-01-01

    Structured Analysis and Structured Design (SASD) provides us with a handy way of specifying the flow of data between the different modules (functional units) of a system. But the formalism loses its immediacy when the control flow has to be taken into account as well. Moreover, due to the lack of appropriate software infrastructure, very often the actual implementation of the system does not reflect the module decoupling and independence so much emphasized at the design stage. In this paper the run-time coordinator, a complete software infrastructure to support a real decoupling of the functional units, is described. Special attention is given to the complementarity of our approach and the SASD methodology. (orig.)

  8. Success Run Waiting Times and Fuss-Catalan Numbers

    Directory of Open Access Journals (Sweden)

    S. J. Dilworth

    2015-01-01

    We present power series expressions for all the roots of the auxiliary equation of the recurrence relation for the distribution of the waiting time for the first run of k consecutive successes in a sequence of independent Bernoulli trials, that is, the geometric distribution of order k. We show that the series coefficients are Fuss-Catalan numbers and write the roots in terms of the generating function of the Fuss-Catalan numbers. Our main result is a new exact expression for the distribution, which is more concise than previously published formulas. Our work extends the analysis by Feller, who gave asymptotic results. We obtain quantitative improvements of the error estimates obtained by Feller.
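
    The distribution in question can also be evaluated directly by a simple recursion over the current success streak, which is a handy numerical check against closed-form expressions like the one in the paper. The sketch below uses illustrative values of p and k; it is not taken from the paper.

        # Direct recursion for the waiting time T of the first run of k consecutive
        # successes (geometric distribution of order k).  p and k are illustrative.
        def waiting_time_pmf(p, k, n_max):
            """P(T = n) for n = 1..n_max, tracking the current success streak."""
            q = 1.0 - p
            state = [1.0] + [0.0] * (k - 1)          # state[j] = P(streak == j, run not yet seen)
            pmf = [0.0] * (n_max + 1)
            for n in range(1, n_max + 1):
                pmf[n] = state[k - 1] * p            # streak k-1 plus one more success
                new_state = [0.0] * k
                new_state[0] = sum(state) * q        # any failure resets the streak
                for j in range(1, k):
                    new_state[j] = state[j - 1] * p  # extend the streak
                state = new_state
            return pmf[1:]

        pmf = waiting_time_pmf(p=0.5, k=3, n_max=200)
        print(round(sum(pmf), 6))                                      # ~ 1
        print(round(sum((i + 1) * v for i, v in enumerate(pmf)), 2))   # mean ~ (1/p^k - 1)/q = 14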

  9. Icelandic Public Pensions: Why time is running out

    Directory of Open Access Journals (Sweden)

    Ólafur Ísleifsson

    2011-12-01

    The aim of this paper is to analyse the Icelandic public sector pension system enjoying a third party guarantee. Defined benefit funds fundamentally differ from defined contribution pension funds without a third party guarantee, as is the case with the Icelandic general labour market pension funds. We probe the special nature of the public sector pension funds and make a comparison to the defined contribution pension funds of the general labour market. We explore the financial and economic effects of the third party guarantee of the funds, their investment performance and other relevant factors. We seek an answer to the question why time is running out for the country's largest pension fund, which currently faces the prospect of becoming empty by the year 2022.

  10. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
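
    The mechanics of such a running test can be sketched briefly: compute the Mann-Whitney U between the two halves of a moving window and normalise it to a Z statistic. The abstract mentions Monte Carlo normalisation; the sketch below instead uses the standard normal approximation for U, and the step-change series and window length are synthetic.

        # Running Mann-Whitney analysis over a moving window, with U normalised to Z
        # via the normal approximation (not the Monte Carlo normalisation of the record).
        import numpy as np
        from scipy.stats import mannwhitneyu

        def running_mw_z(series, half_window):
            z, m = [], half_window
            for t in range(m, len(series) - m):
                a, b = series[t - m:t], series[t:t + m]
                u = mannwhitneyu(a, b, alternative="two-sided").statistic
                mu, sigma = m * m / 2.0, np.sqrt(m * m * (2 * m + 1) / 12.0)
                z.append((u - mu) / sigma)
            return np.array(z)

        rng = np.random.default_rng(2)
        series = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.0, 1, 300)])  # step change
        z = running_mw_z(series, half_window=50)
        print(int(np.argmax(np.abs(z))) + 50)   # index near the change point (~300)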

  11. Adaptive Embedded Systems – Challenges of Run-Time Resource Management

    DEFF Research Database (Denmark)

    Understanding and efficiently controlling the dynamic behavior of adaptive embedded systems is a challenging endeavor. The challenges come from the often very complicated interplay between the application, the application mapping, and the underlying hardware architecture. With MPSoC, we have the technology to design and fabricate dynamically reconfigurable hardware platforms. However, such platforms will pose new challenges to tools and methods to efficiently explore these platforms at run-time. This talk will address some of the challenges of run-time resource management in adaptive embedded systems.

  12. Running Speed Can Be Predicted from Foot Contact Time during Outdoor over Ground Running

    NARCIS (Netherlands)

    de Ruiter, C.J.; van Oeveren, B.; Francke, A.; Zijlstra, P.; van Dieen, J.H.

    2016-01-01

    The number of validation studies of commercially available foot pods that provide estimates of running speed is limited, and these studies have been conducted under laboratory conditions. Moreover, internal data handling and algorithms used to derive speed from these pods are proprietary and thereby...

  13. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures.

    Science.gov (United States)

    Long, Leroy L; Srinivasan, Manoj

    2013-04-06

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients, a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.

  14. The efficacy of downhill running as a method to enhance running economy in trained distance runners.

    Science.gov (United States)

    Shaw, Andrew J; Ingham, Stephen A; Folland, Jonathan P

    2018-06-01

    Running downhill, in comparison to running on the flat, appears to involve an exaggerated stretch-shortening cycle (SSC) due to greater impact loads and higher vertical velocity on landing, whilst also incurring a lower metabolic cost. Therefore, downhill running could facilitate higher volumes of training at higher speeds whilst performing an exaggerated SSC, potentially inducing favourable adaptations in running mechanics and running economy (RE). This investigation assessed the efficacy of a supplementary 8-week programme of downhill running as a means of enhancing RE in well-trained distance runners. Nineteen athletes completed supplementary downhill (-5% gradient; n = 10) or flat (n = 9) run training twice a week for 8 weeks within their habitual training. Participants trained at a standardised intensity based on the velocity of lactate turnpoint (vLTP), with training volume increased incrementally between weeks. Changes in the energy cost of running (EC) and vLTP were assessed on both flat and downhill gradients, in addition to maximal oxygen uptake (VO2max). No changes in EC were observed during flat running following downhill (1.22 ± 0.09 vs 1.20 ± 0.07 kcal kg^-1 km^-1, P = .41) or flat run training (1.21 ± 0.13 vs 1.19 ± 0.12 kcal kg^-1 km^-1). Moreover, no changes in EC during downhill running were observed in either condition (P > .23). vLTP increased following both downhill (16.5 ± 0.7 vs 16.9 ± 0.6 km h^-1, P = .05) and flat run training (16.9 ± 0.7 vs 17.2 ± 1.0 km h^-1, P = .05), though no differences in responses were observed between groups (P = .53). Therefore, a short programme of supplementary downhill run training does not appear to enhance RE in already well-trained individuals.

  15. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

    This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate... because the real-time aspects of distributed process control systems are considered to be among the hardest and most interesting to handle.

  16. Run-time middleware to support real-time system scenarios

    NARCIS (Netherlands)

    Goossens, K.; Koedam, M.; Sinha, S.; Nelson, A.; Geilen, M.

    2015-01-01

    Systems on Chip (SOC) are powerful multiprocessor systems capable of running multiple independent applications, often with both real-time and non-real-time requirements. Scenarios exist at two levels: first, combinations of independent applications, and second, different states of a single...

  17. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have been examined, and a detailed analysis report is presented. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
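
    A generic two-phase regression can be sketched as a breakpoint search with one linear model per phase, then extrapolating the second phase to 100% progress to predict the finishing time. The Python sketch below uses synthetic progress/time data and is only an illustration of the idea, not the TPR method as implemented by the authors.

        # Generic two-phase (piecewise linear) regression with a grid-searched breakpoint.
        import numpy as np

        def two_phase_fit(x, y):
            """Grid-search the breakpoint and fit one linear model per phase."""
            best = None
            for i in range(3, len(x) - 3):                   # candidate breakpoints
                c1 = np.polyfit(x[:i], y[:i], 1)
                c2 = np.polyfit(x[i:], y[i:], 1)
                sse = (np.sum((np.polyval(c1, x[:i]) - y[:i]) ** 2)
                       + np.sum((np.polyval(c2, x[i:]) - y[i:]) ** 2))
                if best is None or sse < best[0]:
                    best = (sse, x[i], c2)
            return best[1], best[2]                          # breakpoint, second-phase model

        # Synthetic task trace: slower first phase up to ~60% progress, faster afterwards.
        rng = np.random.default_rng(3)
        progress = np.linspace(0.05, 0.9, 18)
        elapsed = np.where(progress < 0.6, 100 * progress, 36 + 40 * progress)
        elapsed = elapsed + rng.normal(0, 0.5, progress.size)

        bp, c2 = two_phase_fit(progress, elapsed)
        finish_estimate = np.polyval(c2, 1.0)                # extrapolate to 100% progress
        print("breakpoint ~", round(float(bp), 2), "| predicted finish ~",
              round(float(finish_estimate), 1), "s")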

  18. Integrating software testing and run-time checking in an assertion verification framework

    OpenAIRE

    Mera, E.; López García, Pedro; Hermenegildo, Manuel V.

    2009-01-01

    We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling runtime checks for (parts of) assertions which cannot be verified at compile-time via program transformation. This transformation allows checking preconditions and postconditions, including conditional...
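
    The cited framework targets a unified assertion language for Ciao/Prolog programs; purely as an illustration of the general idea of turning assertions that cannot be proved statically into run-time checks, the Python sketch below wraps a function with invented pre/postconditions.

        # Illustrative Python analogue of compiling assertions into run-time checks.
        import functools

        def checked(pre=None, post=None):
            """Wrap a function with run-time precondition/postcondition checks."""
            def deco(fn):
                @functools.wraps(fn)
                def wrapper(*args, **kwargs):
                    if pre is not None:
                        assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
                    result = fn(*args, **kwargs)
                    if post is not None:
                        assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
                    return result
                return wrapper
            return deco

        @checked(pre=lambda xs: len(xs) > 0, post=lambda r, xs: min(xs) <= r <= max(xs))
        def mean(xs):
            return sum(xs) / len(xs)

        print(mean([1, 2, 3]))      # passes both checks
        try:
            mean([])                # violates the precondition at run time
        except AssertionError as e:
            print(e)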

  19. How Many Times Should One Run a Computational Simulation?

    DEFF Research Database (Denmark)

    Seri, Raffaello; Secchi, Davide

    2017-01-01

    This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces statistical...
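
    The chapter's statistical analysis is not reproduced here; as a simple point of reference, one textbook rule of thumb sizes the number of runs from a pilot sample so that a confidence interval on the mean output has a chosen half-width. The sketch below shows that rule with invented pilot data.

        # Required number of runs for a target confidence-interval half-width,
        # estimated from a pilot set of runs (textbook rule, not the chapter's method).
        import numpy as np
        from scipy.stats import norm

        def required_runs(pilot_outputs, half_width, confidence=0.95):
            s = np.std(pilot_outputs, ddof=1)          # pilot standard deviation
            z = norm.ppf(0.5 + confidence / 2.0)
            return int(np.ceil((z * s / half_width) ** 2))

        rng = np.random.default_rng(4)
        pilot = rng.normal(10.0, 2.0, size=30)         # 30 pilot runs of the simulation
        print(required_runs(pilot, half_width=0.25))   # with s ~ 2: (1.96*2/0.25)^2 ~ 246 runs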

  20. Urban Run-off Volumes Dependency on Rainfall Measurement Method

    DEFF Research Database (Denmark)

    Pedersen, L.; Jensen, N. E.; Rasmussen, Michael R.

    2005-01-01

    Urban run-off is characterized by a fast response, since the large surface run-off in the catchments responds immediately to variations in the rainfall. Modeling such catchments is most often done with the input from very few rain gauges, but the large variation in rainfall over small areas... Rainfall at different resolutions and single-gauge rainfall was fed to a MOUSE run-off model. The flow and total volume over the event are evaluated.

  1. COMPARISON OF METHODS FOR SIMULATING TSUNAMI RUN-UP THROUGH COASTAL FORESTS

    Directory of Open Access Journals (Sweden)

    Benazir

    2017-09-01

    The research is aimed at reviewing two numerical methods for modeling the effect of coastal forest on tsunami run-up and at proposing an alternative approach. The two methods for modeling the effect of coastal forest, namely the Constant Roughness Model (CRM) and the Equivalent Roughness Model (ERM), simulate the effect of the forest by using an artificial Manning roughness coefficient. An alternative approach that simulates each of the trees as a vertical square column is introduced. Simulations were carried out with variations of forest density and layout pattern of the trees. The numerical model was validated using an existing data series of tsunami run-up without forest protection. The study indicated that the alternative method is in good agreement with the ERM method for low forest density. At higher density and when the trees were planted in a zigzag pattern, the ERM produced significantly higher run-up. For a zigzag pattern and at 50% forest density, which represents a water-tight wall, both the ERM and CRM methods produced relatively high run-up, which should not happen theoretically. The alternative method, on the other hand, reflected the entire tsunami. In reality, a housing complex can be considered and simulated as a forest with various sizes and layouts of obstacles, where the alternative approach is applicable. The alternative method is more accurate than the existing methods for simulating a coastal forest for tsunami mitigation but consumes considerably more computational time.

  2. An enhanced Ada run-time system for real-time embedded processors

    Science.gov (United States)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  3. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1993-10-01

    In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter-stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include hazards, both personnel and environmental, associated with handling the chemical simulants; the presence of flammable materials; the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length

  4. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M.K.

    1993-10-01

    In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter-stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include hazards, both personnel and environmental, associated with handling the chemical simulants; the presence of flammable materials; the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.

  5. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that...

  6. Run-time verification of behavioural conformance for conversational web services

    OpenAIRE

    Dranidis, Dimitris; Ramollari, Ervin; Kourtesis, Dimitrios

    2009-01-01

    Web services exposing run-time behaviour that deviates from their behavioural specifications represent a major threat to the sustainability of a service-oriented ecosystem. It is therefore critical to verify the behavioural conformance of services during run-time. This paper discusses a novel approach for run-time verification of Web services. It proposes the utilisation of Stream X-machines for constructing formal behavioural specifications of Web services which can be exploited for verifyin...

  7. Time limit and time at VO2max during a continuous and an intermittent run.

    Science.gov (United States)

    Demarie, S; Koralsztein, J P; Billat, V

    2000-06-01

    The purpose of this study was to verify, by track field tests, whether sub-elite runners (n=15) could (i) reach their VO2max while running at v50%delta, i.e. midway between the speed associated with lactate threshold (vLAT) and that associated with maximal aerobic power (vVO2max), and (ii) whether an intermittent exercise provokes a maximal and/or supramaximal oxygen consumption for longer than a continuous one. Within three days, subjects underwent a multistage incremental test during which their vVO2max and vLAT were determined; they then performed two additional testing sessions, where continuous and intermittent running exercises at v50%delta were performed up to exhaustion. Subjects' gas exchange and heart rate were continuously recorded by means of a telemetric apparatus. Blood samples were taken from the fingertip and analysed for blood lactate concentration. In both the continuous and the intermittent tests, peak VO2 exceeded the VO2max values determined during the incremental test. However, in the intermittent exercise, peak VO2, time to exhaustion and time at VO2max reached significantly higher values, while blood lactate accumulation showed significantly lower values, than in the continuous one. The v50%delta is sufficient to stimulate VO2max in both intermittent and continuous running. The intermittent exercise is more effective than the continuous one in increasing maximal aerobic power, allowing a longer time at VO2max and a higher peak VO2 with lower lactate accumulation.

  8. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models. The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at CSN; all these algorithms run automatically and are triggered by the W-phase point source inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each one of those we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community as well as with run-up observations in the field.

  9. Leisure-time running reduces all-cause and cardiovascular mortality risk.

    Science.gov (United States)

    Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N

    2014-08-05

    Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Even weekly running was associated with benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds, was associated with benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  10. Run-Time HW/SW Scheduling of Data Flow Applications on Reconfigurable Architectures

    Directory of Open Access Journals (Sweden)

    Ghaffari Fakhreddine

    2009-01-01

    This paper presents an efficient dynamic and run-time Hardware/Software scheduling approach. This scheduling heuristic consists in mapping online the different tasks of a highly dynamic application in such a way that the total execution time is minimized. We consider soft real-time, data-flow-graph-oriented applications for which the execution time is a function of the nature of the input data. The target architecture is composed of two processors connected to a dynamically reconfigurable hardware accelerator. Our approach takes advantage of the reconfiguration property of the considered architecture to adapt the treatment to the system dynamics. We compare our heuristic with another similar approach. We present the results of our scheduling method on several image processing applications. Our experiments include simulation and synthesis results on a Virtex V-based platform. These results show a better performance against existing methods.

  11. Biomechanical characteristics of skeletal muscles and associations between running speed and contraction time in 8- to 13-year-old children.

    Science.gov (United States)

    Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena

    2017-02-01

    Objective: To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method: This longitudinal study analyzed tensiomyographic measurements of the vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results: A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion: Joint category plots showed that contraction times of the biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.

  12. Lower bounds on the run time of the univariate marginal distribution algorithm on OneMax

    DEFF Research Database (Denmark)

    Krejca, Martin S.; Witt, Carsten

    2017-01-01

    The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of Ω(μ√n + n log n), where μ is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.
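
    To make concrete the per-bit frequencies whose stochastic change the analysis tracks, a minimal UMDA on OneMax is sketched below. The population size, selection size and frequency borders are illustrative choices, not those used in the paper.

        # Minimal UMDA on OneMax: sample from per-bit frequencies, select the best,
        # update the frequencies, and clip them to the usual borders.
        import numpy as np

        def umda_onemax(n=50, lam=100, mu=25, seed=5, max_gens=500):
            rng = np.random.default_rng(seed)
            freqs = np.full(n, 0.5)                       # per-bit marginal frequencies
            for gen in range(1, max_gens + 1):
                pop = rng.random((lam, n)) < freqs        # sample lambda individuals
                fitness = pop.sum(axis=1)                 # OneMax = number of ones
                best = pop[np.argsort(fitness)[-mu:]]     # truncation selection of mu best
                freqs = np.clip(best.mean(axis=0), 1.0 / n, 1.0 - 1.0 / n)
                if fitness.max() == n:
                    return gen
            return None

        print("generations to reach the optimum:", umda_onemax())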

  13. Discount-Optimal Infinite Runs in Priced Timed Automata

    DEFF Research Database (Denmark)

    Fahrenberg, Uli; Larsen, Kim Guldstrand

    2009-01-01

    We introduce a new discounting semantics for priced timed automata. Discounting provides a way to model optimal-cost problems for infinite traces and has applications in optimal scheduling and other areas. In the discounting semantics, prices decrease exponentially, so that the contribution...
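    The record does not spell out the discounting semantics. Purely as an assumed illustration of "prices decrease exponentially", a discounted cost of an infinite run π with discount factor λ, continuous cost rate c(t) and discrete transition costs c_i incurred at times t_i might be written as below; the paper's exact definition may differ.

```latex
\mathrm{cost}_{\lambda}(\pi) \;=\; \int_{0}^{\infty} \lambda^{t}\, c(t)\,\mathrm{d}t \;+\; \sum_{i \ge 1} \lambda^{t_i}\, c_i ,
\qquad 0 < \lambda < 1 .
```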

  14. Design-time application mapping and platform exploration for MP-SoC customised run-time management

    NARCIS (Netherlands)

    Ykman-Couvreur, Ch.; Nollet, V.; Marescaux, T.M.; Brockmeyer, E.; Catthoor, F.; Corporaal, H.

    2007-01-01

    Abstract: In a Multi-Processor System-on-Chip (MP-SoC) environment, a customized run-time management layer should be incorporated on top of the basic Operating System services to alleviate the run-time decision-making and to globally optimise costs (e.g. energy consumption) across all active ...

  15. Safety provision for nuclear power plants during remaining running time

    International Nuclear Information System (INIS)

    Rossnagel, Alexander; Hentschel, Anja

    2012-01-01

    With the phasing-out of the industrial use of nuclear energy for power generation, the risk posed by nuclear power plants has not been eliminated in principle; it merely persists for a limited period of time. Therefore, the remaining nine nuclear power plants must also be operated during the remaining ten years according to the state of science and technology. Regulatory authorities must substantiate the safety requirements for each nuclear power plant and enforce these requirements by means of various regulatory measures. The consequences of Fukushima must be included in the assessment of the safety level of nuclear power plants in Germany. In this respect, the regulatory authorities have the important tasks of investigating and assessing the safety risks as well as issuing instructions and orders.

  16. On the Use of Running Trends as Summary Statistics for Univariate Time Series and Time Series Association

    OpenAIRE

    Trottini, Mario; Vigo, Isabel; Belda, Santiago

    2015-01-01

    Given a time series, running trends analysis (RTA) involves evaluating least squares trends over overlapping time windows of L consecutive time points, with overlap by all but one observation. This produces a new series called the “running trends series,” which is used as summary statistics of the original series for further analysis. In recent years, RTA has been widely used in climate applied research as summary statistics for time series and time series association. There is no doubt that ...
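    A minimal sketch of the computation described above, least-squares slopes over overlapping windows of L consecutive points shifted by one observation at a time, is given below; the window length and the demo series are arbitrary choices.

```python
# Minimal sketch of running trends analysis (RTA): the running-trends series is the
# ordinary least-squares slope of each length-L window, windows overlapping by all
# but one observation.
import numpy as np

def running_trends(series, L):
    """Return the running-trends series (OLS slope of each length-L window)."""
    x = np.arange(L)
    series = np.asarray(series, dtype=float)
    slopes = []
    for start in range(len(series) - L + 1):
        window = series[start:start + L]
        slope, _intercept = np.polyfit(x, window, 1)  # degree-1 least-squares fit
        slopes.append(slope)
    return np.array(slopes)

if __name__ == "__main__":
    t = np.arange(100)
    y = 0.02 * t + np.sin(2 * np.pi * t / 12) + np.random.default_rng(0).normal(0, 0.3, t.size)
    print(running_trends(y, L=30)[:5])  # summary statistics of the original series
```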

  17. A simple field method to identify foot strike pattern during running.

    Science.gov (United States)

    Giandolini, Marlène; Poupard, Thibaut; Gimenez, Philippe; Horvais, Nicolas; Millet, Guillaume Y; Morin, Jean-Benoît; Samozino, Pierre

    2014-05-07

    Identifying foot strike patterns in running is an important issue for sports clinicians, coaches and the footwear industry. Current methods allow the monitoring of either many steps in laboratory conditions or only a few steps in the field. Because measuring running biomechanics during actual practice is critical, our purpose was to validate a method aiming at identifying foot strike patterns during continuous field measurements. Based on heel and metatarsal accelerations, this method requires two uniaxial accelerometers. The time between heel and metatarsal acceleration peaks (THM) was compared to the foot strike angle in the sagittal plane (αfoot) obtained by 2D video analysis for various conditions of speed, slope, footwear, foot strike and state of fatigue. Acceleration and kinematic measurements were performed at 1000 Hz and 120 Hz, respectively, during 2-min treadmill running bouts. Significant correlations were observed between THM and αfoot for 14 out of 15 conditions, with an overall correlation coefficient of r=0.916. THM permitted the identification of the foot strike pattern, except for extreme forefoot strikes during which the heel rarely or never strikes the ground, and remained valid for different footwear and states of fatigue. We proposed a classification of forefoot, midfoot and rearfoot strikes based on THM thresholds. Given the simplicity of the method, it is reliable for distinguishing rearfoot and non-rearfoot strikers in situ. Copyright © 2014 Elsevier Ltd. All rights reserved.
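    For illustration only, the core of such a THM-based classifier can be sketched as below. The sign convention, the synthetic signals and the single cutoff value are assumptions made for the example; they are not the thresholds reported in the study.

```python
# Illustrative computation of THM (time between heel and metatarsal acceleration peaks)
# from two uniaxial accelerometer traces, with a purely hypothetical cutoff for
# separating rearfoot from non-rearfoot strikes.
import numpy as np

FS_HZ = 1000.0  # accelerometer sampling rate used in the study

def thm_ms(heel_acc, meta_acc):
    """THM in ms. Sign convention assumed here: positive when the heel peak precedes
    the metatarsal peak (rearfoot-like contact)."""
    t_heel = np.argmax(heel_acc) / FS_HZ
    t_meta = np.argmax(meta_acc) / FS_HZ
    return (t_meta - t_heel) * 1000.0

def classify(thm, rearfoot_cutoff_ms=15.0):
    """Hypothetical two-class rule: a large positive THM is taken as a rearfoot strike."""
    return "rearfoot" if thm > rearfoot_cutoff_ms else "non-rearfoot"

if __name__ == "__main__":
    t = np.arange(0, 0.3, 1.0 / FS_HZ)
    heel = np.exp(-((t - 0.05) ** 2) / 1e-4)  # synthetic heel impact peak at 50 ms
    meta = np.exp(-((t - 0.08) ** 2) / 1e-4)  # synthetic metatarsal peak at 80 ms
    value = thm_ms(heel, meta)
    print(f"THM = {value:.1f} ms -> {classify(value)}")
```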

  18. Effect of treadmill versus overground running on the structure of variability of stride timing.

    Science.gov (United States)

    Lindsay, Timothy R; Noakes, Timothy D; McGregor, Stephen J

    2014-04-01

    Gait timing dynamics of treadmill and overground running were compared. Nine trained runners ran treadmill and track trials at 80, 100, and 120% of preferred pace for 8 min. each. Stride time series were generated for each trial. To each series, detrended fluctuation analysis (DFA), power spectral density (PSD), and multiscale entropy (MSE) analysis were applied to infer the regime of control along the randomness-regularity axis. Compared to overground running, treadmill running exhibited a higher DFA and PSD scaling exponent, as well as lower entropy at non-preferred speeds. This indicates a more ordered control for treadmill running, especially at non-preferred speeds. The results suggest that the treadmill itself brings about greater constraints and requires increased voluntary control. Thus, the quantification of treadmill running gait dynamics does not necessarily reflect movement in overground settings.
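    Of the three measures, DFA is the most involved; a compact sketch of a standard DFA implementation applied to a stride-time series is shown below. The window sizes and the synthetic series are illustrative choices, not those of the study.

```python
# Sketch of detrended fluctuation analysis (DFA) for a stride-time series: integrate the
# mean-subtracted series, compute the RMS of linearly detrended windows at several scales,
# and take the slope of log F(n) versus log n as the scaling exponent.
import numpy as np

def dfa_alpha(series, scales=None):
    """Return the DFA scaling exponent (linear detrending)."""
    x = np.asarray(series, dtype=float)
    y = np.cumsum(x - x.mean())  # integrated (profile) series
    if scales is None:
        scales = np.unique(np.logspace(np.log10(4), np.log10(len(x) // 4), 15).astype(int))
    fluct = []
    for n in scales:
        rms = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    strides = 0.72 + 0.01 * rng.standard_normal(512)  # synthetic stride times (s)
    print(f"DFA alpha ~ {dfa_alpha(strides):.2f}")     # ~0.5 expected for white noise
```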

  19. System and Component Software Specification, Run-time Verification and Automatic Test Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  20. Strong normalization by type-directed partial evaluation and run-time code generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1998-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.

  1. ANALYSIS OF POSSIBILITY TO AVOID A RUNNING-DOWN ACCIDENT BY TIMELY BRAKING

    Directory of Open Access Journals (Sweden)

    Sarayev, A.

    2013-06-01

    Full Text Available The circumstances under which the driver can stop the vehicle by braking in time before reaching the pedestrian crossing, or can reduce the speed to a safe limit, and thereby avoid a running-down accident, are considered.

  2. Strong Normalization by Type-Directed Partial Evaluation and Run-Time Code Generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1997-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.

  3. Real time data analysis with the ATLAS Trigger at the LHC in Run-2

    CERN Document Server

    Beauchemin, Pierre-Hugues; The ATLAS collaboration

    2018-01-01

    The trigger selection capabilities of the ATLAS detector have been significantly enhanced for the LHC Run-2 in order to cope with the higher event rates and with the large number of simultaneous interactions (pile-up) per proton-proton bunch crossing. A new hardware system, designed to analyse real time event-topologies at Level-1 came to full use in 2017. A hardware-based track reconstruction system, expected to be used real-time in 2018, is designed to provide track information to the high-level software trigger at its full input rate. The high-level trigger selections are largely relying on offline-like reconstruction techniques, and in some cases multivariate analysis methods. Despite the sudden change in LHC operations during the second half of 2017, which caused an increase in pile-up and therefore also in CPU usage of the trigger algorithms, the set of triggers (so called trigger menu) running online has undergone only minor modifications thanks to the robustness and redundancy of the trigger system, a...

  4. Real time data analysis with the ATLAS trigger at the LHC in Run-2

    CERN Document Server

    Beauchemin, Pierre-Hugues; The ATLAS collaboration

    2018-01-01

    The trigger selection capabilities of the ATLAS detector have been significantly enhanced for the LHC Run-2 in order to cope with the higher event rates and with the large number of simultaneous interactions (pile-up) per proton-proton bunch crossing. A new hardware system, designed to analyse real time event-topologies at Level-1 came to full use in 2017. A hardware-based track reconstruction system, expected to be used real-time in 2018, is designed to provide track information to the high-level software trigger at its full input rate. The high-level trigger selections are largely relying on offline-like reconstruction techniques, and in some cases multi-variate analysis methods. Despite the sudden change in LHC operations during the second half of 2017, which caused an increase in pile-up and therefore also in CPU usage of the trigger algorithms, the set of triggers (so called trigger menu) running online has undergone only minor modifications thanks to the robustness and redundancy of the trigger system, ...

  5. Investigations of timing during the schedule and reinforcement intervals with wheel-running reinforcement.

    Science.gov (United States)

    Belke, Terry W; Christie-Fougere, Melissa M

    2006-11-01

    Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement and the duration of the opportunity to run was varied across values of 15, 30, and 60s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset of the wheel-running reinforcement period. Further research is required to assess if timing occurs during a wheel-running reinforcement period.

  6. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with one single RELAP5 instance in a large-scale simulation. To improve the speed and ensure the precision of simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. A compromise on synchronization frequency was carefully considered to improve the precision of the simulation while guaranteeing real-time execution. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example a simulator employing ATHLETE instead of RELAP5, or another logic code instead of SIMULINK. It is believed that the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
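    The explicit coupling idea, in which independently advancing codes exchange boundary data at a chosen synchronization frequency, can be illustrated with a toy example. The dummy single-volume "solvers" below merely stand in for the coupled code instances; this is not the authors' implementation.

```python
# Generic illustration of explicit coupling between two parallel solver instances:
# each advances independently and they exchange boundary values every `sync_every`
# steps (the synchronization frequency discussed above).

def advance(state, boundary_in, dt):
    # dummy single-volume model: relax toward the value imposed at the coupling boundary
    return state + dt * (boundary_in - state)

def run_coupled(steps=100, dt=0.01, sync_every=5):
    a, b = 500.0, 300.0    # e.g. boundary temperatures of the two sub-models
    a_bc, b_bc = b, a      # boundary values last received from the partner
    for step in range(steps):
        a = advance(a, a_bc, dt)
        b = advance(b, b_bc, dt)
        if (step + 1) % sync_every == 0:  # explicit data exchange at the coupling boundary
            a_bc, b_bc = b, a
    return a, b

if __name__ == "__main__":
    print(run_coupled())   # the two sub-models drift toward a common value
```

    A smaller `sync_every` tightens the coupling (better precision, more communication); a larger value cheapens communication at the cost of accuracy, which is the compromise described above.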

  7. Design Flow Instantiation for Run-Time Reconfigurable Systems: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Qu

    2007-12-01

    Full Text Available Reconfigurable systems are a promising alternative for delivering both flexibility and performance at the same time. New reconfigurable technologies and technology-dependent tools have been developed, but a complete overview of the whole design flow for run-time reconfigurable systems has been missing. In this work, we present a design flow instantiation for such systems using a real-life application. The design flow is roughly divided into two parts: system level and implementation. At the system level, our support for hardware resource estimation and performance evaluation is applied. At the implementation level, technology-dependent tools are used to realize the run-time reconfiguration. The design case is part of a WCDMA decoder on a commercially available reconfigurable platform. The results show that using run-time reconfiguration can save over 40% area compared to a functionally equivalent fixed system and achieve a 30-fold speedup in processing time compared to a functionally equivalent pure software design.

  8. Wheel set run profile renewing method effectiveness estimation

    OpenAIRE

    Somov, Dmitrij; Bazaras, Žilvinas; Žukauskaite, Orinta

    2010-01-01

    At repair enterprises, only the geometric parameters of the wheel profile are restored after each grinding, despite the reduced wear resistance of the rim. A tendency for the working edge of the wheel rim to diminish in service has been observed, which forces the acquisition of new wheels. This is related to the considerable increase in axle loads and train speeds, and also to imperfections in the methods used to repair the wheel working edge.

  9. Running speed during training and percent body fat predict race time in recreational male marathoners.

    Science.gov (United States)

    Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/hour). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. The present results suggest that low body fat and running speed during training close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathon runners.
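    The reported regression translates directly into a small prediction helper; the example inputs below are illustrative only.

```python
def predicted_marathon_time_min(percent_body_fat, training_speed_kmh):
    """Race time (min) = 326.3 + 2.394 * body fat (%) - 12.06 * training speed (km/h),
    the equation reported above (r^2 = 0.44)."""
    return 326.3 + 2.394 * percent_body_fat - 12.06 * training_speed_kmh

if __name__ == "__main__":
    # illustrative runner: 15% body fat, training at 11 km/h (close to race pace)
    minutes = predicted_marathon_time_min(15.0, 11.0)
    print(f"predicted race time: {minutes:.0f} min (~{minutes / 60:.1f} h)")
```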

  10. Run-Time and Compiler Support for Programming in Adaptive Parallel Environments

    Directory of Open Access Journals (Sweden)

    Guy Edjlali

    1997-01-01

    Full Text Available For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of High Performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
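    As a toy illustration of the kind of support described (not the actual run-time library), redistributing a block-distributed array and recomputing per-processor loop bounds when the processor count changes can be sketched as follows.

```python
# Toy illustration of block redistribution when the number of processors changes:
# recompute each processor's loop bounds for the new processor count and slice the
# global data accordingly. This mimics the idea described above, not the real library.
def block_bounds(n, nprocs, rank):
    """Lower/upper (exclusive) iteration bounds of `rank` for n items block-distributed over nprocs."""
    base, extra = divmod(n, nprocs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

def redistribute(global_array, new_nprocs):
    """Return the new local blocks after the processor count changes."""
    n = len(global_array)
    return [global_array[slice(*block_bounds(n, new_nprocs, r))] for r in range(new_nprocs)]

if __name__ == "__main__":
    data = list(range(10))
    print(redistribute(data, new_nprocs=3))  # new loop bounds / local data per processor
```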

  11. Relationship between running kinematic changes and time limit at vVO2max

    Directory of Open Access Journals (Sweden)

    Leonardo De Lucca

    2012-06-01

    Exhaustive running at maximal oxygen uptake velocity (vVO2max) can alter running kinematic parameters and increase energy cost over time. The aims of the present study were to compare characteristics of ankle and knee kinematics during running at vVO2max and to verify the relationship between changes in kinematic variables and time limit (Tlim). Eleven male volunteers, recreational players of team sports, performed an incremental running test until volitional exhaustion to determine vVO2max and a constant velocity test at vVO2max. Subjects were filmed continuously from the left sagittal plane at 210 Hz for further kinematic analysis. The maximal plantar flexion during swing (p<0.01) was the only variable that increased significantly from beginning to end of the run. Increase in ankle angle at contact was the only variable related to Tlim (r=0.64; p=0.035) and explained 34% of the performance in the test. These findings suggest that the individuals under study maintained a stable running style at vVO2max and that increase in plantar flexion explained the performance in this test when it was applied in non-runners.

  12. Effect of Different Training Methods on Stride Parameters in Speed Maintenance Phase of 100-m Sprint Running.

    Science.gov (United States)

    Cetin, Emel; Hindistan, I Ethem; Ozkaya, Y Gul

    2018-05-01

    Cetin, E, Hindistan, IE, Ozkaya, YG. Effect of different training methods on stride parameters in speed maintenance phase of 100-m sprint running. J Strength Cond Res 32(5): 1263-1272, 2018-This study examined the effects of 2 different training methods relevant to sloping surfaces on stride parameters in the speed maintenance phase of 100-m sprint running. Twenty recreationally active students were assigned to one of 3 groups: combined training (Com), horizontal training (H), and control (C). The Com group performed uphill and downhill training on a sloping surface with an angle of 4°, whereas the H group trained on a horizontal surface, 3 days a week for 8 weeks. The speed maintenance and deceleration phases were divided into 10-m intervals, and running time (t), running velocity (RV), step frequency (SF), and step length (SL) were measured before and after the training period. After the 8-week training program, t was shortened by 3.97% in the Com group and 2.37% in the H group. Running velocity also increased over the total 100-m distance, by 4.13 and 2.35% in the Com and H groups, respectively. In the speed maintenance phase, although t and maximal RV (RVmax) were statistically unaltered over the phase as a whole, t decreased and the point of RVmax occurred 10 m earlier in both training groups. Step length increased at 60-70 m, and SF decreased at 70-80 m in the H group. Step length increased with a concomitant decrease in SF at 80-90 m in the Com group. Both training groups maintained a high percentage of RVmax during the speed maintenance phase. In conclusion, although both training methods improved running time and RV, the Com training method was the more effective method for improving RV, and this improvement originated from the positive changes in SL during the speed maintenance phase.

  13. Haemoglobin mass and running time trial performance after recombinant human erythropoietin administration in trained men.

    Directory of Open Access Journals (Sweden)

    Jérôme Durussel

    Full Text Available UNLABELLED: Recombinant human erythropoietin (rHuEpo) increases haemoglobin mass (Hbmass) and maximal oxygen uptake (V̇O2max). PURPOSE: This study defined the time course of changes in Hbmass, V̇O2max as well as running time trial performance following 4 weeks of rHuEpo administration to determine whether the laboratory observations would translate into actual improvements in running performance in the field. METHODS: 19 trained men received rHuEpo injections of 50 IU·kg⁻¹ body mass every two days for 4 weeks. Hbmass was determined weekly using the optimized carbon monoxide rebreathing method until 4 weeks after administration. V̇O2max and 3,000 m time trial performance were measured pre, post administration and at the end of the study. RESULTS: Relative to baseline, running performance significantly improved by ∼6% after administration (10:30±1:07 min:sec vs. 11:08±1:15 min:sec, p<0.001) and remained significantly enhanced by ∼3% 4 weeks after administration (10:46±1:13 min:sec, p<0.001), while V̇O2max was also significantly increased post administration (60.7±5.8 mL·min⁻¹·kg⁻¹ vs. 56.0±6.2 mL·min⁻¹·kg⁻¹, p<0.001) and remained significantly increased 4 weeks after rHuEpo (58.0±5.6 mL·min⁻¹·kg⁻¹, p = 0.021). Hbmass was significantly increased at the end of administration compared to baseline (15.2±1.5 g·kg⁻¹ vs. 12.7±1.2 g·kg⁻¹, p<0.001). The rate of decrease in Hbmass toward baseline values post rHuEpo was similar to that of the increase during administration (-0.53 g·kg⁻¹·wk⁻¹, 95% confidence interval (CI) (-0.68, -0.38) vs. 0.54 g·kg⁻¹·wk⁻¹, CI (0.46, 0.63)), but Hbmass was still significantly elevated 4 weeks after administration compared to baseline (13.7±1.1 g·kg⁻¹, p<0.001). CONCLUSION: Running performance was improved following 4 weeks of rHuEpo and remained elevated 4 weeks after administration compared to baseline. These field performance effects coincided with rHuEpo-induced changes in Hbmass and V̇O2max.

  14. Shorter Ground Contact Time and Better Running Economy: Evidence From Female Kenyan Runners.

    Science.gov (United States)

    Mooses, Martin; Haile, Diresibachew W; Ojiambo, Robert; Sang, Meshack; Mooses, Kerli; Lane, Amy R; Hackney, Anthony C

    2018-06-25

    Mooses, M, Haile, DW, Ojiambo, R, Sang, M, Mooses, K, Lane, AR, and Hackney, AC. Shorter ground contact time and better running economy: evidence from female Kenyan runners. J Strength Cond Res XX(X): 000-000, 2018-Previously, it has been concluded that improvement in running economy (RE) might be considered a key to continued improvement in performance when no further increase in V̇O2max is observed. To date, RE has been extensively studied among male East African distance runners. By contrast, there is a paucity of data on the RE of female East African runners. A total of 10 female Kenyan runners performed 3 × 1,600-m steady-state run trials on a flat outdoor clay track (400-m lap) at intensities that corresponded to their everyday training intensities for easy, moderate, and fast running. Running economy together with gait characteristics was determined. Participants showed moderate to very good RE at the first (202 ± 26 mL·kg⁻¹·km⁻¹) and second (188 ± 12 mL·kg⁻¹·km⁻¹) run trials, respectively. Correlation analysis revealed a significant relationship between ground contact time (GCT) and RE at the second run (r = 0.782; p = 0.022), which represented the intensity of the anaerobic threshold. This study is the first to report the RE and gait characteristics of East African female athletes measured under everyday training settings. We provide evidence that GCT is associated with the superior RE of the female Kenyan runners.
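    Running economy in this record is expressed as an oxygen cost per kilogram per kilometre; converting a V̇O2 measurement to that form is a one-line calculation, sketched below with illustrative numbers only.

```python
def running_economy_ml_per_kg_km(vo2_ml_kg_min, speed_km_h):
    """Oxygen cost of transport: divide V̇O2 (mL·kg⁻¹·min⁻¹) by speed in km·min⁻¹."""
    return vo2_ml_kg_min / (speed_km_h / 60.0)

if __name__ == "__main__":
    # illustrative values only: 45 mL/kg/min at 13.4 km/h gives ~201 mL/kg/km
    print(f"{running_economy_ml_per_kg_km(45.0, 13.4):.0f} mL/kg/km")
```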

  15. An investigation of the relation between the 30 meter running time and the femoral volume fraction in the thigh

    Directory of Open Access Journals (Sweden)

    MY Tasmektepligil

    2009-12-01

    Full Text Available Leg components are thought to be related to speed. Only a limited number of studies have, however, examined the interaction between speed and bone size. In this study, we examined the relationship between the time taken by football players to run thirty meters and the fraction of the entire thigh region occupied by the femur. Data collected from thirty male football players of average age 17.3 years (between 16 and 19 years old) were analyzed. First we measured the thirty-meter running times and then we estimated the volume fraction of the femur relative to the entire thigh region using stereological methods on magnetic resonance images. Our data showed a strong negative relationship between the 30-meter running times and the volume fraction of the bone in the thigh region. Thus, 30-meter running time decreases as the fraction of the bone in the thigh region increases. In other words, speed increases as the bone volume fraction increases. Our data indicate that selecting sportsmen whose femoral volume fractions are high will provide a significant benefit to enhancing performance in those branches of sport which require speed. Moreover, we conclude that training which can increase the bone volume fraction should be practiced when an increase in speed is desired, and that changes in the fractions of the thigh region components should be monitored during such training.

  16. Comparing internal and external run-time coupling of CFD and building energy simulation software

    NARCIS (Netherlands)

    Djunaedy, E.; Hensen, J.L.M.; Loomans, M.G.L.C.

    2004-01-01

    This paper describes a comparison between internal and external run-time coupling of CFD and building energy simulation software. Internal coupling can be seen as the "traditional" way of developing software, i.e. the capabilities of existing software are expanded by merging codes. With external

  17. Ada Run Time Support Environments and a common APSE Interface Set. [Ada Programming Support Environment

    Science.gov (United States)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The paper discusses the importance of linking Ada Run Time Support Environments to the Common Ada Programming Support Environment (APSE) Interface Set (CAIS). A non-stop network operating systems scenario is presented to serve as a forum for identifying the important issues. The network operating system exemplifies the issues involved in the NASA Space Station data management system.

  18. Differences in ground contact time explain the less efficient running economy in north african runners.

    Science.gov (United States)

    Santos-Concejero, J; Granados, C; Irazusta, J; Bidaurrazaga-Letona, I; Zabala-Lili, J; Tam, N; Gil, S M

    2013-09-01

    The purpose of this study was to investigate the relationship between biomechanical variables and running economy in North African and European runners. Eight North African and 13 European male runners of the same athletic level ran 4-minute stages on a treadmill at varying set velocities. During the test, biomechanical variables such as ground contact time, swing time, stride length, stride frequency, stride angle and the different sub-phases of ground contact were recorded using an optical measurement system. Additionally, oxygen uptake was measured to calculate running economy. Compared with the North African runners, the European runners were more economical at 19.5 km·h⁻¹, presented lower ground contact times at 18 km·h⁻¹ and 19.5 km·h⁻¹, and experienced a later propulsion sub-phase at 10.5 km·h⁻¹, 12 km·h⁻¹, 15 km·h⁻¹, 16.5 km·h⁻¹ and 19.5 km·h⁻¹. Running economy at 19.5 km·h⁻¹ was negatively correlated with swing time (r = -0.53) and stride angle (r = -0.52), whereas it was positively correlated with ground contact time (r = 0.53). Within the constraints of extrapolating these findings, the less efficient running economy in North African runners may imply that their outstanding performance at international athletic events is not linked to running efficiency. Further, the differences in metabolic demand seem to be associated with differing biomechanical characteristics during ground contact, including longer contact times.

  19.  Running speed during training and percent body fat predict race time in recreational male marathoners

    Directory of Open Access Journals (Sweden)

    Barandun U

    2012-07-01

    Full Text Available Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results: After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/hour). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. Conclusion: The present results suggest that low body fat and running speed during training close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathon runners. Keywords: body fat, skinfold thickness, anthropometry, endurance, athlete

  20. Home cage wheel running is an objective and clinically relevant method to assess inflammatory pain in male and female rats

    Science.gov (United States)

    Kandasamy, Ram; Calsbeek, Jonas J.; Morgan, Michael M.

    2016-01-01

    Background The assessment of nociception in preclinical studies is undergoing a transformation from pain-evoked to pain-depressed tests to more closely mimic the effects of clinical pain. Many inflammatory pain-depressed behaviors (reward seeking, locomotion) have been examined, but these tests are limited because of confounds such as stress and difficulties in quantifying behavior. New Method The present study evaluates home cage wheel running as an objective method to assess the magnitude and duration of inflammatory pain in male and female rats. Results Injection of Complete Freund’s Adjuvant (CFA) into the right hindpaw to induce inflammatory pain almost completely inhibited wheel running for 2 days in males and females. Wheel running gradually returned to baseline levels within 12 days despite persistent mechanical hypersensitivity (von Frey test). Comparison with Existing Methods Continuously monitoring home cage wheel running improves on previous studies examining inflammatory pain-depressed wheel running because it is more sensitive to noxious stimuli, avoids the stress of removing the rat from its cage for testing, and provides a complete analysis of the time course for changes in nociception. Conclusions The present data indicate that home cage wheel running is a clinically relevant method to assess inflammatory pain in the rat. The decrease in activity caused by inflammatory pain and subsequent gradual recovery mimics the changes in activity caused by pain in humans. The tendency for pain-depressed wheel running to be greater in female than male rats is consistent with the tendency for women to be at greater risk of chronic pain than men. PMID:26891874

  1. Short- and long-run time-of-use price elasticities in Swiss residential electricity demand

    International Nuclear Information System (INIS)

    Filippini, Massimo

    2011-01-01

    This paper presents an empirical analysis of the residential demand for electricity by time-of-day. The analysis has been performed using aggregate data at the city level for 22 Swiss cities for the period 2000-2006. For this purpose, we estimated two log-log demand equations for peak and off-peak electricity consumption using static and dynamic partial adjustment approaches. These demand functions were estimated using several econometric approaches for panel data, for example LSDV and RE for static models, and LSDV and corrected LSDV estimators for dynamic models. The aim of this empirical analysis is to highlight some of the characteristics of the Swiss residential electricity demand. The estimated short-run own price elasticities are lower than 1, whereas in the long run these values are higher than 1. The estimated short-run and long-run cross-price elasticities are positive. This result shows that peak and off-peak electricity are substitutes. In this context, time differentiated prices should provide an economic incentive to customers so that they can modify consumption patterns by reducing peak demand and shifting electricity consumption from peak to off-peak periods. - Highlights: → Empirical analysis on the residential demand for electricity by time-of-day. → Estimators for dynamic panel data. → Peak and off-peak residential electricity are substitutes.
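    The record does not print the estimated equations. A standard dynamic partial adjustment specification consistent with the description (log-log demand with a lagged dependent variable, so that long-run elasticities exceed short-run ones) would take a form such as the following for peak-period consumption of city i in year t; the exact specification estimated in the paper may differ.

```latex
\ln E^{\mathrm{peak}}_{it}
  = \alpha_i
  + \beta_{p}\,\ln P^{\mathrm{peak}}_{it}
  + \beta_{o}\,\ln P^{\mathrm{off}}_{it}
  + \gamma\,\ln Y_{it}
  + \lambda\,\ln E^{\mathrm{peak}}_{i,t-1}
  + \varepsilon_{it},
\qquad
\eta^{\mathrm{LR}}_{p} = \frac{\beta_{p}}{1-\lambda}
```

    Here β_p is the short-run own-price elasticity, β_o the cross-price elasticity, and λ the adjustment parameter, so the long-run elasticity β_p/(1-λ) exceeds the short-run one when 0 < λ < 1.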

  2. Novel Real-time Calibration and Alignment Procedure for LHCb Run II

    CERN Multimedia

    Prouve, Claire

    2016-01-01

    In order to achieve optimal detector performance the LHCb experiment has introduced a novel real-time detector alignment and calibration strategy for Run II of the LHC. For the alignment tasks, data is collected and processed at the beginning of each fill, while the calibrations are performed for each run. This real-time alignment and calibration allows the same constants to be used in both the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. Additionally the newly computed alignment and calibration constants can be instantly used in the trigger, making it more efficient. The online alignment and calibration of the RICH detectors also enable the use of hadronic particle identification in the trigger. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the LHCb trigger. An overview of all alignment and calibration tasks is presented and their performance is shown.

  3. Implementing Run-time Evaluation of Distributed Timing Constraints in a Micro Kernel

    DEFF Research Database (Denmark)

    Kristensen, C.H.; Drejer, N.; Nielsen, Jens Frederik Dalsgaard

    In the present paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.

  4. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently, it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems. We propose an approach to protecting sensitive assets at run-time that we denote split-enforcement, and provide an implementation for ARM-powered devices using ARM TrustZone security extensions. We design, build, and evaluate a prototype Trusted Cell that provides trusted services. We also present the first generic TrustZone driver...

  5. LHCb : Novel real-time alignment and calibration of the LHCb Detector in Run2

    CERN Multimedia

    Tobin, Mark

    2015-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure will improve the quality of the online alignment. For example, the vertex locator is retracted and reinserted for stable beam collisions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new realtime alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The online calibration facilitates the use of hadronic particle identification using the RICH detectors at the trigger level. T...

  6. Novel real-time alignment and calibration of the LHCb detector in Run II

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Z., E-mail: zhirui.xu@epfl.ch; Tobin, M.

    2016-07-11

    An automatic real-time alignment and calibration strategy of the LHCb detector was developed for the Run II. Thanks to the online calibration, tighter event selection criteria can be used in the trigger. Furthermore, the online calibration facilitates the use of hadronic particle identification using the Ring Imaging Cherenkov (RICH) detectors at the trigger level. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  7. Novel real-time alignment and calibration of the LHCb detector in Run II

    CERN Document Server

    AUTHOR|(CDS)2086132; Tobin, Mark

    2016-01-01

    An automatic real-time alignment and calibration strategy of the LHCb detector was developed for the Run II. Thanks to the online calibration, tighter event selection criteria can be used in the trigger. Furthermore, the online calibration facilitates the use of hadronic particle identification using the Ring Imaging Cherenkov (RICH) detectors at the trigger level. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  8. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    Directory of Open Access Journals (Sweden)

    Mondry Adrian

    2004-08-01

    Full Text Available Abstract Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with these methods.
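    As a deliberately tiny stand-in for the quantitative cellular automata described above (not the authors' whole-heart model), a Greenberg-Hastings-style excitable-media automaton illustrates the kind of local update rule that such models apply massively in parallel.

```python
# Tiny excitable-media cellular automaton (Greenberg-Hastings style): each cell is
# resting (0), excited (1) or refractory (2..R); a resting cell fires when any of its
# four neighbours is excited, then progresses through the refractory states back to rest.
import numpy as np

R = 4  # number of refractory states

def step(grid):
    new = grid.copy()
    excited = (grid == 1)
    neighbour_excited = (
        np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
        np.roll(excited, 1, 1) | np.roll(excited, -1, 1)
    )
    new[(grid == 0) & neighbour_excited] = 1                             # rest -> excited
    active = (grid >= 1) & (grid < R)
    new[active] = grid[active] + 1                                       # progress through refractory
    new[grid == R] = 0                                                   # refractory -> rest
    return new

if __name__ == "__main__":
    grid = np.zeros((64, 64), dtype=int)
    grid[32, 32] = 1                                                     # single ectopic excitation
    for _ in range(20):
        grid = step(grid)
    print("excited cells after 20 steps:", int((grid == 1).sum()))       # expanding wave front
```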

  9. Novel Real-time Alignment and Calibration of the LHCb detector in Run2

    Science.gov (United States)

    Martinelli, Maurizio; LHCb Collaboration

    2017-10-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  10. A novel method for calculating the energy cost of turning during running

    Directory of Open Access Journals (Sweden)

    Hatamoto Y

    2013-05-01

    Full Text Available Abstract: Although changes of direction are one of the essential locomotor patterns in ball sports, the physiological demand of turning during running has not been previously investigated. We proposed a novel approach by which to evaluate the physiological demand of turning. The purposes of this study were to establish a method of measuring the energy expenditure (EE) of a 180° turn during running and to investigate the effect of two different running speeds on the EE of a 180° turn. Eleven young, male participants performed measurement sessions at two different running speeds (4.3 and 5.4 km/hour). Each measurement session consisted of five trials, and each trial had a different frequency of turns. At both running speeds, as the turn frequency increased the gross oxygen consumption (V̇O2) also increased linearly (4.3 km/hour, r = 0.973; 5.4 km/hour, r = 0.996). The V̇O2 of a turn at 5.4 km/hour (0.55 [SD 0.09] mL/kg) was higher than at 4.3 km/hour (0.34 [SD 0.13] mL/kg) (P < 0.001). We conclude that the gross V̇O2 of running at a fixed speed with turns is proportional to turn frequency and that the EE of a turn is different at different running speeds. The Different Frequency Accumulation Method is a useful tool for assessing the physiological demands of complex locomotor activity. Keywords: energy expenditure, turning, turn frequency, running speed, V̇O2, heart rate
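    The essence of the approach described here, regressing gross V̇O2 against turn frequency so that the slope gives the cost of a single turn, can be sketched as follows; all numbers are made up for illustration.

```python
# Sketch of the idea behind the Different Frequency Accumulation Method: fit a line to
# gross VO2 measured at several turn frequencies for one running speed; the slope is
# the oxygen cost attributable to a single turn, the intercept the cost of plain running.
import numpy as np

def cost_per_turn(turn_freqs_per_min, gross_vo2_ml_kg_min):
    """Return (slope, intercept) of the least-squares line VO2 = slope * frequency + intercept."""
    slope, intercept = np.polyfit(np.asarray(turn_freqs_per_min, float),
                                  np.asarray(gross_vo2_ml_kg_min, float), 1)
    return slope, intercept

if __name__ == "__main__":
    freqs = [4, 8, 12, 16, 20]             # turns per minute (illustrative)
    vo2 = [16.1, 17.4, 18.9, 20.2, 21.5]   # mL/kg/min, synthetic near-linear data
    per_turn, baseline = cost_per_turn(freqs, vo2)
    print(f"energy cost per turn ~ {per_turn:.2f} mL/kg, baseline VO2 ~ {baseline:.1f} mL/kg/min")
```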

  11. Novel real-time alignment and calibration of the LHCb detector in Run2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00144085

    2017-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructur...

  12. Real-time alignment and calibration of the LHCb Detector in Run II

    CERN Multimedia

    Dujany, Giulio

    2016-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run2, LHCb has a new real-time detector alignment and calibration to allow equivalent performance in the online and offline reconstruction to be reached. This offers the opportunity to optimise the event selection by applying stronger constraints, and to use hadronic particle identification at the trigger level. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operational and physics performance points of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.

  13. Real-time alignment and calibration of the LHCb Detector in Run II

    CERN Multimedia

    Dujany, Giulio

    2015-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run2, LHCb will have a new real-time detector alignment and calibration to allow equivalent performance in the online and offline reconstruction to be reached. This offers the opportunity to optimise the event selection by applying stronger constraints, and to use hadronic particle identification at the trigger level. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operational and physics performance points of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.

  14. Reinforcement of drinking by running: effect of fixed ratio and reinforcement time

    Science.gov (United States)

    Premack, David; Schaeffer, Robert W.; Hundt, Alan

    1964-01-01

    Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT. PMID:14120150

  15. REINFORCEMENT OF DRINKING BY RUNNING: EFFECT OF FIXED RATIO AND REINFORCEMENT TIME.

    Science.gov (United States)

    PREMACK, D; SCHAEFFER, R W; HUNDT, A

    1964-01-01

    Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT.

  16. Radionuclide inventories for short run-time space nuclear reactor systems

    International Nuclear Information System (INIS)

    Coats, R.L.

    1993-01-01

    Space Nuclear Reactor Systems, especially those used for propulsion, often have expected operation run times much shorter than those for land-based nuclear power plants. This produces substantially different radionuclide inventories to be considered in the safety analyses of space nuclear systems. This presentation describes an analysis utilizing ORIGEN2 and DKPOWER to provide comparisons among representative land-based and space systems. These comparisons enable early, conceptual considerations of safety issues and features in the preliminary design phases of operational systems, test facilities, and operations by identifying differences between the requirements for space systems and the established practice for land-based power systems. Early indications are that separation distance is much more effective as a safety measure for space nuclear systems than for power reactors because greater decay of the radionuclide activity occurs during the time to transport the inventory a given distance. In addition, the inventories of long-lived actinides are very low for space reactor systems

  17. The effect of time constraints and running phases on combined event pistol shooting performance.

    Science.gov (United States)

    Dadswell, Clare; Payton, Carl; Holmes, Paul; Burden, Adrian

    2016-01-01

    The combined event is a crucial aspect of the modern pentathlon competition, but little is known about how shooting performance changes through the event. This study aimed to identify (i) how performance-related variables changed within each shooting series and (ii) how performance-related variables changed between each shooting series. Seventeen modern pentathletes completed combined event trials. An optoelectronic shooting system recorded score and pistol movement, and force platforms recorded centre of pressure movement 1 s prior to every shot. Heart rate and blood lactate values were recorded throughout the event. Whilst heart rate and blood lactate significantly increased between series, shooting performance-related variables did not differ significantly between series. Thus, combined event shooting performance following each running phase appears similar to shooting performance following only 20 m of running. This finding has potential implications for the way in which modern pentathletes train for combined event shooting, and highlights the need for modern pentathletes to establish new methods with which to enhance shooting accuracy.

  18. Exposure time, running and skill-related performance in international u20 rugby union players during an intensified tournament

    Science.gov (United States)

    Carling, Christopher J.; Flanagan, Eamon; O’Doherty, Pearse; Piscione, Julien

    2017-01-01

    Purpose This study investigated exposure time, running and skill-related performance in two international u20 rugby union teams during an intensified tournament: the 2015 Junior World Rugby Championship. Method Both teams played 5 matches in 19 days. Analyses were conducted using global positioning system (GPS) tracking (Viper 2™, Statsports Technologies Ltd) and event coding (Opta Pro®). Results Of the 62 players monitored, 36 (57.1%) participated in 4 matches and 23 (36.5%) in all 5 matches while player availability for selection was 88%. Analyses of team running output (all players completing >60-min play) showed that the total and peak 5-minute high metabolic load distances covered were likely-to-very likely moderately higher in the final match compared to matches 1 and 2 in back and forward players. In individual players with the highest match-play exposure (participation in >75% of total competition playing time and >75-min in each of the final 3 matches), comparisons of performance in matches 4 and 5 versus match 3 (three most important matches) reported moderate-to-large decreases in total and high metabolic load distance in backs while similar magnitude reductions occurred in high-speed distance in forwards. In contrast, skill-related performance was unchanged, albeit with trivial and unclear changes, while there were no alterations in either total or high-speed running distance covered at the end of matches. Conclusions These findings suggest that despite high availability for selection, players were not over-exposed to match-play during an intensified u20 international tournament. They also imply that the teams coped with the running and skill-related demands. Similarly, individual players with the highest exposure to match-play were also able to maintain skill-related performance and end-match running output (despite an overall reduction in the latter). These results support the need for player rotation and monitoring of performance, recovery and

  19. Time delayed Ensemble Nudging Method

    Science.gov (United States)

    An, Zhe; Abarbanel, Henry

    Optimal nudging methods based on time-delay embedding theory have shown potential for analysis and data assimilation in previous literature. To extend the application and promote practical implementation, a new nudging assimilation method based on the time-delayed embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of numerical prediction, making it a potential alternative in the field of data assimilation for large geophysical models.
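
    As a point of reference for the abstract above, the sketch below implements plain nudging data assimilation on the Lorenz-63 system, observing only the x component; the paper's contribution, coupling the nudging term to a time-delay embedding of the observations, is only indicated in a comment. The model, gain and observation choices are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudged_step(state, obs_x, dt=0.01, gain=5.0):
    """One Euler step of dx/dt = f(x) + G * (y_obs - h(x)), nudging only x.

    A time-delay variant would add further terms built from past observations
    (the delay-embedded record), which is the extension the abstract describes.
    """
    tendency = lorenz(state)
    tendency[0] += gain * (obs_x - state[0])   # relax the model toward the data
    return state + dt * tendency

# toy usage: assimilate noisy observations of x generated from a "truth" run
truth = np.array([1.0, 1.0, 1.0])
model = truth + 0.5                            # perturbed initial condition
rng = np.random.default_rng(0)
for _ in range(2000):
    truth = truth + 0.01 * lorenz(truth)
    model = nudged_step(model, truth[0] + 0.1 * rng.normal())
print("final error:", np.linalg.norm(model - truth))
```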

  20. Run-time Phenomena in Dynamic Software Updating: Causes and Effects

    DEFF Research Database (Denmark)

    Gregersen, Allan Raundahl; Jørgensen, Bo Nørregaard

    2011-01-01

    The development of a dynamic software updating system for statically-typed object-oriented programming languages has turned out to be a challenging task. Despite the fact that the present state of the art in dynamic updating systems, like JRebel, Dynamic Code Evolution VM, JVolve and Javeleon, all provide very transparent and flexible technical solutions to dynamic updating, case studies have shown that designing dynamically updatable applications still remains a challenging task. This challenge has its roots in a number of run-time phenomena that are inherent to dynamic updating of applications written in statically-typed object-oriented programming languages. In this paper, we present our experience from developing dynamically updatable applications using a state-of-the-art dynamic updating system for Java. We believe that the findings presented in this paper provide an important step towards...

  1. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
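
    To make the list of capabilities above concrete, here is a deliberately tiny, hypothetical sketch of what a scheduled, integrated simulation loop of that kind can look like. It is not Trick's API; the class and job names are invented, and the integrated model is a trivial falling point mass.

```python
class Sim:
    """Toy fixed-step simulation executive: periodic jobs + Euler integration."""

    def __init__(self, dt):
        self.dt = dt
        self.jobs = []        # (period in seconds, callable)
        self.recording = []   # simple data recording, one row per step

    def add_job(self, period, job):
        self.jobs.append((period, job))

    def run(self, t_end, state):
        steps_total = int(round(t_end / self.dt))
        for n in range(steps_total):
            t = n * self.dt
            for period, job in self.jobs:
                if n % max(1, int(round(period / self.dt))) == 0:
                    job(t, state)                    # job scheduling
            state["v"] -= 9.81 * self.dt             # numerical integration
            state["x"] += state["v"] * self.dt       # (explicit Euler)
            self.recording.append((t, state["x"], state["v"]))
        return state

sim = Sim(dt=0.01)
sim.add_job(0.5, lambda t, s: print(f"t={t:4.2f}s  x={s['x']:7.2f} m"))
sim.run(2.0, {"x": 100.0, "v": 0.0})
```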

  2. Novel time-dependent alignment of the ATLAS Inner Detector in the LHC Run 2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00386283; The ATLAS collaboration

    2016-01-01

    ATLAS is a multipurpose experiment at the LHC proton-proton collider. Its physics goals require an unbiased and high-resolution measurement of the charged-particle kinematic parameters. These critically depend on the layout and performance of the tracking system and the quality of the alignment of its components. For the LHC Run 2, the system has been upgraded with the installation of a new pixel layer, the Insertable B-layer (IBL). The ATLAS Inner Detector alignment framework has been adapted and upgraded to correct very short time scale movements of the sub-detectors. In particular, a mechanical distortion of the IBL staves of up to 20 μm and a vertical displacement of the Pixel detector of ~6 μm have been observed during data-taking. The techniques used to correct for these effects and to match the required Inner Detector performance will be presented.

  3. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently, it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems ... in the Linux operating system. We are in the process of making this driver part of the mainline Linux kernel.

  4. Supporting Multiprocessors in the Icecap Safety-Critical Java Run-Time Environment

    DEFF Research Database (Denmark)

    Zhao, Shuai; Wellings, Andy; Korsholm, Stephan Erbs

    The current version of the Safety Critical Java (SCJ) specification defines three compliance levels. Level 0 targets single processor programs while Level 1 and 2 can support multiprocessor platforms. Level 1 programs must be fully partitioned but Level 2 programs can also be more globally scheduled. As of yet, there is no official Reference Implementation for SCJ. However, the icecap project has produced a Safety-Critical Java Run-time Environment based on the Hardware-near Virtual Machine (HVM). This supports SCJ at all compliance levels and provides an implementation of the safety-critical Java (javax.safetycritical) package. This is still work-in-progress and lacks certain key features. Among these is the ability to support multiprocessor platforms. In this paper, we explore two possible options to adding multiprocessor support to this environment: the “green thread” and the “native...

  5. New method of steganalysis for text data obtained by synonym run-length encoding

    Directory of Open Access Journals (Sweden)

    Ivan V. Nechta

    2018-05-01

    Full Text Available In this article, we present a new steganalysis method for detecting a text obtained by the synonym Run-Length Encoding. The analyzed RLE method allows one to keep some statistical properties of the text after a secret message is embedded. In particular, the probability distribution of the bits in the extracted message and the probability distribution of the use of text synonyms remain unchanged, which ensures a high degree of secrecy of the considered embedding method. In this paper we show that the embedded message changes the probability distribution of bit-series (run) lengths in the extracted message, and this fact is used for our steganalysis; in other words, the embedded message breaks the statistical structure of the container. The constructed stego-test compares the probability distribution of runs (with length no more than 5 bits) in the message extracted from the container with reference distributions corresponding to empty and embedded containers. Reference distributions were obtained by analysing 1000 natural-text containers taken from the Gutenberg Project library. In this paper we consider two approaches for obtaining reference distributions. The first approach deals with analyzing the statistics of the message extracted from the container in the usual way (using the Tyrannosaurus Lex program). The second approach involves an additional decoding of the message in accordance with the analyzed run-length encoding algorithm. Experimental results allow us to assert that the first approach is more effective. The Kullback-Leibler measure is used as a divergence measure of two probability distributions. It was shown that the proposed method makes it possible to detect the presence of a secret message in a container with a number of synonyms equal to 500, with a false negative error of 1.5% and a false positive error of 1.3%. In comparison with the known analogs, the proposed method demonstrates higher
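
    A minimal sketch of the detection step described above, assuming the message has already been extracted as a bit string: build the distribution of run lengths (capped at 5) and compare it with the two reference distributions using the Kullback-Leibler divergence. The reference values below are placeholders; the paper derives them from 1000 natural-text containers.

```python
import math
from collections import Counter

def run_length_distribution(bits, max_len=5):
    """Relative frequencies of runs of identical bits, run lengths capped at max_len."""
    if not bits:
        return [0.0] * max_len
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(min(count, max_len))
            count = 1
    runs.append(min(count, max_len))
    freq = Counter(runs)
    return [freq.get(k, 0) / len(runs) for k in range(1, max_len + 1)]

def kl_divergence(p, q, eps=1e-9):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# placeholder reference distributions (the paper estimates these from a corpus)
REF_EMPTY = [0.50, 0.25, 0.13, 0.07, 0.05]
REF_STEGO = [0.47, 0.25, 0.13, 0.08, 0.07]

def looks_embedded(bits):
    p = run_length_distribution(bits)
    return kl_divergence(p, REF_STEGO) < kl_divergence(p, REF_EMPTY)

print(run_length_distribution([1, 1, 0, 1, 0, 0, 0, 1]))   # [0.6, 0.2, 0.2, 0.0, 0.0]
```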

  6. Running vacuum in the Universe and the time variation of the fundamental constants of Nature

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsch, Harald [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitaet Muenchen, Physik-Department, Munich (Germany); Sola, Joan [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitat de Barcelona, Departament de Fisica Quantica i Astrofisica, Barcelona, Catalonia (Spain); Universitat de Barcelona (ICCUB), Institute of Cosmos Sciences, Barcelona, Catalonia (Spain); Nunes, Rafael C. [Universidade Federal de Juiz de Fora, Dept. de Fisica, Juiz de Fora, MG (Brazil)

    2017-03-15

    We compute the time variation of the fundamental constants (such as the ratio of the proton mass to the electron mass, the strong coupling constant, the fine-structure constant and Newton's constant) within the context of the so-called running vacuum models (RVMs) of the cosmic evolution. Recently, compelling evidence has been provided that these models are able to fit the main cosmological data (SNIa+BAO+H(z)+LSS+BBN+CMB) significantly better than the concordance ΛCDM model. Specifically, the vacuum parameters of the RVM (i.e. those responsible for the dynamics of the vacuum energy) prove to be nonzero at a confidence level ≳ 3σ. Here we use such remarkable status of the RVMs to make definite predictions on the cosmic time variation of the fundamental constants. It turns out that the predicted variations are close to the present observational limits. Furthermore, we find that the time evolution of the dark matter particle masses should be crucially involved in the total mass variation of our Universe. A positive measurement of this kind of effects could be interpreted as strong support to the "micro-macro connection" (viz. the dynamical feedback between the evolution of the cosmological parameters and the time variation of the fundamental constants of the microscopic world), previously proposed by two of us (HF and JS). (orig.)

  7. Effect of Light/Dark Cycle on Wheel Running and Responding Reinforced by the Opportunity to Run Depends on Postsession Feeding Time

    Science.gov (United States)

    Belke, T. W.; Mondona, A. R.; Conrad, K. M.; Poirier, K. F.; Pickering, K. L.

    2008-01-01

    Do rats run and respond at a higher rate to run during the dark phase when they are typically more active? To answer this question, Long Evans rats were exposed to a response-initiated variable interval 30-s schedule of wheel-running reinforcement during light and dark cycles. Wheel-running and local lever-pressing rates increased modestly during…

  8. Running speed during training and percent body fat predict race time in recreational male marathoners

    OpenAIRE

    Knechtle, Beat; Barandun; Knechtle, Patrizia; Klipstein; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results...

  9. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available The telemetry data are essential in evaluating the performance of aircraft and diagnosing its failures. This work combines oversampling technology with a run-length encoding compression algorithm that uses an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out with the use of FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision high-capacity multichannel acquisition system.
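
    As a rough illustration of the "run-length encoding with an error factor" idea named above (not the paper's FPGA implementation), the sketch below merges successive samples into one run whenever they stay within a tolerance of the run's first value. The tolerance and the sample values are made up.

```python
def rle_with_error_factor(samples, tolerance=2):
    """Compress a list of numeric samples into (value, run_length) pairs."""
    if not samples:
        return []
    encoded = []
    run_value, run_length = samples[0], 1
    for s in samples[1:]:
        if abs(s - run_value) <= tolerance:
            run_length += 1          # sample is "close enough": extend the run
        else:
            encoded.append((run_value, run_length))
            run_value, run_length = s, 1
    encoded.append((run_value, run_length))
    return encoded

print(rle_with_error_factor([100, 101, 99, 100, 250, 251, 250]))
# -> [(100, 4), (250, 3)]
```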

  10. QRTEngine: An easy solution for running online reaction time experiments using Qualtrics

    NARCIS (Netherlands)

    Barnhoorn, Jonathan Sebastiaan; Haasnoot, Erwin; Bocanegra, Bruno R.; van Steenbergen, Henk

    2015-01-01

    Performing online behavioral research is gaining increased popularity among researchers in psychological and cognitive science. However, the currently available methods for conducting online reaction time experiments are often complicated and typically require advanced technical skills. In this

  11. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk–run–rest mixtures

    Science.gov (United States)

    Long, Leroy L.; Srinivasan, Manoj

    2013-01-01

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
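
    The energy-minimization argument above can be reproduced with a toy calculation. The sketch below uses made-up (but plausibly shaped) metabolic-rate curves for walking and running; because their lower envelope is non-convex, a two-speed walk–run mixture covering the same distance in the same time costs less energy than steady travel at the average speed. All coefficients are illustrative assumptions, not the paper's measured values.

```python
import numpy as np

def rate(v):                       # energy per unit time [W/kg], hypothetical
    e_walk = 2.0 + 1.5 * v**2      # walking cost grows steeply with speed
    e_run = 6.0 + 1.0 * v          # running has a high offset, shallow slope
    return min(e_walk, e_run)      # preferred gait = cheaper of the two

D, T = 1000.0, 500.0               # distance [m], allowed time [s] -> 2 m/s average
steady = rate(D / T) * T           # energy for constant-speed travel

best = steady
speeds = np.linspace(0.5, 4.0, 200)
for v1 in speeds:
    for v2 in speeds:
        if v2 <= v1:
            continue
        # split the time as t1 + t2 = T with v1*t1 + v2*t2 = D
        t1 = (v2 * T - D) / (v2 - v1)
        if 0 <= t1 <= T:
            best = min(best, rate(v1) * t1 + rate(v2) * (T - t1))

print(f"steady: {steady:.0f} J/kg, best two-speed mixture: {best:.0f} J/kg")
```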

  12. Real time analysis with the upgraded LHCb trigger in Run III

    Science.gov (United States)

    Szumlak, Tomasz

    2017-10-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1.1 MHz, a rate at which the entire detector is read out. In a second level, implemented in a farm of around 20k parallel-processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software-based trigger system is being developed and it will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also receive a factor of 5 increase in the instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1.1 MHz readout bottleneck, combined with the higher instantaneous luminosity. Many charm hadron signals can be recorded at up to 50 times higher rate. LHCb is implementing a new paradigm in the form of real-time data analysis, in which abundant signals are recorded in a reduced event format that can be fed directly to the physics analyses. These data do not need any further offline event reconstruction, which allows a larger fraction of the grid computing resources to be devoted to Monte Carlo productions. We discuss how this real-time analysis model is absolutely critical to the LHCb upgrade, and how it will evolve during Run-II.

  13. Personal best marathon time and longest training run, not anthropometry, predict performance in recreational 24-hour ultrarunners.

    Science.gov (United States)

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald

    2011-08-01

    In recent studies, a relationship between both low body fat and low thicknesses of selected skinfolds has been demonstrated for running performance at distances from 100 m to the marathon, but not in ultramarathons. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bivariate and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometers run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r² = 0.46) by the following equation: (performance in a 24-hour run, km) = 234.7 + 0.481 × (longest training session before the 24-hour run, km) - 0.594 × (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance but not anthropometric variables. To achieve maximum kilometers in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
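
    As a quick check of the regression reported above, plugging in the article's own recommended values (a ~60 km longest training run and a ~3 h 20 min, i.e. 200-minute, marathon best) gives a predicted distance close to the sample mean of 146.1 km.

```python
def predicted_24h_distance(longest_run_km, marathon_pb_min):
    """Regression from the abstract: predicted 24-hour distance in km."""
    return 234.7 + 0.481 * longest_run_km - 0.594 * marathon_pb_min

print(predicted_24h_distance(60, 200))   # -> 144.76 km
```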

  14. Real-time dual-comb spectroscopy with a free-running bidirectionally mode-locked fiber laser

    Science.gov (United States)

    Mehravar, S.; Norwood, R. A.; Peyghambarian, N.; Kieu, K.

    2016-06-01

    The dual-comb technique has enabled exciting applications in high-resolution spectroscopy, precision distance measurements, and 3D imaging. Major advantages over traditional methods can be achieved with the dual-comb technique. For example, dual-comb spectroscopy provides orders of magnitude improvement in acquisition speed over standard Fourier-transform spectroscopy while still preserving the high-resolution capability. Wider adoption of the technique has, however, been hindered by the need for complex and expensive ultrafast laser systems. Here, we present a simple and robust dual-comb system that employs a free-running bidirectionally mode-locked fiber laser operating at telecommunication wavelengths. Two femtosecond frequency combs (with a small difference in repetition rates) are generated from a single laser cavity to ensure mutually coherent properties and common noise cancellation. As a result, we have achieved real-time absorption spectroscopy measurements without the need for complex servo locking with accurate frequency referencing, and with a relatively high signal-to-noise ratio.

  15. A new view of responses to first-time barefoot running.

    OpenAIRE

    Wilkinson, Mick; Caplan, Nick; Akenhead, Richard; Hayes, Phil

    2015-01-01

    We examined acute alterations in gait and oxygen cost from shod-to-barefoot running in habitually-shod well-trained runners with no prior experience of running barefoot. Thirteen runners completed six-minute treadmill runs shod and barefoot on separate days at a mean speed of 12.5 km·h-1. Steady-state oxygen cost in the final minute was recorded. Kinematic data were captured from 30-consecutive strides. Mean differences between conditions were estimated with 90% confidence intervals. When bar...

  16. The optimal production-run time for a stock-dependent imperfect production process

    Directory of Open Access Journals (Sweden)

    Jain Divya

    2013-01-01

    Full Text Available This paper develops an inventory model for a hypothesized volume flexible manufacturing system in which the production rate is stock-dependent and the system produces both perfect and imperfect quality items. The demand rate of perfect quality items is known and constant, whereas the demand rate of imperfect (non-conforming to specifications quality items is a function of discount offered in the selling price. In this paper, we determine an optimal production-run time and the optimal discount that should be offered in the selling price to influence the sale of imperfect quality items produced by the manufacturing system. The considered model aims to maximize the net profit obtained through the sales of both perfect and imperfect quality items subject to certain constraints of the system. The solution procedure suggests the use of ‘Interior Penalty Function Method’ to solve the associated constrained maximization problem. Finally, a numerical example demonstrating the applicability of proposed model has been included.
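
    The 'Interior Penalty Function Method' mentioned above is a generic constrained-optimization technique; the sketch below shows the idea on a toy one-dimensional profit-maximization problem, not the paper's inventory model. The profit curve, the constraint and the barrier schedule are all illustrative assumptions.

```python
from scipy.optimize import minimize_scalar

def profit(t):
    return -(t - 3.0) ** 2 + 10.0        # hypothetical concave profit in run time t

def g(t, t_max=2.0):
    return t - t_max                     # feasibility requires g(t) <= 0

def interior_penalty_maximize(r_values=(1.0, 0.1, 0.01, 0.001)):
    """Maximize profit(t) s.t. g(t) <= 0 via an inverse-barrier penalty."""
    t_opt = None
    for r in r_values:
        # minimize -profit plus a barrier that blows up as t approaches t_max
        penalized = lambda t: -profit(t) + r * (-1.0 / g(t))
        res = minimize_scalar(penalized, bounds=(0.0, 1.999), method="bounded")
        t_opt = res.x                    # follow the barrier path as r shrinks
    return t_opt

print(interior_penalty_maximize())       # approaches the constrained optimum t = 2
```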

  17. Running out of time: exploring women's motivations for social egg freezing.

    Science.gov (United States)

    Baldwin, Kylie; Culley, Lorraine; Hudson, Nicky; Mitchell, Helene

    2018-04-12

    Few qualitative studies have explored women's use of social egg freezing. Derived from an interview study of 31 participants, this article explores the motivations of women using this technology. Semi-structured interviews were conducted with 31 users of social egg freezing resident in the UK (n = 23), USA (n = 7) and Norway (n = 1). Interviews were face to face (n = 16), through Skype and Facetime (n = 9) or by telephone (n = 6). Data were analyzed using interpretive thematic analysis. Women's use of egg freezing was shaped by fears of running out of time to form a conventional family, difficulties in finding a partner and concerns about "panic partnering", together with a desire to avoid future regrets and blame. For some women, use of egg freezing was influenced by recent fertility or health diagnoses as well as critical life events. A fifth of the participants also disclosed an underlying fertility or health issue as affecting their decision. The study provides new insights into the complex motivations women have for banking eggs. It identifies how women's use of egg freezing was an attempt to "preserve fertility" in the absence of the particular set of "life conditions" they regarded as crucial for pursuing parenthood. It also demonstrates that few women were motivated by a desire to enhance their career and that the boundaries between egg freezing for medical and for social reasons may be more porous than first anticipated.

  18. A Test Run of the EGSIEM Near Real-Time Service Based on GRACE Mission Data

    Science.gov (United States)

    Kvas, A.; Gruber, C.; Gouweleeuw, B.; Guntner, A.; Mayer-Gürr, T.; Flechtner, F. M.

    2017-12-01

    To enable the use of GRACE and GRACE-FO data for rapid monitoring applications, the EGSIEM (European Gravity Service for Improved Emergency Management) project, funded by the Horizon 2020 Framework Program for Research and Innovation of the European Union, has implemented a demonstrator for a near real-time (NRT) gravity field service. The goal of this service is to provide daily gravity field solutions with a maximum latency of five days. For this purpose, two independent approaches were developed at the German Research Centre for Geosciences (GFZ) and Graz University of Technology (TUG). Based on these daily gravity field solutions, statistical flood and drought indicators are derived by the EGSIEM Hydrological Service, developed at GFZ. The NRT products are subsequently provided to the Center for Satellite based Crisis Information (ZKI) at the German Aerospace Center as well as the Global Flood Awareness System (GloFAS) at the Joint Research Center of the European Commission. In the first part of this contribution, the performance of the service based on a statistical analysis of historical flood events during the GRACE period is evaluated. Then, results from the six month long operational test run of the service which started on April 1st 2017 are presented and a comparison between historical and operational gravity products and flood indicators is made.

  19. A Run-Time Verification Framework for Smart Grid Applications Implemented on Simulation Frameworks

    Energy Technology Data Exchange (ETDEWEB)

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2013-05-18

    Smart grid applications are implemented and tested with simulation frameworks as the developers usually do not have access to large sensor networks to be used as a test bed. The developers are forced to map the implementation onto these frameworks, which results in a deviation between the architecture and the code. In turn, this deviation makes it hard to verify behavioral constraints that are described at the architectural level. We have developed the ConArch toolset to support the automated verification of architecture-level behavioral constraints. A key feature of ConArch is programmable mapping from the architecture to the implementation. Here, developers implement queries to identify the points in the target program that correspond to architectural interactions. ConArch generates run-time observers that monitor the flow of execution between these points and verifies whether this flow conforms to the behavioral constraints. We illustrate how the programmable mappings can be exploited for verifying behavioral constraints of a smart grid application that is implemented with two simulation frameworks.
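
    To give a feel for what such a generated run-time observer does, here is a hedged, self-contained sketch (not ConArch itself): two mapped program points report events to an observer that enforces a simple ordering constraint and flags any execution flow that violates it. The constraint and component names are invented.

```python
class OrderConstraintObserver:
    """Checks that, per component, `first_event` happens before `then_event`."""

    def __init__(self, first_event, then_event):
        self.first_event, self.then_event = first_event, then_event
        self.seen_first = set()

    def notify(self, event, component):
        if event == self.first_event:
            self.seen_first.add(component)
        elif event == self.then_event and component not in self.seen_first:
            raise AssertionError(
                f"constraint violated: {component} did {event} "
                f"before {self.first_event}")

observer = OrderConstraintObserver("register", "report")
observer.notify("register", "meter-1")
observer.notify("report", "meter-1")          # ok: register happened first
try:
    observer.notify("report", "meter-2")      # violates the constraint
except AssertionError as violation:
    print("observer caught:", violation)
```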

  20. The Reliability and Validity of a Four-Minute Running Time-Trial in Assessing V˙O2max and Performance

    Directory of Open Access Journals (Sweden)

    Kerry McGawley

    2017-05-01

    Full Text Available Introduction: Traditional graded-exercise tests to volitional exhaustion (GXTs) are limited by the need to establish starting workloads, stage durations, and step increments. Short-duration time-trials (TTs) may be easier to implement and more ecologically valid in terms of real-world athletic events. The purpose of the current study was to assess the reliability and validity of maximal oxygen uptake (V˙O2max) and performance measured during a traditional GXT (STEP) and a four-minute running time-trial (RunTT). Methods: Ten recreational runners (age: 32 ± 7 years; body mass: 69 ± 10 kg) completed five STEP tests with a verification phase (VER) and five self-paced RunTTs on a treadmill. The order of the STEP/VER and RunTT trials was alternated and counter-balanced. Performance was measured as time to exhaustion (TTE) for STEP and VER and distance covered for RunTT. Results: The coefficient of variation (CV) for V˙O2max was similar between STEP, VER, and RunTT (1.9 ± 1.0, 2.2 ± 1.1, and 1.8 ± 0.8%, respectively), but varied for performance between the three types of test (4.5 ± 1.9, 9.7 ± 3.5, and 1.8 ± 0.7% for STEP, VER, and RunTT, respectively). Bland-Altman limits of agreement (bias ± 95%) showed V˙O2max to be 1.6 ± 3.6 mL·kg−1·min−1 higher for STEP vs. RunTT. Peak HR was also significantly higher during STEP compared with RunTT (P = 0.019). Conclusion: A four-minute running time-trial appears to provide more reliable performance data in comparison to an incremental test to exhaustion, but may underestimate V˙O2max.

  1. Generalised functions method in the boundary value problems of elastodynamics by stationary running loads

    International Nuclear Information System (INIS)

    Alexeyeva, L.A.

    2001-01-01

    Investigation of the diffraction of seismic waves on underground tunnels and pipelines by mathematical methods leads to boundary value problems (BVP) for hyperbolic systems of differential equations in domains with cylindrical cavities, where seismic disturbances propagate along the boundaries with subsonic or transonic speeds. Such classes of problems also appear when it is necessary to study the behaviour of underground constructions and the stress-strain state of the environment; in this case, however, the velocities of the running loads are less than the velocities of wave propagation in the surrounding medium. At present, similar problems have been solved only for constructions of circular cylindrical form, using methods of full or partial separation of variables. For cylindrical constructions of complex cross section, rigorous mathematical theories for solving these problems were absent. (author)

  2. Novel real-time alignment and calibration of LHCb detector for Run II and tracking for the upgrade.

    CERN Document Server

    AUTHOR|(CDS)2091576

    2016-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run II. Data collected at the start of the fill is processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. The procedure aims to improve the quality of the online selection and performance stability. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. A similar scheme is planned to be used for Run III foreseen to start in 2020. At that time LHCb will run at an instantaneous luminosity of $2 \times 10^{33}$ cm$^{-2}$ s$^{-1}$ and a fully software based trigger strategy will be used. The new running conditions and the tighter timing constraints in the software trigger (only 13 ms per event are available) represent a big challenge for track reconstruction. The new software based trigger strategy implies a full detector read-out at the collision rate of 40 MHz. High performance ...

  3. A fast running method for predicting the efficiency of core melt spreading for application in ASTEC

    International Nuclear Information System (INIS)

    Spengler, C.

    2010-01-01

    The integral Accident Source Term Evaluation Code (ASTEC) is jointly developed by the French Institut de Radioprotection et de Surete Nucleaire (IRSN) and the German Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH to simulate the complete scenario of a hypothetical severe accident in a nuclear light water reactor, from the initial event until the possible radiological release of fission products out of the containment. In the frame of the new series of ASTEC V2 versions, appropriate model extensions to the European Pressurised Water Reactor (EPR) are under development. With a view to assessing with ASTEC the proper operation of the ex-vessel melt retention and coolability concept of the EPR with regard to melt spreading, an approximation of the area finally covered by the corium and of the distance run by the corium front before freezing is required. A necessary capability of ASTEC is, in a first step, to identify those boundary cases for which there is a potential that the melt will freeze before the spreading area is completely filled. This paper presents a fast running method for estimating the final extent of the area covered with melt, on which a simplified criterion in ASTEC for detecting such boundary cases will be based. If a boundary case is detected, the application of a more detailed method might be necessary to assess further the consequences for the accident sequence. The major objective here is to provide a reliable method for estimating the final result of the spreading and not to provide highly detailed methods to simulate the dynamics of the transient process. (orig.)

  4. Accuracy analysis of the State-of-Charge and remaining run-time determination for lithium-ion batteries

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Op het Veld, J.H.G.; Regtien, Paulus P.L.

    2008-01-01

    This paper describes the various error sources in a real-time State-of-Charge (SoC) evaluation system and their effects on the overall accuracy in the calculation of the remaining run-time of a battery-operated system. The SoC algorithm for Li-ion batteries studied in this paper combines direct
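
    For orientation, the remaining run-time discussed above boils down to dividing the remaining charge by the present discharge current; the error sources the paper analyzes propagate directly into that ratio. The cell parameters in this sketch are arbitrary example values, not the paper's.

```python
def remaining_runtime_hours(soc_percent, capacity_mAh, load_mA):
    """Remaining run-time [h] = remaining charge [mAh] / discharge current [mA]."""
    remaining_mAh = (soc_percent / 100.0) * capacity_mAh
    return remaining_mAh / load_mA

print(remaining_runtime_hours(soc_percent=40.0, capacity_mAh=1100.0, load_mA=250.0))
# -> 1.76 h for a hypothetical 1100 mAh cell at 40 % SoC drawing 250 mA
```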

  5. Accuracy analysis of the state-of-charge and remaining run-time determination for lithium-ion batteries

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Op het Veld, J.H.G.; Regtien, P.P.L.

    2009-01-01

    This paper describes the various error sources in a real-time State-of-Charge (SoC) evaluation system and their effects on the overall accuracy in the calculation of the remaining run-time of a battery-operated system. The SoC algorithm for Li-ion batteries studied in this paper combines direct

  6. Depression of home cage wheel running: a reliable and clinically relevant method to assess migraine pain in rats.

    Science.gov (United States)

    Kandasamy, Ram; Lee, Andrea T; Morgan, Michael M

    2017-12-01

    The development of new anti-migraine treatments is limited by the difficulty in assessing migraine pain in laboratory animals. Depression of activity is one of the few diagnostic criteria for migraine that can be mimicked in rats. The goal of the present study was to test the hypothesis that depression of home cage wheel running is a reliable and clinically relevant method to assess migraine pain in rats. Adult female rats were implanted with a cannula to inject allyl isothiocyanate (AITC) onto the dura to induce migraine pain, as has been shown before. Rats recovered from implantation surgery for 8 days in cages containing a running wheel. Home cage wheel running was recorded 23 h a day. AITC and the migraine medication sumatriptan were administered in the hour prior to onset of the dark phase. Administration of AITC caused a concentration-dependent decrease in wheel running that lasted 3 h. The duration and magnitude of AITC-induced depression of wheel running was consistent following three repeated injections spaced 48 h apart. Administration of sumatriptan attenuated AITC-induced depression of wheel running when a large dose (1 mg/kg) was administered immediately following AITC administration. Wheel running patterns did not change when sumatriptan was given to naïve rats. These data indicate that home cage wheel running is a sensitive, reliable, and clinically relevant method to assess migraine pain in the rat.

  7. [Research and implementation of a real-time monitoring system for running status of medical monitors based on the internet of things].

    Science.gov (United States)

    Li, Yiming; Qian, Mingli; Li, Long; Li, Bin

    2014-07-01

    This paper proposed a real-time monitoring system for running status of medical monitors based on the internet of things. In the aspect of hardware, a solution of ZigBee networks plus 470 MHz networks is proposed. In the aspect of software, graphical display of monitoring interface and real-time equipment failure alarm is implemented. The system has the function of remote equipment failure detection and wireless localization, which provides a practical and effective method for medical equipment management.

  8. Primary and secondary effects of real-time feedback to reduce vertical loading rate during running.

    Science.gov (United States)

    Baggaley, M; Willy, R W; Meardon, S A

    2017-05-01

    Gait modifications are often proposed to reduce average loading rate (AVLR) during running. While many modifications may reduce AVLR, little work has investigated secondary gait changes. Thirty-two rearfoot runners [16M, 16F, 24.7 (3.3) years, 22.72 (3.01) kg/m², >16 km/week] ran at a self-selected speed (2.9 ± 0.3 m/s) on an instrumented treadmill, while 3D mechanics were calculated via real-time data acquisition. Real-time visual feedback was provided in a randomized order to cue a forefoot strike (FFS), a minimum 7.5% decrease in step length, or a minimum 15% reduction in AVLR. AVLR was reduced by FFS (mean difference = 26.4 BW/s; 95% CI = 20.1, 32.7; P < 0.001), shortened step length (8.4 BW/s; 95% CI = 2.9, 14.0; P = 0.004), and cues to reduce AVLR (14.9 BW/s; 95% CI = 10.2, 19.6; P < 0.001). FFS, shortened step length, and cues to reduce AVLR all reduced eccentric knee joint work per km [(-48.2 J/kg*m; 95% CI = -58.1, -38.3; P < 0.001), (-35.5 J/kg*m; 95% CI = -42.4, 28.6; P < 0.001), (-23.1 J/kg*m; 95% CI = -33.3, -12.9; P < 0.001)]. However, FFS and cues to reduce AVLR also increased eccentric ankle joint work per km [(54.49 J/kg*m; 95% CI = 45.3, 63.7; P < 0.001), (9.20 J/kg*m; 95% CI = 1.7, 16.7; P = 0.035)]. Potentially injurious secondary effects associated with FFS and cues to reduce AVLR may undermine their clinical utility. Alternatively, a shortened step length resulted in small reductions in AVLR, without any potentially injurious secondary effects. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. A Time Series Forecasting Method

    Directory of Open Access Journals (Sweden)

    Wang Zhao-Yu

    2017-01-01

    Full Text Available This paper proposes a novel time series forecasting method based on a weighted self-constructing clustering technique. The weighted self-constructing clustering processes all the data patterns incrementally. If a data pattern is not similar enough to an existing cluster, it forms a new cluster of its own. However, if a data pattern is similar enough to an existing cluster, it is removed from the cluster it currently belongs to and added to the most similar cluster. During the clustering process, weights are learned for each cluster. Given a series of time-stamped data up to time t, we divide it into a set of training patterns. By using the weighted self-constructing clustering, the training patterns are grouped into a set of clusters. To estimate the value at time t + 1, we find the k nearest neighbors of the input pattern and use these k neighbors to decide the estimation. Experimental results are shown to demonstrate the effectiveness of the proposed approach.
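
    The forecasting procedure above combines two simple ingredients; the sketch below is a simplified, hedged reconstruction, not the authors' exact algorithm: an incremental self-constructing clustering pass over windowed training patterns, and a k-nearest-neighbour estimate of the value at t + 1. The window length, similarity threshold and k are illustrative assumptions, and the learned cluster weights are reduced to plain centroids.

```python
import numpy as np

def self_constructing_clusters(patterns, threshold=0.3):
    """Each pattern joins the most similar cluster if close enough, else founds one."""
    centers, members = [], []
    for p in patterns:
        if centers:
            dists = [np.linalg.norm(p - c) for c in centers]
            i = int(np.argmin(dists))
            if dists[i] <= threshold:
                members[i].append(p)
                centers[i] = np.mean(members[i], axis=0)  # update the centroid
                continue
        centers.append(np.array(p, dtype=float))
        members.append([p])
    return centers

def knn_forecast(history, window=3, k=3):
    """Estimate the value at t+1 from the k nearest windowed training patterns."""
    X = np.array([history[i:i + window] for i in range(len(history) - window)])
    y = np.array(history[window:])
    query = np.array(history[-window:])
    nearest = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    return float(np.mean(y[nearest]))

series = [1.0, 1.2, 1.1, 1.3, 1.2, 1.4, 1.3, 1.5, 1.4, 1.6]
patterns = [np.array(series[i:i + 3]) for i in range(len(series) - 3)]
print(len(self_constructing_clusters(patterns)), "clusters")
print("forecast for t+1:", knn_forecast(series))
```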

  10. QRTEngine: An easy solution for running online reaction time experiments using Qualtrics.

    Science.gov (United States)

    Barnhoorn, Jonathan S; Haasnoot, Erwin; Bocanegra, Bruno R; van Steenbergen, Henk

    2015-12-01

    Performing online behavioral research is gaining increased popularity among researchers in psychological and cognitive science. However, the currently available methods for conducting online reaction time experiments are often complicated and typically require advanced technical skills. In this article, we introduce the Qualtrics Reaction Time Engine (QRTEngine), an open-source JavaScript engine that can be embedded in the online survey development environment Qualtrics. The QRTEngine can be used to easily develop browser-based online reaction time experiments with accurate timing within current browser capabilities, and it requires only minimal programming skills. After introducing the QRTEngine, we briefly discuss how to create and distribute a Stroop task. Next, we describe a study in which we investigated the timing accuracy of the engine under different processor loads using external chronometry. Finally, we show that the QRTEngine can be used to reproduce classic behavioral effects in three reaction time paradigms: a Stroop task, an attentional blink task, and a masked-priming task. These findings demonstrate that QRTEngine can be used as a tool for conducting online behavioral research even when this requires accurate stimulus presentation times.

  11. Running the running

    OpenAIRE

    Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\\alpha_\\mathrm{s} = \\mathrm{d}n_{\\mathrm{s}} / \\mathrm{d}\\log k$ and the running of the running $\\beta_{\\mathrm{s}} = \\mathrm{d}\\alpha_{\\mathrm{s}} / \\mathrm{d}\\log k$ of the spectral index $n_{\\mathrm{s}}$ of primordial scalar fluctuations. We find $\\alpha_\\mathrm{s}=0.011\\pm0.010$ and $\\beta_\\mathrm{s}=0.027\\...

  12. Recent run-time experience and investigation of impurities in turbines circuit of Helium plant of SST-1

    International Nuclear Information System (INIS)

    Panchal, P.; Panchal, R.; Patel, R.

    2013-01-01

    One of the key sub-systems of the Steady State Superconducting Tokamak (SST-1) is the cryogenic 1.3 kW at 4.5 K helium refrigerator/liquefier system. The helium plant consists of 3 screw compressors, an oil removal system, a purifier and a cold box with 3 turbo expanders (turbines) and a helium cold circulator. During the recent SST-1 plasma campaigns, we observed a high pressure drop, of the order of 3 bar, between the wheel outlet of turbine A and the wheel inlet of turbine B. This significantly higher pressure drop across the turbines reduced the speed of turbines A and B and in turn reduced the overall plant capacity. The helium circuits in the plant have a 10-micron filter at the mouth of turbine B. Initially, the major suspects for such a high blockage were assumed to be air impurity, dust particles or collapse of the filter. Several breaks in plant operation were taken to warm up the turbine circuits up to 90 K to remove condensation of air impurities at the filter. Still, this exercise did not resolve the blockage of the filter in the turbine circuits. A detailed investigation with air/water regeneration and rinsing of the cold box, as well as purification of the helium gas in the buffer tanks, was carried out to remove air impurities from the cold box. A trial run of the cold box was executed in liquefier mode with the turbines cooled to cryogenic temperatures and resolved the blockage in the turbine circuits. The paper describes the run-time experience of the helium plant with helium impurity in the turbine circuits, methods to remove the impurity, demonstration of turbine performance and lessons learnt during this operation. (author)

  13. Running and Metabolic Demands of Elite Rugby Union Assessed Using Traditional, Metabolic Power, and Heart Rate Monitoring Methods

    Directory of Open Access Journals (Sweden)

    Romain Dubois, Thierry Paillard, Mark Lyons, David McGrath, Olivier Maurelli, Jacques Prioux

    2017-03-01

    Full Text Available The aims of this study were (1) to analyze elite rugby union game demands using 3 different approaches: traditional, metabolic and heart rate-based methods, (2) to explore the relationship between these methods and (3) to explore positional differences between the backs and forwards players. Time motion analysis and game demands of fourteen professional players (24.1 ± 3.4 y), over 5 European challenge cup games, were analyzed. Thresholds of 14.4 km·h-1, 20 W.kg-1 and 85% of maximal heart rate (HRmax) were set for high-intensity efforts across the three methods. The mean % of HRmax was 80.6 ± 4.3 % while 42.2 ± 16.5% of game time was spent above 85% of HRmax, with no significant differences between the forwards and the backs. Our findings also show that the backs cover greater distances at high-speed than forwards (% difference: +35.2 ± 6.6%; p<0.01) while the forwards cover more distance than the backs (+26.8 ± 5.7%; p<0.05) in the moderate-speed zone (10-14.4 km·h-1). However, no significant difference in high-metabolic power distance was found between the backs and forwards. Indeed, the high-metabolic power distances were greater than high-speed running distances by 24.8 ± 17.1% for the backs, and 53.4 ± 16.0% for the forwards, with a significant difference (+29.6 ± 6.0% for the forwards; p<0.001) between the two groups. Nevertheless, nearly perfect correlations were found between the total distance assessed using the traditional approach and the metabolic power approach (r = 0.98). Furthermore, there is a strong association (r = 0.93) between the high-speed running distance (assessed using the traditional approach) and the high-metabolic power distance. The HR monitoring methods demonstrate clearly the high physiological demands of professional rugby games. The traditional and the metabolic-power approaches show a close correlation concerning their relative values, nevertheless the difference in absolute values especially for the high

  14. Mapping real-life applications on run-time reconfigurable NoC-based MPSoC on FPGA.

    NARCIS (Netherlands)

    Singh, A.K.; Kumar, A.; Srikanthan, Th.; Ha, Y.

    2010-01-01

    Multiprocessor systems-on-chip (MPSoC) are required to fulfill the performance demand of modern real-life embedded applications. These MPSoCs are employing Network-on-Chip (NoC) for reasons of efficiency and scalability. Additionally, these systems need to support run-time reconfiguration of their

  15. Running retraining to treat lower limb injuries: a mixed-methods study of current evidence synthesised with expert opinion.

    Science.gov (United States)

    Barton, C J; Bonanno, D R; Carr, J; Neal, B S; Malliaras, P; Franklyn-Miller, A; Menz, H B

    2016-05-01

    Running-related injuries are highly prevalent. Synthesise published evidence with international expert opinion on the use of running retraining when treating lower limb injuries. Mixed methods. A systematic review of clinical and biomechanical findings related to running retraining interventions were synthesised and combined with semistructured interviews with 16 international experts covering clinical reasoning related to the implementation of running retraining. Limited evidence supports the effectiveness of transition from rearfoot to forefoot or midfoot strike and increase step rate or altering proximal mechanics in individuals with anterior exertional lower leg pain; and visual and verbal feedback to reduce hip adduction in females with patellofemoral pain. Despite the paucity of clinical evidence, experts recommended running retraining for: iliotibial band syndrome; plantar fasciopathy (fasciitis); Achilles, patellar, proximal hamstring and gluteal tendinopathy; calf pain; and medial tibial stress syndrome. Tailoring approaches to each injury and individual was recommended to optimise outcomes. Substantial evidence exists for the immediate biomechanical effects of running retraining interventions (46 studies), including evaluation of step rate and strike pattern manipulation, strategies to alter proximal kinematics and cues to reduce impact loading variables. Our synthesis of published evidence related to clinical outcomes and biomechanical effects with expert opinion indicates running retraining warrants consideration in the treatment of lower limb injuries in clinical practice. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. Long-run sectoral development time series evidence for the German economy

    OpenAIRE

    Dietrich, Andreas; Krüger, Jens J.

    2008-01-01

    In economic development, long-run structural change among the three main sectors of an economy follows a typical pattern with the primary sector (agriculture, mining) first dominating, followed by the secondary sector (manufacturing) and finally by the tertiary sector (services) in terms of employment and value added. We reconsider the verbal theoretical work of Fourastié and build a simple model encompassing its main features, most notably the macroeconomic influences on the sectoral develop...

  17. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    Science.gov (United States)

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  18. Running and Metabolic Demands of Elite Rugby Union Assessed Using Traditional, Metabolic Power, and Heart Rate Monitoring Methods

    Science.gov (United States)

    Dubois, Romain; Paillard, Thierry; Lyons, Mark; McGrath, David; Maurelli, Olivier; Prioux, Jacques

    2017-01-01

    The aims of this study were (1) to analyze elite rugby union game demands using 3 different approaches: traditional, metabolic and heart rate-based methods (2) to explore the relationship between these methods and (3) to explore positional differences between the backs and forwards players. Time motion analysis and game demands of fourteen professional players (24.1 ± 3.4 y), over 5 European challenge cup games, were analyzed. Thresholds of 14.4 km·h-1, 20 W.kg-1 and 85% of maximal heart rate (HRmax) were set for high-intensity efforts across the three methods. The mean % of HRmax was 80.6 ± 4.3 % while 42.2 ± 16.5% of game time was spent above 85% of HRmax with no significant differences between the forwards and the backs. Our findings also show that the backs cover greater distances at high-speed than forwards (% difference: +35.2 ± 6.6%; p<0.01). ... The HR monitoring methods demonstrate clearly the high physiological demands of professional rugby games. The traditional and the metabolic-power approaches show a close correlation concerning their relative values, nevertheless the difference in absolute values especially for the high-intensity thresholds demonstrates that the metabolic power approach may represent an interesting alternative to the traditional approaches used in evaluating the high-intensity running efforts required in rugby union games. Key points: Elite/professional rugby union players; Heart rate monitoring during official games; Metabolic power approach. PMID:28344455

  19. Optimal design and real time control of the integrated urban run-off system

    DEFF Research Database (Denmark)

    Harremoës, Poul; Rauch, Wolfgang

    1999-01-01

    Traditional design of urban run-off systems is based on fixed rules with respect to the points of demarcation between the three systems involved: the sewer system, the treatment plant and the receiving water. An alternative to fixed rules is to model the total system. There is still uncertainty...... and evaluation of competing alternatives for design. However, the complexity of these systems is such that the parameters associated with pollution are hardly identifiable on the basis of reasonable monitoring programmes. The empirical-iterative approach: structures are built on simplified assumptions...

  20. Learning to Run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments

    OpenAIRE

    Kidziński, Łukasz; Mohanty, Sharada Prasanna; Ong, Carmichael; Huang, Zhewei; Zhou, Shuchang; Pechenko, Anton; Stelmaszczyk, Adam; Jarosik, Piotr; Pavlov, Mikhail; Kolesnikov, Sergey; Plis, Sergey; Chen, Zhibo; Zhang, Zhizheng; Chen, Jiale; Shi, Jun

    2018-01-01

    In the NIPS 2017 Learning to Run challenge, participants were tasked with building a controller for a musculoskeletal model to make it run as fast as possible through an obstacle course. Top participants were invited to describe their algorithms. In this work, we present eight solutions that used deep reinforcement learning approaches, based on algorithms such as Deep Deterministic Policy Gradient, Proximal Policy Optimization, and Trust Region Policy Optimization. Many solutions use similar ...

  1. Estimation of POL-iteration methods in fast running DNBR code

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H. [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this study, various root-finding methods are applied to the POL-iteration module in SCOMS and the POL-iteration efficiency is compared with the reference method. On the basis of these results, the optimum algorithm for the POL iteration is selected. The POL requires iteration until the present local power reaches the limit power. The process of searching for the limiting power is equivalent to finding the root of a nonlinear equation. The POL iteration process used in the online monitoring system employed a variant of the bisection method, which is the most robust algorithm for finding the root of a nonlinear equation. The method, which includes an interval accelerating factor and an escape routine out of ill-posed conditions, assured the robustness of the SCOMS system. The POL-iteration module in SCOMS shall satisfy the requirement of a minimum calculation time. To meet this requirement, a non-iterative algorithm, a few-channel model and a simple steam table are implemented in SCOMS to improve the calculation time. MDNBR evaluation at a given operating condition requires the DNBR calculation at all axial locations. Increasing the POL-iteration number increases the calculation load of SCOMS significantly. Therefore, the calculation efficiency of SCOMS is strongly dependent on the POL-iteration number. In the case study, the iterations of the methods show superlinear convergence for finding the limiting power, but the Brent method shows a quadratic convergence speed. These methods are effective and better than the reference bisection algorithm.
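
    The root-finding view described above can be made concrete with a small sketch (not the SCOMS code): the limiting power is the root of f(P) = MDNBR(P) − DNBR_limit, which can be bracketed and solved by bisection or by Brent's method. The MDNBR model and the limit value below are made-up stand-ins purely to exercise the solvers.

```python
from scipy.optimize import brentq

DNBR_LIMIT = 1.3

def mdnbr(power_fraction):
    return 3.0 / power_fraction          # hypothetical: DNBR falls as power rises

def f(power_fraction):
    return mdnbr(power_fraction) - DNBR_LIMIT

def bisect(lo, hi, tol=1e-6):
    """Plain bisection on a sign-changing bracket, kept here as the reference."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid                     # root is in the lower half
        else:
            lo = mid                     # root is in the upper half
    return 0.5 * (lo + hi)

print("bisection:", bisect(1.0, 5.0))        # -> ~2.3077 (= 3.0 / 1.3)
print("Brent    :", brentq(f, 1.0, 5.0))     # same root, fewer iterations
```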

  2. Hourly Comparison of GPM-IMERG-Final-Run and IMERG-Real-Time (V-03) over a Dense Surface Network in Northeastern Austria

    Science.gov (United States)

    Sharifi, Ehsan; Steinacker, Reinhold; Saghafian, Bahram

    2017-04-01

    Accurate quantitative daily precipitation estimation is key to meteorological and hydrological applications in hazard forecasting and management. In-situ observations over mountainous areas are mostly limited; however, currently available satellite precipitation products can potentially provide the precipitation estimates needed for meteorological and hydrological applications. Over the years, blended methods that use multiple satellites and multiple sensors have been developed for estimating global precipitation. One of the latest satellite precipitation products is GPM-IMERG (Global Precipitation Measurement with 30-minute temporal and 0.1-degree spatial resolution), which consists of three products: Final-Run (aimed at research), Real-Time early run, and Real-Time late run. The Integrated Multisatellite Retrievals for GPM (IMERG) products, built upon the success of TRMM's Multisatellite Precipitation Analysis (TMPA) products, continue to make improvements in spatial and temporal resolution and snowfall estimates. Recently, researchers who evaluated IMERG-Final-Run V-03 and other precipitation products indicated better performance for IMERG-Final-Run against other similar products. In this study two GPM-IMERG products, namely Final Run and Real-Time late run, were evaluated against a dense network of synoptic stations (62 stations) over Northeastern Austria for the period mid-March 2015 to end of January 2016 at the hourly time-scale. Both products were examined against the reference data (stations) in capturing the occurrence of precipitation and the statistical characteristics of precipitation intensity. Both satellite precipitation products underestimated precipitation events of 0.1 mm/hr to 0.4 mm/hr in intensity. For precipitation of 0.4 mm/hr and greater, the trend was reversed and both satellite products overestimated relative to the station-recorded data. IMERG-RT outperformed IMERG-FR for precipitation intensity in the range of 0.1 mm/hr to 0.4 mm/hr while in the range of 1.1 to 1.8 mm

  3. Running into trouble with the time-dependent propagation of a wavepacket

    International Nuclear Information System (INIS)

    Garriz, Abel E; Sztrajman, Alejandro; Mitnik, Darío

    2010-01-01

    The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations. However, we have detected issues raising difficulties in the practical effectuation of these codes which are especially evident when discrete grid methods are used. One issue-relatively well known-appears at high incident energies, producing a wavepacket slower than expected theoretically. The other issue, which appears at low wavepacket energies, does not affect the time evolution of the propagating wavepacket proper, but produces dramatic effects on its spectral decomposition. The origin of the troubles is investigated, and different ways to deal with these issues are proposed. Finally, we show how this problem is manifested and solved in the practical case of the electronic spectra of a metal surface ionized by an ultrashort laser pulse.
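
    A minimal propagation sketch makes the kind of check discussed above concrete. The snippet below is not the authors' code: it propagates a free Gaussian wavepacket with a split-step Fourier scheme (hbar = m = 1), with assumed grid, momentum and time-step values, and compares the measured packet velocity with the theoretical group velocity k0. Because the spectral kinetic operator is dispersion-exact, the two agree here; repeating the same check with a finite-difference kinetic operator is one way to expose the "slower than expected" high-energy artefact the authors describe.

```python
# Minimal sketch (not the authors' code): split-step Fourier propagation of a
# free Gaussian wavepacket with hbar = m = 1; grid size, k0 and dt are assumed.
import numpy as np

N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # angular wavenumbers

k0, sigma, dt, steps = 2.0, 5.0, 0.01, 2000
psi = np.exp(-(x + 50.0) ** 2 / (4 * sigma ** 2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)    # normalize

kinetic_phase = np.exp(-1j * 0.5 * k ** 2 * dt)  # free particle: V(x) = 0
for _ in range(steps):
    psi = np.fft.ifft(kinetic_phase * np.fft.fft(psi))

x_mean = np.sum(x * np.abs(psi) ** 2) * dx       # packet centre after propagation
print("measured packet velocity:", (x_mean - (-50.0)) / (steps * dt))
print("theoretical group velocity:", k0)         # a mismatch flags grid artefacts
```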

  4. Running into trouble with the time-dependent propagation of a wavepacket

    Energy Technology Data Exchange (ETDEWEB)

    Garriz, Abel E; Sztrajman, Alejandro; Mitnik, DarIo, E-mail: dmitnik@df.uba.a [Instituto de AstronomIa y Fisica del Espacio, C.C. 67, Suc. 28, (C1428EGA) Buenos Aires (Argentina)

    2010-07-15

    The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations. However, we have detected issues raising difficulties in the practical effectuation of these codes which are especially evident when discrete grid methods are used. One issue-relatively well known-appears at high incident energies, producing a wavepacket slower than expected theoretically. The other issue, which appears at low wavepacket energies, does not affect the time evolution of the propagating wavepacket proper, but produces dramatic effects on its spectral decomposition. The origin of the troubles is investigated, and different ways to deal with these issues are proposed. Finally, we show how this problem is manifested and solved in the practical case of the electronic spectra of a metal surface ionized by an ultrashort laser pulse.

  5. Low contrast volume run-off CT angiography with optimized scan time based on double-level test bolus technique – feasibility study

    International Nuclear Information System (INIS)

    Baxa, Jan; Vendiš, Tomáš; Moláček, Jiří; Štěpánková, Lucie; Flohr, Thomas; Schmidt, Bernhard; Korporaal, Johannes G.; Ferda, Jiří

    2014-01-01

    Purpose: To verify the technical feasibility of low contrast volume (40 mL) run-off CT angiography (run-off CTA) with individual scan-time optimization based on a double-level test bolus technique. Materials and methods: A prospective study of 92 consecutive patients who underwent run-off CTA performed with 40 mL of contrast medium (injection rate of 6 mL/s) and optimized scan times on a second-generation dual-source CT. Individual optimized scan times were calculated from aortopopliteal transit times obtained with the double-level test bolus technique – a single injection of a 10 mL test bolus and dynamic acquisitions at two levels (abdominal aorta and popliteal arteries). Intraluminal attenuation (HU) was measured at 6 levels (aorta, iliac, femoral and popliteal arteries, middle and distal lower legs) and subjective quality (3-point score) was assessed. The relations between image quality, test-bolus parameters and arterial circulation involvement were analyzed. Results: High mean attenuation values (468, 437, 442, 440, 342 and 274 HU) and quality scores were achieved at all monitored levels. In 91 patients (99%) sufficient diagnostic quality (score 1–2) was obtained in the aorta and the iliac and femoral arteries. A total of 6 patients (7%) were not evaluable in the distal lower legs. Only a weak inverse correlation between image quality and test-bolus parameters was found at the iliac, femoral and popliteal levels (r values: −0.263, −0.298 and −0.254). A statistically significant difference in test-bolus parameters and image quality was found between patients with occlusive and aneurysmal disease. Conclusion: We proved the technical feasibility and sufficient quality of run-off CTA with a low volume of contrast medium and a scan time optimized according to the aortopopliteal transit time calculated from a double-level test bolus

  6. Safety, Liveness and Run-time Refinement for Modular Process-Aware Information Systems with Dynamic Sub Processes

    DEFF Research Database (Denmark)

    Debois, Søren; Hildebrandt, Thomas; Slaats, Tijs

    2015-01-01

    We study modularity, run-time adaptation and refinement under safety and liveness constraints in event-based process models with dynamic sub-process instantiation. The study is part of a larger programme to provide semantically well-founded technologies for modelling, implementation and verification of flexible, run-time adaptable process-aware information systems, moved into practice via the Dynamic Condition Response (DCR) Graphs notation co-developed with our industrial partner. Our key contributions are: (1) a formal theory of dynamic sub-process instantiation for declarative, event-based processes under safety and liveness constraints, given as the DCR* process language, equipped with a compositional operational semantics and conservatively extending the DCR Graphs notation; (2) an expressiveness analysis revealing that the DCR* process language is Turing-complete, while the fragment cor...

  7. Design and Implementation of a New Run-time Life-cycle for Interactive Public Display Applications

    OpenAIRE

    Cardoso, Jorge C. S.; Perpétua, Alice

    2015-01-01

    Public display systems are becoming increasingly complex. They are moving from passive closed systems to open interactive systems that are able to accommodate applications from several independent sources. This shift needs to be accompanied by a more flexible and powerful application management. In this paper, we propose a run-time life-cycle model for interactive public display applications that addresses several shortcomings of current display systems. Our mo...

  8. Effect of injection timing on combustion and performance of a direct injection diesel engine running on Jatropha methyl ester

    Energy Technology Data Exchange (ETDEWEB)

    Jindal, S. [Mechanical Engineering Department, College of Technology & Engineering, Maharana Pratap University of Agriculture and Technology, Udaipur 313001 (India)

    2011-07-01

    The present study evaluates the effect of injection timing on the combustion, performance and emissions of a small diesel engine, commonly used for agricultural purposes, running on pure biodiesel prepared from Jatropha (Jatropha curcas) vegetable oil. The effect of varying the injection timing was evaluated in terms of thermal efficiency, specific fuel consumption, power and mean effective pressure, exhaust temperature, cylinder pressure, rate of pressure rise and heat release rate. It was found that retarding the injection timing by 3 degrees enhances the thermal efficiency by about 8 percent.

  9. Interface Testing for RTOS System Tasks based on the Run-Time Monitoring

    International Nuclear Information System (INIS)

    Sung, Ahyoung; Choi, Byoungju

    2006-01-01

    Safety critical embedded system requires high dependability of not only hardware but also software. It is intricate to modify embedded software once embedded. Therefore, it is necessary to have rigorous regulations to assure the quality of safety critical embedded software. IEEE V and V (Verification and Validation) process is recommended for software dependability, but a more quantitative evaluation method like software testing is necessary. In case of safety critical embedded software, it is essential to have a test that reflects unique features of the target hardware and its operating system. The safety grade PLC (Programmable Logic Controller) is a safety critical embedded system where hardware and software are tightly coupled. The PLC has HdS (Hardware dependent Software) and it is tightly coupled with RTOS (Real Time Operating System). Especially, system tasks that are tightly coupled with target hardware and RTOS kernel have large influence on the dependability of the entire PLC. Therefore, interface testing for system tasks that reflects the features of target hardware and RTOS kernel becomes the core of the PLC integration test. Here, we define interfaces as overlapped parts between two different layers on the system architecture. In this paper, we identify interfaces for system tasks and apply the identified interfaces to the safety grade PLC. Finally, we show the test results through the empirical study

  10. Interface Testing for RTOS System Tasks based on the Run-Time Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Ahyoung; Choi, Byoungju [Ewha University, Seoul (Korea, Republic of)

    2006-07-01

    Safety critical embedded system requires high dependability of not only hardware but also software. It is intricate to modify embedded software once embedded. Therefore, it is necessary to have rigorous regulations to assure the quality of safety critical embedded software. IEEE V and V (Verification and Validation) process is recommended for software dependability, but a more quantitative evaluation method like software testing is necessary. In case of safety critical embedded software, it is essential to have a test that reflects unique features of the target hardware and its operating system. The safety grade PLC (Programmable Logic Controller) is a safety critical embedded system where hardware and software are tightly coupled. The PLC has HdS (Hardware dependent Software) and it is tightly coupled with RTOS (Real Time Operating System). Especially, system tasks that are tightly coupled with target hardware and RTOS kernel have large influence on the dependability of the entire PLC. Therefore, interface testing for system tasks that reflects the features of target hardware and RTOS kernel becomes the core of the PLC integration test. Here, we define interfaces as overlapped parts between two different layers on the system architecture. In this paper, we identify interfaces for system tasks and apply the identified interfaces to the safety grade PLC. Finally, we show the test results through the empirical study.

  11. Upper Bounds Prediction of the Execution Time of Programs Running on ARM Cortex-A Systems

    OpenAIRE

    Fedotova , Irina; Krause , Bernd; Siemens , Eduard

    2017-01-01

    This paper describes the application of statistical analysis of the timing behavior of a generic real-time task model. Using a specific processor of the ARM Cortex-A series and an empirical approach to retrieving time values, an algorithm to predict upper bounds for the time-acquisition task has been formulated. For the experimental verification of the algorithm, we have used the robust Measurement-Based Probabili...
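
    Measurement-based probabilistic timing analysis of this kind is commonly illustrated with an extreme-value fit to measured execution times. The sketch below is a generic illustration of that idea, not the paper's exact algorithm: the synthetic samples, block size and exceedance probability are assumptions.

```python
# Illustrative EVT-style bound on execution time (not the paper's algorithm):
# fit a Gumbel distribution to block maxima of measured execution times and
# report a high quantile as a probabilistic upper bound (pWCET estimate).
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
samples = rng.gamma(shape=9.0, scale=10.0, size=100_000)  # stand-in for measured times (us)

block = 100
maxima = samples[: len(samples) // block * block].reshape(-1, block).max(axis=1)

loc, scale = gumbel_r.fit(maxima)
p_exceed = 1e-6                       # target per-block exceedance probability
bound = gumbel_r.ppf(1.0 - p_exceed, loc=loc, scale=scale)
print(f"pWCET bound at {p_exceed:.0e} exceedance: {bound:.1f} us")
```

    Real MBPTA additionally checks that the measurements are independent and identically distributed before trusting such a fit.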

  12. Experiences of a student-run clinic in primary care: a mixed-method study with students, patients and supervisors

    Science.gov (United States)

    Fröberg, Maria; Leanderson, Charlotte; Fläckman, Birgitta; Hedman-Lagerlöf, Erik; Björklund, Karin; Nilsson, Gunnar H.; Stenfors, Terese

    2018-01-01

    Objective To explore how a student-run clinic (SRC) in primary health care (PHC) was perceived by students, patients and supervisors. Design A mixed methods study. Clinical learning environment, supervision and nurse teacher evaluation scale (CLES + T) assessed student satisfaction. Client satisfaction questionnaire-8 (CSQ-8) assessed patient satisfaction. Semi-structured interviews were conducted with supervisors. Setting Gustavsberg PHC Center, Stockholm County, Sweden. Subjects Students in medicine, nursing, physiotherapy, occupational therapy and psychology and their patients filled in questionnaires. Supervisors in medicine, nursing and physiotherapy were interviewed. Main outcome measures Mean values and medians of CLES + T and CSQ-8 were calculated. Interviews were analyzed using content analysis. Results A majority of 199 out of 227 student respondents reported satisfaction with the pedagogical atmosphere and the supervisory relationship. Most of the 938 patient respondents reported satisfaction with the care given. Interviews with 35 supervisors showed that the organization of the SRC provided time and support to focus on the tutorial assignment. Also, the pedagogical role became more visible and targeted toward the student’s individual needs. However, balancing the student’s level of autonomy and the own control over care was described as a challenge. Many expressed the need for further pedagogical education. Conclusions High student and patient satisfaction reported from five disciplines indicate that a SRC in PHC can be adapted for heterogeneous student groups. Supervisors experienced that the SRC facilitated and clarified their pedagogical role. Simultaneously their need for continuous pedagogical education was highlighted. The SRC model has the potential to enhance student-centered tuition in PHC. Key Points Knowledge of student-run clinics (SRCs) as learning environments within standard primary health care (PHC) is limited. We report

  13. Experiences of a student-run clinic in primary care: a mixed-method study with students, patients and supervisors.

    Science.gov (United States)

    Fröberg, Maria; Leanderson, Charlotte; Fläckman, Birgitta; Hedman-Lagerlöf, Erik; Björklund, Karin; Nilsson, Gunnar H; Stenfors, Terese

    2018-03-01

    To explore how a student-run clinic (SRC) in primary health care (PHC) was perceived by students, patients and supervisors. A mixed methods study. Clinical learning environment, supervision and nurse teacher evaluation scale (CLES + T) assessed student satisfaction. Client satisfaction questionnaire-8 (CSQ-8) assessed patient satisfaction. Semi-structured interviews were conducted with supervisors. Gustavsberg PHC Center, Stockholm County, Sweden. Students in medicine, nursing, physiotherapy, occupational therapy and psychology and their patients filled in questionnaires. Supervisors in medicine, nursing and physiotherapy were interviewed. Mean values and medians of CLES + T and CSQ-8 were calculated. Interviews were analyzed using content analysis. A majority of 199 out of 227 student respondents reported satisfaction with the pedagogical atmosphere and the supervisory relationship. Most of the 938 patient respondents reported satisfaction with the care given. Interviews with 35 supervisors showed that the organization of the SRC provided time and support to focus on the tutorial assignment. Also, the pedagogical role became more visible and targeted toward the student's individual needs. However, balancing the student's level of autonomy and the own control over care was described as a challenge. Many expressed the need for further pedagogical education. High student and patient satisfaction reported from five disciplines indicate that a SRC in PHC can be adapted for heterogeneous student groups. Supervisors experienced that the SRC facilitated and clarified their pedagogical role. Simultaneously their need for continuous pedagogical education was highlighted. The SRC model has the potential to enhance student-centered tuition in PHC. Key Points Knowledge of student-run clinics (SRCs) as learning environments within standard primary health care (PHC) is limited. We report experiences from the perspectives of students, their patients and supervisors

  14. Methods for acquiring data on terrain geomorphology, course geometry and kinematics of competitors' runs in alpine skiing: a historical review.

    Science.gov (United States)

    Erdmann, Włodzimierz S; Giovanis, Vassilis; Aschenbrenner, Piotr; Kiriakis, Vaios; Suchanowski, Andrzej

    2017-01-01

    This paper describes and compares methods of topographic analysis of racing courses in all disciplines of alpine skiing for the purpose of obtaining terrain geomorphology (snowless and with snow), course geometry, and competitors' runs. The review presents specific methods and instruments in the order of their historical appearance: (1) the azimuth method using a compass, tape and goniometer; (2) the optical method using a geodetic theodolite, laser and photocells; (3) the triangulation method with the aid of a tape and goniometer; (4) the image method using video cameras; (5) the differential global positioning system and carrier-phase global positioning system methods. The described methods were used in homologation procedures, at training sessions, during local-level competitions and during International Ski Federation World Championships or World Cups. Some methods were used together. In order to provide detailed data on course setting and skiers' runs, it is recommended to analyse course geometry and the kinematics of competitors' runs for all important competitions.

  15. Run Clever - No difference in risk of injury when comparing progression in running volume and running intensity in recreational runners

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik

    2018-01-01

    Background/aim: The Run Clever trial investigated if there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression...... in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed...... participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning a total of 80 runners sustained an RRI (Sch-I n=36/Sch-V n=44). The cumulative incidence proportion (CIP) in Sch-V (reference...

  16. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Choi, Yong, E-mail: ychoi.image@gmail.com [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Hong, Key Jo [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kang, Jihoon [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Jung, Jin Ho [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Huh, Youn Suk [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Lim, Hyun Keong; Kim, Sang Su [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kim, Byung-Tae [Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Chung, Yonghyun [Department of Radiological Science, Yonsei University College of Health Science, 234 Meaji, Heungup Wonju, Kangwon-Do 220-710 (Korea, Republic of)

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time to digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. As compared to PMT the Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMT and anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4 × 4 array, pixel size: 3 mm × 3 mm) coupled 1:1 with LYSO scintillators (4 × 4 array, pixel size: 3 mm × 3 mm × 20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce GAPD channel number and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information all together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and
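
    The per-channel reduction of a digitized pulse to arrival time, baseline and energy can be sketched in a few lines. The code below is only an illustration of that kind of processing with an assumed pulse window, threshold and sampling rate; it is not the FPGA firmware described above.

```python
# Minimal sketch (assumed pulse model and thresholds, not the actual firmware):
# derive baseline, arrival time and energy from a free-running ADC sample stream.
import numpy as np

def process_pulse(samples, fs_hz, threshold):
    """samples: one digitized pulse window; fs_hz: ADC sampling rate."""
    baseline = np.mean(samples[:8])            # pre-trigger samples
    signal = samples - baseline
    above = np.flatnonzero(signal > threshold)
    if above.size == 0:
        return None                            # no event in this window
    i = above[0]                               # first threshold crossing
    # linear interpolation between samples for a finer arrival-time estimate
    frac = (threshold - signal[i - 1]) / (signal[i] - signal[i - 1])
    arrival_s = (i - 1 + frac) / fs_hz
    energy = signal[above].sum()               # integrated charge above threshold
    return arrival_s, baseline, energy

# toy pulse: flat baseline followed by a short triangular signal
pulse = np.concatenate([np.full(10, 100.0), 100 + np.array([5, 40, 80, 60, 30, 10, 0.0])])
print(process_pulse(pulse, fs_hz=100e6, threshold=20.0))
```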

  17. Effects of Training Leaders in Needs-Based Methods of Running Meetings

    Science.gov (United States)

    Douglass, Emily M.; Malouff, John M.; Rangan, Julie A.

    2015-01-01

    This study evaluated the effects of brief training in how to lead organizational meetings. The training was based on an attendee-needs-based model of running meetings. Twelve mid-level managers completed the training. The study showed a significant pre to post increase in the number of needs-based behaviors displayed by meeting leaders and in…

  18. Apparatus and method for heat-run test on high-power PWM ...

    Indian Academy of Sciences (India)

    Due to its bi-directional power flow capability, this is used in a ..... generation, and selection of controller parameters for the voltage and current loops is presented ..... For example, the total power consumed for heat-run test on converters at a.

  19. Time dependent view factor methods

    International Nuclear Information System (INIS)

    Kirkpatrick, R.C.

    1998-03-01

    View factors have been used for treating radiation transport between opaque surfaces bounding a transparent medium for several decades. However, in recent years they have been applied to problems involving intense bursts of radiation in enclosed volumes such as in the laser fusion hohlraums. In these problems, several aspects require treatment of time dependence

  20. Comparison of the hanging-drop technique and running-drip method for identifying the epidural space in dogs.

    Science.gov (United States)

    Martinez-Taboada, Fernando; Redondo, José I

    2017-03-01

    To compare the running-drip and hanging-drop techniques for locating the epidural space in dogs. Prospective, randomized, clinical trial. Forty-five healthy dogs requiring epidural anaesthesia. Dogs were randomized into four groups and administered epidural anaesthesia in sternal (S) or lateral (L) recumbency. All blocks were performed by the same person using Tuohy needles with either a fluid-prefilled hub (HDo) or connected to a drip set attached to a fluid bag elevated 60 cm (RDi). The number of attempts, 'pop' sensation, clear drop aspiration or fluid dripping, time to locate the epidural space (TTLES) and presence of cerebrospinal fluid (CSF) were recorded. A morphine-bupivacaine combination was injected after positive identification. The success of the block was assessed by experienced observers based on perioperative usage of rescue analgesia. Data were checked for normality. Binomial variables were analysed with the chi-squared or Fisher's exact test as appropriate. Non-parametric data were analysed using Kruskal-Wallis and Mann-Whitney tests. Normal data were studied with an ANOVA followed by a Tukey's means comparison for groups of the same size. A p-value of <0.05 was considered significant. Drop aspiration was observed more often in SHDo (nine of 11 dogs) than in LHDo (two of 11 dogs) (p = 0.045). Mean (range) TTLES was longer in LHDo [47 (18-82) seconds] than in SHDo [20 (14-79) seconds] (p = 0.006) and SRDi [34 (17-53) seconds] (p = 0.038). There were no differences in 'pop' sensation, presence of CSF, rescue analgesia or pain scores between the groups. The running-drip method is a useful and fast alternative technique for identifying the epidural space in dogs. The hanging-drop technique in lateral recumbency was more difficult to perform than the other methods, requiring more time and attempts. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.

  1. Run-time anomaly detection and mitigation in information-rich cyber-physical systems

    Data.gov (United States)

    National Aeronautics and Space Administration — Next generation space missions require autonomous systems to operate without human intervention for long periods of times in highly dynamic environments. Such...

  2. Temporal analysis and scheduling of hard real-time radios running on a multi-processor

    NARCIS (Netherlands)

    Moreira, O.

    2012-01-01

    On a multi-radio baseband system, multiple independent transceivers must share the resources of a multi-processor, while meeting each its own hard real-time requirements. Not all possible combinations of transceivers are known at compile time, so a solution must be found that either allows for

  3. Precise and accurate train run data: Approximation of actual arrival and departure times

    DEFF Research Database (Denmark)

    Richter, Troels; Landex, Alex; Andersen, Jonas Lohmann Elkjær

    with the approximated actual arrival and departure times. As a result, all future statistics can now either be based on track circuit data with high precision or approximated actual arrival times with a high accuracy. Consequently, performance analysis will be more accurate, punctuality statistics more correct, KPI...

  4. Effect of advanced injection timing on emission characteristics of diesel engine running on natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Nwafor, O.M.I. [Department of Mechanical Engineering, Federal University of Technology, Owerri, Imo State (Nigeria)

    2007-11-15

    There has been growing concern about the emission of greenhouse gases into the atmosphere, whose consequence is global warming. The sources of greenhouse gases have been identified, of which the major contributor is the combustion of fossil fuel. Researchers have intensified efforts towards identifying greener alternative fuel substitutes for the present fossil fuel. Natural gas is now being investigated as a potential alternative fuel for diesel engines. Natural gas appears more attractive due to its high octane number and, perhaps, due to its environmentally friendly nature. The test results showed that alternative fuels exhibit longer ignition delay, with slow burning rates. Longer delays will lead to unacceptable rates of pressure rise with the result of diesel knock. This work examines the effect of advanced injection timing on the emission characteristics of a dual-fuel engine. The engine has a standard injection timing of 30° BTDC. The injection was first advanced by 5.5°, giving an injection timing of 35.5° BTDC. The engine performance was erratic on this timing. The injection was then advanced by 3.5°. The engine performance was smooth on this timing, especially at low loading conditions. The ignition delay was reduced through advanced injection timing but tended to incur a slight increase in fuel consumption. The CO and CO₂ emissions were reduced through advanced injection timing. (author)

  5. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which   took place as scheduled during the week of 4 November. The GriN has been the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...

  6. Exposure time, running and skill-related performance in international u20 rugby union players during an intensified tournament.

    Directory of Open Access Journals (Sweden)

    Christopher J Carling

    This study investigated exposure time, running and skill-related performance in two international u20 rugby union teams during an intensified tournament: the 2015 Junior World Rugby Championship. Both teams played 5 matches in 19 days. Analyses were conducted using global positioning system (GPS) tracking (Viper 2™, Statsports Technologies Ltd) and event coding (Opta Pro®). Of the 62 players monitored, 36 (57.1%) participated in 4 matches and 23 (36.5%) in all 5 matches, while player availability for selection was 88%. Analyses of team running output (all players completing >60-min play) showed that the total and peak 5-minute high metabolic load distances covered were likely-to-very likely moderately higher in the final match compared to matches 1 and 2 in back and forward players. In individual players with the highest match-play exposure (participation in >75% of total competition playing time and >75-min in each of the final 3 matches), comparisons of performance in matches 4 and 5 versus match 3 (three most important matches) reported moderate-to-large decreases in total and high metabolic load distance in backs, while similar magnitude reductions occurred in high-speed distance in forwards. In contrast, skill-related performance was unchanged, albeit with trivial and unclear changes, while there were no alterations in either total or high-speed running distance covered at the end of matches. These findings suggest that despite high availability for selection, players were not over-exposed to match-play during an intensified u20 international tournament. They also imply that the teams coped with the running and skill-related demands. Similarly, individual players with the highest exposure to match-play were also able to maintain skill-related performance and end-match running output (despite an overall reduction in the latter). These results support the need for player rotation and monitoring of performance, recovery and intervention strategies during

  7. Formal Specification and Run-time Monitoring Within the Ballistic Missile Defense Project

    National Research Council Canada - National Science Library

    Caffall, Dale S; Cook, Thomas; Drusinsky, Doron; Michael, James B; Shing, Man-Tak; Sklavounos, Nicholas

    2005-01-01

    .... Ballistic Missile Defense Advanced Battle Manager (ABM) project in an effort that is amongst the most comprehensive application of formal methods to a large-scale safety-critical software application ever reported...

  8. A Novel Earphone Type Sensor for Measuring Mealtime: Consideration of the Method to Distinguish between Running and Meals

    Directory of Open Access Journals (Sweden)

    Kazuhiro Taniguchi

    2017-01-01

    In this study, we describe a technique for estimating meal times using an earphone-type wearable sensor. A small optical sensor composed of a light-emitting diode and phototransistor is inserted into the ear hole of a user and estimates the meal times of the user from the time variations in the amount of light received. This is achieved by emitting light toward the inside of the ear canal and receiving light reflected back from the ear canal. This proposed technique allowed “meals” to be differentiated from having conversations, sneezing, walking, ascending and descending stairs, operating a computer, and using a smartphone. Conventional head-worn devices that measure food intake can vibrate during running, as the body is jolted more violently than during walking; this can result in the misidentification of running as eating by these devices. To solve this problem, we used two of our sensors simultaneously: one in the left ear and one in the right ear. This was based on our finding that measurements from the left and right ear canals have a strong correlation during running but no correlation during eating. This allows running and eating to be distinguished based on correlation coefficients, which can reduce misidentification. Moreover, by using an optical sensor composed of a semiconductor, a small and lightweight device can be created. This measurement technique can also measure body motion associated with running, and the data obtained from the optical sensor inserted into the ear can be used to support a healthy lifestyle regarding both eating and exercise.
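
    The left/right correlation idea described above is easy to sketch. The snippet below is only an illustration, not the authors' implementation: the signal names, the 0.6 threshold and the synthetic sensor traces are assumptions; running jolts both ear canals coherently while chewing does not, so the cross-correlation separates the two.

```python
# Sketch: distinguish running from eating via left/right ear-canal correlation.
# Signals and the decision threshold are synthetic/assumed, for illustration only.
import numpy as np

def classify_activity(left, right, threshold=0.6):
    r = np.corrcoef(left, right)[0, 1]
    return ("running (coherent body motion)" if r > threshold else "eating (chewing)", r)

t = np.linspace(0, 10, 1000)
stride = np.sin(2 * np.pi * 2.8 * t)                      # ~2.8 Hz running cadence
rng = np.random.default_rng(1)
running_l = stride + 0.2 * rng.standard_normal(t.size)    # same jolt in both ears
running_r = stride + 0.2 * rng.standard_normal(t.size)
eating_l = rng.standard_normal(t.size)                    # independent canal deformation
eating_r = rng.standard_normal(t.size)

print(classify_activity(running_l, running_r))   # high correlation -> running
print(classify_activity(eating_l, eating_r))     # low correlation  -> eating
```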

  9. Marathon Kids UK: study design and protocol for a mixed methods evaluation of a school-based running programme

    Science.gov (United States)

    Routen, Ash C; Harris, Jo P; Cale, Lorraine A; Gorely, Trish; Sherar, Lauren B

    2018-01-01

    Introduction Schools are promising settings for physical activity promotion; however, they are complex and adaptive systems that can influence the quality of programme implementation. This paper presents an evaluation of a school-based running programme (Marathon Kids). The aims of this study are (1) to identify the processes by which schools implement the programme, (2) identify and explain the contextual factors affecting implementation and explications of effectiveness and (3) examine the relationship between the level of implementation and perceived outcomes. Methods Using a realist evaluation framework, a mixed method single-group before-and-after design, strengthened by multiple interim measurements, will be used. Year 5 (9–10 years old) pupils and their teachers will be recruited from six state-funded primary schools in Leicestershire, UK. Data will be collected once prior to implementation, at five discrete time points during implementation and twice following implementation. A weekly implementation log will also be used. At time point 1 (TP1) (September 2016), data on school environment, teacher and pupil characteristics will be collected. At TP1 and TP6 (July 2017), accelerometry, pupil self-reported physical activity and psychosocial data (eg, social support and intention to be active) will be collected. At TP2, TP3 and TP5 (January, March and June 2017), observations will be conducted. At TP2 and TP5, there will be teacher interviews and pupil focus groups. Follow-up teacher interviews will be conducted at TP7 and TP8 (October 2017 and March 2018) and pupil focus group at TP8. In addition, synthesised member checking will be conducted (June 2018) with a mixed sample of schools. Ethics and dissemination Ethical approval for this study was obtained through Loughborough University Human Participants Ethics Subcommittee (R16-P032 & R16-P116). Findings will be disseminated via print, online media and dissemination events as well as practitioner and

  10. State Space Methods for Timed Petri Nets

    DEFF Research Database (Denmark)

    Christensen, Søren; Jensen, Kurt; Mailund, Thomas

    2001-01-01

    We present two recently developed state space methods for timed Petri nets. The two methods reconcile state space methods and time concepts based on the introduction of a global clock and associating time stamps to tokens. The first method is based on an equivalence relation on states which makes it possible to condense the usually infinite state space of a timed Petri net into a finite condensed state space without losing analysis power. The second method supports on-the-fly verification of certain safety properties of timed systems. We discuss the application of the two methods in a number...

  11. Run-time Adaptable VLIW Processors : Resources, Performance, Power Consumption, and Reliability Trade-offs

    NARCIS (Netherlands)

    Anjam, F.

    2013-01-01

    In this dissertation, we propose to combine programmability with reconfigurability by implementing an adaptable programmable VLIW processor in a reconfigurable hardware. The approach allows applications to be developed at high-level (C language level), while at the same time, the processor

  12. [Professor Feng Run-Shen's essential experience in penetration needling method].

    Science.gov (United States)

    Feng, Mu-Lan

    2009-04-01

    Professor Feng Run-Shen is engaged in medicine for more than 60 years. He pays attention to medical ethics and has perfect medical skill. He energetically advocates combination of acupuncture with medication and stresses the concept of viewing the situation as a whole in selection of acupoints and treatment, particularly, clinical application of point properties. Clinically, he is accomplished in penetration needling, for which one needle acts on two or more points, enlarging the range of needling sensation, so it has very good therapeutic effects on many diseases. In the paper, the case samples about penetration needling in his clinical practice are summarized and introduced.

  13. Deriving Tools from Real-time Runs: A New CCMC Support for SEC and AFWA

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  14. The long-run dynamic relationship between exchange rate and its attention index: Based on DCCA and TOP method

    Science.gov (United States)

    Wang, Xuan; Guo, Kun; Lu, Xiaolin

    2016-07-01

    Behavioral information about financial markets plays an increasingly important role in the modern economic system. The behavioral information reflected in Internet search data has already been used for short-term prediction of exchange rates, stock market returns, house prices and so on. However, the long-run relationship between behavioral information and financial market fluctuations has not been studied systematically. Further, most traditional statistical methods and econometric models cannot capture the dynamic and non-linear relationship. In this paper, an attention index for the CNY/USD exchange rate is constructed based on search data from the 360 search engine in China. The DCCA and Thermal Optimal Path methods are then used to explore the long-run dynamic relationship between the CNY/USD exchange rate and the corresponding attention index. The results show that a significant interdependency exists and that changes in the exchange rate lag 1-2 days behind the attention index.
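
    Detrended cross-correlation analysis (DCCA) of the kind named above can be sketched compactly. The snippet below is a generic illustration, not the study's code: the window size and the synthetic "rate" and "attention" series are assumptions, and it computes the detrended cross-correlation coefficient rho_DCCA at a single scale n.

```python
# Minimal DCCA sketch (illustrative window size and synthetic data, not the
# study's code): detrended cross-correlation coefficient of two series at scale n.
import numpy as np

def dcca_rho(x, y, n):
    x, y = np.asarray(x, float), np.asarray(y, float)
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())   # integrated profiles
    n_boxes = len(X) // n
    f_xy = f_xx = f_yy = 0.0
    t = np.arange(n)
    for b in range(n_boxes):
        sl = slice(b * n, (b + 1) * n)
        # residuals around a linear local trend in each box
        rx = X[sl] - np.polyval(np.polyfit(t, X[sl], 1), t)
        ry = Y[sl] - np.polyval(np.polyfit(t, Y[sl], 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

rng = np.random.default_rng(2)
common = np.cumsum(rng.standard_normal(2000))
rate = np.diff(common) + 0.5 * rng.standard_normal(1999)       # e.g. exchange-rate changes
attention = np.diff(common) + 0.5 * rng.standard_normal(1999)  # e.g. attention-index changes
print(dcca_rho(rate, attention, n=50))   # strongly positive for coupled series
```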

  15. Languages, compilers and run-time environments for distributed memory machines

    CERN Document Server

    Saltz, J

    1992-01-01

    Papers presented within this volume cover a wide range of topics related to programming distributed memory machines. Distributed memory architectures, although having the potential to supply the very high levels of performance required to support future computing needs, present awkward programming problems. The major issue is to design methods which enable compilers to generate efficient distributed memory programs from relatively machine independent program specifications. This book is the compilation of papers describing a wide range of research efforts aimed at easing the task of programmin

  16. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    Science.gov (United States)

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.

  17. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junghoon Lee

    2011-03-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.
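
    As a toy analogue of monitoring through library instrumentation, the sketch below wraps functions in a timing decorator and accumulates call counts and cumulative wall-clock time. The decorator name and report format are assumptions for illustration and are unrelated to the RTM implementation described above.

```python
# Toy analogue of run-time monitoring via library instrumentation (names and
# report format are assumptions, not the RTM described in the abstract above).
import time
from collections import defaultdict
from functools import wraps

_stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def monitored(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            rec = _stats[func.__qualname__]
            rec["calls"] += 1
            rec["seconds"] += time.perf_counter() - start
    return wrapper

@monitored
def stage(n):
    return sum(i * i for i in range(n))

for _ in range(5):
    stage(100_000)
print(dict(_stats))   # per-function call counts and cumulative time
```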

  18. Effect of Minimalist Footwear on Running Efficiency

    Science.gov (United States)

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  19. Real time analysis with the upgraded LHCb trigger in Run-III

    CERN Multimedia

    Szumlak, Tomasz

    2016-01-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1 MHz, a rate at which the entire detector is read out. In a second level, implemented in a farm of around 20k parallel-processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software-based trigger system is being developed, and it will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also receive a factor of 5 increase in instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1 MHz readout bottleneck, combined with the high...

  20. Methods for determining time of death.

    Science.gov (United States)

    Madea, Burkhard

    2016-12-01

    Medicolegal death time estimation must estimate the time since death reliably. Reliability can only be provided empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.

  1. A Novel Time Synchronization Method for Dynamic Reconfigurable Bus

    Directory of Open Access Journals (Sweden)

    Zhang Weigong

    2016-01-01

    UM-BUS is a novel dynamically reconfigurable high-speed serial bus for embedded systems. It can achieve fault tolerance by detecting the channel status in real time and reconfiguring dynamically at run-time. The bus supports direct interconnections between up to eight master nodes and multiple slave nodes. In order to solve the time synchronization problem among master nodes, this paper proposes a novel time synchronization method which can meet the time precision requirement of UM-BUS. In the proposed method, time is first broadcast through time broadcast packets. Then, the transmission delay and time deviations are worked out from three handshakes performed during link self-checking and channel detection, following the IEEE 1588 protocol. Each node thereby calibrates its own time according to the broadcast time. The proposed method has been proved to meet the requirement of real-time time synchronization. The experimental results show that the synchronization precision achieves a bias of less than 20 ns.
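
    The IEEE 1588-style exchange referred to above is usually reduced to two timestamp differences. The sketch below shows the standard offset/delay computation with made-up timestamps; the actual UM-BUS handshake packets and timestamp resolution are not specified here, so all numbers are assumptions.

```python
# Standard IEEE 1588-style offset/delay computation (timestamps are made up;
# the actual UM-BUS handshake details differ from this generic illustration).
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master send, t2: slave receive, t3: slave send, t4: master receive (ns)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Example: slave clock runs 120 ns ahead, true one-way delay 80 ns.
t1 = 1_000
t2 = t1 + 80 + 120     # master -> slave: delay + offset
t3 = t2 + 500          # slave turnaround
t4 = t3 + 80 - 120     # slave -> master: delay - offset
print(ptp_offset_and_delay(t1, t2, t3, t4))   # -> (120.0, 80.0)
```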

  2. Novel methods and expected run II performance of ATLAS track reconstruction in dense environments

    CERN Document Server

    Jansky, Roland Wolfgang; The ATLAS collaboration

    2015-01-01

    Detailed understanding and optimal track reconstruction performance of ATLAS in the core of high pT objects is paramount for a number of techniques such as jet energy and mass calibration, jet flavour tagging, and hadronic tau identification as well as measurements of physics quantities like jet fragmentation functions. These dense environments are characterized by charged particle separations on the order of the granularity of ATLAS’s inner detector. With the insertion of a new innermost layer in this tracking detector, which allows measurements closer to the interaction point, and an increase in the centre of mass energy, these difficult environments will become even more relevant in Run II, such as in searches for heavy resonances. Novel algorithmic developments to the ATLAS track reconstruction software targeting these topologies as well as the expected improved performance will be presented.

  3. Effect of the coefficient of friction of a running surface on sprint time in a sled-towing exercise.

    Science.gov (United States)

    Linthorne, Nicholas P; Cooper, James E

    2013-06-01

    This study investigated the effect of the coefficient of friction of a running surface on an athlete's sprint time in a sled-towing exercise. The coefficients of friction of four common sports surfaces (a synthetic athletics track, a natural grass rugby pitch, a 3G football pitch, and an artificial grass hockey pitch) were determined from the force required to tow a weighted sled across the surface. Timing gates were then used to measure the 30-m sprint time for six rugby players when towing a sled of varied weight across the surfaces. There were substantial differences between the coefficients of friction for the four surfaces (μ = 0.21-0.58), and in the sled-towing exercise the athlete's 30-m sprint time increased linearly with increasing sled weight. The hockey pitch (which had the lowest coefficient of friction) produced a substantially lower rate of increase in 30-m sprint time, but there were no significant differences between the other surfaces. The results indicate that although an athlete's sprint time in a sled-towing exercise is affected by the coefficient of friction of the surface, the relationship between the athlete's rate of increase in 30-m sprint time and the coefficient of friction is more complex than expected.
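
    The two calculations described above are straightforward to illustrate. The sketch below uses made-up numbers, not the study's data: it estimates a surface's coefficient of friction from the horizontal towing force on a sliding weighted sled (assuming a level, horizontal tow), and fits the linear relation between sled weight and 30-m sprint time.

```python
# Illustrative numbers only (not the study's data): estimate a coefficient of
# friction from sled-towing force, then fit sprint time against sled weight.
import numpy as np

G = 9.81  # m/s^2

def coefficient_of_friction(tow_force_n, sled_mass_kg):
    # mu = F / (m * g) for a sled dragged horizontally at constant speed
    return tow_force_n / (sled_mass_kg * G)

print(coefficient_of_friction(tow_force_n=114.0, sled_mass_kg=20.0))  # ~0.58

sled_weight_kg = np.array([0, 10, 20, 30])
sprint_time_s = np.array([4.30, 4.65, 5.02, 5.38])        # assumed measurements
slope, intercept = np.polyfit(sled_weight_kg, sprint_time_s, 1)
print(f"{slope:.3f} s per kg of sled weight, unloaded time {intercept:.2f} s")
```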

  4. Study on control method of running velocity for the permanent magnet-HTSC hybrid magnetically levitated conveyance system

    International Nuclear Information System (INIS)

    Nishio, R.; Ikeda, M.; Sasaki, R.; Ohashi, S.

    2011-01-01

    The hybrid magnetically levitated carrying system is developed. A control method for the running velocity of the carrier is studied. The running velocity is controlled by the current of the propulsion coils, and the propulsion characteristics are improved. We have developed the magnetically levitated carrying system. In this system, the pinning force of a high temperature bulk superconductor (HTSC) is used for levitation and guidance. Four HTSCs are installed on the carrier. The magnetic rail is set on the ground, and flux from the magnetic rail is pinned by the HTSCs. To increase the levitation force, the repulsive force of a permanent magnet is used, forming a hybrid levitation system. The permanent magnet is installed under the load stage of the carrier. The repulsive force of the permanent magnet between the load stage on the carrier and the magnetic rail on the ground is used to support the load weight. Levitation and guidance by the pinning effect of the YBaCuO HTSC in the carrier are used to levitate the carrier body. The load stage is separated from the carrier frame and can move freely in the vertical direction for levitation. For the propulsion system, an electromagnet is installed on the surface of the magnetic rail. In this paper, a control method for the running velocity of the carrier is studied. The propulsion force is generated as follows: air-core copper coils are installed on the magnetic rail, and the interaction between the current of these coils and the permanent magnets on the carrier generates the propulsion force. The running velocity is controlled by the current of the propulsion coils. It also changes with the position of the carrier and the load weight. From the results, the stability of the propulsion system is shown, and the propulsion characteristics are improved.

  5. Study on control method of running velocity for the permanent magnet-HTSC hybrid magnetically levitated conveyance system

    Energy Technology Data Exchange (ETDEWEB)

    Nishio, R.; Ikeda, M.; Sasaki, R. [Kansai University, 3-3-35 Yamate-cho, Suita, Osaka 564-8680 (Japan); Ohashi, S., E-mail: ohashi@kansai-u.ac.jp [Kansai University, 3-3-35 Yamate-cho, Suita, Osaka 564-8680 (Japan)

    2011-11-15

    The hybrid magnetically levitated carrying system is developed. A control method for the running velocity of the carrier is studied. The running velocity is controlled by the current of the propulsion coils, and the propulsion characteristics are improved. We have developed the magnetically levitated carrying system. In this system, the pinning force of a high temperature bulk superconductor (HTSC) is used for levitation and guidance. Four HTSCs are installed on the carrier. The magnetic rail is set on the ground, and flux from the magnetic rail is pinned by the HTSCs. To increase the levitation force, the repulsive force of a permanent magnet is used, forming a hybrid levitation system. The permanent magnet is installed under the load stage of the carrier. The repulsive force of the permanent magnet between the load stage on the carrier and the magnetic rail on the ground is used to support the load weight. Levitation and guidance by the pinning effect of the YBaCuO HTSC in the carrier are used to levitate the carrier body. The load stage is separated from the carrier frame and can move freely in the vertical direction for levitation. For the propulsion system, an electromagnet is installed on the surface of the magnetic rail. In this paper, a control method for the running velocity of the carrier is studied. The propulsion force is generated as follows: air-core copper coils are installed on the magnetic rail, and the interaction between the current of these coils and the permanent magnets on the carrier generates the propulsion force. The running velocity is controlled by the current of the propulsion coils. It also changes with the position of the carrier and the load weight. From the results, the stability of the propulsion system is shown, and the propulsion characteristics are improved.
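
    As a rough illustration of controlling running velocity through coil current, the sketch below closes a proportional-integral loop around a crude first-order plant. The gains, current limit and plant constants are assumptions made for illustration and do not come from the paper.

```python
# Minimal sketch of velocity control through propulsion-coil current (gains,
# current limit and the plant model are assumptions, not the actual system).
def pi_current_command(target_v, measured_v, integ, kp=2.0, ki=0.5, dt=0.01, i_max=10.0):
    """Return the coil current command (A) and the updated integrator state."""
    error = target_v - measured_v
    integ = integ + error * dt
    current = kp * error + ki * integ
    return max(-i_max, min(i_max, current)), integ

# crude plant: acceleration proportional to coil current, with viscous drag
v, integ = 0.0, 0.0
for _ in range(2000):                      # 20 s of simulated time, dt = 0.01 s
    i_cmd, integ = pi_current_command(target_v=1.0, measured_v=v, state_unused := None or integ)
    v += (0.8 * i_cmd - 0.3 * v) * 0.01
print(round(v, 3))                         # settles near the 1.0 m/s set-point
```

    Actually closing the loop in hardware would also have to account for the position- and load-dependence of the thrust constant mentioned in the abstract.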

  6. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method of the correction of counting losses caused by a non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by using the Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
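    The record does not reproduce the interval-distribution formulas themselves. For context, the sketch below shows the conventional correction for a non-extending (non-paralyzable) dead time, n = m/(1 - m*tau), which relies on a separately measured dead time tau; that separate measurement is exactly what the interval-based method is said to avoid. The rate and dead-time values are illustrative.

```python
def correct_nonextending_dead_time(measured_rate, dead_time):
    """Conventional non-extending (non-paralyzable) dead-time correction.

    measured_rate : observed count rate, counts per second
    dead_time     : dead time tau in seconds, known from a separate measurement
    Returns the estimated true rate n = m / (1 - m * tau).
    """
    loss_fraction = measured_rate * dead_time
    if loss_fraction >= 1.0:
        raise ValueError("measured rate inconsistent with the given dead time")
    return measured_rate / (1.0 - loss_fraction)

# Example with illustrative numbers: 9500 counts/s observed, 5 microsecond dead time.
print(correct_nonextending_dead_time(9500.0, 5e-6))   # about 9974 counts/s true rate
```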

  7. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
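    As an illustration of the slew-screening idea described above, the sketch below skips the recalculation of environmental fluxes whenever the change in pointing stays below an angular threshold. The vector representation of a pointing, the 5 degree threshold and the function name are assumptions for illustration, not the WFIRST-AFTA implementation.

```python
import numpy as np

def needs_flux_recalculation(prev_pointing, new_pointing, threshold_deg=5.0):
    """Return True if the slew between two pointing vectors is large enough to
    warrant recomputing environmental fluxes; otherwise the previous radiation
    calculation point is reused. Threshold and representation are assumptions."""
    a = np.asarray(prev_pointing, dtype=float)
    b = np.asarray(new_pointing, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    slew_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return slew_deg > threshold_deg

# A ~2 degree slew reuses the previous fluxes; a ~12 degree slew triggers recomputation.
print(needs_flux_recalculation([1, 0, 0], [0.999, 0.035, 0]))   # False
print(needs_flux_recalculation([1, 0, 0], [0.978, 0.208, 0]))   # True
```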

  8. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    Science.gov (United States)

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

    The foot strike pattern (FSP, description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on the numerical analysis of foot pressure distribution. We analyzed the strike patterns during running of 145 healthy men and women (85 male, 60 female). The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot 2.8 ± 0.4 m/s), faster (shod 3.5 ± 0.6 m/s) and slower (shod 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were simultaneously identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and was used to validate the numerical technique. The automatic numerical approach showed good agreement with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of the results allow the use of this software as a powerful feedback tool in a simple experimental setup.
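    The abstract does not disclose the recognition algorithm itself. As a loosely related illustration, the sketch below classifies a strike with the common "strike index" convention (position of the initial centre of pressure along the footprint, in thirds); this is an assumed stand-in, not the authors' method.

```python
def classify_foot_strike(initial_cop_position, foot_length):
    """Classify a strike by where the first centre-of-pressure sample falls
    along the footprint (strike index in thirds: rear-, mid-, forefoot).
    Illustrative convention only, not the published algorithm."""
    strike_index = initial_cop_position / foot_length
    if strike_index < 1.0 / 3.0:
        return "rearfoot"
    if strike_index < 2.0 / 3.0:
        return "midfoot"
    return "forefoot"

print(classify_foot_strike(6.0, 26.0))    # rearfoot
print(classify_foot_strike(19.0, 26.0))   # forefoot
```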

  9. Generalized Time-Limited Balanced Reduction Method

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza; Shaker, Fatemeh

    2013-01-01

    In this paper, a new method for model reduction of bilinear systems is presented. The proposed technique is from the family of gramian-based model reduction methods. The method uses time-interval generalized gramians in the reduction procedure rather than the ordinary generalized gramians...... and in such a way it improves the accuracy of the approximation within the time-interval which the method is applied. The time-interval generalized gramians are the solutions to the generalized time-interval Lyapunov equations. The conditions for these equations to be solvable are derived and an algorithm...
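    For orientation, the sketch below computes a time-limited controllability gramian for an ordinary linear system, W(t1, t2) = integral over [t1, t2] of exp(A t) B B^T exp(A^T t) dt, by simple trapezoidal quadrature. The paper itself concerns generalized gramians of bilinear systems obtained from generalized time-interval Lyapunov equations, which this toy linear example does not reproduce.

```python
import numpy as np
from scipy.linalg import expm

def time_limited_gramian(A, B, t1, t2, steps=400):
    """Controllability gramian of x' = A x + B u restricted to [t1, t2]:
    W = integral over [t1, t2] of exp(A t) B B^T exp(A^T t) dt,
    approximated with the trapezoidal rule (illustrative sketch only)."""
    ts = np.linspace(t1, t2, steps)
    dt = (t2 - t1) / (steps - 1)
    W = np.zeros((A.shape[0], A.shape[0]))
    for i, t in enumerate(ts):
        eAt = expm(A * t)
        term = eAt @ B @ B.T @ eAt.T
        weight = 0.5 if i in (0, steps - 1) else 1.0
        W += weight * term * dt
    return W

A = np.array([[-1.0, 0.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
print(time_limited_gramian(A, B, 0.0, 2.0))  # symmetric 2x2, positive semi-definite
```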

  10. Evaluation of the 1996 predictions of the run-timing of wild migrant spring/summer yearling chinook in the Snake River Basin using Program RealTime

    International Nuclear Information System (INIS)

    Townsend, R.L.; Yasuda, D.; Skalski, J.R.

    1997-03-01

    This report is a post-season analysis of the accuracy of the 1996 predictions from the program RealTime. Observed 1996 migration data collected at Lower Granite Dam were compared to the predictions made by RealTime for the spring outmigration of wild spring/summer chinook. Appendix A displays the graphical reports of the RealTime program that were interactively accessible via the World Wide Web during the 1996 migration season. Final reports are available at address http://www.cqs.washington.edu/crisprt/. The CRISP model incorporated the predictions of the run status to move the timing forecasts further down the Snake River to Little Goose, Lower Monumental and McNary Dams. An analysis of the dams below Lower Granite Dam is available separately

  11. Time-efficient multidimensional threshold tracking method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten

    2015-01-01

    Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of anothe...

  12. Suicide Method Runs in Families: A Birth Certificate Cohort Study of Adolescent Suicide in Taiwan

    Science.gov (United States)

    Lu, Tsung-Hsueh; Chang, Wan-Ting; Lin, Jin-Jia; Li, Chung-Yi

    2011-01-01

    Suicide method used by adolescents was examined to determine if it was the same as that employed by their suicidal parents. Six hundred eighty adolescents completed suicide between 1997 and 2007, of whom 12 had parents who had previously died by suicide. The suicide method used by these adolescents was compared with that employed by their suicidal…

  13. A time-domain digitally controlled oscillator composed of a free running ring oscillator and flying-adder

    International Nuclear Information System (INIS)

    Liu Wei; Zhang Shengdong; Wang Yangyuan; Li Wei; Ren Peng; Lin Qinglong

    2009-01-01

    A time-domain digitally controlled oscillator (DCO) is proposed. The DCO is composed of a free-running ring oscillator (FRO) and a flying-adder (FA) with two integrated lap selectors. With a coiled cell array which allows uniform loading capacitances of the delay cells, the FRO produces 32 outputs with consistent tap spacing for the FA as reference clocks. The FA uses the outputs from the FRO to generate the output of the DCO according to the control number, resulting in a linear dependence of the output period, rather than the frequency, on the digital control word input. Thus the proposed DCO ensures good conversion linearity in the time domain, and is suitable for time-domain all-digital phase-locked loop applications. The DCO was implemented in a standard 0.13 μm digital logic CMOS process. The measurement results show that the DCO has a linear and monotonic tuning curve with a gain variation of less than 10%, and a very low root mean square period jitter of 9.3 ps in the output clocks. The DCO works well at supply voltages ranging from 0.6 to 1.2 V, and consumes 4 mW of power with 500 MHz frequency output at 1.2 V supply voltage.
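    A minimal behavioural model of the period-linear control described above is sketched below: with a uniform tap spacing and control word N, the output period is approximately N times the tap spacing. The 62.5 ps tap spacing (a 500 MHz, 32-tap FRO) and the function name are assumed values for illustration only.

```python
def dco_output_period_ps(control_word, tap_spacing_ps=62.5):
    """Behavioural sketch of a period-linear flying-adder DCO: the output
    period is the control word times the FRO tap spacing. 62.5 ps corresponds
    to a 500 MHz, 32-tap ring oscillator (assumed numbers)."""
    return control_word * tap_spacing_ps

period = dco_output_period_ps(32)
print(period, 1e6 / period)   # 2000.0 ps period -> 500.0 MHz output
```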

  14.  Running speed during training and percent body fat predict race time in recreational male marathoners

    OpenAIRE

    Barandun U; Knechtle B; Knechtle P; Klipstein A; Rust CA; Rosemann T; Lepers R

    2012-01-01

     Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners.Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times.Results...

  15. Comparative analysis of methods of training and dietary habits of skilled bodybuilders in the run-general preparatory stage

    Directory of Open Access Journals (Sweden)

    Dzhym V.Y.

    2015-02-01

    Full Text Available Purpose: comparative analysis of the training and nutrition methods of bodybuilders in the general preparatory stage (duration 4-5 months, or 20 microcycles). The characteristics of different training methods used by bodybuilders to increase muscle mass were analyzed. Material: the study involved 8 skilled bodybuilders who are members of the Kharkiv region team. Results: a comparative characterization of the most commonly used training and nutrition methods in bodybuilding is given. The optimal technique for athletes, depending on their initial form at the beginning of the general preparatory phase of training, was identified and substantiated. Changes in body weight are presented as a function of the amounts of carbohydrates, proteins and fats consumed by the athlete. Conclusions: the whole training period was characterized by a strongly protein-oriented diet. The proportion of this nutrient was 40% in the first quarter, 50% in the second, and 60% in the third; only in the last two microcycles did it decrease to 50%.

  16. Multiple time scale methods in tokamak magnetohydrodynamics

    International Nuclear Information System (INIS)

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed

  17. [A new measurement method of time-resolved spectrum].

    Science.gov (United States)

    Shi, Zhi-gang; Huang, Shi-hua; Liang, Chun-jun; Lei, Quan-sheng

    2007-02-01

    A new method for measuring time-resolved spectra (TRS) is presented. A program written in assembly language controlled the microcontroller (AT89C51), and a peripheral circuit constituted the drive circuit, which drove the stepping motor that scans the monochromator, so that light of the desired wavelengths could be obtained. The optical signal was converted to an electrical signal by a photomultiplier tube (Hamamatsu 1P28). The electrical spectrum signal was transmitted to an oscilloscope, and through an RS232 serial connection between the oscilloscope and a computer the data were transferred to the computer, where software drew the decay curve and the time-resolved spectrum of the sample. The method features parallel measurement on the time scale but serial measurement on the wavelength scale. The time-resolved spectrum and the integrated emission spectrum of Tb3+ in the sample Tb(o-BBA)3 phen were measured using this method and compared with the real time-resolved spectrum; the method was validated to be feasible, credible and convenient. The 3D spectra of fluorescence intensity-wavelength-time and the integrated spectrum of the sample Tb(o-BBA)3 phen are given.

  18. Paragogy and Flipped Assessment: Experience of Designing and Running a MOOC on Research Methods

    Science.gov (United States)

    Lee, Yenn; Rofe, J. Simon

    2016-01-01

    This study draws on the authors' first-hand experience of designing, developing and delivering (3Ds) a massive open online course (MOOC) entitled "Understanding Research Methods" since 2014, largely but not exclusively for learners in the humanities and social sciences. The greatest challenge facing us was to design an assessment…

  19. Implementation of a fast running full core pin power reconstruction method in DYN3D

    International Nuclear Information System (INIS)

    Gomez-Torres, Armando Miguel; Sanchez-Espinoza, Victor Hugo; Kliem, Sören; Gommlich, Andre

    2014-01-01

    Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single fuel assembly, a group of assemblies, or all fuel assemblies (square, hex). • Combination of nodal with pin-wise solutions (non-conform geometry). • PPR capabilities shown for a rod ejection accident (REA) in a PWR minicore and for a whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D, with the aim of better describing the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the newly implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or an eighth of a core, or even of a whole core. The application of PPR in core regions of interest will pave the way for coupling with sub-channel codes, enabling the prediction of local safety parameters. One of the main advantages of considering regions, and not only a hot fuel assembly (FA), is that the cross flow within the region can be taken into account by the sub-channel code. The implementation of the new PPR method has been tested by analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FAs. Finally, the new capabilities of DYN3D are demonstrated by analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions.

  20. Implementation of a fast running full core pin power reconstruction method in DYN3D

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Torres, Armando Miguel [Instituto Nacional de Investigaciones Nucleares, Department of Nuclear Systems, Carretera Mexico – Toluca s/n, La Marquesa, 52750 Ocoyoacac (Mexico); Sanchez-Espinoza, Victor Hugo, E-mail: victor.sanchez@kit.edu [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology, Hermann-vom-Helmhotz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Kliem, Sören; Gommlich, Andre [Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden (Germany)

    2014-07-01

    Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single fuel assembly, a group of assemblies, or all fuel assemblies (square, hex). • Combination of nodal with pin-wise solutions (non-conform geometry). • PPR capabilities shown for a rod ejection accident (REA) in a PWR minicore and for a whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D, with the aim of better describing the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the newly implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or an eighth of a core, or even of a whole core. The application of PPR in core regions of interest will pave the way for coupling with sub-channel codes, enabling the prediction of local safety parameters. One of the main advantages of considering regions, and not only a hot fuel assembly (FA), is that the cross flow within the region can be taken into account by the sub-channel code. The implementation of the new PPR method has been tested by analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FAs. Finally, the new capabilities of DYN3D are demonstrated by analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions.

  1. Determination of patellofemoral pain sub-groups and development of a method for predicting treatment outcome using running gait kinematics.

    Science.gov (United States)

    Watari, Ricky; Kobsar, Dylan; Phinyomark, Angkoon; Osis, Sean; Ferber, Reed

    2016-10-01

    Not all patients with patellofemoral pain exhibit successful outcomes following exercise therapy. Thus, the ability to identify patellofemoral pain subgroups related to treatment response is important for the development of optimal therapeutic strategies to improve rehabilitation outcomes. The purpose of this study was to use baseline running gait kinematic and clinical outcome variables to classify patellofemoral pain patients on treatment response retrospectively. Forty-one individuals with patellofemoral pain who underwent a 6-week exercise intervention program were sub-grouped as treatment Responders (n=28) and Non-responders (n=13) based on self-reported measures of pain and function. Baseline three-dimensional running kinematics and self-reported measures underwent a linear discriminant analysis of the principal components of the variables to retrospectively classify participants based on treatment response. The significance of the discriminant function was verified with a Wilks' lambda test (α=0.05). The model selected 2 gait principal components and had a 78.1% classification accuracy. Overall, Non-responders exhibited greater ankle dorsiflexion, knee abduction and hip flexion during the swing phase and greater ankle inversion during the stance phase, compared to Responders. This is the first study to investigate an objective method that uses baseline kinematic and self-reported outcome variables to classify patellofemoral pain patients on treatment outcome. This study represents a significant first step towards a method to help clinicians make evidence-informed decisions regarding optimal treatment strategies for patients with patellofemoral pain. Copyright © 2016 Elsevier Ltd. All rights reserved.
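    A minimal sketch of the analysis pipeline described above (principal components of the input variables followed by a linear discriminant classifier, evaluated by cross-validation) is given below using synthetic data; the preprocessing and the synthetic inputs are assumptions, not the authors' retained model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 41 "patients", each described by 300 waveform samples.
X = rng.normal(size=(41, 300))
y = np.array([1] * 28 + [0] * 13)   # 1 = Responder, 0 = Non-responder (labels only)

# PCA compresses the waveforms to a few scores; LDA separates the two sub-groups.
model = make_pipeline(StandardScaler(), PCA(n_components=2),
                      LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())  # ~chance level on random data
```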

  2. Short run hydrothermal coordination with network constraints using an interior point method

    International Nuclear Information System (INIS)

    Lopez Lezama, Jesus Maria; Gallego Pareja, Luis Alfonso; Mejia Giraldo, Diego

    2008-01-01

    This paper presents a linear optimization model to solve the hydrothermal coordination problem. The main contribution of this work is the inclusion of network constraints in the hydrothermal coordination problem and its solution using an interior point method. The proposed model allows working with a system that can be completely hydraulic, completely thermal, or mixed. Results are presented for the IEEE 14-bus test system.
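    A toy version of such a linear program, solved with an interior-point algorithm, is sketched below: minimize thermal fuel cost over two periods subject to demand balance and a hydro energy budget. The variables, numbers and the SciPy solver are illustrative assumptions; the paper's model additionally includes network constraints.

```python
from scipy.optimize import linprog

# Decision variables: [thermal_1, thermal_2, hydro_1, hydro_2] in MWh per period.
cost = [30.0, 30.0, 0.0, 0.0]            # only thermal generation has a fuel cost

A_eq = [[1, 0, 1, 0],                    # generation meets demand in period 1
        [0, 1, 0, 1]]                    # generation meets demand in period 2
b_eq = [120.0, 180.0]

A_ub = [[0, 0, 1, 1]]                    # total hydro limited by the reservoir budget
b_ub = [150.0]

bounds = [(0, 100), (0, 100), (0, 90), (0, 90)]   # unit capacity limits

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs-ipm")
print(res.x, res.fun)   # hydro is used up first; thermal covers the remainder
```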

  3. AQUAPEAT 95. New methods for purifying the run-offs of peat production areas

    International Nuclear Information System (INIS)

    Selin, P.; Marja-aho, J.; Madekivi, O.

    1994-01-01

    The aim of the Aqua Peat 95 project was to develop new methods for purifying the runoff coming from peat production areas. The national water protection programme for 1995 (Ympäristöministeriö 1988), as well as the requirements and instructions of the authorities, will obligate peat producers to find new and practical methods for water purification. The chemical treatment reduced the load from peat production areas, and the quality of the treated water was almost equal to that of runoff from a natural bog area. The chemicals were the same as those used in purifying drinking water. This purification method is quite expensive and therefore applicable only in special cases. The transpiration and evaporation and the soil filtering capacity of the forest area were also observed. The purifying capacity was very good, especially for total nutrients and suspended solids. The changes in groundwater quality were insignificant, but the level of the groundwater in the field areas was higher than before. Long-term changes in the vegetation and the trees could not yet be seen. The most important water management practice is detention of the discharge. The capacity of sedimentation will increase by using flow regulation in the sedimentation ponds and ditches. The changes in the water biology downstream of the Läyniönsuo peat production area were clearly seen near the main ditch. Because of the suspended solids the bottom sediment changed, which led to impacts on the bottom fauna. The colour of the runoff as well as the changes in the sediment influenced the macrophytes

  4. Time-dependent problems and difference methods

    CERN Document Server

    Gustafsson, Bertil; Oliger, Joseph

    2013-01-01

    Praise for the First Edition: ". . . fills a considerable gap in the numerical analysis literature by providing a self-contained treatment . . . this is an important work written in a clear style . . . warmly recommended to any graduate student or researcher in the field of the numerical solution of partial differential equations." -SIAM Review. Time-Dependent Problems and Difference Methods, Second Edition continues to provide guidance for the analysis of difference methods for computing approximate solutions to partial differential equations for time-de

  5. Method and codes for solving the optimization problem of initial material distribution and controlling of reactor during the run

    International Nuclear Information System (INIS)

    Isakova, L.Ya.; Rachkova, D.A.; Vtorova, O.Yu.; Matekin, M.P.; Sobol, I.M.

    1992-01-01

    The optimization problem of the initial distribution of the fuel composition and the control of the reactor during the run is solved. The optimization problem is formulated as a multicriterial one with different types of constraints. The distinguishing feature of the proposed method is the systematic scanning of multidimensional areas, where the trial points in the parameter space are the points of uniformly distributed LPτ sequences. The reactor computation is carried out by the four-group diffusion method in two-dimensional cylindrical geometry. The burnable absorbers are taken into account as additional absorption cross-sections, represented by approximants. The tables of trials make it possible to estimate the values of the global extrema. The coordinates of the points where the extremal values are attained can be estimated too
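    LPτ sequences are the Sobol low-discrepancy sequences, so the systematic scanning of a parameter box can be illustrated as below. The two "criteria" functions are invented stand-ins for the four-group diffusion computation, and the bounds and sample size are arbitrary.

```python
import numpy as np
from scipy.stats import qmc

def reactor_criteria(x):
    """Stand-in for the reactor computation: returns two hypothetical criteria.
    The real evaluation would be the four-group diffusion calculation."""
    merit = -np.sum((x - 0.6) ** 2)          # hypothetical criterion 1 (maximize)
    peaking = np.max(x) - np.min(x)          # hypothetical criterion 2 (constraint)
    return merit, peaking

# LP_tau points are Sobol points: a systematic, uniformly distributed scan.
sampler = qmc.Sobol(d=4, scramble=False)
unit_points = sampler.random_base2(m=7)                      # 2**7 = 128 trials
points = qmc.scale(unit_points, l_bounds=[0.2] * 4, u_bounds=[0.9] * 4)

table = [(tuple(np.round(p, 3)),) + reactor_criteria(p) for p in points]
best = max(table, key=lambda row: row[1])
print(best)   # trial point with the best value of criterion 1
```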

  6. Predicting timing of foot strike during running, independent of striking technique, using principal component analysis of joint angles.

    Science.gov (United States)

    Osis, Sean T; Hettinga, Blayne A; Leitch, Jessica; Ferber, Reed

    2014-08-22

    As 3-dimensional (3D) motion-capture for clinical gait analysis continues to evolve, new methods must be developed to improve the detection of gait cycle events based on kinematic data. Recently, the application of principal component analysis (PCA) to gait data has shown promise in detecting important biomechanical features. Therefore, the purpose of this study was to define a new foot strike detection method for a continuum of striking techniques, by applying PCA to joint angle waveforms. In accordance with Newtonian mechanics, it was hypothesized that transient features in the sagittal-plane accelerations of the lower extremity would be linked with the impulsive application of force to the foot at foot strike. Kinematic and kinetic data from treadmill running were selected for 154 subjects, from a database of gait biomechanics. Ankle, knee and hip sagittal plane angular acceleration kinematic curves were chained together to form a row input to a PCA matrix. A linear polynomial was calculated based on PCA scores, and a 10-fold cross-validation was performed to evaluate prediction accuracy against gold-standard foot strike as determined by a 10 N rise in the vertical ground reaction force. Results show 89-94% of all predicted foot strikes were within 4 frames (20 ms) of the gold standard with the largest error being 28 ms. It is concluded that this new foot strike detection is an improvement on existing methods and can be applied regardless of whether the runner exhibits a rearfoot, midfoot, or forefoot strike pattern. Copyright © 2014 Elsevier Ltd. All rights reserved.
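    A schematic version of this approach is sketched below: sagittal-plane angular-acceleration waveforms are chained into one row per trial, reduced by PCA, and a linear model predicting the gold-standard foot-strike frame is cross-validated. The synthetic arrays, component count and fold structure are assumptions standing in for the kinematic and force-plate data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_subjects, n_frames = 154, 100

# One row per subject: ankle, knee and hip angular accelerations chained together.
X = np.hstack([rng.normal(size=(n_subjects, n_frames)) for _ in range(3)])

# Gold standard: frame at which the vertical GRF first exceeds 10 N (synthetic here).
y = rng.integers(35, 45, size=n_subjects).astype(float)

scores = PCA(n_components=5).fit_transform(X)
pred = cross_val_predict(LinearRegression(), scores, y, cv=10)
print(np.mean(np.abs(pred - y) <= 4))   # fraction within 4 frames (20 ms at 200 Hz)
```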

  7. On characteristics of magnetization by alternating current yoke method with running four poles

    International Nuclear Information System (INIS)

    Maeda, N.; Miyoshi, S.; Toriumi, T.

    1988-01-01

    In magnetic particle examinations, defects are most easily detected when a magnetic field is applied in a direction normal to the longitudinal direction of the defects. It is well known that a magnetic field intersecting defects at no less than 45 degrees is necessary for their detection. Therefore, it is general practice to perform magnetic particle examination with magnetization from two perpendicular directions in order to ensure detection of defects whose direction is unknown. For example, in the yoke method for welds, the common practice is to magnetize from two directions. To overcome the inefficiency of having to test twice, the authors report how a new type of four-pole yoke was made

  8. Liquidity Runs

    NARCIS (Netherlands)

    Matta, R.; Perotti, E.

    2016-01-01

    Can the risk of losses upon premature liquidation produce bank runs? We show how a unique run equilibrium driven by asset liquidity risk arises even under minimal fundamental risk. To study the role of illiquidity we introduce realistic norms on bank default, such that mandatory stay is triggered

  9. Who runs public health? A mixed-methods study combining qualitative and network analyses.

    Science.gov (United States)

    Oliver, Kathryn; de Vocht, Frank; Money, Annemarie; Everett, Martin

    2013-09-01

    Persistent health inequalities encourage researchers to identify new ways of understanding the policy process. Informal relationships are implicated in finding evidence and making decisions for public health policy (PHP), but few studies use specialized methods to identify key actors in the policy process. We combined network and qualitative data to identify the most influential individuals in PHP in a UK conurbation and describe their strategies to influence policy. Network data were collected by asking for nominations of powerful and influential people in PHP (n = 152, response rate 80%), and 23 semi-structured interviews were analysed using a framework approach. The most influential PHP makers in this conurbation were mid-level managers in the National Health Service and local government, characterized by managerial skills: controlling policy processes through gate keeping key organizations, providing policy content and managing selected experts and executives to lead on policies. Public health professionals and academics are indirectly connected to policy via managers. The most powerful individuals in public health are managers, not usually considered targets for research. As we show, they are highly influential through all stages of the policy process. This study shows the importance of understanding the daily activities of influential policy individuals.

  10. Running Club

    CERN Multimedia

    Running Club

    2010-01-01

    The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...

  11. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

    Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time scales build a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here, two different sets of coefficients arise for the same model, originating from the forward and backward jump operators. The occurrence of such a situation corresponds to the total of the vertical deviations between the regression equations and the observation values of the forward and backward jump operators, divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We think that time scale theory could offer a new perspective on least squares, especially when the assumptions of linear regression are violated.

  12. Energy expended and knee joint load accumulated when walking, running, or standing for the same amount of time.

    Science.gov (United States)

    Miller, Ross H; Edwards, W Brent; Deluzio, Kevin J

    2015-01-01

    Evidence suggests prolonged bouts of sitting are unhealthy, and some public health messages have recently recommended replacing sitting with more standing. However, the relative benefits of replacing sitting with standing compared to locomotion are not known. Specifically, the biomechanical consequences of standing compared to other sitting-alternatives like walking and running are not well known and are usually not considered in studies on sitting. We compared the total knee joint load accumulated (TKJLA) and the total energy expended (TEE) when performing either walking, running, or standing for a common exercise bout duration (30 min). Walking and running both (unsurprisingly) had much more TEE than standing (+300% and +1100%, respectively). TKJLA was similar between walking and standing and 74% greater in running. The results suggest that standing is a poor replacement for walking and running if one wishes to increase energy expenditure, and may be particularly questionable for use in individuals at risk of knee osteoarthritis due to its surprisingly high TKJLA (just as high as walking, 56% of the load in running) and the type of loading (continuous compression) it places on cartilage. However, standing has health benefits as an "inactivity interrupter" that extend beyond its direct energy expenditure. We suggest that future studies on standing as an inactivity intervention consider the potential biomechanical consequences of standing more often throughout the day, particularly in the case of prolonged bouts of standing. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. THE MATRYOSHKA RUN. II. TIME-DEPENDENT TURBULENCE STATISTICS, STOCHASTIC PARTICLE ACCELERATION, AND MICROPHYSICS IMPACT IN A MASSIVE GALAXY CLUSTER

    International Nuclear Information System (INIS)

    Miniati, Francesco

    2015-01-01

    We use the Matryoshka run to study the time-dependent statistics of structure-formation-driven turbulence in the intracluster medium of a 10^15 M_☉ galaxy cluster. We investigate the turbulent cascade in the inner megaparsec for both compressional and incompressible velocity components. The flow maintains approximate conditions of fully developed turbulence, with departures thereof settling in about an eddy-turnover time. Turbulent velocity dispersion remains above 700 km s^-1 even at low mass accretion rate, with the fraction of compressional energy between 10% and 40%. The normalization and the slope of the compressional turbulence are susceptible to large variations on short timescales, unlike the incompressible counterpart. A major merger occurs around redshift z ≅ 0 and is accompanied by a long period of enhanced turbulence, ascribed to temporal clustering of mass accretion related to spatial clustering of matter. We test models of stochastic acceleration by compressional modes for the origin of diffuse radio emission in galaxy clusters. The turbulence simulation model constrains an important unknown of this complex problem and brings forth its dependence on the elusive microphysics of the intracluster plasma. In particular, the specifics of the plasma collisionality and the dissipation physics of weak shocks affect the cascade of compressional modes with strong impact on the acceleration rates. In this context radio halos emerge as complex phenomena in which a hierarchy of processes acting on progressively smaller scales are at work. Stochastic acceleration by compressional modes implies statistical correlation of radio power and spectral index with merging cores distance, both testable in principle with radio surveys

  14. Fast-timing methods for semiconductor detectors

    International Nuclear Information System (INIS)

    Spieler, H.

    1982-03-01

    The basic parameters are discussed which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter

  15. Fast timing methods for semiconductor detectors. Revision

    International Nuclear Information System (INIS)

    Spieler, H.

    1984-10-01

    This tutorial paper discusses the basic parameters which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter

  16. Nuclear energy as a 'golden bridge'? Constitutional legal problems of the negotiation of the prolongation of the running time against skimming of profits

    International Nuclear Information System (INIS)

    Waldhoff, Christian; Aswege, Hanka von

    2010-01-01

    The coalition agreement of the Christian Democratic Union (CDU), the Christian Social Union (CSU) and the Free Democratic Party (FDP) of 26 October 2009 characterizes nuclear energy as a bridge technology. The coalition parties declare their intention to extend the operating lives of German nuclear power stations until they can be reliably replaced by renewable energies. The conditions for the extension of the operating lives are to be settled by agreement with the energy supply companies. In this contribution, the authors report on the fiscal and legal problems of skimming off the resulting profits. Constitutional problems of earmarking such a levy, as well as of a consensual agreement, are discussed. In conclusion, a constitutionally sound way, under fiscal constitutional law, of skimming off the additional profits arising from the extension of operating lives is not evident. The legal earmarking of the revenue for the promotion of renewable energies increases the constitutional doubts.

  17. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  18. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2012-01-01

      On Wednesday 14 March, the machine group successfully injected beams into LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β*=0.6m, settings that are used routinely since then. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. That sample has been since then topped up to two million, allowing further refinements of the Tracker Alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to reach 1380 bunches and peak instantaneous luminosities around 6.5E33 at the beginning of fills. The average bunch charges reached ~1.5E11 protons per bunch which results in an initial pile-up of ~30 interactions per crossing. During the ...

  19. The time domain triple probe method

    International Nuclear Information System (INIS)

    Meier, M.A.; Hallock, G.A.; Tsui, H.Y.W.; Bengtson, R.D.

    1994-01-01

    A new Langmuir probe technique based on the triple probe method is being developed to provide simultaneous measurement of plasma temperature, potential, and density with the temporal and spatial resolution required to accurately characterize plasma turbulence. When the conventional triple probe method is used in an inhomogeneous plasma, local differences in the plasma measured at each probe introduce significant error in the estimation of turbulence parameters. The Time Domain Triple Probe method (TDTP) uses high speed switching of Langmuir probe potential, rather than spatially separated probes, to gather the triple probe information thus avoiding these errors. Analysis indicates that plasma response times and recent electronics technology meet the requirements to implement the TDTP method. Data reduction techniques of TDTP data are to include linear and higher order correlation analysis to estimate fluctuation induced particle and thermal transport, as well as energy relationships between temperature, density, and potential fluctuations

  20. The acute effects of a caffeine-containing supplement on bench press strength and time to running exhaustion.

    Science.gov (United States)

    Beck, Travis W; Housh, Terry J; Malek, Moh H; Mielke, Michelle; Hendrix, Russell

    2008-09-01

    The purpose of the present study was to examine the acute effects of a caffeine-containing supplement (SUPP) on one-repetition maximum (1-RM) bench press strength and time to running exhaustion (TRE) at a velocity that corresponded to 85% of the peak oxygen uptake (V̇O2peak). The study used a double-blinded, placebo-controlled, crossover design. Thirty-one men (mean +/- SD age = 23.0 +/- 2.6 years) were randomly assigned to take either the SUPP or placebo (PLAC) first. The SUPP contained 201 mg of caffeine, and the PLAC was microcrystalline cellulose. All subjects were tested for 1-RM bench press strength and TRE at 45 minutes after taking either the SUPP or PLAC. After 1 week of rest, the subjects returned to the laboratory and ingested the opposite substance (SUPP or PLAC) from what was taken during the previous visit. The 1-RM bench press and TRE tests were then performed in the same manner as before. The results indicated that the SUPP had no effect on 1-RM bench press strength or TRE at 85% V̇O2peak. It is possible that the acute effects of caffeine are affected by differences in training status and/or the relative intensity of the exercise task. Future studies should examine these issues, in addition to testing the acute effects of various caffeine doses on performance during maximal strength, power, and aerobic activities. These findings do not, however, support the use of caffeine as an ergogenic aid in untrained to moderately trained individuals.

  1. Probabilistic real-time contingency ranking method

    International Nuclear Information System (INIS)

    Mijuskovic, N.A.; Stojnic, D.

    2000-01-01

    This paper describes a real-time contingency ranking method based on a probabilistic index: the expected energy not supplied. In this way it is possible to take into account the stochastic nature of electric power system equipment outages. This approach enables a more comprehensive ranking of contingencies and makes it possible to derive reliability cost values that can form the basis for hourly spot price calculations. The electric power system of Serbia is used as an example for the proposed method. (author)
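    A minimal sketch of ranking by such an index is given below: each contingency is scored by its outage probability multiplied by the energy that would not be supplied if it occurred. The outage list and numbers are invented; a real implementation obtains the curtailed energy from a network model.

```python
def rank_by_eens(contingencies):
    """Rank contingencies by EENS = outage probability x curtailed energy (MWh).
    Illustrative only: a real implementation evaluates the curtailed energy
    from a network model for each outage."""
    ranked = [(name, prob * energy) for name, prob, energy in contingencies]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

outages = [
    ("line A-B out", 0.02, 300.0),        # hypothetical outage data
    ("line B-C out", 0.01, 900.0),
    ("transformer T1 out", 0.005, 400.0),
]
for name, eens in rank_by_eens(outages):
    print(f"{name}: EENS = {eens:.1f} MWh")
```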

  2. Analysis and Design of Bi-Directional DC-DC Converter in the Extended Run Time DC UPS System Based on Fuel Cell and Supercapacitor

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2009-01-01

    Abstract-In this paper, an extended run time DC UPS system structure with fuel cell and supercapacitor is investigated. A wide input range bi-directional dc-dc converter is described along with the phase-shift modulation scheme and phase-shift with duty cycle control, in different modes. The deli...

  3. A single-run liquid chromatography mass spectrometry method to quantify neuroactive kynurenine pathway metabolites in rat plasma.

    Science.gov (United States)

    Orsatti, Laura; Speziale, Roberto; Orsale, Maria Vittoria; Caretti, Fulvia; Veneziano, Maria; Zini, Matteo; Monteagudo, Edith; Lyons, Kathryn; Beconi, Maria; Chan, Kelvin; Herbst, Todd; Toledo-Sherman, Leticia; Munoz-Sanjuan, Ignacio; Bonelli, Fabio; Dominguez, Celia

    2015-03-25

    Neuroactive metabolites in the kynurenine pathway of tryptophan catabolism are associated with neurodegenerative disorders. Tryptophan is transported across the blood-brain barrier and converted via the kynurenine pathway to N-formyl-L-kynurenine, which is further degraded to L-kynurenine. This metabolite can then generate a group of metabolites called kynurenines, most of which have neuroactive properties. The association of tryptophan catabolic pathway alterations with various central nervous system (CNS) pathologies has raised interest in analytical methods to accurately quantify kynurenines in body fluids. We here describe a rapid and sensitive reverse-phase HPLC-MS/MS method to quantify L-kynurenine (KYN), kynurenic acid (KYNA), 3-hydroxy-L-kynurenine (3HK) and anthranilic acid (AA) in rat plasma. Our goal was to quantify these metabolites in a single run; given their different physico-chemical properties, major efforts were devoted to developing a chromatographic method suitable for all metabolites. It involves plasma protein precipitation with acetonitrile followed by separation on a C18 reversed-phase column with detection by electrospray mass spectrometry. The quantitation range was 0.098-100 ng/ml for 3HK, 9.8-20,000 ng/ml for KYN, and 0.49-1000 ng/ml for KYNA and AA. The method was linear (r>0.9963) and the validation parameters were within the acceptance range (calibration standards and QC accuracy within ±30%). Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Triathlon: running injuries.

    Science.gov (United States)

    Spiker, Andrea M; Dixit, Sameer; Cosgarea, Andrew J

    2012-12-01

    The running portion of the triathlon represents the final leg of the competition and, by some reports, the most important part in determining a triathlete's overall success. Although most triathletes spend most of their training time on cycling, running injuries are the most common injuries encountered. Common causes of running injuries include overuse, lack of rest, and activities that aggravate biomechanical predisposers of specific injuries. We discuss the running-associated injuries in the hip, knee, lower leg, ankle, and foot of the triathlete, and the causes, presentation, evaluation, and treatment of each.

  5. Patellofemoral Joint Loads During Running at the Time of Return to Sport in Elite Athletes With ACL Reconstruction.

    Science.gov (United States)

    Herrington, Lee; Alarifi, Saud; Jones, Richard

    2017-10-01

    Patellofemoral joint pain and degeneration are common in patients who undergo anterior cruciate ligament reconstruction (ACLR). The presence of patellofemoral joint pain significantly affects the patient's ability to continue sport participation and may even affect participation in activities of daily living. The mechanisms behind patellofemoral joint pain and degeneration are unclear, but previous research has identified altered patellofemoral joint loading in individuals with patellofemoral joint pain when running. It is unclear whether this process occurs after ACLR. To assess the patellofemoral joint stresses during running in ACLR knees and compare the findings to the noninjured knee and matched control knees. Controlled laboratory study. Thirty-four elite sports practitioners who had undergone ACLR and 34 age- and sex-matched controls participated in the study. The participants' running gait was assessed via 3D motion capture, and knee loads and forces were calculated by use of inverse dynamics. A significant difference was found in knee extensor moment, knee flexion angles, patellofemoral contact force (about 23% greater), and patellofemoral contact pressure (about 27% greater) between the ACLR and the noninjured limb (P ≤ .04) and between the ACLR and the control limb (P ≤ .04); no significant differences were found between the noninjured and control limbs (P ≥ .44). Significantly greater levels of patellofemoral joint stress and load were found in the ACLR knee compared with the noninjured and control knees. Altered levels of patellofemoral stress in the ACLR knee during running may predispose individuals to patellofemoral joint pain.

  6. Effects of selective breeding for increased wheel-running behavior on circadian timing of substrate oxidation and ingestive behavior

    NARCIS (Netherlands)

    Jonas, I.; Vaanholt, L. M.; Doornbos, M.; Garland, T.; Scheurink, A. J. W.; Nyakas, C.; van Dijk, G.; Garland Jr., T.

    2010-01-01

    Fluctuations in substrate preference and utilization across the circadian cycle may be influenced by the degree of physical activity and nutritional status. In the present study, we assessed these relationships in control mice and in mice from a line selectively bred for high voluntary wheel-running

  7. Method for Determining the Time Parameter

    Directory of Open Access Journals (Sweden)

    K. P. Baslyk

    2014-01-01

    Full Text Available This article proposes a method for calculating one of the characteristics of the first-stage flight program of a ballistic rocket, namely the time parameter of the attack-angle program. In simulating the ascent of the payload on the first stage, a flight program is used that consists of three segments: a vertical climb, a segment of programmed turn by attack angle, and a segment of gravity turn with zero angle of attack. The programmed turn by attack angle is modelled as a rapidly decreasing and then increasing function that depends on the attack-angle amplitude, time and the time parameter. Once the design and ballistic parameters and the attack-angle amplitude are determined, this coefficient is calculated from the constraint that the rocket velocity equals 0.8 of the speed of sound (0.264 km/s) at the moment the angle of attack returns to zero. This constraint is transformed into a nonlinear equation, which can be solved using Newton's method. The attack-angle amplitude is unknown at the design-analysis stage, and exceeding some maximum admissible value of this parameter may lead to excessive flattening of the trajectory, which can be identified as a negative trajectory angle. Consequently, it is necessary to compute the maximum value of the attack-angle amplitude under the following constraints: the trajectory angle remains positive during the entire first-stage flight, and the rocket velocity equals 0.264 km/s by the end of the attack-angle program. The problem can be formulated as a nonlinear programming task, the minimization of a modified Lagrange function, which is solved using the multiplier method. If the multipliers and the penalty parameter are held constant, an unconstrained optimization problem is obtained. Using the coordinate-descent method allows solving the problem of unconstrained minimization of the modified Lagrange function with fixed
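    The root-finding step mentioned above can be illustrated with a generic Newton iteration using a numerical derivative, as sketched below. The residual function here is a hypothetical velocity model standing in for the trajectory integration that the real calculation requires.

```python
from math import exp

def newton(residual, x0, tol=1e-8, max_iter=50, h=1e-6):
    """Solve residual(x) = 0 by Newton's method with a numerical derivative."""
    x = x0
    for _ in range(max_iter):
        f = residual(x)
        if abs(f) < tol:
            break
        dfdx = (residual(x + h) - residual(x - h)) / (2.0 * h)
        x -= f / dfdx
    return x

def velocity_residual(k):
    """Hypothetical stand-in: velocity reached for time parameter k, minus the
    target 0.264 km/s. The real residual comes from integrating the trajectory."""
    return 0.35 * (1.0 - exp(-0.8 * k)) - 0.264

k_star = newton(velocity_residual, x0=1.0)
print(k_star, velocity_residual(k_star))   # residual is ~0 at the solution
```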

  8. Palmprint Verification Using Time Series Method

    Directory of Open Access Journals (Sweden)

    A. A. Ketut Agung Cahyawan Wiranatha

    2013-11-01

    Full Text Available The use of biometrics for automatic recognition systems is growing rapidly in solving security problems, and palmprint recognition is one of the biometric systems often used. This paper uses a two-step centre-of-mass moment method for region of interest (ROI) segmentation and applies the time series method combined with a block-window method as the feature representation. Normalized Euclidean distance is used to measure the degree of similarity of two palmprint feature vectors. System testing was done using 500 palm samples, with 4 samples as reference images and 6 samples as test images. Experimental results show that this system can achieve a high performance with a success rate of about 97.33% (FNMR = 1.67%, FMR = 1.00%, T = 0.036).
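    The matching step can be sketched as below: a normalized Euclidean distance between two feature vectors is compared with the threshold (T = 0.036 in the paper). The particular normalization and the synthetic feature vectors are assumptions for illustration.

```python
import numpy as np

def normalized_euclidean_distance(f1, f2):
    """Distance between unit-normalised feature vectors, scaled by vector length.
    The exact normalisation used in the paper is not given; this is one
    common convention, assumed here for illustration."""
    a = np.array(f1, dtype=float)
    b = np.array(f2, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return np.linalg.norm(a - b) / np.sqrt(a.size)

def verify(reference, probe, threshold=0.036):
    """Accept the probe palmprint if its distance to the reference is below T."""
    return normalized_euclidean_distance(reference, probe) < threshold

ref = np.random.default_rng(0).random(256)
print(verify(ref, ref * 1.001))    # near-identical features are accepted
```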

  9. Just in Time - Expecting Failure: Do JIT Principles Run Counter to DoD’s Business Nature?

    Science.gov (United States)

    2014-04-01

    Regiment. The last several years witnessed both commercial industry and the Department of Defense (DoD) logistics supply chains trending toward an...moving items through a production system only when needed. Equating inventory to an avoidable waste instead of adding value to a company directly...Louisiana plant for a week, Honda Motor Company to suspend orders for Japanese-built Honda and Acura models, and producers of Boeing's 787 to run billions

  10. Super-nodal methods for space-time kinetics

    Science.gov (United States)

    Mertyurek, Ugur

    The purpose of this research has been to develop an advanced Super-Nodal method to reduce the run time of 3-D core neutronics models, such as in the NESTLE reactor core simulator and FORMOSA nuclear fuel management optimization codes. Computational performance of the neutronics model is increased by reducing the number of spatial nodes used in the core modeling. However, as the number of spatial nodes decreases, the error in the solution increases. The Super-Nodal method reduces the error associated with the use of coarse nodes in the analyses by providing a new set of cross sections and ADFs (Assembly Discontinuity Factors) for the new nodalization. These so called homogenization parameters are obtained by employing consistent collapsing technique. During this research a new type of singularity, namely "fundamental mode singularity", is addressed in the ANM (Analytical Nodal Method) solution. The "Coordinate Shifting" approach is developed as a method to address this singularity. Also, the "Buckling Shifting" approach is developed as an alternative and more accurate method to address the zero buckling singularity, which is a more common and well known singularity problem in the ANM solution. In the course of addressing the treatment of these singularities, an effort was made to provide better and more robust results from the Super-Nodal method by developing several new methods for determining the transverse leakage and collapsed diffusion coefficient, which generally are the two main approximations in the ANM methodology. Unfortunately, the proposed new transverse leakage and diffusion coefficient approximations failed to provide a consistent improvement to the current methodology. However, improvement in the Super-Nodal solution is achieved by updating the homogenization parameters at several time points during a transient. The update is achieved by employing a refinement technique similar to pin-power reconstruction. A simple error analysis based on the relative

  11. Running Linux

    CERN Document Server

    Dalheimer, Matthias Kalle

    2006-01-01

    The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.

  12. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2013-01-01

    Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...

  13. Method of Storing Raster Image in Run Lengths Having Variable Numbers of Bytes and Medium with Raster Image Thus Stored

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The invention implements a run-length file format with improved space-saving qualities. The file starts with a header in ASCII format and includes information such as...
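
    The record is truncated, but the core idea it names, run lengths stored in a variable number of bytes, can be sketched independently of the patented layout. The encoder below is a generic illustration (7-bit continuation bytes), not the NOAA file format itself.

```python
def encode_runs(pixels):
    """Encode a row of pixel values as (value, run-length) pairs, where each
    run length is stored in a variable number of bytes (7 bits per byte, high
    bit set on all but the last byte). Illustrative only -- not the patented
    NOAA file layout, whose details are truncated in the record above."""
    out = bytearray()
    i = 0
    while i < len(pixels):
        value = pixels[i]
        run = 1
        while i + run < len(pixels) and pixels[i + run] == value:
            run += 1
        out.append(value & 0xFF)              # the pixel value itself (one byte here)
        n = run                               # variable-length run count, low 7 bits first
        while True:
            byte = n & 0x7F
            n >>= 7
            out.append(byte | (0x80 if n else 0x00))
            if n == 0:
                break
        i += run
    return bytes(out)

# A long run of identical pixels costs only a few bytes instead of one byte per pixel.
print(encode_runs([0] * 1000 + [255] * 3).hex())
```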

  14. The LHCb Run Control

    CERN Document Server

    Alessio, F; Callot, O; Duval, P-Y; Franek, B; Frank, M; Galli, D; Gaspar, C; v Herwijnen, E; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P

    2010-01-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provid...

  15. Change of time methods in quantitative finance

    CERN Document Server

    Swishchuk, Anatoliy

    2016-01-01

    This book is devoted to the history of Change of Time Methods (CTM), the connections of CTM to stochastic volatilities and finance, fundamental aspects of the theory of CTM, basic concepts, and its properties. An emphasis is given on many applications of CTM in financial and energy markets, and the presented numerical examples are based on real data. The change of time method is applied to derive the well-known Black-Scholes formula for European call options, and to derive an explicit option pricing formula for a European call option for a mean-reverting model for commodity prices. Explicit formulas are also derived for variance and volatility swaps for financial markets with a stochastic volatility following a classical and delayed Heston model. The CTM is applied to price financial and energy derivatives for one-factor and multi-factor alpha-stable Levy-based models. Readers should have a basic knowledge of probability and statistics, and some familiarity with stochastic processes, such as Brownian motion, ...
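
    The mathematical core usually meant by "change of time" is the representation of a continuous local martingale as a time-changed Brownian motion (Dambis-Dubins-Schwarz). The fragment below states that standard result and how it reduces stochastic-volatility pricing to Black-Scholes-type calculations; it is a textbook statement, not a quotation from the book.

```latex
% Dambis--Dubins--Schwarz: a continuous local martingale M with quadratic
% variation <M> (and <M>_\infty = \infty) is a time-changed Brownian motion:
\[
  M_t \;=\; B_{\langle M\rangle_t},
  \qquad
  B_s \;=\; M_{T_s},\quad
  T_s \;=\; \inf\{t \ge 0 : \langle M\rangle_t > s\}.
\]
% For a stochastic-volatility model dS_t = \sigma_t S_t\,dW_t, the log-price
% martingale \int_0^t \sigma_u\,dW_u can therefore be treated as a Brownian
% motion run on the "business time" \int_0^t \sigma_u^2\,du, which is what
% reduces option-pricing problems to Black--Scholes-type calculations.
```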

  16. Running Club

    CERN Multimedia

    Running Club

    2011-01-01

    The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the ‘Cross Interentreprises’, a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners, however the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Mens category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...

  17. RUN COORDINATION

    CERN Multimedia

    M. Chamizo

    2012-01-01

      On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...

  18. Combustion and emission based optimization of turbocharged diesel engine run on biodiesel using grey-taguchi method

    International Nuclear Information System (INIS)

    Masood, M.I.

    2015-01-01

    In this work an attempt is made to optimize combustion parameters such as instantaneous heat release (IR), cylinder pressure (P) and rate of change of pressure per degree crank angle (dP/dθ), and emission characteristics such as NOx and smoke, of a turbocharged direct injection (DI) compression ignition (CI) engine alternatively run on pure biodiesel (B100), diesel and a biodiesel-diesel blend (B20), by applying the Grey Taguchi method (GTM). GTM is used to convert multiple variables into a single objective function. The process environment comprising three input parameters (engine speed, load and type of fuel) was used in this case. The design of experiment (DOE) was selected on an orthogonal array based on L9 (3^3). The optimum parameters were found on the basis of the Grey Relational Grade (GRG) and signal-to-noise (SN) ratio using GTM. The resulting optimum combination of the input parameters was used to obtain the maximum possible values of IR and P and the least possible values of NOx, smoke and dP/dθ. Higher values of IR and P indicate better performance of the engine, while lower values of NOx, smoke and dP/dθ are the ultimate objectives of the study. According to the results, it was revealed that B100 fuel, 1800 rpm speed and 10% load offer the optimum combination for the desired performance of the engine along with reduced pollutants. Analysis of Variance (ANOVA) based on the software Minitab 16 was used to identify the most significant input parameters with respect to the responses. Fuel type and engine load were found to be the dominant factors, with 48.16% and 43.18% impact on the output parameters, respectively. Finally, the results were validated using an Artificial Neural Network (ANN) in Matlab. (author)
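
    For readers unfamiliar with the Grey Taguchi step, the sketch below shows a generic grey relational grade computation that collapses several engine responses into a single figure of merit per experimental run. The normalization, zeta = 0.5 and equal weights are textbook defaults, not necessarily the choices made in the paper.

```python
import numpy as np

def grey_relational_grade(responses, larger_is_better, zeta=0.5, weights=None):
    """Grey relational analysis collapsing several responses into one grade per run.

    responses: (n_runs, n_responses) array of measured outputs (e.g. IR, P, NOx,
    smoke, dP/dtheta for each L9 run).  larger_is_better: list of booleans per
    response.  Assumes each response actually varies across the runs.
    """
    x = np.asarray(responses, dtype=float)
    norm = np.empty_like(x)
    for j in range(x.shape[1]):
        lo, hi = x[:, j].min(), x[:, j].max()
        if larger_is_better[j]:
            norm[:, j] = (x[:, j] - lo) / (hi - lo)      # larger-the-better
        else:
            norm[:, j] = (hi - x[:, j]) / (hi - lo)      # smaller-the-better
    delta = 1.0 - norm                                    # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    if weights is None:
        weights = np.full(x.shape[1], 1.0 / x.shape[1])
    return coeff @ weights                                # one grade per run

# The run with the highest grade is taken as the optimum parameter combination.
```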

  19. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    Kearney David

    2007-01-01

    Full Text Available We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.

  20. Effects of cognitive stimulation with a self-modeling video on time to exhaustion while running at maximal aerobic velocity: a pilot study.

    Science.gov (United States)

    Hagin, Vincent; Gonzales, Benoît R; Groslambert, Alain

    2015-04-01

    This study assessed whether video self-modeling improves running performance and influences the rate of perceived exertion and heart rate response. Twelve men (M age = 26.8 yr., SD = 6; M body mass index = 22.1 kg·m^-2, SD = 1) performed a time to exhaustion running test at 100% maximal aerobic velocity while focusing on a video self-modeling loop to synchronize their stride. Compared to the control condition, there was a significant increase in time to exhaustion. Perceived exertion was also lower, but there was no significant change in mean heart rate. In conclusion, the video self-modeling used as a pacer apparently increased endurance by decreasing perceived exertion without affecting the heart rate.

  1. Passenger Sharing of the High-Speed Railway from Sensitivity Analysis Caused by Price and Run-time Based on the Multi-Agent System

    Directory of Open Access Journals (Sweden)

    Ma Ning

    2013-09-01

    Full Text Available Purpose: Governments around the world are actively constructing high-speed railways; it is therefore important to conduct research on this increasingly prevalent mode of transport. Design/methodology/approach: In this paper, we simulate the process of the passenger's travel mode choice by adjusting the ticket fare and the run-time, based on a multi-agent system (MAS). Findings: From the research we conclude that increasing the run-time appropriately and reducing the ticket fare to some extent are effective ways to enhance the passenger share of high-speed rail. Originality/value: We hope this can provide policy recommendations for the railway sector when developing long-term plans for high-speed rail in the future.
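
    The record does not spell out the agents' decision rule; a minimal stand-in for this kind of sensitivity simulation is a binary logit over a generalized cost built from fare and run-time, as sketched below. The coefficients and function are illustrative assumptions, not the model used in the paper.

```python
import math, random

def choose_mode(fare_hsr, time_hsr, fare_rail, time_rail,
                value_of_time=50.0, scale=0.05):
    """Pick 'HSR' or 'rail' for one passenger agent with a binary logit over
    generalized cost = fare + value_of_time * run-time (hours).
    All coefficients are illustrative assumptions, not calibrated values."""
    cost_hsr = fare_hsr + value_of_time * time_hsr
    cost_rail = fare_rail + value_of_time * time_rail
    p_hsr = 1.0 / (1.0 + math.exp(scale * (cost_hsr - cost_rail)))
    return "HSR" if random.random() < p_hsr else "rail"

# Sweeping fare_hsr and time_hsr over many agents reproduces the kind of
# sensitivity analysis of passenger share described above.
```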

  2. Symmetry in running.

    Science.gov (United States)

    Raibert, M H

    1986-03-14

    Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.

  3. RUNNING INJURY DEVELOPMENT

    DEFF Research Database (Denmark)

    Johansen, Karen Krogh; Hulme, Adam; Damsted, Camma

    2017-01-01

    BACKGROUND: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrants formal investigation. PURPOSE: To investigate the attitudes of middle- and long-distance runners...... able to compete in national championships and their coaches about factors associated with running injury development. METHODS: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which...... factors do you believe influence the risk of running injuries?". In response to this question, the athletes and coaches had to click "Yes" or "No" to 19 predefined factors. In addition, they had the possibility to submit a free-text response. RESULTS: A total of 68 athletes and 19 coaches were included...

  4. Running Injury Development

    DEFF Research Database (Denmark)

    Krogh Johansen, Karen; Hulme, Adam; Damsted, Camma

    2017-01-01

    Background: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrants formal investigation. Purpose: To investigate the attitudes of middle- and long-distance runners...... able to compete in national championships and their coaches about factors associated with running injury development. Methods: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: “Which...... factors do you believe influence the risk of running injuries?”. In response to this question, the athletes and coaches had to click “Yes” or “No” to 19 predefined factors. In addition, they had the possibility to submit a free-text response. Results: A total of 68 athletes and 19 coaches were included...

  5. Distance walked and run as improved metrics over time-based energy estimation in epidemiological studies and prevention; evidence from medication use.

    Directory of Open Access Journals (Sweden)

    Paul T Williams

    Full Text Available The guideline physical activity levels are prescribed in terms of time, frequency, and intensity (e.g., 30 minutes brisk walking, five days a week) or its energy equivalence, and assume that different activities may be combined to meet targeted goals (exchangeability premise). Habitual runners and walkers may quantify exercise in terms of distance (km/day), and for them, the relationship between activity dose and health benefits may be better assessed in terms of distance rather than time. Analyses were therefore performed to test: 1) whether time-based or distance-based estimates of energy expenditure provide the best metric for relating running and walking to hypertensive, high cholesterol, and diabetes medication use (conditions known to be diminished by exercise), and 2) the exchangeability premise. Logistic regression analyses of medication use (dependent variable) vs. metabolic equivalent hours per day (METhr/d) of running, walking and other exercise (independent variables) were performed using cross-sectional data from the National Runners' (17,201 male, 16,173 female) and Walkers' Health Studies (3,434 male, 12,384 female). Estimated METhr/d of running and walking activity were 38% and 31% greater, respectively, when calculated from self-reported time than distance in men, and 43% and 37% greater in women, respectively. Percent reductions in the odds for hypertension and high cholesterol medication use per METhr/d run or per METhr/d walked were ≥2-fold greater when estimated from reported distance (km/wk) than from time (hr/wk). The per METhr/d odds reduction was significantly greater for the distance- than the time-based estimate for hypertension (runners: P<10^-5 for males and P=0.003 for females; walkers: P=0.03 for males and P<10^-4 for females), high cholesterol medication use in runners (P<10^-4 for males and P=0.02 for females) and walkers (P=0.01 for males and P=0.08 for females), and for diabetes medication use in male runners (P<10^-3). Although causality
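
    A small worked example (with rounded, assumed MET values, not the study's own figures) shows why time-based estimates inflate when durations are over-reported while distance-based estimates do not:

```python
# Illustrative arithmetic only; the MET values are rounded compendium-style
# figures, not the ones used in the study above.
speed_kmh = 10.0          # actual running speed
met_at_speed = 10.0       # ~10 METs at 10 km/h (rounded assumption)
distance_km_per_day = 5.0

# Distance-based estimate: at this pace running costs roughly 1 MET-hr per km,
# independent of how long the runner says it took.
methr_from_distance = distance_km_per_day * (met_at_speed / speed_kmh)

# Time-based estimate: if the runner reports 40 min instead of the 30 min
# actually needed, the estimate inflates proportionally.
reported_hours = 40.0 / 60.0
methr_from_time = met_at_speed * reported_hours

print(methr_from_distance, methr_from_time)   # 5.0 vs ~6.7 MET-hr/day
```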

  6. Suitability of voltage stability study methods for real-time assessment

    DEFF Research Database (Denmark)

    Perez, Angel; Jóhannsson, Hjörtur; Vancraeyveld, Pieter

    2013-01-01

    This paper analyzes the suitability of existing methods for long-term voltage stability assessment for real-time operation. An overview of the relevant methods is followed with a comparison that takes into account the accuracy, computational efficiency and characteristics when used for security...... assessment. The results enable an evaluation of the run time of each method with respect to the number of inputs. Furthermore, the results assist in identifying which of the methods is most suitable for realtime operation in future power system with production based on fluctuating energy sources....

  7. RUN COORDINATION

    CERN Multimedia

    G. Rakness.

    2013-01-01

    After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall, the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb–1 of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb–1 at 7 TeV, and 5.5 pb–1 at 2.76 TeV. They also delivered 170 μb–1 lead-lead collisions at 2.76 TeV/nucleon and 32 nb–1 proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that that...

  8. Effect of foot orthoses on magnitude and timing of rearfoot and tibial motions, ground reaction force and knee moment during running.

    Science.gov (United States)

    Eslami, Mansour; Begon, Mickaël; Hinse, Sébastien; Sadeghi, Heydar; Popov, Peter; Allard, Paul

    2009-11-01

    Changes in magnitude and timing of rearfoot eversion and tibial internal rotation by foot orthoses and their contributions to vertical ground reaction force and knee joint moments are not well understood. The objectives of this study were to test if orthoses modify the magnitude and time to peak rearfoot eversion, tibial internal rotation, active ground reaction force and knee adduction moment and determine if rearfoot eversion, tibial internal rotation magnitudes are correlated to peak active ground reaction force and knee adduction moment during the first 60% stance phase of running. Eleven healthy men ran at 170 steps per minute in shod and with foot orthoses conditions. Video and force-plate data were collected simultaneously to calculate foot joint angular displacement, ground reaction forces and knee adduction moments. Results showed that wearing semi-rigid foot orthoses significantly reduced rearfoot eversion 40% (4.1 degrees ; p=0.001) and peak active ground reaction force 6% (0.96N/kg; p=0.008). No significant time differences occurred among the peak rearfoot eversion, tibial internal rotation and peak active ground reaction force in both conditions. A positive and significant correlation was observed between peak knee adduction moment and the magnitude of rearfoot eversion during shod (r=0.59; p=0.04) and shod/orthoses running (r=0.65; p=0.02). In conclusion, foot orthoses could reduce rearfoot eversion so that this can be associated with a reduction of knee adduction moment during the first 60% stance phase of running. Finding implies that modifying rearfoot and tibial motions during running could not be related to a reduction of the ground reaction force.

  9. Impact of Different Time Series Streamflow Data on Energy Generation of a Run-of-River Hydropower Plant

    Science.gov (United States)

    Kentel, E.; Cetinkaya, M. A.

    2013-12-01

    Global issues such as population increase, power supply crises, oil prices, and social and environmental concerns have been forcing countries to search for alternative energy sources such as renewable energy to satisfy sustainable development goals. Hydropower is the most common form of renewable energy in the world. Hydropower does not require any fuel, produces relatively little pollution and waste, and is a reliable energy source with relatively low operating cost. In order to estimate the average annual energy production of a hydropower plant, sufficient and dependable streamflow data are required. The goal of this study is to investigate the impact of streamflow data on the annual energy generation of Balkusan HEPP, a small run-of-river hydropower plant at Karaman, Turkey. Two different stream gaging stations are located in the vicinity of Balkusan HEPP, and these two stations have different observation periods: one from 1986 to 2004 and the other from 2000 to 2009. These two observation periods show different climatic characteristics; thus, annual energy estimations based on data from these two stations differ considerably. Additionally, neither of these stations is located at the power plant axis, so streamflow observations from both stations need to be transferred to the plant axis, which introduces further errors into the energy estimates. The impact of different streamflow data and of transferring streamflow observations to the plant axis on the annual energy generation of a small hydropower plant is investigated in this study.
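
    The record does not show the energy computation itself; the standard relation behind such estimates is P = η·ρ·g·Q·H integrated over the flow series, which also makes clear why the choice of streamflow record matters. Below is a minimal sketch under assumed head, efficiency and design-flow values (not Balkusan HEPP data).

```python
RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

def annual_energy_mwh(daily_flows_m3s, net_head_m, efficiency=0.85,
                      design_flow_m3s=None):
    """Annual generation of a run-of-river plant from a daily streamflow series.
    Flows above the design discharge are spilled. Head, efficiency and design
    flow are assumptions for illustration only."""
    energy_j = 0.0
    for q in daily_flows_m3s:
        q_used = min(q, design_flow_m3s) if design_flow_m3s else q
        power_w = efficiency * RHO * G * q_used * net_head_m
        energy_j += power_w * 86400.0            # one day of operation
    return energy_j / 3.6e9                      # J -> MWh

# Feeding the two gauging stations' (transferred) flow series into this function
# yields the differing annual-energy estimates discussed above.
```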

  10. Effects of selective breeding for increased wheel-running behavior on circadian timing of substrate oxidation and ingestive behavior.

    Science.gov (United States)

    Jónás, I; Vaanholt, L M; Doornbos, M; Garland, T; Scheurink, A J W; Nyakas, C; van Dijk, G

    2010-04-19

    Fluctuations in substrate preference and utilization across the circadian cycle may be influenced by the degree of physical activity and nutritional status. In the present study, we assessed these relationships in control mice and in mice from a line selectively bred for high voluntary wheel-running behavior, either when feeding a carbohydrate-rich/low-fat (LF) or a high-fat (HF) diet. Housed without wheels, selected mice, and in particular the females, exhibited higher cage activity than their non-selected controls during the dark phase and at the onset of the light phase, irrespective of diet. This was associated with increases in energy expenditure in both sexes of the selection line. In selected males, carbohydrate oxidation appeared to be increased compared to controls. In contrast, selected females had profound increases in fat oxidation above the levels in control females to cover the increased energy expenditure during the dark phase. This is remarkable in light of the finding that the selected mice, and in particular the females showed higher preference for the LF diet relative to controls. It is likely that hormonal and/or metabolic signals increase carbohydrate preference in the selected females, which may serve optimal maintenance of cellular metabolism in the presence of augmented fat oxidation. (c) 2010 Elsevier Inc. All rights reserved.

  11. Pre-Exercise Hyperhydration-Induced Bodyweight Gain Does Not Alter Prolonged Treadmill Running Time-Trial Performance in Warm Ambient Conditions

    Directory of Open Access Journals (Sweden)

    Eric D. B. Goulet

    2012-08-01

    Full Text Available This study compared the effect of pre-exercise hyperhydration (PEH) and pre-exercise euhydration (PEE) upon treadmill running time-trial (TT) performance in the heat. Six highly trained runners or triathletes underwent two 18 km TT runs (~28 °C, 25%–30% RH) on a motorized treadmill, in a randomized, crossover fashion, while being euhydrated or after hyperhydration with 26 mL/kg bodyweight (BW) of a 130 mmol/L sodium solution. Subjects then ran four successive 4.5 km blocks alternating between 2.5 km at 1% and 2 km at 6% gradient, while drinking a total of 7 mL/kg BW of a 6% sports drink solution (Gatorade, USA). PEH increased BW by 1.00 ± 0.34 kg (P < 0.01) and, compared with PEE, reduced BW loss from 3.1% ± 0.3% (EUH) to 1.4% ± 0.4% (HYP) (P < 0.01) during exercise. Running TT time did not differ between groups (PEH: 85.6 ± 11.6 min; PEE: 85.3 ± 9.6 min, P = 0.82). Heart rate (5 ± 1 beats/min) and rectal (0.3 ± 0.1 °C) and body (0.2 ± 0.1 °C) temperatures of PEE were higher than those of PEH (P < 0.05). There was no significant difference in abdominal discomfort and perceived exertion or heat stress between groups. Our results suggest that pre-exercise sodium-induced hyperhydration of a magnitude of 1 L does not alter 80–90 min running TT performance under warm conditions in highly-trained runners drinking ~500 mL sports drink during exercise.

  12. The design of the run Clever randomized trial: running volume, -intensity and running-related injuries.

    Science.gov (United States)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten

    2016-04-23

    Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries show conflicting results. This may be related to previously applied study designs, methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate if a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years and with an average of 1-3 running sessions per week the past 6 months are included. Participants are randomized into two intervention groups: Running schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume, by increasing the weekly overall volume. Data on the running performed is collected by GPS. Participants who sustain running-related injuries are diagnosed by a diagnostic team of physiotherapists using standardized diagnostic criteria. The members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing

  13. Time series analysis time series analysis methods and applications

    CERN Document Server

    Rao, Tata Subba; Rao, C R

    2012-01-01

    The field of statistics not only affects all areas of scientific activity, but also many other matters such as public policy. It is branching rapidly into so many different subjects that a series of handbooks is the only way of comprehensively presenting the various aspects of statistical methodology, applications, and recent developments. The Handbook of Statistics is a series of self-contained reference books. Each volume is devoted to a particular topic in statistics, with Volume 30 dealing with time series. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience. Comprehensively presents the various aspects of statistical methodology Discusses a wide variety of diverse applications and recent developments Contributors are internationally renowned experts in their respect...

  14. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2012-01-01

      With the analysis of the first 5 fb–1 culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c2, the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb–1. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6x1033 cm–2s–1, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...

  15. Timing, methods and prospective in citizenship training

    Directory of Open Access Journals (Sweden)

    Alessia Carta

    2010-07-01

    Full Text Available The current models of development are changing the balance between human activity and Nature on a local and global level, and the urgent need to establish a new relationship between Man and the environment is increasingly apparent. The move towards a more caring approach to the planet, introducing concepts such as limits, impact on future generations, regeneration of resources, social and environmental justice and the right to citizenship, should make us consider (aside from international undertakings by governments) exactly how we can promote a culture of sustainability in schools in terms of methods, time scales, and location. Schools are directly involved in these processes of change; however, it is necessary to plan carefully and establish situations that will result in greater attention being paid to the interaction between man and the environment, highlighting the lifestyles and attitudes that are currently incompatible with a sustainable future. These solutions, although based on technical-scientific knowledge, cannot be brought about without the involvement of individuals and local agencies working together. We have chosen to concentrate on the links between educational policies and local areas, interpreting declarations made by international bodies such as UNESCO and suggestions aimed at bringing sustainability to the centre of specific policies. Bringing about these aims requires great educational effort that goes well beyond simple environmental education, since it requires a permanent process of educating adults. Looking at stages in the history of theories regarding the development and education of adults shows how the topic of sustainability made its entry into the debate about permanent education and how, in the last ten years, it has taken on an unrivalled importance as a point of reference for educational policies and pedagogical reflection. The origin of the concept of sustainability, although belonging to natural

  16. Click trains and the rate of information processing: does "speeding up" subjective time make other psychological processes run faster?

    Science.gov (United States)

    Jones, Luke A; Allely, Clare S; Wearden, John H

    2011-02-01

    A series of experiments demonstrated that a 5-s train of clicks that have been shown in previous studies to increase the subjective duration of tones they precede (in a manner consistent with "speeding up" timing processes) could also have an effect on information-processing rate. Experiments used studies of simple and choice reaction time (Experiment 1), or mental arithmetic (Experiment 2). In general, preceding trials by clicks made response times significantly shorter than those for trials without clicks, but white noise had no effects on response times. Experiments 3 and 4 investigated the effects of clicks on performance on memory tasks, using variants of two classic experiments of cognitive psychology: Sperling's (1960) iconic memory task and Loftus, Johnson, and Shimamura's (1985) iconic masking task. In both experiments participants were able to recall or recognize significantly more information from stimuli preceded by clicks than those preceded by silence.

  17. The LHCb Run Control

    Energy Technology Data Exchange (ETDEWEB)

    Alessio, F; Barandela, M C; Frank, M; Gaspar, C; Herwijnen, E v; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P [CERN, 1211 Geneva 23 (Switzerland); Callot, O [LAL, IN2P3/CNRS and Universite Paris 11, Orsay (France); Duval, P-Y [Centre de Physique des Particules de Marseille, Aix-Marseille Universite, CNRS/IN2P3, Marseille (France); Franek, B [Rutherford Appleton Laboratory, Chilton, Didcot, OX11 0QX (United Kingdom); Galli, D, E-mail: Clara.Gaspar@cern.c [Universita di Bologna and INFN, Bologna (Italy)

    2010-04-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented

  18. Subjective time runs faster under the influence of bright rather than dim light conditions during the forenoon.

    Science.gov (United States)

    Morita, Takeshi; Fukui, Tomoe; Morofushi, Masayo; Tokura, Hiromi

    2007-05-16

    The study investigated whether 6 h of morning bright light exposure, compared with dim light exposure, could influence time sense (range: 5-15 s). Eight women served as participants. Each participant entered a bioclimatic chamber at 10:00 h on the day before the test day, where ambient temperature and relative humidity were controlled at 25 °C and 60% RH. She sat quietly on a sofa in 50 lx until 22:00 h, retired at 22:00 h and then slept in total darkness. She rose at 07:00 h the following morning and again sat quietly on a sofa until 13:00 h, either in bright (2500 lx) or dim light (50 lx), the order of light intensities between the two occasions being randomized. The time-estimation test was performed from 13:00 to 13:10 h in 200 lx. The participant estimated the time that had elapsed between two buzzers, ranging over 5-15 s, and input the estimate into a computer. The test was carried out separately on each individual. Results showed that the participants estimated the given time intervals as longer after previous exposure to 6 h of bright rather than dim light. The finding is discussed in terms of different load errors (the difference between the actual core temperature and its thermoregulatory set-point) following 6-h exposure to bright or dim light in the morning.

  19. An approach to profiling for run-time checking of computational properties and performance debugging in logic programs.

    OpenAIRE

    Mera, E.; Trigo, Teresa; López García, Pedro; Hermenegildo, Manuel V.

    2010-01-01

    Although several profiling techniques for identifying performance bottlenecks in logic programs have been developed, they are generally not automatic and in most cases they do not provide enough information for identifying the root causes of such bottlenecks. This complicates using their results for guiding performance improvement. We present a profiling method and tool that provides such explanations. Our profiler associates cost centers to certain program elements and can measure different ...
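
    The paper's tool instruments (constraint) logic programs, but the cost-center bookkeeping it describes can be sketched generically; the decorator below accumulates calls and wall-clock time per named cost center and is purely illustrative, not the authors' profiler.

```python
import time
from collections import defaultdict
from functools import wraps

_cost_centers = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def cost_center(name):
    """Attach a named cost center to a function and accumulate its run time,
    loosely mimicking the bookkeeping described above (the real tool
    instruments logic programs, not Python)."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                cc = _cost_centers[name]
                cc["calls"] += 1
                cc["seconds"] += time.perf_counter() - t0
        return wrapper
    return decorate

@cost_center("solver")
def slow_step(n):
    return sum(i * i for i in range(n))

slow_step(100_000)
print(dict(_cost_centers))   # shows which cost center dominates the run time
```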

  20. Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  1. Value and depreciation of mineral resources over the very long run: An empirical contrast of different methods

    OpenAIRE

    Rubio Varas, M. del Mar

    2005-01-01

    The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the period 1920s-1980s, is used to contrast the results of several methods. These are the present value, the net price method, the user cost method and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to diff...

  2. Simultaneous real-time data collection methods

    Science.gov (United States)

    Klincsek, Thomas

    1992-01-01

    This paper describes the development of electronic test equipment which executes, supervises, and reports on various tests. This validation process uses computers to analyze test results and report conclusions. The test equipment consists of an electronics component and the data collection and reporting unit. The PC software, display screens, and real-time data-base are described. Pass-fail procedures and data replay are discussed. The OS2 operating system and Presentation Manager user interface system were used to create a highly interactive automated system. The system outputs are hardcopy printouts and MS DOS format files which may be used as input for other PC programs.

  3. Time dependent variational method in quantum mechanics

    International Nuclear Information System (INIS)

    Torres del Castillo, G.F.

    1987-01-01

    Using the fact that the solutions to the time-dependent Schrödinger equation can be obtained from a variational principle, by restricting the evolution of the state vector to some surface in the corresponding Hilbert space, approximations to the exact solutions can be obtained, which are determined by equations similar to Hamilton's equations. It is shown that, in order for the approximate evolution to be well defined on a given surface, the imaginary part of the inner product restricted to the surface must be non-singular. (author)
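
    The construction alluded to in the abstract is usually written as the Dirac-Frenkel variational principle; the fragment below gives the standard textbook form (sign and ordering conventions vary between references) to make the Hamilton-like structure and the non-singularity condition concrete.

```latex
% Restrict the state to a surface |\psi(z)\rangle, z=(z_1,\dots,z_n), and demand
% stationarity of the Dirac--Frenkel action
\[
  S[z] \;=\; \int \mathrm{d}t\,
  \langle \psi(z)\,|\, i\hbar\,\partial_t - \hat H \,|\, \psi(z)\rangle .
\]
% The resulting equations of motion for the parameters,
\[
  \sum_b \Omega_{ab}\,\dot z_b \;=\; \frac{\partial \langle \hat H\rangle}{\partial z_a},
  \qquad
  \Omega_{ab} \;=\; 2\hbar\,\mathrm{Im}\,
  \Big\langle \tfrac{\partial \psi}{\partial z_b}\Big|\tfrac{\partial \psi}{\partial z_a}\Big\rangle ,
\]
% are Hamilton-like and well defined only where \Omega (built from the
% imaginary part of the inner product restricted to the surface) is
% non-singular, which is the condition emphasized in the abstract.
```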

  4. 26 CFR 301.6503(d)-1 - Suspension of running of period of limitation; extension of time for payment of estate tax.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Suspension of running of period of limitation... ADMINISTRATION Limitations Limitations on Assessment and Collection § 301.6503(d)-1 Suspension of running of... payment of any estate tax, the running of the period of limitations for collection of such tax is...

  5. Seismic response of three-dimensional topographies using a time-domain boundary element method

    Science.gov (United States)

    Janod, François; Coutant, Olivier

    2000-08-01

    We present a time-domain implementation for a boundary element method (BEM) to compute the diffraction of seismic waves by 3-D topographies overlying a homogeneous half-space. This implementation is chosen to overcome the memory limitations arising when solving the boundary conditions with a frequency-domain approach. This formulation is flexible because it allows one to make an adaptive use of the Green's function time translation properties: the boundary conditions solving scheme can be chosen as a trade-off between memory and cpu requirements. We explore here an explicit method of solution that requires little memory but a high cpu cost in order to run on a workstation computer. We obtain good results with four points per minimum wavelength discretization for various topographies and plane wave excitations. This implementation can be used for two different aims: the time-domain approach allows an easier implementation of the BEM in hybrid methods (e.g. coupling with finite differences), and it also allows one to run simple BEM models with reasonable computer requirements. In order to keep reasonable computation times, we do not introduce any interface and we only consider homogeneous models. Results are shown for different configurations: an explosion near a flat free surface, a plane wave vertically incident on a Gaussian hill and on a hemispherical cavity, and an explosion point below the surface of a Gaussian hill. Comparison is made with other numerical methods, such as finite difference methods (FDMs) and spectral elements.
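
    For orientation, the quantity solved on the meshed topography in a BEM of this kind is a boundary integral representation; in the time domain the frequency-domain products become convolutions, which is what the explicit marching scheme trades memory against CPU for. The statement below is the generic elastodynamic form, not the paper's specific half-space formulation.

```latex
% Generic elastodynamic representation theorem on the discretized surface S
% (time-domain form; * denotes time convolution):
\[
  c_{ij}(\mathbf{x})\,u_j(\mathbf{x},t)
  \;=\;
  \int_{S}\Big[
     G_{ij}(\mathbf{x},\mathbf{y},t) * t_j(\mathbf{y},t)
     \;-\;
     T_{ij}(\mathbf{x},\mathbf{y},t) * u_j(\mathbf{y},t)
  \Big]\,\mathrm{d}S(\mathbf{y})
  \;+\; u_i^{\,0}(\mathbf{x},t),
\]
% where G_{ij} and T_{ij} are the Green's displacements and tractions,
% u^0 is the incident field, and c_{ij} is the free term set by the local
% surface geometry.  Discretizing S and marching the convolutions explicitly
% in time gives the "little memory, higher CPU" trade-off described above.
```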

  6. Measurement of gastric emptying time: a comparative study between nonisotopic aspiration method and new radioisotopic technique

    International Nuclear Information System (INIS)

    Chaudhuri, T.K.; Greenwald, A.J.; Heading, R.C.; Chaudhuri, T.K.

    1975-01-01

    A comparative study between a modified conventional saline load test and the more recently introduced radioisotopic method was run in 8 normal volunteers. Each subject underwent at least three studies by each of the two methods: (1) the aspiration method of Goldstein and Boyle incorporating our modification, and (2) an isotopic method employing a gamma camera with a computer. A liquid meal of isotonic saline was used with or without 99mTc-DTPA. The results indicated that the gastric emptying T 1/2 (8.8 ± 3.5 min) obtained by the saline load test was shorter than that obtained by the isotopic method (12 ± 3 min). This discrepancy was most likely due to an inherent error (incomplete aspiration of the gastric fluid) in the former method, giving rise to a falsely faster emptying time. Moreover, the variation in T 1/2 values in the same individual was much greater with the aspiration method than with the isotopic method

  7. A method for measuring the time structure of synchrotron x-ray beams

    International Nuclear Information System (INIS)

    Moses, W.W.; Derenzo, S.E.

    1989-08-01

    We describe a method employing a plastic scintillator coupled to a fast photomultiplier tube to generate a timing pulse from the x-ray bursts emitted from a synchrotron radiation source. This technique is useful for performing synchrotron experiments where detailed knowledge of the timing distribution is necessary, such as time resolved spectroscopy or fluorescence lifetime experiments. By digitizing the time difference between the timing signal generated on one beam crossing with the timing signal generated on the next beam crossing, the time structure of a synchrotron beam can be analyzed. Using this technique, we have investigated the single bunch time structure at the National Synchrotron Light Source (NSLS) during pilot runs in January, 1989, and found that the majority of the beam (96%) is contained in one rf bucket, while the remainder of the beam (4%) is contained in satellite rf buckets preceding and following the main rf bucket by 19 ns. 1 ref., 4 figs

  8. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    Science.gov (United States)

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
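
    The fitting step described above lends itself to a short sketch: average the running Green-Kubo integrals of several independent trajectories, weight by the reciprocal of their standard deviation, and fit a double exponential up to a cutoff time. The functional form and weighting below follow the spirit of the method but are simplified relative to the published recipe.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, A, alpha, tau1, tau2):
    """Common double-exponential form for the running viscosity integral."""
    return A * alpha * tau1 * (1 - np.exp(-t / tau1)) \
         + A * (1 - alpha) * tau2 * (1 - np.exp(-t / tau2))

def fit_viscosity(t, eta_runs, t_cut):
    """eta_runs: (n_traj, n_time) running Green-Kubo integrals from independent
    trajectories.  Returns the fitted plateau value
    eta(t -> infinity) = A * (alpha*tau1 + (1 - alpha)*tau2).
    Weighting each time point by 1/std follows the spirit of the
    time-decomposition method; the published recipe differs in detail."""
    mean = eta_runs.mean(axis=0)
    std = eta_runs.std(axis=0, ddof=1)
    mask = (t > 0) & (t <= t_cut) & (std > 0)
    popt, _ = curve_fit(double_exponential, t[mask], mean[mask],
                        sigma=std[mask], absolute_sigma=True,
                        p0=[mean[mask][-1] / t_cut, 0.5, t_cut / 10, t_cut / 2],
                        maxfev=20000)
    A, alpha, tau1, tau2 = popt
    return A * (alpha * tau1 + (1 - alpha) * tau2)
```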

  9. Development of econometric models for cost and time over-runs: an empirical study of major road construction projects in pakistan

    International Nuclear Information System (INIS)

    Khan, A.; Chaudhary, M.A.

    2016-01-01

    The construction industry is flourishing worldwide and contributes about 10% to the GDP of the world, i.e. up to the tune of 4.6 trillion US dollars. It employs almost 7% of the total employed persons and consumes around 40% of the total energy. The Pakistani construction sector has displayed impressive growth in recent years. An efficient road network is a key part of the construction business and plays a significant role in the economic uplift of a country. Overruns in costs and delays in the completion of projects are very common phenomena, and it has also been observed that projects involving the construction of roads face problems of delays and cost overruns, especially in developing countries. The causes of cost overruns and delays in road projects undertaken by the premier road construction organization of Pakistan, the National Highway Authority (NHA), have been considered in this study. It has been done specifically in the context of the impact of cause(s) determined from the project reports of a total of one hundred and thirty-one (131) projects. The ten causative factors, which we recognize as Design, Planning and Scheduling Related problems, Financial Constraint Related reasons, Social Problem Related reasons, Technical Reasons, Administrative Reasons, Scope Increase, Specification Changes, Cost Escalation Related reasons, Non-Availability of Equipment or Material and Force Majeure, play a commanding role in the determination of cost and time overruns. It has also been observed that among these identified causes, the factors of Administrative Reasons, Design, Planning and Scheduling Related problems, Technical Reasons and Force Majeure are the most significant reasons for cost and time overruns, whereas the Cost Escalation Related reasons have the least impact on cost increase and delays. The NHA possesses a financial worth of around Rs. 36 billion and, with an annual turnover amounting to Rs. 22 billion, is responsible to perform road construction project in entire

  10. Damped time advance methods for particles and EM fields

    International Nuclear Information System (INIS)

    Friedman, A.; Ambrosiano, J.J.; Boyd, J.K.; Brandon, S.T.; Nielsen, D.E. Jr.; Rambo, P.W.

    1990-01-01

    Recent developments in the application of damped time advance methods to plasma simulations include the synthesis of implicit and explicit ''adjustably damped'' second-order accurate methods for particle motion and electromagnetic field propagation. This paper discusses this method.

  11. Software Design Methods for Real-Time Systems

    Science.gov (United States)

    1989-12-01

    This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and

  12. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    Science.gov (United States)

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
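
    The abstract mentions a recursive least squares (RLS) estimator describing the current EEG state in real time; the class below is a generic RLS update with a forgetting factor (not necessarily the exact adaptive-autoregressive formulation used in the paper), illustrating the per-sample cost such a system must afford.

```python
import numpy as np

class RLS:
    """Generic recursive least squares estimator y ~ w.x with forgetting
    factor lam; illustrative of the kind of per-sample update a real-time
    BCI can run, not the exact formulation used in the paper."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)
        self.P = np.eye(n) * delta
        self.lam = lam

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)          # gain vector
        err = y - self.w @ x                  # a-priori prediction error
        self.w += k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

# e.g. x = previous EEG samples of one channel, y = current sample, so that
# the adaptive coefficients in self.w track the EEG state sample by sample.
```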

  13. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    Directory of Open Access Journals (Sweden)

    Xuedong Yan

    2014-02-01

    Full Text Available The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM). The collision avoidance related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation in this study. It was found that the collision avoidance warning system can result in lower collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with the conflicting vehicles.

  14. Voluntary Wheel Running in Mice.

    Science.gov (United States)

    Goh, Jorming; Ladiges, Warren

    2015-12-02

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. Copyright © 2015 John Wiley & Sons, Inc.

  15. Species interactions and response time to climate change: ice-cover and terrestrial run-off shaping Arctic char and brown trout competitive asymmetries

    Science.gov (United States)

    Finstad, A. G.; Palm Helland, I.; Jonsson, B.; Forseth, T.; Foldvik, A.; Hessen, D. O.; Hendrichsen, D. K.; Berg, O. K.; Ulvan, E.; Ugedal, O.

    2011-12-01

    There has been a growing recognition that single-species responses to climate change are often driven mainly by interactions with other organisms, and single-species studies are therefore not sufficient to recognize and project ecological climate change impacts. Here, we study how the performance, relative abundance and distribution of two common Arctic and sub-Arctic freshwater fishes (brown trout and Arctic char) are driven by competitive interactions. The interactions are modified both by direct climatic effects on temperature and ice-cover, and indirectly through climate forcing of terrestrial vegetation patterns and the associated carbon and nutrient run-off. We first use laboratory studies to show that Arctic char, the world's northernmost distributed freshwater fish, outperform trout under low light levels and also have comparably higher growth efficiency. Consistent with this, a combination of time series and time-for-space analyses shows that ice-cover duration and the carbon and nutrient load mediated by catchment vegetation properties strongly affected the outcome of the competition and likely drive the species distribution pattern through competitive exclusion. In brief, while a shorter ice-cover period and decreased carbon load favored brown trout, an increased ice-cover period and increased carbon load favored Arctic char. Length of the ice-covered period and export of allochthonous material from catchments are major, but contrasting, climatic drivers of the competitive interaction between these two freshwater lake top-predators. While projected climate change leads to decreased ice-cover, the corresponding increase in forest and shrub cover amplifies carbon and nutrient run-off. Although a likely outcome of future Arctic and sub-Arctic climate scenarios is retraction of the Arctic char distribution area caused by competitive exclusion, the main drivers will act on different time scales. While ice-cover will change instantaneously with increasing temperature

  16. A Model For Teaching Advanced Neuroscience Methods: A Student-Run Seminar to Increase Practical Understanding and Confidence.

    Science.gov (United States)

    Harrison, Theresa M; Ching, Christopher R K; Andrews, Anne M

    2016-01-01

    Neuroscience doctoral students must master specific laboratory techniques and approaches to complete their thesis work (hands-on learning). Due to the highly interdisciplinary nature of the field, learning about a diverse range of methodologies through literature surveys and coursework is also necessary for student success (hands-off learning). Traditional neuroscience coursework stresses what is known about the nervous system with relatively little emphasis on the details of the methods used to obtain this knowledge. Furthermore, hands-off learning is made difficult by a lack of detail in methods sections of primary articles, subfield-specific jargon and vague experimental rationales. We designed a student-taught course to enable first-year neuroscience doctoral students to overcome difficulties in hands-off learning by introducing a new approach to reading and presenting primary research articles that focuses on methodology. In our literature-based course students were encouraged to present a method with which they had no previous experience. To facilitate weekly discussions, "experts" were invited to class sessions. Experts were advanced graduate students who had hands-on experience with the method being covered and served as discussion co-leaders. Self-evaluation worksheets were administered on the first and last days of the 10-week course and used to assess students' confidence in discussing research and methods outside of their primary research expertise. These evaluations revealed that the course significantly increased the students' confidence in reading, presenting and discussing a wide range of advanced neuroscience methods.

  17. Determinants of the abilities to jump higher and shorten the contact time in a running 1-legged vertical jump in basketball.

    Science.gov (United States)

    Miura, Ken; Yamamoto, Masayoshi; Tamaki, Hiroyuki; Zushi, Koji

    2010-01-01

    This study was conducted to obtain useful information for developing training techniques for the running 1-legged vertical jump in basketball (lay-up shot jump). The ability to perform the lay-up shot jump and various basic jumps was measured by testing 19 male basketball players. The basic jumps consisted of the 1-legged repeated rebound jump, the 2-legged repeated rebound jump, and the countermovement jump. Jumping height, contact time, and the jumping index (jumping height/contact time) were measured and calculated using a contact mat/computer system that recorded the contact and air times. The jumping index indicates power. No significant correlation existed between the jumping height and contact time of the lay-up shot jump, the 2 components of the lay-up shot jump index. As a result, jumping height and contact time were found to be mutually independent abilities. The contact time of the lay-up shot jump was significantly correlated with that of both the 1-legged and the 2-legged repeated rebound jumps. A significant correlation in jumping height existed between the 1-legged repeated rebound jump and the lay-up shot jump, whereas no such correlation was found between the lay-up shot jump and either the 2-legged repeated rebound jump or the countermovement jump. The lay-up shot index correlated more strongly with the 1-legged repeated rebound jump index than with the 2-legged repeated rebound jump index. These findings suggest that the 1-legged repeated rebound jump is effective in improving both contact time and jumping height in the lay-up shot jump.

  18. Changes in Running Mechanics During a 6-Hour Running Race.

    Science.gov (United States)

    Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano

    2017-05-01

    To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (t c ), and aerial time (t a ) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (F max ), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (k vert ), and leg stiffness (k leg ) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%). Contact time increased during the race, reaching its maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, k vert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); t a and F max decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively). Most changes in running mechanics thus appeared after about 4 h of running, suggesting a possible time threshold that could affect performance regardless of absolute running speed.

  19. Similar Running Economy With Different Running Patterns Along the Aerial-Terrestrial Continuum.

    Science.gov (United States)

    Lussiana, Thibault; Gindre, Cyrille; Hébert-Losier, Kim; Sagawa, Yoshimasa; Gimenez, Philippe; Mourot, Laurent

    2017-04-01

    No unique or ideal running pattern is the most economical for all runners. Classifying the global running patterns of individuals into 2 categories (aerial and terrestrial) using the Volodalen method could permit a better understanding of the relationship between running economy (RE) and biomechanics. The main purpose was to compare the RE of aerial and terrestrial runners. Two coaches classified 58 runners into aerial (n = 29) or terrestrial (n = 29) running patterns on the basis of visual observations. RE, muscle activity, kinematics, and spatiotemporal parameters of both groups were measured during a 5-min run at 12 km/h on a treadmill. Maximal oxygen uptake (V̇O 2 max) and peak treadmill speed (PTS) were assessed during an incremental running test. No differences were observed between aerial and terrestrial patterns for RE, V̇O 2 max, and PTS. However, at 12 km/h, aerial runners exhibited earlier gastrocnemius lateralis activation in preparation for contact, less dorsiflexion at ground contact, higher coactivation indexes, and greater leg stiffness during stance phase than terrestrial runners. Terrestrial runners had more pronounced semitendinosus activation at the start and end of the running cycle, shorter flight time, greater leg compression, and a more rear-foot strike. Different running patterns were associated with similar RE. Aerial runners appear to rely more on elastic energy utilization with a rapid eccentric-concentric coupling time, whereas terrestrial runners appear to propel the body more forward rather than upward to limit work against gravity. Excluding runners with a mixed running pattern from analyses did not affect study interpretation.

  20. Determination of beta attenuation coefficients by means of timing method

    International Nuclear Information System (INIS)

    Ermis, E.E.; Celiktas, C.

    2012-01-01

    Highlights: ► Beta attenuation coefficients of absorber materials were found in this study. ► For this process, a new method (timing method) was suggested. ► The obtained beta attenuation coefficients were compatible with the results from the traditional one. ► The timing method can be used to determine beta attenuation coefficient. - Abstract: Using a counting system with plastic scintillation detector, beta linear and mass attenuation coefficients were determined for bakelite, Al, Fe and plexiglass absorbers by means of timing method. To show the accuracy and reliability of the obtained results through this method, the coefficients were also found via conventional energy method. Obtained beta attenuation coefficients from both methods were compared with each other and the literature values. Beta attenuation coefficients obtained through timing method were found to be compatible with the values obtained from conventional energy method and the literature.

  1. Effects of running time of a cattle-cooling system on core body temperature of cows on dairy farms in an arid environment.

    Science.gov (United States)

    Ortiz, X A; Smith, J F; Bradford, B J; Harner, J P; Oddy, A

    2010-10-01

    Two experiments were conducted on a commercial dairy farm to describe the effects of a reduction in Korral Kool (KK; Korral Kool Inc., Mesa, AZ) system operating time on core body temperature (CBT) of primiparous and multiparous cows. In the first experiment, KK systems were operated for 18, 21, or 24 h/d while CBT of 63 multiparous Holstein dairy cows was monitored. All treatments started at 0600 h, and KK systems were turned off at 0000 h and 0300 h for the 18-h and 21-h treatments, respectively. Animals were housed in 9 pens and assigned randomly to treatment sequences in a 3 × 3 Latin square design. In the second experiment, 21 multiparous and 21 primiparous cows were housed in 6 pens and assigned randomly to treatment sequences (KK operated for 21 or 24 h/d) in a switchback design. All treatments started at 0600 h, and KK systems were turned off at 0300 h for the 21-h treatments. In experiment 1, cows in the 24-h treatment had a lower mean CBT than cows in the 18- and 21-h treatments (38.97, 39.08, and 39.03±0.04°C, respectively). The significant treatment by time interaction showed that the greatest treatment effects occurred at 0600 h; treatment means at this time were 39.43, 39.37, and 38.88±0.18°C for 18-, 21-, and 24-h treatments, respectively. These results demonstrate that a reduction in KK system running time of ≥3 h/d will increase CBT. In experiment 2, a significant parity by treatment interaction was found. Multiparous cows on the 24-h treatment had lower mean CBT than cows on the 21-h treatment (39.23 and 39.45±0.17°C, respectively), but treatment had no effect on mean CBT of primiparous cows (39.50 and 39.63±0.20°C for 21- and 24-h treatments, respectively). A significant treatment by time interaction was observed, with the greatest treatment effects occurring at 0500 h; treatment means at this time were 39.57, 39.23, 39.89, and 39.04±0.24°C for 21-h primiparous, 24-h primiparous, 21-h multiparous, and 24-h multiparous cows

  2. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, which is a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high-performance computing environment show that this new method significantly reduces the state space and improves both time and memory efficiency.
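
    As a rough illustration of the leaping-Tick idea described above, the sketch below contrasts a unit-step clock process with one that jumps directly to the earliest pending deadline; the function names and the deadline bookkeeping are illustrative and are not taken from the paper's models.

```python
# Minimal sketch of the "leaping Tick" idea behind efficient explicit-time
# description methods: instead of advancing the global clock one unit per
# tick, jump straight to the next time instant at which any timer expires.
# All names here are illustrative, not taken from the paper's models.

def leaping_tick(now, pending_deadlines):
    """Advance the explicit clock to the earliest pending deadline."""
    if not pending_deadlines:
        return now                      # nothing scheduled: time need not advance
    nxt = min(pending_deadlines)        # earliest time anything can happen
    assert nxt >= now
    return nxt                          # leap over the idle interval in one step

def unit_tick(now, pending_deadlines):
    """Lamport-style tick: one time unit per step (for comparison)."""
    return now + 1

# Example: timers expiring at t = 17 and t = 42, clock currently at t = 3.
# The unit tick needs 14 steps to reach the first deadline; the leaping
# tick reaches it in a single transition, shrinking the explored state space.
print(leaping_tick(3, {17, 42}))   # -> 17
print(unit_tick(3, {17, 42}))      # -> 4
```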

  3. Singular perturbation methods for nonlinear dynamic systems with time delays

    International Nuclear Information System (INIS)

    Hu, H.Y.; Wang, Z.H.

    2009-01-01

    This review article surveys the recent advances in the dynamics and control of time-delay systems, with emphasis on singular perturbation methods, such as the method of multiple scales, the method of averaging, and two newly developed methods, the energy analysis and the pseudo-oscillator analysis. Some examples are given to demonstrate the advantages of the methods. The comparisons with other methods show that these methods lead to easier computations and more accurate predictions of the local dynamics of time-delay systems near a Hopf bifurcation.

  4. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    Science.gov (United States)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.

  5. CDF run II run control and online monitor

    International Nuclear Information System (INIS)

    Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.

    2001-01-01

    The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs

  6. Estimating Stair Running Performance Using Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Lauro V. Ojeda

    2017-11-01

    Stair running, both ascending and descending, is a challenging aerobic exercise that many athletes, recreational runners, and soldiers perform during training. Studying biomechanics of stair running over multiple steps has been limited by the practical challenges presented while using optical-based motion tracking systems. We propose using foot-mounted inertial measurement units (IMUs) as a solution as they enable unrestricted motion capture in any environment and without need for external references. In particular, this paper presents methods for estimating foot velocity and trajectory during stair running using foot-mounted IMUs. Computational methods leverage the stationary periods occurring during the stance phase and known stair geometry to estimate foot orientation and trajectory, ultimately used to calculate stride metrics. These calculations, applied to human participant stair running data, reveal performance trends through timing, trajectory, energy, and force stride metrics. We present the results of our analysis of experimental data collected on eleven subjects. Overall, we determine that for either ascending or descending, the stance time is the strongest predictor of speed as shown by its high correlation with stride time.

  7. Using Real Time Workshop for rapid and reliable control implementation in the Frascati Tokamak Upgrade Feedback Control System running under RTAI-GNU/Linux

    International Nuclear Information System (INIS)

    Centioli, C.; Iannone, F.; Ledauphin, M.; Panella, M.; Pangione, L.; Podda, S.; Vitale, V.; Zaccarian, L.

    2005-01-01

    The Feedback Control System running at FTU has been recently ported from a commercial platform (based on LynxOS) to an open-source GNU/Linux-based RTAI-LXRT platform, thereby obtaining significant performance and cost improvements. Based on the new open-source platform, it is now possible to experiment with novel control strategies aimed at improving the robustness and accuracy of the feedback control. Nevertheless, the implementation of control ideas still requires a great deal of coding of the control algorithms that, if carried out manually, may be prone to coding errors and therefore time consuming, both in the development phase and in the subsequent validation tests consisting of dedicated experiments carried out on FTU. In this paper, we report on recent developments based on Mathworks' Simulink and Real Time Workshop (RTW) packages to obtain a user-friendly environment where the real-time code implementing novel control algorithms can be easily generated, tested and validated. Thanks to this new tool, the control designer only needs to specify the block diagram of the control task (namely, a high-level and functional description of the new algorithm under consideration), and the corresponding real-time code generation and testing is completely automated, without any need for dedicated experiments. In the paper, the work necessary to adapt Real Time Workshop to our RTAI-LXRT context will be illustrated. A necessary re-organization of the previous real-time software, aimed at incorporating the code coming from the adapted RTW, will also be discussed. Moreover, we will report on a performance comparison between the code obtained using the automated RTW-based procedure and the hand-written, appropriately optimised C code; at the moment, a preliminary performance comparison using dummy algorithms has shown that the code automatically generated from RTW is about 30% faster than the manually written one. This preliminary result combined with the

  8. 20 CFR 617.35 - Time and method of payment.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Time and method of payment. 617.35 Section 617.35 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR TRADE ADJUSTMENT ASSISTANCE FOR WORKERS UNDER THE TRADE ACT OF 1974 Job Search Allowances § 617.35 Time and method...

  9. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
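
    As a rough, hedged illustration of the convolution-integral idea (computing the response of a linear numerical substructure by convolving its unit impulse response with the force history rather than by step-by-step integration), the sketch below applies a discrete Duhamel integral to a single-degree-of-freedom system; all parameter values are illustrative, and the real-time experimental aspects of the paper are not reproduced.

```python
# Minimal sketch of the convolution-integral idea: the displacement response
# of a linear numerical substructure is obtained by convolving its unit
# impulse response with the total applied force history, rather than by
# step-by-step integration. The SDOF parameters below are illustrative only.
import numpy as np

m, c, k = 1000.0, 200.0, 4.0e5            # mass [kg], damping [N s/m], stiffness [N/m]
dt, n = 0.001, 4000                        # time step [s] and number of samples

wn = np.sqrt(k / m)                        # natural frequency [rad/s]
zeta = c / (2.0 * m * wn)                  # damping ratio
wd = wn * np.sqrt(1.0 - zeta**2)           # damped frequency [rad/s]

t = np.arange(n) * dt
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)   # unit impulse response

force = np.zeros(n)
force[: int(0.5 / dt)] = 1.0e4             # half-second rectangular force pulse [N]

# Discrete Duhamel (convolution) integral: x(t) = sum over tau of h(t - tau) f(tau) dt
x = np.convolve(force, h)[:n] * dt
print(f"peak displacement ~ {x.max():.4f} m")
```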

  10. The immediate effect of long-distance running on T2 and T2* relaxation times of articular cartilage of the knee in young healthy adults at 3.0 T MR imaging.

    Science.gov (United States)

    Behzadi, Cyrus; Welsch, Goetz H; Laqmani, Azien; Henes, Frank O; Kaul, Michael G; Schoen, Gerhard; Adam, Gerhard; Regier, Marc

    2016-08-01

    To quantitatively assess the immediate effect of long-distance running on T2 and T2* relaxation times of the articular cartilage of the knee at 3.0 T in young healthy adults. 30 healthy male adults (18-31 years) who perform sports at an amateur level underwent an initial MRI at 3.0 T with T2 weighted [16 echo times (TEs): 9.7-154.6 ms] and T2* weighted (24 TEs: 4.6-53.6 ms) relaxation measurements. Thereafter, all participants performed a 45-min run. After the run, all individuals were immediately re-examined. Data sets were post-processed using dedicated software (ImageJ; National Institute of Health, Bethesda, MD). 22 regions of interest were manually drawn in segmented areas of the femoral, tibial and patellar cartilage. For statistical evaluation, Pearson product-moment correlation coefficients and confidence intervals were computed. Mean initial values were 35.7 ms for T2 and 25.1 ms for T2*. After the run, a significant decrease in the mean T2 and T2* relaxation times was observed for all segments in all participants. A mean decrease of relaxation time was observed for T2 with 4.6 ms (±3.6 ms) and for T2* with 3.6 ms (±5.1 ms) after running. A significant decrease could be observed in all cartilage segments for both biomarkers. Both quantitative techniques, T2 and T2*, seem to be valuable parameters in the evaluation of immediate changes in the cartilage ultrastructure after running. This is the first direct comparison of immediate changes in T2 and T2* relaxation times after running in healthy adults.

  11. Methods optimization for the first time core critical

    International Nuclear Information System (INIS)

    Yan Liang

    2014-01-01

    The PWR reactor core commissioning program specifies the content of the first-criticality reactor physics experiment and describes the physical test methods. However, the methods are not all exactly the same, although each is effective. This article aims to enhance safety during the first approach to criticality, to shorten the overall duration of the first-criticality physical tests, and to improve the completeness and accuracy of the first-criticality physical test data, and thereby to improve the economic benefit of plant operation, by adopting improved physical test methods such as sectional dilution and the Doppler-point power feedback method. (author)

  12. Finite element method for time-space-fractional Schrodinger equation

    Directory of Open Access Journals (Sweden)

    Xiaogang Zhu

    2017-07-01

    In this article, we develop a fully discrete finite element method for the nonlinear Schrodinger equation (NLS) with time- and space-fractional derivatives. The time-fractional derivative is described in Caputo's sense and the space-fractional derivative in Riesz's sense. Its stability is derived, and the convergence estimate is discussed by means of an orthogonal operator. We also extend the method to the two-dimensional time-space-fractional NLS and, to avoid iterative solvers at each time step, a linearized scheme is further constructed. Several numerical examples are implemented finally, which confirm the theoretical results as well as illustrate the accuracy of our methods.

  13. Dr. Sheehan on Running.

    Science.gov (United States)

    Sheehan, George A.

    This book is both a personal and technical account of the experience of running by a heart specialist who began a running program at the age of 45. In its seventeen chapters, there is information presented on the spiritual, psychological, and physiological results of running; treatment of athletic injuries resulting from running; effects of diet…

  14. Influence of regression model and initial intensity of an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and running performance.

    Science.gov (United States)

    Santos-Concejero, Jordan; Tucker, Ross; Granados, Cristina; Irazusta, Jon; Bidaurrazaga-Letona, Iraia; Zabala-Lili, Jon; Gil, Susana María

    2014-01-01

    This study investigated the influence of the regression model and initial intensity during an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and performance in elite-standard runners. Twenty-three well-trained runners completed a discontinuous incremental running test on a treadmill. Speed started at 9 km · h(-1) and increased by 1.5 km · h(-1) every 4 min until exhaustion, with a minute of recovery for blood collection. Lactate-speed data were fitted by exponential and polynomial models. The lactate threshold was determined for both models, using all the co-ordinates, excluding the first and excluding the first and second points. The exponential lactate threshold was greater than the polynomial equivalent in every co-ordinate condition. The lactate threshold estimated from the exponential model was more closely related to running performance and is independent of the initial intensity of the test.

  15. Relationship between running kinematic changes and time limit at vVO2max. DOI: http://dx.doi.org/10.5007/1980-0037.2012v14n4p428

    Directory of Open Access Journals (Sweden)

    Sebastião Iberes Lopes Melo

    2012-07-01

    Exhaustive running at maximal oxygen uptake velocity (vVO2max) can alter running kinematic parameters and increase energy cost along the time. The aims of the present study were to compare characteristics of ankle and knee kinematics during running at vVO2max and to verify the relationship between changes in kinematic variables and time limit (Tlim). Eleven male volunteers, recreational players of team sports, performed an incremental running test until volitional exhaustion to determine vVO2max and a constant velocity test at vVO2max. Subjects were filmed continuously from the left sagittal plane at 210 Hz for further kinematic analysis. The maximal plantar flexion during swing (p<0.01) was the only variable that increased significantly from beginning to end of the run. Increase in ankle angle at contact was the only variable related to Tlim (r=0.64; p=0.035) and explained 34% of the performance in the test. These findings suggest that the individuals under study maintained a stable running style at vVO2max and that increase in plantar flexion explained the performance in this test when it was applied in non-runners.

  16. A time-delayed method for controlling chaotic maps

    International Nuclear Information System (INIS)

    Chen Maoyin; Zhou Donghua; Shang Yun

    2005-01-01

    Combining the repetitive learning strategy and the optimality principle, this Letter proposes a time-delayed method to control chaotic maps. This method can effectively stabilize unstable periodic orbits within chaotic attractors in the sense of least mean square. Numerical simulations of some chaotic maps verify the effectiveness of this method
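
    The abstract does not give enough detail to reproduce the proposed repetitive-learning/optimality scheme, but the general flavor of time-delayed control of a chaotic map can be illustrated with classical delayed-feedback control, in which the perturbation K(x_{n-1} - x_n) vanishes on the stabilized orbit. The sketch below applies it to the logistic map; the map parameter and feedback gain are illustrative choices, not values from the Letter.

```python
# Illustrative delayed-feedback control of the logistic map x -> r*x*(1-x).
# The feedback term K*(x_prev - x) vanishes on the stabilized period-1 orbit,
# so the control is non-invasive once the orbit is reached. This is a generic
# demonstration of time-delayed control, not the scheme proposed in the Letter.
r, K = 3.9, -0.6
x_star = 1.0 - 1.0 / r                    # unstable fixed point embedded in the chaotic attractor

def iterate(x0, steps, gain):
    x_prev, x = x0, x0
    for _ in range(steps):
        x_next = r * x * (1.0 - x) + gain * (x_prev - x)   # map plus delayed feedback
        x_prev, x = x, x_next
    return x

start = x_star + 0.01                      # small perturbation off the fixed point
print("uncontrolled after 200 steps:", iterate(start, 200, 0.0))   # wanders chaotically
print("controlled after 200 steps:  ", iterate(start, 200, K))     # converges to x_star
print("target fixed point:          ", x_star)
```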

  17. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.

  18. Verifying Real-Time Systems using Explicit-time Description Methods

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Timed model checking has been extensively researched in recent years. Many new formalisms with time extensions and tools based on them have been presented. On the other hand, Explicit-Time Description Methods aim to verify real-time systems with general untimed model checkers. Lamport presented an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time together with a group of global variables for time requirements. This paper proposes a new explicit-time description method with no reliance on global variables. Instead, it uses rendezvous synchronization steps between the Tick process and each system process to simulate time. This new method achieves better modularity and facilitates usage of more complex timing constraints. The two explicit-time description methods are implemented in DIVINE, a well-known distributed-memory model checker. Preliminary experiment results show that our new method, with better modularity, is comparable to Lamport's method with respect to time and memory efficiency.

  19. A comparison of three time-domain anomaly detection methods

    Energy Technology Data Exchange (ETDEWEB)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, being of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).
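
    As a hedged sketch of the third (and best-performing) method, the snippet below runs a sequential probability ratio test on residuals to detect an increase in their standard deviation; the noise levels, error-rate settings and change point are illustrative, and the residuals are simply drawn from a Gaussian rather than produced by an actual autoregressive model.

```python
# Minimal sketch of the sequential probability ratio test (SPRT) used to
# detect a change in the standard deviation of (approximately Gaussian)
# residual noise, e.g. the residuals of an autoregressive signal model.
# Thresholds, noise levels and the change point below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
sigma0, sigma1 = 1.0, 1.5                 # nominal vs. anomalous residual std
alpha, beta = 1e-3, 1e-2                  # false-alarm and missed-alarm rates
upper = np.log((1.0 - beta) / alpha)      # alarm threshold
lower = np.log(beta / (1.0 - alpha))      # "all clear" threshold (reset)

# Residuals: nominal for 2000 samples, then the std increases to sigma1.
residuals = np.concatenate([rng.normal(0, sigma0, 2000),
                            rng.normal(0, sigma1, 2000)])

llr, alarm_at = 0.0, None
const = np.log(sigma0 / sigma1)
coef = 0.5 / sigma0**2 - 0.5 / sigma1**2
for i, e in enumerate(residuals):
    llr += const + coef * e * e           # log-likelihood ratio increment
    if llr <= lower:                      # strong evidence for H0: restart the test
        llr = 0.0
    elif llr >= upper:                    # strong evidence for H1: raise an alarm
        alarm_at = i
        break

print("alarm raised at sample:", alarm_at)   # expected shortly after sample 2000
```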

  20. A comparison of three time-domain anomaly detection methods

    International Nuclear Information System (INIS)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, being of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author)

  1. Endurance time method for Seismic analysis and design of structures

    International Nuclear Information System (INIS)

    Estekanchi, H.E.; Vafai, A.; Sadeghazar, M.

    2004-01-01

    In this paper, a new method for performance based earthquake analysis and design has been introduced. In this method, the structure is subjected to accelerograms that impose increasing dynamic demand on the structure with time. Specified damage indexes are monitored up to the collapse level or other performance limit that defines the endurance limit point for the structure. Also, a method for generating standard intensifying accelerograms has been described. Three accelerograms have been generated using this method. Furthermore, the concept of Endurance Time has been described by applying these accelerograms to single and multi degree of freedom linear systems. The application of this method for analysis of complex nonlinear systems has been explained. Endurance Time method provides a uniform approach to seismic analysis and design of complex structures that can be applied in numerical and experimental investigations

  2. Well-logging method using well-logging tools run through a drill stem test string for determining in-situ change in formation water saturation values

    International Nuclear Information System (INIS)

    Fertl, W.H.

    1975-01-01

    A logging tool (pulsed neutron or neutron-gamma ray) whose response indicates formation water saturation value, is run through an opening extending through a portion of a drill stem test string. A sample portion of the formation fluid in the zone of interest is removed and another logging run is made. The differences between the plots of the two logging runs indicate the formation potential productivity in the zone of interest

  3. Measurement of the Top Quark Mass at D0 Run II with the Matrix Element Method in the Lepton+Jets Final State

    Energy Technology Data Exchange (ETDEWEB)

    Schieferdecker, Philipp [Ludwig Maximilian Univ. of Munich (Germany)

    2005-08-05

    The mass of the top quark is a fundamental parameter of the Standard Model. Its precise knowledge yields valuable insights into unresolved phenomena in and beyond the Standard Model. A measurement of the top quark mass with the matrix element method in the lepton+jets final state in D0 Run II is presented. Events are selected requiring an isolated energetic charged lepton (electron or muon), significant missing transverse energy, and exactly four calorimeter jets. For each event, the probabilities to originate from the signal and background processes are calculated based on the measured kinematics, the object resolutions and the respective matrix elements. The jet energy scale is known to be the dominant source of systematic uncertainty. The reference scale for the mass measurement is derived from Monte Carlo events. The matrix element likelihood is defined as a function of both m_top and the jet energy scale JES, where the latter represents a scale factor with respect to the reference scale. The top mass is obtained from a two-dimensional correlated fit, and the likelihood yields both the statistical and jet energy scale uncertainty. Using a dataset of 320 pb⁻¹ of D0 Run II data, the mass of the top quark is measured to be: m_top(ℓ+jets) = 169.5 ± 4.4 (stat.+JES) +1.7/-1.6 (syst.) GeV; m_top(e+jets) = 168.8 ± 6.0 (stat.+JES) +1.9/-1.9 (syst.) GeV; m_top(μ+jets) = 172.3 ± 9.6 (stat.+JES) +3.4/-3.3 (syst.) GeV. The jet energy scale measurement in the ℓ+jets sample yields JES = 1.034 ± 0.034, suggesting good consistency of the data with the simulation. The measurement forecasts significant improvements to the total top mass uncertainty during Run II before the startup of the LHC, as the data sample will grow by a factor of ten and D0's tracking capabilities will be employed in jet energy reconstruction and flavor identification.

  4. A novel weight determination method for time series data aggregation

    Science.gov (United States)

    Xu, Paiheng; Zhang, Rong; Deng, Yong

    2017-09-01

    Aggregation in time series is of great importance in time-series smoothing, prediction and other time-series analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator is able to generate weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
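
    The following sketch illustrates the overall recipe of linearly combining two weight vectors before aggregating a series: degree-based weights from the natural visibility graph, and a time-decay weighting used here as a simple stand-in for the order-induced IOWA weights. The decay rate, mixing factor and example data are illustrative only.

```python
# Minimal sketch of combining two weight vectors for time-series aggregation:
# a time-decay weighting (a simple stand-in for the IOWA-style, order-induced
# weights) and weights proportional to node degree in the natural visibility
# graph of the series. Decay rate and mixing factor lam are illustrative.
import numpy as np

def visibility_degrees(y):
    """Degree of each point in the natural visibility graph of series y."""
    n = len(y)
    deg = np.zeros(n)
    for a in range(n):
        for b in range(a + 1, n):
            # (a, b) are mutually visible if no intermediate point blocks the line of sight
            visible = all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                          for c in range(a + 1, b))
            if visible:
                deg[a] += 1
                deg[b] += 1
    return deg

def aggregate(series, decay=0.9, lam=0.5):
    series = np.asarray(series, dtype=float)
    n = len(series)
    w_time = decay ** np.arange(n - 1, -1, -1)     # newer samples weigh more
    w_time /= w_time.sum()
    w_vg = visibility_degrees(series)
    w_vg /= w_vg.sum()
    w = lam * w_time + (1.0 - lam) * w_vg          # linear combination of the two weightings
    return float(np.dot(w, series))

print(aggregate([100.2, 101.5, 99.8, 102.3, 103.1, 102.7]))
```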

  5. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods.

    Science.gov (United States)

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-02-20

    In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into an N-point redundancy map according to the number of adjacent object points having the same 3D value. Based on this redundancy map, N-point principal fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, the number of object points involved in the calculation of the CGH pattern can be dramatically reduced and, as a result, an increase of computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
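
    Only the redundancy-extraction step lends itself to a compact sketch: the snippet below run-length encodes a scan line of object-point values into an N-point redundancy map (run start, length, value). The subsequent calculation of the N-point principal fringe patterns from the 1-point PFP of the N-LUT is not shown, and the data are illustrative.

```python
# Minimal sketch of the run-length-encoding step: adjacent object points on a
# scan line that share the same value (e.g., depth/intensity) are grouped into
# runs, producing an N-point redundancy map. Only this grouping step is shown;
# the N-point principal fringe patterns and the final CGH synthesis are not.

def redundancy_map(scan_line):
    """Return (start_index, run_length, value) for each run of equal values."""
    runs = []
    start = 0
    for i in range(1, len(scan_line) + 1):
        if i == len(scan_line) or scan_line[i] != scan_line[start]:
            runs.append((start, i - start, scan_line[start]))
            start = i
    return runs

# Example scan line of object-point values: three runs of lengths 3, 2 and 4.
line = [5, 5, 5, 7, 7, 2, 2, 2, 2]
for start, length, value in redundancy_map(line):
    print(f"{length}-point run starting at {start} with value {value}")
```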

  6. Time evolution of the wave equation using rapid expansion method

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2010-01-01

    Forward modeling of seismic data and reverse time migration are based on the time evolution of wavefields. For the case of spatially varying velocity, we have worked on two approaches to evaluate the time evolution of seismic wavefields. An exact solution for the constant-velocity acoustic wave equation can be used to simulate the pressure response at any time. For a spatially varying velocity, a one-step method can be developed where no intermediate time responses are required. Using this approach, we have solved for the pressure response at intermediate times and have developed a recursive solution. The solution has a very high degree of accuracy and can be reduced to various finite-difference time-derivative methods, depending on the approximations used. Although the two approaches are closely related, each has advantages, depending on the problem being solved. © 2010 Society of Exploration Geophysicists.

  7. Time evolution of the wave equation using rapid expansion method

    KAUST Repository

    Pestana, Reynam C.

    2010-07-01

    Forward modeling of seismic data and reverse time migration are based on the time evolution of wavefields. For the case of spatially varying velocity, we have worked on two approaches to evaluate the time evolution of seismic wavefields. An exact solution for the constant-velocity acoustic wave equation can be used to simulate the pressure response at any time. For a spatially varying velocity, a one-step method can be developed where no intermediate time responses are required. Using this approach, we have solved for the pressure response at intermediate times and have developed a recursive solution. The solution has a very high degree of accuracy and can be reduced to various finite-difference time-derivative methods, depending on the approximations used. Although the two approaches are closely related, each has advantages, depending on the problem being solved. © 2010 Society of Exploration Geophysicists.

  8. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    Science.gov (United States)

    George, D. L.; Iverson, R. M.

    2012-12-01

    much higher resolution grids evolve with the flow. The reduction in computational cost, due to AMR, makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m^3) debris flow that originated on Mt. Meager, British Columbia in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.

  9. Impact of Rainfall, Sales Method, and Time on Land Prices

    OpenAIRE

    Stephens, Steve; Schurle, Bryan

    2013-01-01

    Land prices in Western Kansas are analyzed using regression to estimate the influence of rainfall, sales method, and time of sale. The estimates from regression indicate that land prices decreased by about $27 for each range farther west, which can be converted to about $75 per inch of average rainfall. In addition, the influence of the method of sale (private sale or auction) is estimated along with the impact of time of sale. Auction sales prices are approximately $100 higher per acre than...

  10. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien; Claudel, Christian G.

    2015-01-01

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  11. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien

    2015-12-30

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  12. Influence of the Heel-to-Toe Drop of Standard Cushioned Running Shoes on Injury Risk in Leisure-Time Runners: A Randomized Controlled Trial With 6-Month Follow-up.

    Science.gov (United States)

    Malisoux, Laurent; Chambon, Nicolas; Urhausen, Axel; Theisen, Daniel

    2016-11-01

    Modern running shoes are available in a wide range of heel-to-toe drops (ie, the height difference between the forward and rear parts of the inside of the shoe). While shoe drop has been shown to influence strike pattern, its effect on injury risk has never been investigated. Therefore, the reasons for such variety in this parameter are unclear. The first aim of this study was to determine whether the drop of standard cushioned running shoes influences running injury risk. The secondary aim was to investigate whether recent running regularity modifies the relationship between shoe drop and injury risk. Randomized controlled trial; Level of evidence, 1. Leisure-time runners (N = 553) were observed for 6 months after having received a pair of shoes with a heel-to-toe drop of 10 mm (D10), 6 mm (D6), or 0 mm (D0). All participants reported their running activities and injuries (time-loss definition, at least 1 day) in an electronic system. Cox regression analyses were used to compare injury risk between the 3 groups based on hazard rate ratios (HRs) and their 95% CIs. A stratified analysis was conducted to evaluate the effect of shoe drop separately in occasional and regular runners. Low-drop shoes (D6 and D0) were found to be associated with a lower injury risk in occasional runners (HR, 0.48; 95% CI, 0.23-0.98), whereas these shoes were associated with a higher injury risk in regular runners (HR, 1.67; 95% CI, 1.07-2.62). Overall, injury risk was not modified by the drop of standard cushioned running shoes. However, low-drop shoes could be more hazardous for regular runners, while these shoes seem to be preferable for occasional runners to limit injury risk. © 2016 The Author(s).

  13. A Blade Tip Timing Method Based on a Microwave Sensor

    Directory of Open Access Journals (Sweden)

    Jilong Zhang

    2017-05-01

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.
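
    The signal-processing details (zero-IF demodulation, auto-gain control, clearance estimation) are specific to the paper, but the core tip-timing conversion from arrival-time difference to tip deflection is standard and is sketched below; the rotor speed, radius and timing numbers are illustrative assumptions.

```python
# Minimal sketch of the core blade-tip-timing calculation: the difference
# between the measured and the expected arrival time of a blade tip at a
# casing sensor is converted into a tip deflection using the rotor speed and
# the blade tip radius. Geometry and numbers below are illustrative.
import math

radius = 0.25                 # blade tip radius [m]
rpm = 12000.0                 # rotor speed [rev/min]
omega = 2.0 * math.pi * rpm / 60.0          # angular speed [rad/s]

def tip_deflection(t_measured, t_expected):
    """Tangential tip deflection [m] from the arrival-time difference [s]."""
    return radius * omega * (t_measured - t_expected)

# Blade nominally passes the probe 1.200 ms after the once-per-rev reference;
# under vibration it is detected 3 microseconds late.
print(f"deflection = {tip_deflection(1.203e-3, 1.200e-3) * 1e3:.3f} mm")
```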

  14. Time interval approach to the pulsed neutron logging method

    International Nuclear Information System (INIS)

    Zhao Jingwu; Su Weining

    1994-01-01

    The time interval of neighbouring neutrons emitted from a steady-state neutron source can be treated as that from a time-dependent neutron source. In the rock space, the neutron flux is given by the neutron diffusion equation and is composed of an infinite number of terms. Each term is composed of two die-away curves. The delay action is discussed and used to measure the time interval with only one detector in the experiment. Nuclear reactions with the time distribution due to different types of radiation observed in the neutron well-logging methods are presented with a view to obtaining the rock nuclear parameters from the time interval technique.

  15. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    Science.gov (United States)

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.

  16. A pseudospectral collocation time-domain method for diffractive optics

    DEFF Research Database (Denmark)

    Dinesen, P.G.; Hesthaven, J.S.; Lynov, Jens-Peter

    2000-01-01

    We present a pseudospectral method for the analysis of diffractive optical elements. The method computes a direct time-domain solution of Maxwell's equations and is applied to solving wave propagation in 2D diffractive optical elements. (C) 2000 IMACS. Published by Elsevier Science B.V. All rights...

  17. An iterated Radau method for time-dependent PDE's

    NARCIS (Netherlands)

    S. Pérez-Rodríguez; S. González-Pinto; B.P. Sommeijer (Ben)

    2008-01-01

    This paper is concerned with the time integration of semi-discretized, multi-dimensional PDEs of advection-diffusion-reaction type. To cope with the stiffness of these ODEs, an implicit method has been selected, viz., the two-stage, third-order Radau IIA method. The main topic of this

  18. DRK methods for time-domain oscillator simulation

    NARCIS (Netherlands)

    Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.

  19. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Count Per Second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
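
    The exact estimator of the paper is not reproduced here, but the backward-extrapolation idea can be illustrated with a toy example: impose increasingly large artificial (non-paralyzing) dead times on a simulated event train, record the observed count rate for each, and extrapolate the trend back to zero dead time. All rates, durations and dead-time values below are illustrative assumptions.

```python
# Toy demonstration of the backward-extrapolation idea: increasingly large
# artificial (non-paralyzing) dead times are imposed on an event train, the
# observed count rate is recorded for each, and the trend is extrapolated
# back to zero dead time to recover the true rate. This is a simplified
# illustration, not the exact estimator used in the paper.
import numpy as np

rng = np.random.default_rng(1)
true_rate, duration = 2.0e4, 5.0                       # [counts/s], [s]
arrivals = np.cumsum(rng.exponential(1.0 / true_rate, int(true_rate * duration * 1.2)))
arrivals = arrivals[arrivals < duration]

def apply_dead_time(times, tau):
    """Non-paralyzing dead time: events within tau of the last *recorded* event are lost."""
    kept, last = 0, -np.inf
    for t in times:
        if t - last >= tau:
            kept += 1
            last = t
    return kept

taus = np.array([1, 2, 4, 6, 8, 10]) * 1e-6            # imposed dead times [s]
inv_rates = np.array([duration / apply_dead_time(arrivals, tau) for tau in taus])

# For a non-paralyzing dead time, 1/m = 1/n + tau, so a straight-line fit of
# 1/m versus tau extrapolated back to tau = 0 yields the dead-time-free rate.
slope, intercept = np.polyfit(taus, inv_rates, 1)
print(f"true rate {true_rate:.0f} cps, extrapolated rate {1.0 / intercept:.0f} cps")
```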

  20. Immersed Boundary-Lattice Boltzmann Method Using Two Relaxation Times

    Directory of Open Access Journals (Sweden)

    Kosuke Hayashi

    2012-06-01

    An immersed boundary-lattice Boltzmann method (IB-LBM) using a two-relaxation-time model (TRT) is proposed. The collision operator in the lattice Boltzmann equation is modeled using two relaxation times. One of them is used to set the fluid viscosity and the other is for numerical stability and accuracy. A direct-forcing method is utilized for the treatment of the immersed boundary. A multi-direct forcing method is also implemented to precisely satisfy the boundary conditions at the immersed boundary. Circular Couette flows between a stationary cylinder and a rotating cylinder are simulated for validation of the proposed method. The method is also validated through simulations of circular and spherical falling particles. Effects of the functional forms of the direct-forcing term and the smoothed-delta function, which interpolates the fluid velocity to the immersed boundary and distributes the forcing term to fixed Eulerian grid points, are also examined. As a result, the following conclusions are obtained: (1) the proposed method does not cause non-physical velocity distribution in circular Couette flows even at high relaxation times, whereas the single-relaxation-time (SRT) model causes a large non-physical velocity distortion at a high relaxation time, (2) the multi-direct forcing reduces the errors in the velocity profile of a circular Couette flow at a high relaxation time, (3) the two-point delta function is better than the four-point delta function at low relaxation times, but worse at high relaxation times, (4) the functional form of the direct-forcing term does not affect predictions, and (5) circular and spherical particles falling in liquids are well predicted by using the proposed method both for two-dimensional and three-dimensional cases.

  1. Spectral methods for time dependent partial differential equations

    Science.gov (United States)

    Gottlieb, D.; Turkel, E.

    1983-01-01

    The theory of spectral methods for time dependent partial differential equations is reviewed. When the domain is periodic Fourier methods are presented while for nonperiodic problems both Chebyshev and Legendre methods are discussed. The theory is presented for both hyperbolic and parabolic systems using both Galerkin and collocation procedures. While most of the review considers problems with constant coefficients the extension to nonlinear problems is also discussed. Some results for problems with shocks are presented.

  2. A finite element method for SSI time history calculation

    International Nuclear Information System (INIS)

    Ni, X.; Gantenbein, F.; Petit, M.

    1989-01-01

    The method which is proposed is based on a finite element modeling of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method is presented, then applications are given, first to a linear calculation for which results are compared to those obtained by standard methods. Then results for a non-linear behavior are described.

  3. A finite element method for SSI time history calculations

    International Nuclear Information System (INIS)

    Ni, X.M.; Gantenbein, F.; Petit, M.

    1989-01-01

    The method which is proposed is based on a finite element modeling of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method will be presented, then applications will be given, first to a linear calculation for which results will be compared to those obtained by standard methods. Then results for a non-linear behavior will be described.

  4. Introduction to numerical methods for time dependent differential equations

    CERN Document Server

    Kreiss, Heinz-Otto

    2014-01-01

    Introduces both the fundamentals of time dependent differential equations and their numerical solutions Introduction to Numerical Methods for Time Dependent Differential Equations delves into the underlying mathematical theory needed to solve time dependent differential equations numerically. Written as a self-contained introduction, the book is divided into two parts to emphasize both ordinary differential equations (ODEs) and partial differential equations (PDEs). Beginning with ODEs and their approximations, the authors provide a crucial presentation of fundamental notions, such as the t

  5. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass- or Time-Optimal Solutions

    Science.gov (United States)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood---allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  6. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    International Nuclear Information System (INIS)

    Hoffman, Adam J.; Lee, John C.

    2016-01-01

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
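
    The heart of the SDP approach above is replacing stored angular fluxes by high-order backward-difference approximations of time derivatives. The small Python illustration below only shows how such backward-difference weights can be generated from Taylor-matching conditions and how their accuracy grows with order; it is a generic numerical aside, not the DeCART/SDP implementation.

      import numpy as np

      def derivative_weights(order, dt):
          """Weights w with f'(t_n) ~ sum_k w[k] * f(t_{n-k}), built from the current
          value and `order` previous values (exact for polynomials up to that order)."""
          k = np.arange(order + 1)                      # backward offsets 0, 1, ..., order
          # Taylor matching: sum_k w_k * (-k*dt)^m must vanish except for m = 1
          A = np.vander(-k * dt, order + 1, increasing=True).T
          rhs = np.zeros(order + 1)
          rhs[1] = 1.0
          return np.linalg.solve(A, rhs)

      dt, t = 0.01, 1.0
      for order in (1, 2, 3):
          w = derivative_weights(order, dt)
          past = np.array([np.sin(t - k * dt) for k in range(order + 1)])
          print(order, w @ past, np.cos(t))             # error shrinks as the order grows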

  7. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu

    2016-02-15

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.

  8. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply it to quantify the similarity of different stock markets, and we report results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between different stock markets varies across time periods and that the similarity of the two stock markets becomes larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, which shows that the method can distinguish the markets of different areas from the phylogenetic trees. The results show that satisfactory information can be extracted from financial markets by this method. The information categorization method can be used not only for physiologic time series but also for financial time series.

  9. Crystal timing offset calibration method for time of flight PET scanners

    Science.gov (United States)

    Ye, Jinghan; Song, Xiyun

    2016-03-01

    In time-of-flight (TOF) positron emission tomography (PET), precise calibration of the timing offset of each crystal of a PET scanner is essential. Conventionally this calibration requires a specially designed tool just for this purpose. In this study a method that uses a planar source to measure the crystal timing offsets (CTO) is developed. The method uses list mode acquisitions of a planar source placed at multiple orientations inside the PET scanner field-of-view (FOV). The placement of the planar source in each acquisition is automatically figured out from the measured data, so that a fixture for exactly placing the source is not required. The expected coincidence time difference for each detected list mode event can be found from the planar source placement and the detector geometry. A deviation of the measured time difference from the expected one is due to CTO of the two crystals. The least squared solution of the CTO is found iteratively using the list mode events. The effectiveness of the crystal timing calibration method is evidenced using phantom images generated by placing back each list mode event into the image space with the timing offset applied to each event. The zigzagged outlines of the phantoms in the images become smooth after the crystal timing calibration is applied. In conclusion, a crystal timing calibration method is developed. The method uses multiple list mode acquisitions of a planar source to find the least squared solution of crystal timing offsets.
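
    A simplified Python sketch of the least-squares idea behind this calibration: each list-mode event constrains the difference between two crystal offsets, and the offsets are solved iteratively up to an arbitrary overall shift. The event model, noise level and coordinate-descent solver here are illustrative assumptions, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      n_crystals, n_events = 50, 20000
      true_offset = rng.normal(0.0, 100.0, n_crystals)          # ps
      true_offset -= true_offset.mean()                         # the overall shift is unobservable

      # each event pairs two crystals; the residual between measured and expected
      # coincidence time difference is offset[i] - offset[j] plus timing noise
      i = rng.integers(0, n_crystals, n_events)
      j = rng.integers(0, n_crystals, n_events)
      keep = i != j
      i, j = i[keep], j[keep]
      residual = true_offset[i] - true_offset[j] + rng.normal(0.0, 200.0, i.size)

      offset = np.zeros(n_crystals)
      for _ in range(40):                                       # coordinate-descent least squares
          for c in range(n_crystals):
              as_first, as_second = (i == c), (j == c)
              num = (residual[as_first] + offset[j[as_first]]).sum() \
                    + (offset[i[as_second]] - residual[as_second]).sum()
              den = as_first.sum() + as_second.sum()
              if den:
                  offset[c] = num / den
          offset -= offset.mean()                               # fix the gauge freedom

      print(np.abs(offset - true_offset).max())                 # a few tens of ps << 200 ps event noise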

  10. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms, and methods as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  11. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling

  12. Limitations of the time slide method of background estimation

    International Nuclear Information System (INIS)

    Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis

    2010-01-01

    Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.
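
    A toy Python illustration of the time-slide procedure itself (the non-stationarity analysis of the paper is not reproduced): the accidental-coincidence background is estimated by shifting one detector's trigger times by multiples of a stride much larger than the coincidence window and re-counting coincidences.

      import numpy as np

      rng = np.random.default_rng(2)
      T, window = 1.0e5, 0.01                    # seconds of data, coincidence window (s)
      t1 = np.sort(rng.uniform(0.0, T, 4000))    # trigger times, detector 1
      t2 = np.sort(rng.uniform(0.0, T, 4000))    # trigger times, detector 2

      def count_coincidences(a, b, window):
          """Triggers in `a` with at least one trigger of sorted `b` within +/- window."""
          idx = np.searchsorted(b, a)
          lo = np.abs(a - b[np.clip(idx - 1, 0, b.size - 1)]) <= window
          hi = np.abs(a - b[np.clip(idx, 0, b.size - 1)]) <= window
          return int(np.count_nonzero(lo | hi))

      zero_lag = count_coincidences(t1, t2, window)

      # background estimate from time slides: the stride must be much larger than the
      # coincidence window (and, per the paper, than the fluctuation time scales)
      stride = 10.0
      background = [count_coincidences(t1, np.sort((t2 + k * stride) % T), window)
                    for k in range(1, 101)]

      print(zero_lag, np.mean(background), np.std(background))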

  13. Limitations of the time slide method of background estimation

    Energy Technology Data Exchange (ETDEWEB)

    Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis, E-mail: mwas@lal.in2p3.f [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France)

    2010-10-07

    Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.

  14. Running and osteoarthritis.

    Science.gov (United States)

    Willick, Stuart E; Hansen, Pamela A

    2010-07-01

    The overall health benefits of cardiovascular exercise, such as running, are well established. However, it is also well established that in certain circumstances running can lead to overload injuries of muscle, tendon, and bone. In contrast, it has not been established that running leads to degeneration of articular cartilage, which is the hallmark of osteoarthritis. This article reviews the available literature on the association between running and osteoarthritis, with a focus on clinical epidemiologic studies. The preponderance of clinical reports refutes an association between running and osteoarthritis. Copyright 2010 Elsevier Inc. All rights reserved.

  15. Arterial wave intensity and ventricular-arterial coupling by vascular ultrasound: rationale and methods for the automated analysis of forwards and backwards running waves.

    Science.gov (United States)

    Rakebrandt, F; Palombo, C; Swampillai, J; Schön, F; Donald, A; Kozàkovà, M; Kato, K; Fraser, A G

    2009-02-01

    Wave intensity (WI) in the circulation is estimated noninvasively as the product of instantaneous changes in pressure and velocity. We recorded diameter as a surrogate for pressure, and velocity in the right common carotid artery using an Aloka SSD-5500 ultrasound scanner. We developed automated software, applying the water hammer equation to obtain local wave speed from the slope of a pressure/velocity loop during early systole to separate net WI into individual forwards and backwards-running waves. A quality index was developed to test for noisy data. The timing, duration, peak amplitude and net energy of separated WI components were measured in healthy subjects with a wide age range. Age and arterial stiffness were independent predictors of local wave speed, whereas backwards-travelling waves correlated more strongly with ventricular systolic function than with age-related changes in arterial stiffness. Separated WI offers detailed insight into ventricular-arterial interactions that may be useful for assessing the relative contributions of ventricular and vascular function to wave travel.
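
    A generic Python sketch of the wave-separation step referred to above, using the water hammer relations dP± = (dP ± ρc·dU)/2 with the local wave speed taken from the slope of the pressure-velocity loop in early systole. The waveforms are synthetic, and the diameter-to-pressure surrogate step, the Aloka acquisition and the quality index of the paper are not reproduced.

      import numpy as np

      rho, fs = 1060.0, 1000.0                       # blood density (kg/m^3), sampling rate (Hz)
      t = np.arange(0.0, 0.8, 1.0 / fs)

      # synthetic carotid pressure (Pa) and velocity (m/s) waveforms (illustrative only)
      P = 1.0e4 + 4.0e3 * np.sin(np.pi * t / 0.8) ** 2
      U = 0.2 + 0.5 * np.sin(np.pi * t / 0.8) ** 2

      dP, dU = np.gradient(P), np.gradient(U)

      # local wave speed from the slope of the P-U loop in early systole (water hammer)
      early = slice(0, 100)
      c = np.polyfit(U[early], P[early], 1)[0] / rho

      # separate net wave intensity into forward (+) and backward (-) running components
      dP_plus, dP_minus = (dP + rho * c * dU) / 2.0, (dP - rho * c * dU) / 2.0
      dU_plus, dU_minus = dP_plus / (rho * c), -dP_minus / (rho * c)
      WI_plus = dP_plus * dU_plus                    # always >= 0
      WI_minus = dP_minus * dU_minus                 # always <= 0

      print(round(c, 2), WI_plus.max(), WI_minus.min())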

  16. An Optimization Method of Time Window Based on Travel Time and Reliability

    Directory of Open Access Journals (Sweden)

    Fengjie Fu

    2015-01-01

    Full Text Available The dynamic change of urban road travel time was analyzed using video image detector data and showed cyclic variation, so the signal cycle length at the upstream intersection was taken as the basic unit of the time window. There was some evidence of bimodality in the actual travel time distributions; therefore, the parameters of a bimodal travel time distribution were estimated using the EM algorithm. The weighted average of the two means was then taken as the travel time estimate, and the Modified Buffer Time Index (MBIT) was used to express travel time variability. Based on the characteristics of travel time change and of the MBIT for different time windows, the time window was optimized dynamically for minimum MBIT, requiring that the travel time change stay below a threshold value and that traffic incidents can be detected in real time. Finally, travel times on Shandong Road in Qingdao were estimated for 10 s, 120 s, optimal, and 480 s time windows, and the comparisons demonstrated that travel time estimation with optimal time windows can exactly and steadily reflect the real-time traffic. This verifies the effectiveness of the optimization method.
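
    A small, self-contained Python sketch of the estimation step described above: fit a two-component Gaussian mixture to (synthetic) travel times with the EM algorithm and take the weighted average of the two means as the travel time estimate. The data, initialisation and iteration count are illustrative assumptions, not the field calibration of the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      # synthetic travel times (s) with the bimodal shape described above
      travel = np.concatenate([rng.normal(60.0, 8.0, 400), rng.normal(110.0, 15.0, 200)])

      def normal_pdf(x, m, s):
          return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

      w = np.array([0.5, 0.5])                                  # mixture weights
      mu = np.array([travel.min(), travel.max()])               # crude initial means
      sigma = np.array([travel.std(), travel.std()])

      for _ in range(200):
          # E-step: responsibilities of each component for each observation
          r = np.stack([w[k] * normal_pdf(travel, mu[k], sigma[k]) for k in range(2)])
          r /= r.sum(axis=0)
          # M-step: update weights, means and standard deviations
          n_k = r.sum(axis=1)
          w = n_k / travel.size
          mu = (r * travel).sum(axis=1) / n_k
          sigma = np.sqrt((r * (travel - mu[:, None]) ** 2).sum(axis=1) / n_k)

      print(mu, sigma, w, w @ mu)                               # last value: weighted-mean estimate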

  17. Reduction Methods for Real-time Simulations in Hybrid Testing

    DEFF Research Database (Denmark)

    Andersen, Sebastian

    2016-01-01

    Hybrid testing constitutes a cost-effective experimental full scale testing method. The method was introduced in the 1960's by Japanese researchers, as an alternative to conventional full scale testing and small scale material testing, such as shake table tests. The principle of the method is to divide a structure into a physical substructure and a numerical substructure, and couple these in a test. If the test is conducted in real-time it is referred to as real-time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960's, both with respect... A test is performed on a glass fibre reinforced polymer composite box girder. The test serves as a pilot test for prospective real-time tests on a wind turbine blade. The Taylor basis is implemented in the test, used to perform the numerical simulations. Despite a number of introduced errors in the real...

  18. Real time simulation method for fast breeder reactors dynamics

    International Nuclear Information System (INIS)

    Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.

    1985-01-01

    Multi-purpose real-time simulator models with suitable plant dynamics were developed; these models can be used not only in training operators but also in designing control systems, operation sequences and many other items which must be studied for the development of new types of reactors. The prototype fast breeder reactor ''Monju'' is taken as an example. Analysis is made of the various factors affecting the accuracy and computational load of its dynamic simulation. A method is presented which determines the optimum number of nodes in distributed systems and the optimum time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real-time dynamics models of fast breeder reactors. (author)

  19. Fault detection of gearbox using time-frequency method

    Science.gov (United States)

    Widodo, A.; Satrijo, Dj.; Prahasto, T.; Haryanto, I.

    2017-04-01

    This research deals with fault detection and diagnosis of a gearbox using its vibration signature. In this work, fault detection and diagnosis are approached with a time-frequency method, and the results are compared with cepstrum analysis. Experimental work has been conducted to acquire vibration signals through a self-designed gearbox test rig. This test rig is able to demonstrate normal and faulty gearbox conditions, i.e., wear and tooth breakage. Three accelerometers were used for vibration signal acquisition from the gearbox, and an optical tachometer was used for shaft rotation speed measurement. The results show that frequency-domain analysis using the fast Fourier transform was less sensitive to the wear and tooth breakage conditions. However, the short-time Fourier transform was able to monitor the faults in the gearbox. The wavelet transform (WT) method also showed good performance in gearbox fault detection using the vibration signal after employing time synchronous averaging (TSA).

  20. A time-domain method to generate artificial time history from a given reference response spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Gang Sik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Song, Oh Seop [Dept. of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2016-06-15

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
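
    For orientation, the Python sketch below shows the classical iterative idea behind spectrum-compatible motions: superpose sinusoids and rescale each amplitude by the ratio of the target spectrum to the computed response spectrum, the latter evaluated with a Newmark average-acceleration SDOF solver. All parameters (target shape, damping, duration) are assumed for illustration; this is not the specific time-domain procedure proposed in the paper, nor does it implement the regulatory checks mentioned there.

      import numpy as np

      def sdof_peak_response(acc, dt, freqs, zeta=0.05):
          """Peak absolute acceleration of damped SDOF oscillators (Newmark, beta=1/4)."""
          peaks = np.zeros(len(freqs))
          for idx, f in enumerate(freqs):
              wn = 2.0 * np.pi * f
              k, c = wn ** 2, 2.0 * zeta * wn                 # per unit mass
              kh = k + 2.0 * c / dt + 4.0 / dt ** 2           # effective stiffness
              u = v = 0.0
              a = -acc[0]
              peak = 0.0
              for ag in acc[1:]:
                  ph = -ag + (4.0 / dt ** 2 + 2.0 * c / dt) * u + (4.0 / dt + c) * v + a
                  un = ph / kh
                  vn = 2.0 * (un - u) / dt - v
                  an = 4.0 * (un - u) / dt ** 2 - 4.0 * v / dt - a
                  u, v, a = un, vn, an
                  peak = max(peak, abs(a + ag))               # absolute acceleration
              peaks[idx] = peak
          return peaks

      freqs = np.linspace(0.5, 15.0, 25)
      target = 2.0 + 4.0 * np.exp(-((freqs - 3.0) / 3.0) ** 2)   # assumed target spectrum (m/s^2)

      dt = 0.005
      t = np.arange(0.0, 10.0, dt)
      rng = np.random.default_rng(4)
      phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
      amp = 0.1 * target.copy()

      for _ in range(8):
          acc = (amp[:, None] * np.cos(2.0 * np.pi * freqs[:, None] * t + phases[:, None])).sum(axis=0)
          computed = sdof_peak_response(acc, dt, freqs)
          amp *= target / computed                             # push the spectrum toward the target

      print(np.round(computed / target, 2))                    # ratios move toward 1 over the iterations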

  1. A time-domain method to generate artificial time history from a given reference response spectrum

    International Nuclear Information System (INIS)

    Shin, Gang Sik; Song, Oh Seop

    2016-01-01

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance

  2. A method for generating high resolution satellite image time series

    Science.gov (United States)

    Guo, Tao

    2014-10-01

    There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications, but it is still a challenge to improve spatial resolution and temporal frequency simultaneously, owing to the technical limits of current satellite observation systems. To this end, many R&D efforts have been ongoing for years and have led to some successes, roughly in two directions. One includes super-resolution, pan-sharpening and similar methods, which can effectively enhance spatial resolution and produce good visual effects but hardly preserve spectral signatures, resulting in limited analytical value. On the other hand, time interpolation is a straightforward way to increase temporal frequency, but in fact it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and a spatial registration is done by introducing an LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of the low resolution time series data, then projected onto the high resolution data plane and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for agriculture, forest and land use investigation

  3. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well known capacity constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set ... of customers. In the VRPTW customers must be serviced within a given time period - a so called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization ... of Jörnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time and capacity constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed...

  4. Evaluation of the filtered leapfrog-trapezoidal time integration method

    International Nuclear Information System (INIS)

    Roache, P.J.; Dietrich, D.E.

    1988-01-01

    An analysis and evaluation are presented for a new time integration method for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed

  5. Novel Verification Method for Timing Optimization Based on DPSO

    Directory of Open Access Journals (Sweden)

    Chuandong Chen

    2018-01-01

    Full Text Available Timing optimization for logic circuits is one of the key steps in logic synthesis. Existing results are mainly based on various intelligence algorithms. Hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms to the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of the mixed polarity Reed-Muller (MPRM) logic circuit. Second, the Design Compiler (DC) algorithm was used to optimize the timing of the same MPRM logic circuit through special settings and constraints. Finally, the timing optimization results of the two algorithms were compared based on MCNC benchmark circuits. The timing optimization results obtained using DPSO are compared with those obtained from DC, and DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths for a number of MCNC benchmark circuits. The proposed verification method directly ascertains whether the intelligence algorithm has a better timing optimization ability than DC.

  6. The RATIO method for time-resolved Laue crystallography

    International Nuclear Information System (INIS)

    Coppens, P.; Pitak, M.; Gembicky, M.; Messerschmidt, M.; Scheins, S.; Benedict, J.; Adachi, S.-I.; Sato, T.; Nozawa, S.; Ichiyanagi, K.; Chollet, M.; Koshihara, S.-Y.

    2009-01-01

    A RATIO method for analysis of intensity changes in time-resolved pump-probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam.

  7. Neutron spectrum measurement using rise-time discrimination method

    International Nuclear Information System (INIS)

    Luo Zhiping; Suzuki, C.; Kosako, T.; Ma Jizeng

    2009-01-01

    The PSD method can be used to measure the fast neutron spectrum in a mixed n/γ field. A set of assemblies for measuring the pulse height distribution of neutrons was built, based on a large-volume NE213 liquid scintillator and standard NIM circuits, using the rise-time discrimination method. The response matrix was then calculated using the Monte Carlo method. The energy calibration of the pulse height distribution was accomplished using a 60Co radioisotope. The neutron spectrum of the mono-energetic accelerator neutron source was obtained by an unfolding process. Suggestions for further improvement of the system are presented at the end. (authors)

  8. Minimum entropy density method for the time series analysis

    Science.gov (United States)

    Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae

    2009-01-01

    The entropy density is an intuitive and powerful concept to study the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, which is defined as the scale in which the uncertainty is minimized, hence the pattern is revealed most. The MEDM is applied to the financial time series of Standard and Poor’s 500 index from February 1983 to April 2006. Then the temporal behavior of structure scale is obtained and analyzed in relation to the information delivery time and efficient market hypothesis.

  9. Formal methods for dependable real-time systems

    Science.gov (United States)

    Rushby, John

    1993-01-01

    The motivation for using formal methods to specify and reason about real time properties is outlined, and approaches that have been proposed and used are sketched. The formal verifications of clock synchronization algorithms show that mechanically supported reasoning about complex real time behavior is feasible. However, there has been a significant increase in the effectiveness of verification systems since those verifications were performed, and it is to be expected that verifications of comparable difficulty will become fairly routine. The current challenge lies in developing perspicuous and economical approaches to the formalization and specification of real time properties.

  10. Method for determining thermal neutron decay times of earth formations

    International Nuclear Information System (INIS)

    Arnold, D.M.

    1976-01-01

    A method is disclosed for measuring the thermal neutron decay time of earth formations in the vicinity of a well borehole. A harmonically intensity modulated source of fast neutrons is used to irradiate the earth formations with fast neutrons at three different intensity modulation frequencies. The tangents of the relative phase angles of the fast neutrons and the resulting thermal neutrons at each of the three frequencies of modulation are measured. First and second approximations to the earth formation thermal neutron decay time are derived from the three tangent measurements. These approximations are then combined to derive a value for the true earth formation thermal neutron decay time
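
    For a single-exponential thermal die-away driven by a harmonically intensity-modulated source, the thermal population lags the source by a phase angle φ with tan φ = ωτ, so each modulation frequency yields an estimate τ ≈ tan φ / ω. The short Python illustration below uses only this idealized relation; the patent's specific way of forming and combining the first and second approximations from the three measurements is not reproduced.

      import numpy as np

      tau_true = 200.0e-6                             # assumed thermal neutron decay time (s)
      freqs = np.array([100.0, 300.0, 1000.0])        # assumed modulation frequencies (Hz)
      omega = 2.0 * np.pi * freqs

      rng = np.random.default_rng(8)
      tan_phi = omega * tau_true * (1.0 + rng.normal(0.0, 0.01, 3))   # 1% measurement noise

      tau_est = tan_phi / omega                       # one estimate per modulation frequency
      print(tau_est, tau_est.mean())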

  11. Cardiovascular responses during deep water running versus shallow water running in school children

    Directory of Open Access Journals (Sweden)

    Anerao Urja M, Shinde Nisha K, Khatri SM

    2014-03-01

    Full Text Available Overview: As school going children, especially adolescents, need a workout routine, it is advisable that the routine be included in the school's class timetable. As a growing number of schools in India provide swimming as one of their recreational activities, school staff often fail to notice the boredom caused by repeating the same activity. Deep as well as shallow water running can be one of the best alternatives to swimming. Hence the present study was conducted to find out the cardiovascular responses in these individuals. Methods: This was a prospective cross-sectional comparative study done in 72 healthy school going students (males), grouped into 2 groups according to the interventions (deep water running and shallow water running). Cardiovascular parameters such as heart rate (HR), oxygen saturation (SpO2), maximal oxygen consumption (VO2max) and rate of perceived exertion (RPE) were assessed. Results: Significant improvements in cardiovascular parameters were seen in both groups, i.e. with both interventions. Conclusion: Deep water running and shallow water running can be used to improve cardiac function in terms of the various outcome measures used in the study.

  12. Adding Timing Requirements to the CODARTS Real-Time Software Design Method

    DEFF Research Database (Denmark)

    Bach, K.R.

    The CODARTS software design method considers how concurrent, distributed and real-time applications can be designed. Although accounting for the important issues of tasks and communication, the method does not provide means for expressing the timeliness of the tasks and communication directly...

  13. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-square method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1 considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit but simpler and has smaller bias than the “multiple shooting” previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least, for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require the a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade off between the need of using a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the unique method whose consistency for deterministically chaotic time series is proved so far theoretically (not only numerically).
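
    A much-simplified Python stand-in for the segment-based estimation idea: on short segments of a noisy logistic-map series, the parameter r enters linearly once x_n(1 - x_n) is treated as the regressor, giving a conditional least-squares estimate per segment. This only illustrates why segmenting helps against the fast loss of memory of the initial condition; it is not the piece-wise ML estimator of the paper, which also fits each segment's initial value.

      import numpy as np

      rng = np.random.default_rng(5)
      r_true, N, noise = 3.8, 5000, 0.002
      x = np.empty(N)
      x[0] = 0.3
      for n in range(N - 1):
          x[n + 1] = r_true * x[n] * (1.0 - x[n])
      y = x + rng.normal(0.0, noise, N)               # observational noise

      seg_len, estimates = 50, []
      for s in range(0, N - seg_len, seg_len):
          xs = y[s:s + seg_len]
          u = xs[:-1] * (1.0 - xs[:-1])               # regressor x_n (1 - x_n)
          v = xs[1:]                                  # response  x_{n+1}
          estimates.append((u @ v) / (u @ u))

      print(np.mean(estimates), r_true)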

  14. Learning to Run with Actor-Critic Ensemble

    OpenAIRE

    Huang, Zhewei; Zhou, Shuchang; Zhuang, BoEr; Zhou, Xinyu

    2017-01-01

    We introduce an Actor-Critic Ensemble(ACE) method for improving the performance of Deep Deterministic Policy Gradient(DDPG) algorithm. At inference time, our method uses a critic ensemble to select the best action from proposals of multiple actors running in parallel. By having a larger candidate set, our method can avoid actions that have fatal consequences, while staying deterministic. Using ACE, we have won the 2nd place in NIPS'17 Learning to Run competition, under the name of "Megvii-hzw...
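
    A minimal Python sketch of the inference-time rule described above: several actors propose actions, an ensemble of critics scores every proposal, and the action with the highest ensemble-mean value is executed. The linear "networks" below are random stand-ins for trained DDPG actors and critics, so only the selection logic is illustrated.

      import numpy as np

      rng = np.random.default_rng(6)
      obs_dim, act_dim, n_actors, n_critics = 8, 3, 5, 4

      actor_w = rng.normal(size=(n_actors, act_dim, obs_dim))      # stand-in actor weights
      critic_w = rng.normal(size=(n_critics, obs_dim + act_dim))   # stand-in critic weights

      def act(obs):
          proposals = np.tanh(actor_w @ obs)                       # (n_actors, act_dim)
          scores = np.array([[w @ np.concatenate([obs, a]) for a in proposals]
                             for w in critic_w])                   # (n_critics, n_actors)
          best = scores.mean(axis=0).argmax()                      # ensemble-mean Q value
          return proposals[best]

      print(act(rng.normal(size=obs_dim)))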

  15. On the solution of high order stable time integration methods

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Blaheta, Radim; Sysala, Stanislav; Ahmad, B.

    2013-01-01

    Roč. 108, č. 1 (2013), s. 1-22 ISSN 1687-2770 Institutional support: RVO:68145535 Keywords : evolution equations * preconditioners for quadratic matrix polynomials * a stiffly stable time integration method Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2013 http://www.boundaryvalueproblems.com/content/2013/1/108

  16. Non-linear shape functions over time in the space-time finite element method

    Directory of Open Access Journals (Sweden)

    Kacprzyk Zbigniew

    2017-01-01

    Full Text Available This work presents a generalisation of the space-time finite element method proposed by Kączkowski in his seminal works of the 1970s and early 1980s. Kączkowski used linear shape functions in time. The recurrence formula obtained by Kączkowski was conditionally stable. In this paper, non-linear shape functions in time are proposed.

  17. Real-Time Pore Pressure Detection: Indicators and Improved Methods

    Directory of Open Access Journals (Sweden)

    Jincai Zhang

    2017-01-01

    Full Text Available High uncertainties may exist in the predrill pore pressure prediction in new prospects and deepwater subsalt wells; therefore, real-time pore pressure detection is highly needed to reduce drilling risks. The methods for pore pressure detection (the resistivity, sonic, and corrected d-exponent methods are improved using the depth-dependent normal compaction equations to adapt to the requirements of the real-time monitoring. A new method is proposed to calculate pore pressure from the connection gas or elevated background gas, which can be used for real-time pore pressure detection. The pore pressure detection using the logging-while-drilling, measurement-while-drilling, and mud logging data is also implemented and evaluated. Abnormal pore pressure indicators from the well logs, mud logs, and wellbore instability events are identified and analyzed to interpret abnormal pore pressures for guiding real-time drilling decisions. The principles for identifying abnormal pressure indicators are proposed to improve real-time pore pressure monitoring.

  18. Real-time earthquake monitoring using a search engine method.

    Science.gov (United States)

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.

  19. Seasonal adjustment methods and real time trend-cycle estimation

    CERN Document Server

    Bee Dagum, Estela

    2016-01-01

    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...

  20. Perfectly matched layer for the time domain finite element method

    International Nuclear Information System (INIS)

    Rylander, Thomas; Jin Jianming

    2004-01-01

    A new perfectly matched layer (PML) formulation for the time domain finite element method is described and tested for Maxwell's equations. In particular, we focus on the time integration scheme which is based on Galerkin's method with a temporally piecewise linear expansion of the electric field. The time stepping scheme is constructed by forming a linear combination of exact and trapezoidal integration applied to the temporal weak form, which reduces to the well-known Newmark scheme in the case without PML. Extensive numerical tests on scattering from infinitely long metal cylinders in two dimensions show good accuracy and no signs of instabilities. For a circular cylinder, the proposed scheme indicates the expected second order convergence toward the analytic solution and gives less than 2% root-mean-square error in the bistatic radar cross section (RCS) for resolutions with more than 10 points per wavelength. An ogival cylinder, which has sharp corners supporting field singularities, shows similar accuracy in the monostatic RCS

  1. Method to implement the CCD timing generator based on FPGA

    Science.gov (United States)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on an FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft core processor which is the controller of this generator. Some test results are presented at the end.

  2. Electron run-away

    International Nuclear Information System (INIS)

    Levinson, I.B.

    1975-01-01

    The run-away effect of electrons for Coulomb scattering has been studied by Dreicer, but the question has not yet been studied for other scattering mechanisms. Meanwhile, if the scattering is quasielastic, a general criterion for run-away may be formulated; in this case the influence of run-away on the distribution function may also be studied in a somewhat general and qualitative manner. (Auth.)

  3. Explicit time marching methods for the time-dependent Euler computations

    International Nuclear Information System (INIS)

    Tai, C.H.; Chiang, D.C.; Su, Y.P.

    1997-01-01

    Four explicit-type time marching methods, including one proposed by the authors, are examined. The TVD conditions of this method are analyzed with the linear conservation law as the model equation. The performance of these methods when applied to the Euler equations is numerically tested. Seven examples are tested; the main concern is the performance of the methods when discontinuities of different strengths are encountered. When the discontinuity gets stronger, spurious oscillations show up for three of the existing methods, while the method proposed by the authors always gives satisfactory results. The effect of the limiter is also investigated. To put these methods on the same basis for the comparison, the same spatial discretization is used. Roe's solver is used to evaluate the fluxes at the cell interface; spatially second-order accuracy is achieved by MUSCL reconstruction. 19 refs., 8 figs

  4. A Multivariate Time Series Method for Monte Carlo Reactor Analysis

    International Nuclear Information System (INIS)

    Taro Ueki

    2008-01-01

    A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed Coarse Mesh Projection Method (CMPM) and can be implemented using the coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three space-dimensional modeling of the initial core of a pressurized water reactor

  5. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

    Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical

  6. The design of the run Clever randomized trial

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik

    2016-01-01

    BACKGROUND: Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need...... evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries show conflicting results. This may be related to previously applied study designs, methods used to quantify the performed running...... and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate if a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. METHODS/DESIGN: The Run Clever trial is a randomized trial with a 24-week...

  7. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that the comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow for beating the best performed methods, which use the similar force splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization to the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water and a protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improving stability of MTS and allow for achieving larger step sizes in the simulation of complex systems.

  8. Single photon imaging and timing array sensor apparatus and method

    Science.gov (United States)

    Smith, R. Clayton

    2003-06-24

    An apparatus and method are disclosed for generating a three-dimensional image of an object or target. The apparatus comprises a photon source for emitting a photon at a target. The emitted photons are received by a photon receiver when reflected from the target. The photon receiver determines the reflection time of the photon and further determines the arrival position of the photon on the photon receiver. An analyzer is communicatively coupled to the photon receiver, and the analyzer generates a three-dimensional image of the object based upon the reflection time and the arrival position.

  9. Time series analysis methods and applications for flight data

    CERN Document Server

    Zhang, Jianye

    2017-01-01

    This book focuses on different facets of flight data analysis, including the basic goals, methods, and implementation techniques. As mass flight data possesses the typical characteristics of time series, the time series analysis methods and their application for flight data have been illustrated from several aspects, such as data filtering, data extension, feature optimization, similarity search, trend monitoring, fault diagnosis, and parameter prediction, etc. An intelligent information-processing platform for flight data has been established to assist in aircraft condition monitoring, training evaluation and scientific maintenance. The book will serve as a reference resource for people working in aviation management and maintenance, as well as researchers and engineers in the fields of data analysis and data mining.

  10. Which DTW Method Applied to Marine Univariate Time Series Imputation

    OpenAIRE

    Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André

    2017-01-01

    Missing data are ubiquitous in all domains of the applied sciences. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Therefore, the aim of this paper is to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
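
    For reference, a plain Python/NumPy implementation of the dynamic time warping (DTW) distance that similarity-based imputation frameworks of this kind typically build on; the window constraints, the other similarity metrics and the imputation logic compared in the paper are not reproduced here.

      import numpy as np

      def dtw_distance(a, b):
          """Dynamic time warping distance between two 1-D series, O(len(a)*len(b))."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      t = np.linspace(0.0, 2.0 * np.pi, 80)
      query = np.sin(t)
      candidate = np.sin(t[::2] + 0.3)                # shorter, shifted series
      print(dtw_distance(query, candidate))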

  11. Method of parallel processing in SANPO real time system

    International Nuclear Information System (INIS)

    Ostrovnoj, A.I.; Salamatin, I.M.

    1981-01-01

    A method of parallel processing in the SANPO real time system is described. Algorithms for data accumulation and preliminary processing in this system, implemented as parallel processes using a specialized high-level programming language, are described. The hierarchy of elementary processes is also described. It provides the synchronization of concurrent processes without semaphores. The developed means are applied to systems of experiment automation using SM-3 minicomputers [ru

  12. Overcoming the "Run" Response

    Science.gov (United States)

    Swanson, Patricia E.

    2013-01-01

    Recent research suggests that it is not simply experiencing anxiety that affects mathematics performance but also how one responds to and regulates that anxiety (Lyons and Beilock 2011). Most people have faced mathematics problems that have triggered their "run response." The issue is not whether one wants to run, but rather…

  13. Overuse injuries in running

    DEFF Research Database (Denmark)

    Larsen, Lars Henrik; Rasmussen, Sten; Jørgensen, Jens Erik

    2016-01-01

    What is an overuse injury in running? This question is a cornerstone of clinical documentation and research-based evidence.

  14. PRECIS Runs at IITM

    Indian Academy of Sciences (India)

    PRECIS Runs at IITM. Evaluation experiment using LBCs derived from ERA-15 (1979-93). Runs (3 ensembles in each experiment) already completed with LBCs having a length of 30 years each, for: Baseline (1961-90); A2 scenario (2071-2100); B2 scenario ...

  15. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  16. Normalization methods in time series of platelet function assays

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized discussing the most suited approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts as was the case when using all data as 1 dataset for normalization. PMID:27428217
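
    A small Python sketch of the three normalizations named above: the z-transformation, the range transformation, and an interquartile-range scaling (here assumed to be centred on the median). Normalizing per assay across time points versus per time point across assays then only changes which axis the functions are applied along.

      import numpy as np

      def z_transform(x):
          return (x - x.mean()) / x.std(ddof=1)

      def range_transform(x):
          return (x - x.min()) / (x.max() - x.min())

      def iqr_transform(x):
          q1, q2, q3 = np.percentile(x, [25, 50, 75])
          return (x - q2) / (q3 - q1)

      # an assumed short series of repeated platelet-function measurements (arbitrary units)
      series = np.array([62.0, 58.0, 71.0, 66.0, 80.0, 74.0, 69.0])
      for f in (z_transform, range_transform, iqr_transform):
          print(f.__name__, np.round(f(series), 2))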

  17. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  18. Iterative Refinement Methods for Time-Domain Equalizer Design

    Directory of Open Access Journals (Sweden)

    Evans Brian L

    2006-01-01

    Full Text Available Commonly used time domain equalizer (TEQ) design methods have been recently unified as an optimization problem involving an objective function in the form of a Rayleigh quotient. The direct generalized eigenvalue solution relies on matrix decompositions. To reduce implementation complexity, we propose an iterative refinement approach in which the TEQ length starts at two taps and increases by one tap at each iteration. Each iteration involves matrix-vector multiplications and vector additions with matrices and two-element vectors. At each iteration, the optimization of the objective function either improves or the approach terminates. The iterative refinement approach provides a range of communication performance versus implementation complexity tradeoffs for any TEQ method that fits the Rayleigh quotient framework. We apply the proposed approach to three such TEQ design methods: maximum shortening signal-to-noise ratio, minimum intersymbol interference, and minimum delay spread.
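
    A brief Python illustration of the Rayleigh-quotient framework mentioned above: for each TEQ length, the optimum of w^T A w / w^T B w is the largest generalized eigenvalue of the pair (A, B), and it can only improve as taps are added. The matrices below are random symmetric positive-definite stand-ins, and the exact eigensolver shown is precisely the matrix-decomposition step that the paper's iterative refinement is designed to avoid.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(9)

      def random_spd(n):
          m = rng.normal(size=(n, n))
          return m @ m.T + n * np.eye(n)

      n_taps_max = 8
      A_full = random_spd(n_taps_max)          # "signal"-type matrix of the Rayleigh quotient
      B_full = random_spd(n_taps_max)          # "interference"-type matrix

      best = []
      for n in range(2, n_taps_max + 1):
          A, B = A_full[:n, :n], B_full[:n, :n]
          vals, _ = eigh(A, B)                 # generalized symmetric-definite eigenproblem
          best.append(vals[-1])                # max of w^T A w / w^T B w over n-tap equalizers

      print(np.round(best, 3))                 # non-decreasing as the tap count grows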

  19. Efficient methods for time-absorption (α) eigenvalue calculations

    International Nuclear Information System (INIS)

    Hill, T.R.

    1983-01-01

    The time-absorption eigenvalue (α) calculation is one of the options found in most discrete-ordinates transport codes. Several methods have been developed at Los Alamos to improve the efficiency of this calculation. Two procedures, based on coarse-mesh rebalance, are derived to accelerate the α eigenvalue search. A hybrid scheme that automatically chooses the more effective rebalance method is described. The α rebalance scheme permits some simple modifications to the iteration strategy that eliminate many unnecessary calculations required in the standard search procedure. For several fast supercritical test problems, these methods resulted in convergence with one-fifth the number of iterations required by the conventional eigenvalue search procedure.

  20. Formal methods for discrete-time dynamical systems

    CERN Document Server

    Belta, Calin; Aydin Gol, Ebru

    2017-01-01

    This book bridges fundamental gaps between control theory and formal methods. Although it focuses on discrete-time linear and piecewise affine systems, it also provides general frameworks for abstraction, analysis, and control of more general models. The book is self-contained, and while some mathematical knowledge is necessary, readers are not expected to have a background in formal methods or control theory. It rigorously defines concepts from formal methods, such as transition systems, temporal logics, model checking and synthesis. It then links these to the infinite state dynamical systems through abstractions that are intuitive and only require basic convex-analysis and control-theory terminology, which is provided in the appendix. Several examples and illustrations help readers understand and visualize the concepts introduced throughout the book.

  1. BOX-COX REGRESSION METHOD IN TIME SCALING

    Directory of Open Access Journals (Sweden)

    ATİLLA GÖKTAŞ

    2013-06-01

    Full Text Available The Box-Cox regression method with power transformation λj, for j = 1, 2, ..., k, can be used when the dependent variable and the error term of the linear regression model do not satisfy the continuity and normality assumptions. The situation in which the smallest mean square error is obtained with the optimum power transformation λj, for j = 1, 2, ..., k, of Y is discussed. The Box-Cox regression method is especially appropriate for adjusting existing skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of using the Box-Cox regression method are discussed for differentiation and differential analysis of the time scale concept.
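
    A minimal sketch of selecting the power λ by smallest mean square error is given below, assuming a single response and one explanatory variable. It uses the scaled (normalized) Box-Cox transform so residual sums stay comparable across λ; the data are synthetic, and in practice scipy.stats.boxcox can also select λ by maximum likelihood.

      # Sketch: pick the Box-Cox power lambda for y that minimizes regression MSE.
      # The scaled transform (division by gm**(lam-1)) keeps residuals comparable
      # across lambda values; the data are synthetic.
      import numpy as np

      def boxcox_scaled(y, lam, gm):
          if abs(lam) < 1e-12:
              return gm * np.log(y)
          return (y**lam - 1.0) / (lam * gm**(lam - 1.0))

      rng = np.random.default_rng(2)
      x = np.linspace(1.0, 10.0, 200)
      y = np.exp(0.3 * x + rng.normal(scale=0.1, size=x.size))   # skewed response
      gm = np.exp(np.mean(np.log(y)))                            # geometric mean of y
      X = np.column_stack([np.ones_like(x), x])

      best_lam, best_mse = None, np.inf
      for lam in np.linspace(-2.0, 2.0, 81):
          yt = boxcox_scaled(y, lam, gm)
          beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
          mse = np.mean((yt - X @ beta) ** 2)
          if mse < best_mse:
              best_lam, best_mse = lam, mse
      print("lambda with smallest mean square error:", round(best_lam, 2))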

  2. The method of covariant symbols in curved space-time

    International Nuclear Information System (INIS)

    Salcedo, L.L.

    2007-01-01

    Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)

  3. The Application of Time-Frequency Methods to HUMS

    Science.gov (United States)

    Pryor, Anna H.; Mosher, Marianne; Lewicki, David G.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper reports the study of four time-frequency transforms applied to vibration signals and presents a new metric for comparing them for fault detection. The four methods to be described and compared are the Short Time Frequency Transform (STFT), the Choi-Williams Distribution (WV-CW), the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT). Vibration data of bevel gear tooth fatigue cracks, under a variety of operating load levels, are analyzed using these methods. The new metric for automatic fault detection is developed and can be produced from any systematic numerical representation of the vibration signals. This new metric reveals indications of gear damage with all of the methods on this data set. Analysis with the CWT detects mechanical problems with the test rig not found with the other transforms. The WV-CW and CWT use considerably more resources than the STFT and the DWT. More testing of the new metric is needed to determine its value for automatic fault detection and to develop methods of setting the threshold for the metric.
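
    For illustration only, the sketch below applies two of the representations mentioned above, an STFT (via SciPy) and a discrete wavelet decomposition (via the PyWavelets package, assumed to be installed), to a synthetic vibration signal with an injected impulsive fault. The per-frame energy score is a simple stand-in and is not the metric proposed in the paper.

      # Sketch: STFT and DWT of a synthetic vibration signal with a short fault.
      import numpy as np
      import pywt
      from scipy import signal

      fs = 10_000.0
      t = np.arange(0.0, 1.0, 1.0 / fs)
      vib = np.sin(2 * np.pi * 120 * t)        # gear-mesh-like tone
      vib[5000:5020] += 2.0                    # short impulsive damage signature

      f, frames, Zxx = signal.stft(vib, fs=fs, nperseg=256)   # STFT
      coeffs = pywt.wavedec(vib, "db4", level=4)              # DWT coefficients

      frame_energy = np.sum(np.abs(Zxx) ** 2, axis=0)         # crude per-frame score
      print("most energetic STFT frame at t =",
            round(float(frames[np.argmax(frame_energy)]), 3), "s")
      print("DWT band energies:", [round(float(np.sum(c ** 2)), 1) for c in coeffs])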

  4. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this context, FPGA-based TIGs with a high delay step are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced in particular. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of up to 11 ps and an interval range of up to 8 s.

  5. Design of time interval generator based on hybrid counting method

    International Nuclear Information System (INIS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-01-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this context, FPGA-based TIGs with a high delay step are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced in particular. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of up to 11 ps and an interval range of up to 8 s.

  6. A method for investigating relative timing information on phylogenetic trees.

    Science.gov (United States)

    Ford, Daniel; Matsen, Frederick A; Stadler, Tanja

    2009-04-01

    In this paper, we present a new way to describe the timing of branching events in phylogenetic trees. Our description is in terms of the relative timing of diversification events between sister clades; as such it is complementary to existing methods using lineages-through-time plots which consider diversification in aggregate. The method can be applied to look for evidence of diversification happening in lineage-specific "bursts", or the opposite, where diversification between 2 clades happens in an unusually regular fashion. In order to be able to distinguish interesting events from stochasticity, we discuss 2 classes of neutral models on trees with relative timing information and develop a statistical framework for testing these models. These model classes include both the coalescent with ancestral population size variation and global rate speciation-extinction models. We end the paper with 2 example applications: first, we show that the evolution of the hepatitis C virus deviates from the coalescent with arbitrary population size. Second, we analyze a large tree of ants, demonstrating that a period of elevated diversification rates does not appear to have occurred in a bursting manner.

  7. The time-dependent density matrix renormalisation group method

    Science.gov (United States)

    Ma, Haibo; Luo, Zhen; Yao, Yao

    2018-04-01

    The substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method over the past 15 years is reviewed in this paper. By integrating the time evolution with the sweep procedures of the density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of the quantum dynamics of one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates, as well as internal conversion in the pyrazine molecule.

  8. Of faeces and sweat. How much a mouse is willing to run: having a hard time measuring spontaneous physical activity in different mouse sub-strains

    Directory of Open Access Journals (Sweden)

    Dario Coletti

    2017-03-01

    Full Text Available Physical activity has multiple beneficial effects in the physiology and pathology of the organism. In particular, we and other groups have shown that running counteracts cancer cachexia in both humans and rodents. The latter are prone to exercise in wheel-equipped cages even at advanced stages of cachexia. However, when we wanted to replicate the experimental model routinely used at the University of Rome in a different laboratory (i.e. at Paris 6 University, we had to struggle with puzzling results due to unpredicted mouse behavior. Here we report the experience and offer the explanation underlying these apparently irreproducible results. The original data are currently used for teaching purposes in undergraduate student classes of biological sciences.

  9. IMPLEMENTING FISCAL OR MONETARY POLICY IN TIME OF CRISIS? RUNNING GRANGER CAUSALITY TO TEST THE PHILLIPS CURVE IN SOME EURO ZONE COUNTRIES

    Directory of Open Access Journals (Sweden)

    Nico Gianluigi

    2014-12-01

    Full Text Available This paper aims to provide empirical evidence about the theoretical relationship between inflation and unemployment in 9 European countries. Based on two major goals for economic policymakers, namely keeping both inflation and unemployment low, we use the ingredients of the Phillips curve to orient fiscal and monetary policies. These policies are a prerogative for achieving a desirable combination of unemployment and inflation. In more detail, we attempt to address two basic issues. One strand of the study examines the size and sign of the impact of the unemployment rate on percentage changes in inflation. In our preferred econometric model, we make explicit the evidence according to which a one-unit (%) increase in unemployment reduces inflation by roughly 0.73 percent, on average. Next, we turn to the question of the causal link between inflation and unemployment and derive a policy framework to orient European policymakers in the implementation of either fiscal or monetary policy. In this context, by means of the Granger causality test, we mainly find evidence of a directional causality running from inflation to unemployment in 4 out of the 9 European countries under analysis. This result implies that the political authorities of Austria, Belgium, Germany and Italy should implement monetary policy in order to achieve pre-established targets of unemployment and inflation. In the same context, a directional causality running from unemployment to inflation has been found in France and Cyprus, suggesting that a reduction in the unemployment level can be achieved through controlling fiscal policy. However, succeeding in this goal may lead to an increasing demand for goods and services which, in turn, might cause higher inflation than expected. Finally, while there is no statistical evidence of a causal link between unemployment and inflation in Finland and Greece, a bidirectional causality has been found in Estonia. This
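
    As a hedged illustration of the testing step described above, the sketch below runs a Granger causality test between two synthetic stand-in series with statsmodels; the data, lag order and country interpretation are purely illustrative.

      # Sketch: Granger causality between synthetic stand-in series for inflation
      # and unemployment. statsmodels tests whether the SECOND column Granger-
      # causes the FIRST; all numbers are made up.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(3)
      n = 200
      inflation = 2.0 + 0.1 * np.cumsum(rng.normal(size=n))
      unemployment = 8.0 - 0.5 * np.roll(inflation, 2) + rng.normal(scale=0.2, size=n)

      df = pd.DataFrame({"unemployment": unemployment, "inflation": inflation})
      res = grangercausalitytests(df[["unemployment", "inflation"]], maxlag=4)
      p_values = {lag: round(res[lag][0]["ssr_ftest"][1], 4) for lag in res}
      print("p-values, H0: inflation does not Granger-cause unemployment:", p_values)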

  10. Advances in Time Estimation Methods for Molecular Data.

    Science.gov (United States)

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

    Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data

  11. Seismic assessment of a site using the time series method

    International Nuclear Information System (INIS)

    Krutzik, N.J.; Rotaru, I.; Bobei, M.; Mingiuc, C.; Serban, V.; Androne, M.

    1997-01-01

    To increase the safety of an NPP located on a seismic site, the seismic acceleration level to which the NPP should be qualified must be as representative as possible for that site, with a conservative but not exaggerated degree of safety. Treating the seismic events affecting the site as independent events and using statistical methods to define safety levels with a very low annual occurrence probability (10⁻⁴) may lead to some exaggeration of the seismic safety level. The use of very high values for the seismic acceleration imposed by the safety levels required by the hazard analysis may lead to very costly technical solutions that can make plant operation more difficult and increase maintenance costs. Considering seismic events as a time series with dependence among the events may lead to a more representative assessment of a NPP site's seismic activity and consequently to a prognosis of the seismic levels against which the NPP should be ensured throughout its life-span. That prognosis should consider the actual seismic activity (including small earthquakes in real time) of the focuses that affect the plant site. The paper proposes the application of autoregressive time series to issue a prognosis on the seismic activity of a focus and presents the analysis of the Vrancea focus, which affects the NPP Cernavoda site, by this method. The paper also presents the manner of analysing the focus activity under the new approach and assesses the maximum seismic acceleration that may affect NPP Cernavoda throughout its life-span (∼ 30 years). Development and application of new mathematical analysis methods, for both long and short time intervals, may lead to important contributions to forecasting future seismic events. (authors)
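
    The sketch below gives a rough flavour of such an autoregressive prognosis: an AR model is fitted to a synthetic yearly peak-acceleration series and extrapolated over a roughly 30-year horizon. The series, lag order and units are invented and do not represent the Vrancea catalogue or the authors' procedure.

      # Sketch: autoregressive prognosis of a synthetic yearly peak-acceleration
      # series; all numbers are illustrative.
      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      rng = np.random.default_rng(4)
      years = 60
      peak_acc = (0.05 + 0.02 * np.abs(np.sin(0.3 * np.arange(years)))
                  + rng.gamma(shape=2.0, scale=0.01, size=years))   # in g, illustrative

      fit = AutoReg(peak_acc, lags=3).fit()
      prognosis = fit.predict(start=years, end=years + 29)          # next 30 "years"
      print("largest forecast peak acceleration: %.3f g" % prognosis.max())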

  12. A Novel Time-Varying Friction Compensation Method for Servomechanism

    Directory of Open Access Journals (Sweden)

    Bin Feng

    2015-01-01

    Full Text Available Friction is an inevitable nonlinear phenomenon in servomechanisms. Friction errors often affect their motion and contour accuracies during reverse motion. To reduce friction errors, a novel time-varying friction compensation method is proposed to solve a problem that traditional friction compensation methods hardly deal with: unsatisfactory compensation performance in which the motion and contour accuracies cannot be maintained effectively. In this method, a trapezoidal compensation pulse is adopted to compensate for the friction errors. A generalized regression neural network algorithm is used to generate the optimal pulse amplitude function. The optimal pulse duration function and the pulse amplitude function can be established by learning the pulse characteristic parameters, and then the optimal friction compensation pulse can be generated. The feasibility of the friction compensation method was verified on a high-precision X-Y worktable. The experimental results indicated that the motion and contour accuracies were improved greatly, with a reduction of the friction errors, under different working conditions. Moreover, the overall friction compensation performance indicators were decreased by more than 54%, and this friction compensation method can be implemented easily on most servomechanisms in industry.

  13. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method that is termed as Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with shapelet transform and fast shapelet transform (which constitutes an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has a potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
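
    The following minimal sketch shows the general shape of a window-based feature extraction pipeline: slide a window over each series, compute a few summary features per window, and train a classifier on the resulting vectors. It uses plain summary statistics rather than the WTC similarity score, and the data and window sizes are made up.

      # Minimal window-based feature extraction for time series classification.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def window_features(series, width, step):
          feats = []
          for start in range(0, len(series) - width + 1, step):
              w = series[start:start + width]
              slope = np.polyfit(np.arange(width), w, 1)[0]
              feats.extend([w.mean(), w.std(), slope])
          return np.array(feats)

      rng = np.random.default_rng(5)
      n, length = 120, 300
      X_raw = rng.normal(size=(n, length))
      y = rng.integers(0, 2, size=n)
      X_raw[y == 1, 100:140] += np.linspace(0, 2, 40)   # class-specific pattern

      X = np.array([window_features(s, width=50, step=25) for s in X_raw])
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
      print("training accuracy:", clf.score(X, y))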

  14. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    Science.gov (United States)

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
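
    A compact way to realise the EXPM idea is Van Loan's block-matrix construction, in which the required integral of matrix exponentials appears in the upper-right block of a single matrix exponential. The sketch below computes the expected time spent in one state, conditional on the end-points of the chain; the 4-state rate matrix is illustrative and this is not the implementation distributed by the authors.

      # Sketch of the EXPM idea: E[time in state c on [0, T] | X_0 = a, X_T = b]
      # via Van Loan's block-matrix trick for the integral of matrix exponentials.
      import numpy as np
      from scipy.linalg import expm

      Q = np.array([[-3.0, 1.0, 1.0, 1.0],
                    [ 1.0,-3.0, 1.0, 1.0],
                    [ 1.0, 1.0,-3.0, 1.0],
                    [ 1.0, 1.0, 1.0,-3.0]])   # illustrative rate matrix
      T, a, b, c = 0.5, 0, 2, 1                # interval, start, end, state of interest
      n = Q.shape[0]

      E = np.zeros((n, n)); E[c, c] = 1.0      # "reward" matrix selecting time in c
      A = np.block([[Q, E], [np.zeros((n, n)), Q]])
      M = expm(A * T)
      integral = M[:n, n:]                     # = ∫_0^T expm(Qs) E expm(Q(T-s)) ds

      P = expm(Q * T)
      expected_time_in_c = integral[a, b] / P[a, b]
      print("E[time in state %d | X_0=%d, X_T=%d] = %.4f" % (c, a, b, expected_time_in_c))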

  15. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    Directory of Open Access Journals (Sweden)

    Tataru Paula

    2011-12-01

    Full Text Available Abstract Background Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. Results We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.

  16. Creep behavior of bone cement: a method for time extrapolation using time-temperature equivalence.

    Science.gov (United States)

    Morgan, R L; Farrar, D F; Rose, J; Forster, H; Morgan, I

    2003-04-01

    The clinical lifetime of poly(methyl methacrylate) (PMMA) bone cement is considerably longer than the time over which it is convenient to perform creep testing. Consequently, it is desirable to be able to predict the long term creep behavior of bone cement from the results of short term testing. A simple method is described for prediction of long term creep using the principle of time-temperature equivalence in polymers. The use of the method is illustrated using a commercial acrylic bone cement. A creep strain of approximately 0.6% is predicted after 400 days under a constant flexural stress of 2 MPa. The temperature range and stress levels over which it is appropriate to perform testing are described. Finally, the effects of physical aging on the accuracy of the method are discussed and creep data from aged cement are reported.
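
    The sketch below illustrates the general time-temperature superposition idea, assuming an Arrhenius shift factor and a toy power-law creep curve: short tests at elevated temperature are mapped to much longer equivalent times at the reference temperature. The activation energy, temperatures and creep law are invented numbers, not the paper's data.

      # Illustrative time-temperature superposition with an Arrhenius shift factor.
      import numpy as np

      R = 8.314          # gas constant, J/(mol K)
      Ea = 120e3         # activation energy, illustrative
      T_ref = 310.0      # reference temperature ~37 C, in K

      def log10_shift(T):
          # log10 a_T; negative for T > T_ref, so high-T data maps to longer times
          return (Ea / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

      def creep_strain(t_hours, T):
          # toy power-law master curve evaluated at the reduced time t / a_T
          t_reduced = t_hours * 10.0 ** (-log10_shift(T))
          return 0.001 * t_reduced ** 0.2

      t_test = np.logspace(0, 2, 20)                     # 1 to 100 h of actual testing
      for T in (310.0, 330.0, 350.0):
          t_equiv = t_test * 10.0 ** (-log10_shift(T))   # equivalent time at T_ref
          strain = creep_strain(t_test, T)
          print(f"T = {T:.0f} K: 100 h test spans up to {t_equiv.max():,.0f} h "
                f"at T_ref, final strain {strain[-1] * 100:.2f}%")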

  17. FREEZING AND THAWING TIME PREDICTION METHODS OF FOODS II: NUMERICAL METHODS

    Directory of Open Access Journals (Sweden)

    Yahya TÜLEK

    1999-03-01

    Full Text Available Freezing is one of the most effective methods for the preservation of foods. If the freezing and thawing processes and the frozen storage method are carried out correctly, the original characteristics of the foods can remain almost unchanged over extended periods of time. It is very important to determine the freezing and thawing times of foods, as they strongly influence both the quality of the food material and the productivity and economics of the process. To develop a simple and effectively usable mathematical model, as few process parameters and physical properties as possible should enter the calculations, but it is difficult to achieve all of this in a single prediction method. For this reason, various freezing and thawing time prediction methods have been proposed in the literature, and research is ongoing.

  18. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, and first-order and second-order analytical theories, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
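
    To convey the hybrid idea in miniature, the sketch below corrects a deliberately incomplete "analytical" approximation of a periodic signal by forecasting its residual with an additive Holt-Winters model from statsmodels; the toy signal stands in for an orbital-element time series and none of the settings come from the paper.

      # Sketch of the hybrid-propagator idea: a crude approximation is corrected by
      # forecasting its residual with an additive Holt-Winters model.
      import numpy as np
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      rng = np.random.default_rng(6)
      t = np.arange(400)
      truth = 10 * np.sin(2 * np.pi * t / 50) + 0.5 * np.sin(2 * np.pi * t / 10)
      coarse = 10 * np.sin(2 * np.pi * t / 50)        # "low-order theory" misses one term
      residual = truth - coarse + rng.normal(scale=0.05, size=t.size)

      hw = ExponentialSmoothing(residual[:300], trend="add", seasonal="add",
                                seasonal_periods=10).fit()
      correction = np.asarray(hw.forecast(100))

      hybrid = coarse[300:] + correction
      err_coarse = np.abs(truth[300:] - coarse[300:]).max()
      err_hybrid = np.abs(truth[300:] - hybrid).max()
      print(f"max error: coarse {err_coarse:.3f}, hybrid {err_hybrid:.3f}")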

  19. Full Waveform Inversion Using Oriented Time Migration Method

    KAUST Repository

    Zhang, Zhendong

    2016-04-12

    Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge into what we refer to as local minima of the objective function. In this thesis, I first look into the subject of full model wavenumber to analyse the root of the local-minima problem and suggest possible ways to avoid it. I then analyse the possibility of recovering the corresponding wavenumber components through existing inversion and migration algorithms. Migration can be taken as a generalized inversion method which mainly retrieves the high-wavenumber part of the model. The conventional impedance inversion method gives a mapping relationship between the migration image (high wavenumber) and the model parameters (full wavenumber) and thus provides a possible cascaded inversion strategy to retrieve the full wavenumber components from seismic data. In the proposed approach, considering a mild lateral variation in the model, I find an analytical Fréchet derivative corresponding to the new objective function. In the proposed approach, the gradient is given by the oriented time-domain imaging method, which is independent of the background velocity. Specifically, I apply oriented time-domain imaging (which depends on the reflection slope instead of a background velocity) to the data residual to obtain the geometrical features of the velocity perturbation. Assuming that density is constant, the conventional 1D impedance inversion method is also applicable for 2D or 3D velocity inversion within the process of FWI. This method is not only capable of inverting for velocity, but it is also capable of retrieving anisotropic parameters, relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, I

  20. Time-Frequency Methods for Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    Alexander L. Pyayt

    2014-03-01

    Full Text Available Detection of early warning signals for the imminent failure of large and complex engineered structures is a daunting challenge with many open research questions. In this paper we report on novel ways to perform Structural Health Monitoring (SHM of flood protection systems (levees, earthen dikes and concrete dams using sensor data. We present a robust data-driven anomaly detection method that combines time-frequency feature extraction, using wavelet analysis and phase shift, with one-sided classification techniques to identify the onset of failure anomalies in real-time sensor measurements. The methodology has been successfully tested at three operational levees. We detected a dam leakage in the retaining dam (Germany and “strange” behaviour of sensors installed in a Boston levee (UK and a Rhine levee (Germany.
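
    A rough sketch of the pipeline described above, combining wavelet-based features with one-sided classification, is given below. It assumes the PyWavelets and scikit-learn packages, uses a one-class SVM as the one-sided classifier, and synthetic sensor windows; it omits the phase-shift feature used in the paper.

      # Sketch: wavelet detail energies of sensor windows fed to a one-class SVM
      # trained on "normal" behaviour only. Data are synthetic.
      import numpy as np
      import pywt
      from sklearn.svm import OneClassSVM

      def wavelet_energies(window, wavelet="db4", level=4):
          coeffs = pywt.wavedec(window, wavelet, level=level)
          return np.array([np.sum(c ** 2) for c in coeffs])

      rng = np.random.default_rng(7)
      normal = [rng.normal(scale=1.0, size=256) for _ in range(200)]
      faulty = [rng.normal(scale=1.0, size=256) + 0.8 * np.sin(np.arange(256) / 3)
                for _ in range(20)]                   # leakage-like slow oscillation

      X_train = np.array([wavelet_energies(w) for w in normal])
      clf = OneClassSVM(nu=0.05, gamma="scale").fit(X_train)

      X_test = np.array([wavelet_energies(w) for w in normal[:20] + faulty])
      print("predictions (+1 normal, -1 anomaly):", clf.predict(X_test))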

  1. Seismic assessment of a site using the time series method

    International Nuclear Information System (INIS)

    Krutzik, N.J.; Rotaru, I.; Bobei, M.; Mingiuc, C.; Serban, V.; Androne, M.

    2001-01-01

    1. To increase the safety of an NPP located on a seismic site, the seismic acceleration level to which the NPP should be qualified must be as representative as possible for that site, with a conservative but not exaggerated degree of safety. 2. Treating the seismic events affecting the site as independent events and using statistical methods to define safety levels with very low annual occurrence probabilities (10⁻⁴) may lead to some exaggeration of the seismic safety level. 3. The use of very high values for the seismic accelerations imposed by the safety levels required by the hazard analysis may lead to very expensive technical solutions that can make plant operation more difficult and increase maintenance costs. 4. Considering seismic events as a time series with dependence among the events may lead to a more representative assessment of a NPP site's seismic activity and consequently to a prognosis of the seismic levels against which the NPP should be ensured throughout its life-span. That prognosis should consider the actual seismic activity (including small earthquakes in real time) of the focuses that affect the plant site. The method is useful for two purposes: a) research, i.e. homogenizing the historical data basis by generating earthquakes for periods lacking information and correlating this with the existing information; the aim is to perform the hazard analysis using a homogeneous data set in order to determine the seismic design data for a site; b) operation, i.e. the performance of a prognosis of the seismic activity at a certain site and the consideration of preventive measures to minimize the possible effects of an earthquake. 5. The paper proposes the application of autoregressive time series to issue a prognosis on the seismic activity of a focus and presents the analysis of the Vrancea focus, which affects the Cernavoda NPP site, by this method. 6. The paper also presents the

  2. Recommender engine for continuous-time quantum Monte Carlo methods

    Science.gov (United States)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  3. Excessive Progression in Weekly Running Distance and Risk of Running-related Injuries

    DEFF Research Database (Denmark)

    Nielsen, R.O.; Parner, Erik Thorlund; Nohr, Ellen Aagaard

    2014-01-01

    Study Design An explorative, 1-year prospective cohort study. Objective To examine whether an association between a sudden change in weekly running distance and running-related injury varies according to injury type. Background It is widely accepted that a sudden increase in running distance...... is strongly related to injury in runners. But the scientific knowledge supporting this assumption is limited. Methods A volunteer sample of 874 healthy novice runners who started a self-structured running regimen were provided a global-positioning-system watch. After each running session during the study...... period, participants were categorized into 1 of the following exposure groups, based on the progression of their weekly running distance: less than 10% or regression, 10% to 30%, or more than 30%. The primary outcome was running-related injury. Results A total of 202 runners sustained a running...

  4. Set up and programming of an ALICE Time-Of-Flight trigger facility and software implementation for its Quality Assurance (QA) during LHC Run 2

    CERN Document Server

    Toschi, Francesco

    2016-01-01

    The Cosmic and Topology Trigger Module (CTTM) is the main component of a trigger based on the ALICE TOF detector. Taking advantage of the TOF fast response, this VME board implements the trigger logic and delivers several L0 trigger outputs, used since Run 1, to provide cosmic triggers and rare triggers in pp, p+Pb and Pb+Pb data taking. Due to a TOF DCS architectural change of the PCs controlling the CTTM (from 32 bits to 64 bits), it is mandatory to upgrade the software related to the CTTM, including the code programming the FPGA firmware. A dedicated CTTM board will be installed in a CERN lab (Meyrin site), with the aim of recreating the electronics chain of the TOF trigger, to allow a comfortable porting of the code to the 64-bit environment. The project proposed to the summer student is the setting up of the CTTM and the porting of the software. Moreover, in order to monitor the CTTM trigger board during real data taking, the implementation of a new Quality Assurance (QA) code is also crucial, together wit...

  5. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    Science.gov (United States)

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of solving the two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(Mx·My·N²). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We do think that parallel computing technology will become a very basic method for computationally intensive fractional applications in the near future.

  6. Imaging Method Based on Time Reversal Channel Compensation

    Directory of Open Access Journals (Sweden)

    Bing Li

    2015-01-01

    Full Text Available The conventional time reversal imaging (TRI) method builds its imaging function by using the maximal value of the signal amplitude. In this circumstance, some remote targets are missed (the near-far problem) or low resolution is obtained in lossy and/or dispersive media, and too many transceivers are employed to locate targets, which increases the complexity and cost of the system. To solve these problems, a novel TRI algorithm is presented in this paper. In order to achieve a high resolution, the signal amplitude corresponding to the focal time observed at the target position is used to reconstruct the target image. To address the near-far problem and suppress spurious images, a channel compensation function (CCF) combining a cross-correlation property with amplitude compensation is introduced. Moreover, the complexity and cost of the system are reduced by employing only five transceivers to detect four targets, a number close to that of the transceivers. To demonstrate the practicability of the proposed analytical framework, numerical experiments are carried out in both nondispersive-lossless (NDL) media and dispersive-conductive (DPC) media. Results show that the performance of the proposed method is superior to that of the conventional TRI algorithm even with few echo signals.

  7. Radiographic apparatus and method for monitoring film exposure time

    International Nuclear Information System (INIS)

    Vatne, R.S.; Woodmansee, W.E.

    1981-01-01

    In connection with radiographic inspection of structural and industrial materials, method and apparatus are disclosed for automatically determining and displaying the time required to expose a radiographic film positioned to receive radiation passed by a test specimen, so that the finished film is exposed to an optimum blackening (density) for maximum film contrast. A plot is made of the variations in a total exposure parameter (representing the product of detected radiation rate and time needed to cause optimum film blackening) as a function of the voltage level applied to an X-ray tube. An electronic function generator storing the shape of this plot is incorporated into an exposure monitoring apparatus, such that for a selected tube voltage setting, the function generator produces an electrical analog signal of the corresponding exposure parameter. During the exposure, another signal is produced representing the rate of radiation as monitored by a diode detector positioned so as to receive the same radiation that is incident on the film. The signal representing the detected radiation rate is divided, by an electrical divider circuit into the signal representing total exposure, and the resulting quotient is an electrical signal representing the required exposure time. (author)

  8. The Effect of Training in Minimalist Running Shoes on Running Economy.

    Science.gov (United States)

    Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain

    2015-09-01

    The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3 minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. Key points: Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear

  9. How to run ions in the future?

    International Nuclear Information System (INIS)

    Küchler, D; Manglunki, D; Scrivens, R

    2014-01-01

    In the light of different running scenarios, potential source improvements will be discussed (e.g. one month every year versus two months every other year), as well as the impact of the different running options [e.g. an extended ion run] on the source. As the oven refills cause most of the down time, the oven design and refilling strategies will be presented. A test stand for off-line developments will be taken into account. The implications of extended runs on the necessary manpower will also be discussed

  10. Design of ProjectRun21

    DEFF Research Database (Denmark)

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik

    2017-01-01

    BACKGROUND: Participation in half-marathons has been steeply increasing during the past decade. In line with this, a vast number of half-marathon running schedules has surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which lead to injury if the cumulative training load over one or more training sessions exceeds the runners' load capacity for adaptive tissue repair. Owing to an increase of load capacity along with adaptive running...... the association between running experience or running pace and the risk of running-related injury. METHODS: Healthy runners between 18 and 65 years using a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one

  11. A high-order time-accurate interrogation method for time-resolved PIV

    International Nuclear Information System (INIS)

    Lynch, Kyle; Scarano, Fulvio

    2013-01-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In
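
    The core trajectory-fitting step can be pictured with the short sketch below: positions of a tracked pattern over several frames are fitted with a second-order polynomial and differentiated to give the velocity at the central time. The frame interval, trajectory and noise level are invented, and real positions would come from the cross-correlation tracking described above.

      # Sketch: fit a polynomial to a pattern's position over an image sequence and
      # differentiate it to obtain the velocity at the central frame.
      import numpy as np

      dt = 1e-3                                   # frame interval (s), illustrative
      frames = np.arange(-3, 4)                   # 7-frame kernel centred on t = 0
      t = frames * dt

      true_x = 2.0 + 5.0 * t + 300.0 * t**2       # curved (accelerating) trajectory
      measured_x = true_x + np.random.default_rng(8).normal(scale=0.01, size=t.size)

      coeffs = np.polyfit(t, measured_x, deg=2)        # least-squares polynomial fit
      velocity = np.polyval(np.polyder(coeffs), 0.0)   # dx/dt at the central time
      print(f"estimated velocity: {velocity:.2f} px/s (true 5.00)")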

  12. Winter Holts Oscillatory Method: A New Method of Resampling in Time Series.

    Directory of Open Access Journals (Sweden)

    Muhammad Imtiaz Subhani

    2016-12-01

    Full Text Available The core proposition behind this research is to create innovative methods of bootstrapping that can be applied to time series data. In order to find new methods of bootstrapping, various methods were reviewed; data on automotive sales, market shares and net exports of the top 10 countries, which include China, Europe, the United States of America (USA), Japan, Germany, South Korea, India, Mexico, Brazil, Spain, and Canada, from 2002 to 2014 were collected through various sources, which include UN Comtrade, Index Mundi and the World Bank. The findings of this paper confirmed that bootstrapping for resampling through winter forecasting by the Oscillation and Average methods gives more robust results than winter forecasting by any general method.

  13. Two methods of space--time energy densification

    International Nuclear Information System (INIS)

    Sahlin, R.L.

    1976-01-01

    With a view to the goal of net energy production from a DT microexplosion, we study two ideas (methods) through which (separately or in combination) energy may be ''concentrated'' into a small volume and short period of time--the so-called space-time energy densification or compression. We first discuss the advantages and disadvantages of lasers and relativistic electron-beam (E-beam) machines as the sources of such energy and identify the amplification of laser pulses as a key factor in energy compression. The pulse length of present relativistic E-beam machines is the most serious limitation of this pulsed-power source. The first energy-compression idea we discuss is the reasonably efficient production of short-duration, high-current relativistic electron pulses by the self interruption and restrike of a current in a plasma pinch due to the rapid onset of strong turbulence. A 1-MJ plasma focus based on this method is nearing completion at this Laboratory. The second energy-compression idea is based on laser-pulse production through the parametric amplification of a self-similar or solitary wave pulse, for which analogs can be found in other wave processes. Specifically, the second energy-compression idea is a proposal for parametric amplification of a solitary, transverse magnetic pulse in a coaxial cavity with a Bennett dielectric rod as an inner coax. Amplifiers of this type can be driven by the pulsed power from a relativistic E-beam machine. If the end of the inner dielectric coax is made of LiDT or another fusionable material, the amplified pulse can directly drive a fusion reaction--there would be no need to switch the pulse out of the system toward a remote target

  14. Two methods of space-time energy densification

    International Nuclear Information System (INIS)

    Sahlin, H.L.

    1975-01-01

    With a view to the goal of net energy production from a DT microexplosion, two ideas (methods) are studied through which (separately or in combination) energy may be ''concentrated'' into a small volume and short period of time--the so-called space-time energy densification or compression. The advantages and disadvantages of lasers and relativistic electron-beam (E-beam) machines as the sources of such energy are studied and the amplification of laser pulses as a key factor in energy compression is discussed. The pulse length of present relativistic E-beam machines is the most serious limitation of this pulsed-power source. The first energy-compression idea discussed is the reasonably efficient production of short-duration, high-current relativistic electron pulses by the self interruption and restrike of a current in a plasma pinch due to the rapid onset of strong turbulence. A 1-MJ plasma focus based on this method is nearing completion at this Laboratory. The second energy-compression idea is based on laser-pulse production through the parametric amplification of a self-similar or solitary wave pulse, for which analogs can be found in other wave processes. Specifically, the second energy-compression idea is a proposal for parametric amplification of a solitary, transverse magnetic pulse in a coaxial cavity with a Bennett dielectric rod as an inner coax. Amplifiers of this type can be driven by the pulsed power from a relativistic E-beam machine. If the end of the inner dielectric coax is made of LiDT or another fusionable material, the amplified pulse can directly drive a fusion reaction--there would be no need to switch the pulse out of the system toward a remote target. (auth)

  15. Actual situation analyses of rat-run traffic on community streets based on car probe data

    Science.gov (United States)

    Sakuragi, Yuki; Matsuo, Kojiro; Sugiki, Nao

    2017-10-01

    Lowering so-called "rat-run" traffic on community streets has been one of the significant challenges for improving the living environment of neighborhoods. However, it has been difficult to quantitatively grasp the actual situation of rat-run traffic with traditional surveys such as point observations. This study aims to develop a method for extracting rat-run traffic based on car probe data. In addition, based on the rat-run traffic extracted in Toyohashi city, Japan, we analyze the actual situation, such as the time and location distribution of the rat-run traffic. As a result, in Toyohashi city, the rate of rat-run route use increases in the peak time period. Regarding the location distribution, rat-run traffic passes through a variety of community streets, and there is no great inter-district bias in the routes frequently used for rat-running. Next, we focused on trips passing through a route heavily used for rat-running. We found that drivers may habitually use the route as a rat-run, because their trips had some commonalities. We also found that they tend to use the rat-run route because the distance is shorter than via the alternative highway route, and that their travel speeds were faster than on the alternative highway route. In conclusion, we confirmed that the proposed method can quantitatively grasp the actual situation and the tendencies of the rat-run phenomenon.

  16. A new method of detection for a positron emission tomograph using a time of flight method

    International Nuclear Information System (INIS)

    Gresset, Christian.

    1981-05-01

    The first chapter presents the advantages of short-lived positron emitters (β+) and the essential characteristics of the positron tomographs built to date. The second chapter presents the interest of an original image reconstruction technique: the time-of-flight technique. The third chapter describes the characterization methods that were set up to verify the suitability of cesium fluoride for tomography. Chapter four presents the results obtained by these methods. It appears that cesium fluoride currently constitutes the best choice for positron emission tomography combined with the time-of-flight technique. The hypotheses made on the eventual performance of such machines are validated by phantom experiments. The results obtained with a bismuth germanate detector show that it retains all its interest for skull tomography [fr

  17. Scattering in an intense radiation field: Time-independent methods

    International Nuclear Information System (INIS)

    Rosenberg, L.

    1977-01-01

    The standard time-independent formulation of nonrelativistic scattering theory is here extended to take into account the presence of an intense external radiation field. In the case of scattering by a static potential the extension is accomplished by the introduction of asymptotic states and intermediate-state propagators which account for the absorption and induced emission of photons by the projectile as it propagates through the field. Self-energy contributions to the propagator are included by a systematic summation of forward-scattering terms. The self-energy analysis is summarized in the form of a modified perturbation expansion of the type introduced by Watson some time ago in the context of nuclear-scattering theory. This expansion, which has a simple continued-fraction structure in the case of a single-mode field, provides a generally applicable successive approximation procedure for the propagator and the asymptotic states. The problem of scattering by a composite target is formulated using the effective-potential method. The modified perturbation expansion which accounts for self-energy effects is applicable here as well. A discussion of a coupled two-state model is included to summarize and clarify the calculational procedures

  18. Improved methods for nightside time domain Lunar Electromagnetic Sounding

    Science.gov (United States)

    Fuqua-Haviland, H.; Poppe, A. R.; Fatemi, S.; Delory, G. T.; De Pater, I.

    2017-12-01

    Time Domain Electromagnetic (TDEM) Sounding isolates induced magnetic fields to remotely deduce material properties at depth. The first step of performing TDEM Sounding at the Moon is to fully characterize the dynamic plasma environment, and isolate geophysically induced currents from concurrently present plasma currents. The transfer function method requires a two-point measurement: an upstream reference measuring the pristine solar wind, and one downstream near the Moon. This method was last performed during Apollo assuming the induced fields on the nightside of the Moon expand as in an undisturbed vacuum within the wake cavity [1]. Here we present an approach to isolating induction and performing TDEM with any two point magnetometer measurement at or near the surface of the Moon. Our models include a plasma induction model capturing the kinetic plasma environment within the wake cavity around a conducting Moon, and a geophysical forward model capturing induction in a vacuum. The combination of these two models enable the analysis of magnetometer data within the wake cavity. Plasma hybrid models use the upstream plasma conditions and interplanetary magnetic field (IMF) to capture the wake current systems formed around the Moon. The plasma kinetic equations are solved for ion particles with electrons as a charge-neutralizing fluid. These models accurately capture the large scale lunar wake dynamics for a variety of solar wind conditions: ion density, temperature, solar wind velocity, and IMF orientation [2]. Given the 3D orientation variability coupled with the large range of conditions seen within the lunar plasma environment, we characterize the environment one case at a time. The global electromagnetic induction response of the Moon in a vacuum has been solved numerically for a variety of electrical conductivity models using the finite-element method implemented within the COMSOL software. This model solves for the geophysically induced response in vacuum to

  19. Running Boot Camp

    CERN Document Server

    Toporek, Chuck

    2008-01-01

    When Steve Jobs jumped on stage at Macworld San Francisco 2006 and announced the new Intel-based Macs, the question wasn't if, but when someone would figure out a hack to get Windows XP running on these new "Mactels." Enter Boot Camp, a new system utility that helps you partition and install Windows XP on your Intel Mac. Boot Camp does all the heavy lifting for you. You won't need to open the Terminal and hack on system files or wave a chicken bone over your iMac to get XP running. This free program makes it easy for anyone to turn their Mac into a dual-boot Windows/OS X machine. Running Bo

  20. Use of real-time PCR to evaluate two DNA extraction methods from food

    Directory of Open Access Journals (Sweden)

    Maria Regina Branquinho

    2012-03-01

    Full Text Available The DNA extraction is a critical step in Genetically Modified Organisms analysis based on real-time PCR. In this study, the CTAB and DNeasy methods provided good quality and quantity of DNA from the texturized soy protein, infant formula, and soy milk samples. Concerning the Certified Reference Material consisting of 5% Roundup Ready® soybean, neither method yielded DNA of good quality. However, the dilution test applied in the CTAB extracts showed no interference of inhibitory substances. The PCR efficiencies of lectin target amplification were not statistically different, and the coefficients of correlation (R²) demonstrated high degree of correlation between the copy numbers and the threshold cycle (Ct) values. ANOVA showed suitable adjustment of the regression and absence of significant linear deviations. The efficiencies of the p35S amplification were not statistically different, and all R² values using DNeasy extracts were above 0.98 with no significant linear deviations. Two out of three R² values using CTAB extracts were lower than 0.98, corresponding to lower degree of correlation, and the lack-of-fit test showed significant linear deviation in one run. The comparative analysis of the Ct values for the p35S and lectin targets demonstrated no statistical significant differences between the analytical curves of each target.
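    The efficiency and R² figures quoted above come from standard-curve regression of Ct against log copy number. A minimal sketch of that calculation in Python, using made-up dilution data rather than values from the study:

```python
import numpy as np

def pcr_efficiency(log10_copies, ct_values):
    """Estimate real-time PCR amplification efficiency from a standard curve.

    Fits Ct = slope * log10(copies) + intercept and converts the slope with
    E = 10**(-1/slope) - 1 (E = 1.0 corresponds to 100 % efficiency).
    Also returns the coefficient of determination R^2 of the fit.
    """
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    predicted = slope * np.asarray(log10_copies) + intercept
    ss_res = np.sum((np.asarray(ct_values) - predicted) ** 2)
    ss_tot = np.sum((np.asarray(ct_values) - np.mean(ct_values)) ** 2)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return efficiency, 1.0 - ss_res / ss_tot

# Hypothetical 10-fold dilution series and measured Ct values (illustrative only)
log_copies = [5, 4, 3, 2, 1]
cts = [18.1, 21.5, 24.9, 28.3, 31.8]
print(pcr_efficiency(log_copies, cts))
```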

  1. Exploiting Microwave Imaging Methods for Real-Time Monitoring of Thermal Ablation

    Directory of Open Access Journals (Sweden)

    Rosa Scapaticci

    2017-01-01

    Full Text Available Microwave thermal ablation is a cancer treatment that exploits local heating caused by a microwave electromagnetic field to induce coagulative necrosis of tumor cells. Recently, such a technique has significantly progressed in clinical practice. However, its effectiveness would dramatically improve if paired with a noninvasive system for the real-time monitoring of the evolving dimension and shape of the thermally ablated area. In this respect, microwave imaging can be a potential candidate to monitor the overall treatment evolution in a noninvasive way, as it takes direct advantage of the dependence of the electromagnetic properties of biological tissues on temperature. This paper explores such a possibility by presenting a proof of concept validation based on accurate simulated imaging experiments, run with respect to a scenario that mimics an ex vivo experimental setup. In particular, two model-based inversion algorithms are exploited to tackle the imaging task. These methods provide independent results in real-time and their integration improves the quality of the overall tracking of the variations occurring in the target and surrounding regions.

  2. Order Patterns Networks (orpan) – a method to estimate time-evolving functional connectivity from multivariate time series

    Directory of Open Access Journals (Sweden)

    Stefan eSchinkel

    2012-11-01

    Full Text Available Complex networks provide an excellent framework for studying the function of the human brain activity. Yet estimating functional networks from measured signals is not trivial, especially if the data is non-stationary and noisy as it is often the case with physiological recordings. In this article we propose a method that uses the local rank structure of the data to define functional links in terms of identical rank structures. The method yields temporal sequences of networks which permits to trace the evolution of the functional connectivity during the time course of the observation. We demonstrate the potentials of this approach with model data as well as with experimental data from an electrophysiological study on language processing.
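    The core idea, linking channels whose short windows share the same rank ordering, can be sketched as follows. This is an illustrative reconstruction, not the authors' published implementation; the window length and the toy data are arbitrary choices:

```python
import numpy as np
from itertools import combinations

def ordinal_pattern(window):
    # Rank structure of a short window, e.g. [0.2, 0.9, 0.5] -> (0, 2, 1)
    return tuple(np.argsort(np.argsort(window)))

def rank_based_links(data, order=3):
    """Connect channels whose local rank (order) patterns coincide.

    data: array of shape (n_channels, n_samples).
    Returns one adjacency matrix per sliding-window position, i.e. a
    temporal sequence of networks.
    """
    n_ch, n_samp = data.shape
    networks = []
    for t in range(n_samp - order + 1):
        patterns = [ordinal_pattern(data[c, t:t + order]) for c in range(n_ch)]
        adj = np.zeros((n_ch, n_ch), dtype=int)
        for i, j in combinations(range(n_ch), 2):
            if patterns[i] == patterns[j]:
                adj[i, j] = adj[j, i] = 1
        networks.append(adj)
    return networks

# Toy multivariate series with three channels
rng = np.random.default_rng(0)
nets = rank_based_links(rng.standard_normal((3, 20)), order=3)
print(len(nets), nets[0])
```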

  3. Fermilab DART run control

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1996-01-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. The authors discuss the unique and interesting concepts of the run control and some of the experiences in developing it. They also give a brief update and status of the whole DART system

  4. Fermilab DART run control

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-05-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. We discuss the unique and interesting concepts of the run control and some of our experiences in developing it. We also give a brief update and status of the whole DART system

  5. Preventing running injuries. Practical approach for family doctors.

    OpenAIRE

    Johnston, C. A. M.; Taunton, J. E.; Lloyd-Smith, D. R.; McKenzie, D. C.

    2003-01-01

    OBJECTIVE: To present a practical approach for preventing running injuries. QUALITY OF EVIDENCE: Much of the research on running injuries is in the form of expert opinion and comparison trials. Recent systematic reviews have summarized research in orthotics, stretching before running, and interventions to prevent soft tissue injuries. MAIN MESSAGE: The most common factors implicated in running injuries are errors in training methods, inappropriate training surfaces and running shoes, malalign...

  6. Non performing loans (NPLs) in a crisis economy: Long-run equilibrium analysis with a real time VEC model for Greece (2001-2015)

    Science.gov (United States)

    Konstantakis, Konstantinos N.; Michaelides, Panayotis G.; Vouldis, Angelos T.

    2016-06-01

    As a result of domestic and international factors, the Greek economy faced a severe crisis which is directly comparable only to the Great Recession. In this context, a prominent victim of this situation was the country's banking system. This paper attempts to shed light on the determining factors of non-performing loans in the Greek banking sector. The analysis presents empirical evidence from the Greek economy, using aggregate data on a quarterly basis, in the time period 2001-2015, fully capturing the recent recession. In this work, we use a relevant econometric framework based on a real time Vector Autoregressive (VAR)-Vector Error Correction (VEC) model, which captures the dynamic interdependencies among the variables used. Consistent with international evidence, the empirical findings show that both macroeconomic and financial factors have a significant impact on non-performing loans in the country. Meanwhile, the deteriorating credit quality feeds back into the economy leading to a self-reinforcing negative loop.
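    For readers who want to reproduce this type of analysis, a VEC model can be fitted with the statsmodels library. The sketch below uses synthetic placeholder series rather than the Greek banking data and makes no claim about the paper's exact specification (lag order, rank, or deterministic terms):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Hypothetical quarterly series (placeholders, not the paper's dataset)
rng = np.random.default_rng(1)
n = 60
trend = np.cumsum(rng.standard_normal(n))
data = pd.DataFrame({
    "npl_ratio": trend + 0.5 * rng.standard_normal(n),
    "gdp_growth": -0.5 * trend + 0.5 * rng.standard_normal(n),
    "unemployment": 0.8 * trend + 0.5 * rng.standard_normal(n),
})

# One cointegrating relation, one lagged difference, constant inside the relation
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
results = model.fit()
print(results.summary())
```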

  7. Influence of Advanced Injection Timing and Fuel Additive on Combustion, Performance, and Emission Characteristics of a DI Diesel Engine Running on Plastic Pyrolysis Oil

    Directory of Open Access Journals (Sweden)

    Ioannis Kalargaris

    2017-01-01

    Full Text Available This paper presents the investigation of engine optimisation when plastic pyrolysis oil (PPO) is used as the primary fuel of a direct injection diesel engine. Our previous investigation revealed that PPO is a promising fuel; however the results suggested that control parameters should be optimised in order to obtain a better engine performance. In the present work, the injection timing was advanced, and fuel additives were utilised to overcome the issues experienced in the previous work. In addition, spray characteristics of PPO were investigated in comparison with diesel to provide in-depth understanding of the engine behaviour. The experimental results on advanced injection timing (AIT) showed reduced brake thermal efficiency and increased carbon monoxide, unburned hydrocarbons, and nitrogen oxides emissions in comparison to standard injection timing. On the other hand, the addition of fuel additive resulted in higher engine efficiency and lower exhaust emissions. Finally, the spray tests revealed that the spray tip penetration for PPO is faster than diesel. The results suggested that AIT is not a preferable option while fuel additive is a promising solution for long-term use of PPO in diesel engines.

  8. 'Outrunning' the running ear

    African Journals Online (AJOL)

    Chantel

    In even the most experienced hands, an adequate physical examination of the ears can be difficult to perform because of common problems such as cerumen blockage of the auditory canal, an uncooperative toddler or an exasperated parent. The most common cause for a running ear in a child is acute purulent otitis.

  9. Scoping reviews: time for clarity in definition, methods, and reporting.

    Science.gov (United States)

    Colquhoun, Heather L; Levac, Danielle; O'Brien, Kelly K; Straus, Sharon; Tricco, Andrea C; Perrier, Laure; Kastner, Monika; Moher, David

    2014-12-01

    The scoping review has become increasingly popular as a form of knowledge synthesis. However, a lack of consensus on scoping review terminology, definition, methodology, and reporting limits the potential of this form of synthesis. In this article, we propose recommendations to further advance the field of scoping review methodology. We summarize current understanding of scoping review publication rates, terms, definitions, and methods. We propose three recommendations for clarity in term, definition and methodology. We recommend adopting the terms "scoping review" or "scoping study" and the use of a proposed definition. Until such time as further guidance is developed, we recommend the use of the methodological steps outlined in the Arksey and O'Malley framework and further enhanced by Levac et al. The development of reporting guidance for the conduct and reporting of scoping reviews is underway. Consistency in the proposed domains and methodologies of scoping reviews, along with the development of reporting guidance, will facilitate methodological advancement, reduce confusion, facilitate collaboration and improve knowledge translation of scoping review findings. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. The Performance and Development of the Inner Detector Trigger Algorithms at ATLAS for LHC Run 2

    CERN Document Server

    Sowden, Benjamin Charles; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly reimplemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is provided. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2 for the HLT. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 but with no significant reduction in efficiency. The performance and timing of the algorithms for numerous physics signatures in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performan...

  11. A novel mouse running wheel that senses individual limb forces: biomechanical validation and in vivo testing

    Science.gov (United States)

    Roach, Grahm C.; Edke, Mangesh

    2012-01-01

    Biomechanical data provide fundamental information about changes in musculoskeletal function during development, adaptation, and disease. To facilitate the study of mouse locomotor biomechanics, we modified a standard mouse running wheel to include a force-sensitive rung capable of measuring the normal and tangential forces applied by individual paws. Force data were collected throughout the night using an automated threshold trigger algorithm that synchronized force data with wheel-angle data and a high-speed infrared video file. During the first night of wheel running, mice reached consistent running speeds within the first 40 force events, indicating a rapid habituation to wheel running, given that mice generated >2,000 force-event files/night. Average running speeds and peak normal and tangential forces were consistent throughout the first four nights of running, indicating that one night of running is sufficient to characterize the locomotor biomechanics of healthy mice. Twelve weeks of wheel running significantly increased spontaneous wheel-running speeds (16 vs. 37 m/min), lowered duty factors (ratio of foot-ground contact time to stride time; 0.71 vs. 0.58), and raised hindlimb peak normal forces (93 vs. 115% body wt) compared with inexperienced mice. Peak normal hindlimb-force magnitudes were the primary force component, which were nearly tenfold greater than peak tangential forces. Peak normal hindlimb forces exceed the vertical forces generated during overground running (50-60% body wt), suggesting that wheel running shifts weight support toward the hindlimbs. This force-instrumented running-wheel system provides a comprehensive, noninvasive screening method for monitoring gait biomechanics in mice during spontaneous locomotion. PMID:22723628

  12. Selection of the all-time best World XI Test cricket team using the TOPSIS method

    Directory of Open Access Journals (Sweden)

    Shankar Chakraborty

    2019-01-01

    Full Text Available The aim of this paper is to apply the technique for order preference by similarity to ideal solution (TOPSIS) as a multi-criteria decision making tool to form the all-time best World XI Test cricket team, taking into consideration over 2600 cricketers who participated in Test matches over more than 100 years of cricket history. From the voluminous database containing the performance of numerous Test cricketers, separate lists are first prepared for different positions in the batting and bowling orders, consisting of manageable numbers of candidate alternatives, while imposing some constraints with respect to the minimum number of innings played (for batsmen), minimum number of Tests played (for wicketkeepers and bowlers), and minimum numbers of runs scored and wickets taken (for all-rounders). The TOPSIS method is later adopted to rank those shortlisted cricketers and identify the best performers for inclusion in the proposed World XI Test team. The best World Test cricket team is thus formed as Alastair Cook (ENG) (c), Sunil Gavaskar (IND), Rahul Dravid (IND) (vc), Sachin Tendulkar (IND), Shivnarine Chanderpaul (WI), Jacques Kallis (SA), Adam Gilchrist (AUS) (wk), Glenn McGrath (AUS), Courtney Walsh (WI), Muttiah Muralitharan (SL) and Shane Warne (AUS).
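    The TOPSIS procedure itself is compact: normalize the decision matrix, weight it, locate the ideal and anti-ideal solutions, and rank alternatives by relative closeness. A minimal sketch with hypothetical criteria, scores and weights (not the paper's actual data):

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision_matrix: (n_alternatives, n_criteria) array of raw scores.
    weights: criterion weights summing to 1.
    benefit: True for criteria to maximise (e.g. runs scored),
             False for criteria to minimise (e.g. bowling average).
    Returns closeness coefficients (higher = better).
    """
    X = np.asarray(decision_matrix, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)              # vector normalisation
    V = norm * np.asarray(weights, dtype=float)       # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti_ideal, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical batsmen scored on [runs, batting average, strike rate]
scores = [[8900, 52.3, 55.1], [10122, 51.1, 54.0], [13288, 52.8, 54.1]]
closeness = topsis(scores, weights=[0.4, 0.4, 0.2], benefit=[True, True, True])
print(np.argsort(-closeness))  # alternative indices, best first
```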

  13. Calculation method for control rod dropping time in reactor

    International Nuclear Information System (INIS)

    Nogami, Takeki; Kato, Yoshifumi; Ishino, Jun-ichi; Doi, Isamu.

    1996-01-01

    When a control rod starts dropping, its speed first increases rapidly, then settles to a substantially constant value, and finally decreases rapidly when the rod reaches the dash pot. A second detection signal, generated by removing an AC component from a first detection signal, is differentiated twice. The time when the maximum value among the twice-differentiated values occurs is taken as the time when the control rod starts dropping. The time when the minimum value among the twice-differentiated values occurs is taken as the time when the control rod reaches the dash pot of the reactor. The measuring time, from the moment the control rod starts dropping to the moment it reaches the dash pot of the reactor, is then determined. As a result, the calculation of the dropping start time and dash-pot arrival time of the control rod can be automated. Further, it suffices to differentiate twice up to the arrival time, which simplifies the processing and thereby enables a reliable time range to be determined. (N.H.)
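    A rough illustration of the detection idea, double-differentiate the filtered signal and read the drop-start and dash-pot times from the extrema of the second derivative, might look like the sketch below. The synthetic rod-position-like signal and the sampling interval are invented for demonstration:

```python
import numpy as np

def rod_drop_times(signal, dt):
    """Estimate drop-start and dash-pot arrival times from a detection signal.

    Differentiates the (AC-filtered) signal twice; the time of the maximum of
    the second derivative is taken as the drop start, the time of its minimum
    as the dash-pot arrival, and their difference as the measuring time.
    """
    second_derivative = np.gradient(np.gradient(signal, dt), dt)
    t_start = np.argmax(second_derivative) * dt
    t_dashpot = np.argmin(second_derivative) * dt
    return t_start, t_dashpot, t_dashpot - t_start

# Synthetic example: speed builds up after release at t = 0.2 s and the rod
# brakes sharply in the dash pot at t = 1.5 s (purely illustrative numbers).
dt = 0.001
t = np.arange(0.0, 2.0, dt)
fall = np.where(t < 0.2, 0.0, 1.0 - np.exp(-(t - 0.2) / 0.1))
v_at_dashpot = 1.0 - np.exp(-(1.5 - 0.2) / 0.1)
speed = np.where(t < 1.5, fall, v_at_dashpot * np.exp(-(t - 1.5) / 0.02))
position = np.cumsum(speed) * dt          # rod-position-like detection signal
print(rod_drop_times(position, dt))       # approximately (0.2, 1.5, 1.3)
```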

  14. Design and Development of RunForFun Mobile Application

    Directory of Open Access Journals (Sweden)

    Anci Anthony

    2018-01-01

    Full Text Available Race runs over 5 km or 10 km have recently been trending in many places in Indonesia, especially in Surabaya, where there were at least 11 race run events. The number of participants has also increased significantly compared to previous years. However, some of these race run events tended to be repetitive and monotonous, while participants expressed the need for a greater fun factor. RunForFun is a mobile application designed to give participants a new experience when taking part in a race run event. The mobile application runs on Android OS and was developed using the Reverse Waterfall method. It is built with the Ionic Framework, which utilizes Cordova as its base to deploy to smartphone devices. Subsequently, RunForFun was tested on 10 participants, and the test shows a significant increase in the fun factor among race run participants.

  15. When the facts are just not enough: credibly communicating about risk is riskier when emotions run high and time is short.

    Science.gov (United States)

    Reynolds, Barbara J

    2011-07-15

    When discussing risk with people, commonly subject matter experts believe that conveying the facts will be enough to allow people to assess a risk and respond rationally to that risk. Because of this expectation, experts often become exasperated by the seemingly illogical way people assess personal risk and choose to manage that risk. In crisis situations when the risk information is less defined and choices must be made within impossible time constraints, the thought processes may be even more susceptible to faulty heuristics. Understanding the perception of risk is essential to understanding why the public becomes more or less upset by events. This article explores the psychological underpinnings of risk assessment within emotionally laden events and the risk communication practices that may facilitate subject matter experts to provide the facts in a manner so they can be more certain those facts are being heard. Source credibility is foundational to risk communication practices. The public meeting is one example in which these best practices can be exercised. Risks are risky because risk perceptions differ and the psychosocial environment in which risk is discussed complicates making risk decisions. Experts who want to influence the actions of the public related to a threat or risk should understand that decisions often involve emotional as well as logical components. The media and other social entities will also influence the risk context. The Center for Disease Control and Prevention's crisis and emergency-risk communication (CERC) principles are intended to increase credibility and recognize emotional components of an event. During a risk event, CERC works to calm emotions and increase trust which can help people apply the expertise being offered by response officials. Copyright © 2011. Published by Elsevier Inc.

  16. 'Ready to hit the ground running': Alumni and employer accounts of a unique part-time distance learning pre-registration nurse education programme.

    Science.gov (United States)

    Draper, Jan; Beretta, Ruth; Kenward, Linda; McDonagh, Lin; Messenger, Julie; Rounce, Jill

    2014-10-01

    This study explored the impact of The Open University's (OU) preregistration nursing programme on students' employability, career progression and its contribution to developing the nursing workforce across the United Kingdom. Designed for healthcare support workers who are sponsored by their employers, the programme is the only part-time supported open/distance learning programme in the UK leading to registration as a nurse. The international literature reveals that relatively little is known about the impact of previous experience as a healthcare support worker on the experience of transition, employability skills and career progression. To identify alumni and employer views of the perceived impact of the programme on employability, career progression and workforce development. A qualitative design using telephone interviews which were digitally recorded, and transcribed verbatim prior to content analysis to identify recurrent themes. Three geographical areas across the UK. Alumni (n=17) and employers (n=7). Inclusion criterion for alumni was a minimum of two years' post-qualifying experience. Inclusion criteria for employers were those that had responsibility for sponsoring students on the programme and employing them as newly qualified nurses. Four overarching themes were identified: transition, expectations, learning for and in practice, and flexibility. Alumni and employers were of the view that the programme equipped them well to meet the competencies and expectations of being a newly qualified nurse. It provided employers with a flexible route to growing their own workforce and alumni the opportunity to achieve their ambition of becoming a qualified nurse when other more conventional routes would not have been open to them. Some of them had already demonstrated career progression. Generalising results requires caution due to the small, self-selecting sample but findings suggest that a widening participation model of pre-registration nurse education for

  17. When the facts are just not enough: Credibly communicating about risk is riskier when emotions run high and time is short

    International Nuclear Information System (INIS)

    Reynolds, Barbara J.

    2011-01-01

    When discussing risk with people, commonly subject matter experts believe that conveying the facts will be enough to allow people to assess a risk and respond rationally to that risk. Because of this expectation, experts often become exasperated by the seemingly illogical way people assess personal risk and choose to manage that risk. In crisis situations when the risk information is less defined and choices must be made within impossible time constraints, the thought processes may be even more susceptible to faulty heuristics. Understanding the perception of risk is essential to understanding why the public becomes more or less upset by events. This article explores the psychological underpinnings of risk assessment within emotionally laden events and the risk communication practices that may facilitate subject matter experts to provide the facts in a manner so they can be more certain those facts are being heard. Source credibility is foundational to risk communication practices. The public meeting is one example in which these best practices can be exercised. Risks are risky because risk perceptions differ and the psychosocial environment in which risk is discussed complicates making risk decisions. Experts who want to influence the actions of the public related to a threat or risk should understand that decisions often involve emotional as well as logical components. The media and other social entities will also influence the risk context. The Center for Disease Control and Prevention's crisis and emergency-risk communication (CERC) principles are intended to increase credibility and recognize emotional components of an event. During a risk event, CERC works to calm emotions and increase trust which can help people apply the expertise being offered by response officials.

  18. System and method for time synchronization in a wireless network

    Science.gov (United States)

    Gonia, Patrick S.; Kolavennu, Soumitri N.; Mahasenan, Arun V.; Budampati, Ramakrishna S.

    2010-03-30

    A system includes multiple wireless nodes forming a cluster in a wireless network, where each wireless node is configured to communicate and exchange data wirelessly based on a clock. One of the wireless nodes is configured to operate as a cluster master. Each of the other wireless nodes is configured to (i) receive time synchronization information from a parent node, (ii) adjust its clock based on the received time synchronization information, and (iii) broadcast time synchronization information based on the time synchronization information received by that wireless node. The time synchronization information received by each of the other wireless nodes is based on time synchronization information provided by the cluster master so that the other wireless nodes substantially synchronize their clocks with the clock of the cluster master.
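    As a toy illustration of the propagation scheme described, and not the patented protocol itself, each node can be modelled as adopting the time received from its parent and re-broadcasting it to its children; node names and clock values are invented:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    clock: float                              # current local clock value
    children: List["Node"] = field(default_factory=list)

    def broadcast_sync(self, reference_time: Optional[float] = None) -> None:
        # The cluster master broadcasts its own clock; every other node first
        # adjusts its clock to the received time, then re-broadcasts that time.
        if reference_time is not None:
            self.clock = reference_time
        for child in self.children:
            child.broadcast_sync(self.clock)

master = Node("master", clock=100.0)
a, b, leaf = Node("a", 97.3), Node("b", 104.8), Node("leaf", 91.0)
master.children = [a, b]
a.children = [leaf]

master.broadcast_sync()
print([(n.name, n.clock) for n in (master, a, b, leaf)])  # all clocks now 100.0
```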

  19. Time reversal method with stabilizing boundary conditions for Photoacoustic tomography

    International Nuclear Information System (INIS)

    Chervova, Olga; Oksanen, Lauri

    2016-01-01

    We study an inverse initial source problem that models photoacoustic tomography measurements with array detectors, and introduce a method that can be viewed as a modification of the so-called back and forth nudging method. We show that the method converges at an exponential rate under a natural visibility condition, with data given only on a part of the boundary of the domain of wave propagation. In this paper we consider the case of noiseless measurements. (paper)

  20. A moving mesh method with variable relaxation time

    OpenAIRE

    Soheili, Ali Reza; Stockie, John M.

    2006-01-01

    We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time \\tau is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter \\tau. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...
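    One commonly cited one-dimensional form of such a moving mesh PDE (often labelled MMPDE5 in the moving-mesh literature; shown here only as background, since the abstract does not state which MMPDE the authors use) is

    $$ \tau \,\frac{\partial x}{\partial t} \;=\; \frac{\partial}{\partial \xi}\!\left( M(x,t)\,\frac{\partial x}{\partial \xi} \right), $$

    where $\xi$ is the computational coordinate, $x(\xi,t)$ the physical mesh location, $M$ a monitor function that concentrates mesh points where the solution varies rapidly, and $\tau$ the relaxation time that the authors allow to vary adaptively alongside the solution.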

  1. Part-Time Sick Leave as a Treatment Method?

    OpenAIRE

    Andrén D; Andrén T

    2009-01-01

    This paper analyzes the effects of being on part-time sick leave compared to full-time sick leave on the probability of recovering (i.e., returning to work with full recovery of lost work capacity). Using a discrete choice one-factor model, we estimate mean treatment parameters and distributional treatment parameters from a common set of structural parameters. Our results show that part-time sick leave increases the likelihood of recovering and dominates full-time sick leave for sickness spel...

  2. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  3. Demographics and run timing of adult Lost River (Deltistes luxatus) and shortnose (Chasmistes brevirostris) suckers in Upper Klamath Lake, Oregon, 2012

    Science.gov (United States)

    Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Harris, Alta C.

    2014-01-01

    Data from a long-term capture-recapture program were used to assess the status and dynamics of populations of two long-lived, federally endangered catostomids in Upper Klamath Lake, Oregon. Lost River suckers (Deltistes luxatus) and shortnose suckers (Chasmistes brevirostris) have been captured and tagged with passive integrated transponder (PIT) tags during their spawning migrations in each year since 1995. In addition, beginning in 2005, individuals that had been previously PIT-tagged were re-encountered on remote underwater antennas deployed throughout sucker spawning areas. Captures and remote encounters during spring 2012 were used to describe the spawning migrations in that year and also were incorporated into capture-recapture analyses of population dynamics. Cormack-Jolly-Seber (CJS) open population capture-recapture models were used to estimate annual survival probabilities, and a reverse-time analog of the CJS model was used to estimate recruitment of new individuals into the spawning populations. In addition, data on the size composition of captured fish were examined to provide corroborating evidence of recruitment. Model estimates of survival and recruitment were used to derive estimates of changes in population size over time and to determine the status of the populations in 2011. Separate analyses were conducted for each species and also for each subpopulation of Lost River suckers (LRS). Shortnose suckers (SNS) and one subpopulation of LRS migrate into tributary rivers to spawn, whereas the other LRS subpopulation spawns at groundwater upwelling areas along the eastern shoreline of the lake. In 2012, we captured, tagged, and released 749 LRS at four lakeshore spawning areas and recaptured an additional 969 individuals that had been tagged in previous years. Across all four areas, the remote antennas detected 6,578 individual LRS during the spawning season. Spawning activity peaked in April and most individuals were encountered at Cinder Flats and

  4. Experimental evaluation of tool run-out in micro milling

    Science.gov (United States)

    Attanasio, Aldo; Ceretti, Elisabetta

    2018-05-01

    This paper deals with micro milling cutting process focusing the attention on tool run-out measurement. In fact, among the effects of the scale reduction from macro to micro (i.e., size effects) tool run-out plays an important role. This research is aimed at developing an easy and reliable method to measure tool run-out in micro milling based on experimental tests and an analytical model. From an Industry 4.0 perspective this measuring strategy can be integrated into an adaptive system for controlling cutting forces, with the objective of improving the production quality, the process stability, reducing at the same time the tool wear and the machining costs. The proposed procedure estimates the tool run-out parameters from the tool diameter, the channel width, and the phase angle between the cutting edges. The cutting edge phase measurement is based on the force signal analysis. The developed procedure has been tested on data coming from micro milling experimental tests performed on a Ti6Al4V sample. The results showed that the developed procedure can be successfully used for tool run-out estimation.

  5. Running economy and energy cost of running with backpacks.

    Science.gov (United States)

    Scheer, Volker; Cramer, Leoni; Heitkamp, Hans-Christian

    2018-05-02

    Running is a popular recreational activity and additional weight is often carried in backpacks on longer runs. Our aim was to examine running economy and other physiological parameters while running with a 1 kg and a 3 kg backpack at different submaximal running velocities. 10 male recreational runners (age 25 ± 4.2 years, VO2peak 60.5 ± 3.1 ml·kg-1·min-1) performed runs on a motorized treadmill of 5 minutes duration at three different submaximal speeds of 70, 80 and 90% of anaerobic lactate threshold (LT) without additional weight, and carrying a 1 kg and 3 kg backpack. Oxygen consumption, heart rate, lactate and RPE were measured and analysed. Oxygen consumption, energy cost of running and heart rate increased significantly while running with a backpack weighing 3 kg compared to running without additional weight at 80% of speed at lactate threshold (sLT) (p=0.026, p=0.009 and p=0.003) and at 90% sLT (p<0.001, p=0.001 and p=0.001). Running with a 1 kg backpack showed a significant increase in heart rate at 80% sLT (p=0.008) and a significant increase in oxygen consumption and heart rate at 90% sLT (p=0.045 and p=0.007) compared to running without additional weight. While running at 70% sLT, running economy and cardiovascular effort increased with weighted backpack running compared to running without additional weight; however, these increases did not reach statistical significance. Running economy deteriorates and cardiovascular effort increases while running with additional backpack weight, especially at higher submaximal running speeds. Backpack weight should therefore be kept to a minimum.

  6. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    Science.gov (United States)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
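    The EWMA baseline that the neural controller is compared against is simple to state: the next-lot estimate is a weighted blend of the newest measurement and the previous estimate. A minimal sketch with hypothetical critical-dimension values (the weight 0.3 is an arbitrary illustration, not the paper's tuning):

```python
def ewma_prediction(previous_prediction, observed, weight=0.3):
    """One EWMA run-to-run step: blend the latest measured critical dimension
    with the previous estimate to predict the next lot."""
    return weight * observed + (1.0 - weight) * previous_prediction

# Hypothetical lot-to-lot critical-dimension measurements (nm)
measurements = [251.0, 249.5, 252.3, 250.1, 248.7]
estimate = measurements[0]
for cd in measurements[1:]:
    estimate = ewma_prediction(estimate, cd)
    print(round(estimate, 2))
```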

  7. HTML 5 up and running

    CERN Document Server

    Pilgrim, Mark

    2010-01-01

    If you don't know about the new features available in HTML5, now's the time to find out. This book provides practical information about how and why the latest version of this markup language will significantly change the way you develop for the Web. HTML5 is still evolving, yet browsers such as Safari, Mozilla, Opera, and Chrome already support many of its features -- and mobile browsers are even farther ahead. HTML5: Up & Running carefully guides you through the important changes in this version with lots of hands-on examples, including markup, graphics, and screenshots. You'll learn how to

  8. Cholinesterase assay by an efficient fixed time endpoint method

    Directory of Open Access Journals (Sweden)

    Mónica Benabent

    2014-01-01

    The method may be adapted to the user's needs by modifying the enzyme concentration and applied to test many samples in parallel; i.e. for complex experiments of kinetics assays with organophosphate inhibitors in different tissues.

  9. Real-time trajectory analysis using stacked invariance methods

    OpenAIRE

    Kitts, B.

    1998-01-01

    Invariance methods are used widely in pattern recognition as a preprocessing stage before algorithms such as neural networks are applied to the problem. A pattern recognition system has to be able to recognise objects invariant to scale, translation, and rotation. Presumably the human eye implements some of these preprocessing transforms in making sense of incoming stimuli, for example, placing signals onto a log scale. This paper surveys many of the commonly used invariance methods, and asse...

  10. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention to some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...

  11. A novel time series link prediction method: Learning automata approach

    Science.gov (United States)

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2017-09-01

    Link prediction is a main social network challenge that uses the network structure to predict future links. Common link prediction approaches predict hidden links using a static graph representation, where a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a traditional approach that calculates a similarity metric for each non-connected pair of nodes, sorts the pairs by their similarity metrics, and labels the pairs with higher similarity scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs for modeling and analysis of the social network may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict the future links. In this paper, we propose a new time series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series link occurrences are considered.
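    The traditional static baseline described above (score every unconnected pair with a similarity metric and rank) can be sketched in a few lines; this illustrates only that baseline, not the learning-automata chain the authors propose, and uses a stock example graph:

```python
from itertools import combinations

import networkx as nx

def rank_candidate_links(graph, top_k=3):
    """Score each non-connected node pair by its number of common neighbours
    and return the top_k highest-scoring pairs as predicted future links."""
    scores = []
    for u, v in combinations(graph.nodes, 2):
        if not graph.has_edge(u, v):
            common = len(list(nx.common_neighbors(graph, u, v)))
            scores.append(((u, v), common))
    scores.sort(key=lambda item: item[1], reverse=True)
    return scores[:top_k]

g = nx.karate_club_graph()
print(rank_candidate_links(g))
```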

  12. Time Series Analysis of Insar Data: Methods and Trends

    Science.gov (United States)

    Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique

    2015-01-01

    Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
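    Schematically, and with sign conventions that vary between processing chains, the quantities involved are

    $$ \psi \;=\; \operatorname{mod}(\varphi + \pi,\, 2\pi) - \pi, \qquad d_{\mathrm{LOS}} \;\approx\; \frac{\lambda}{4\pi}\,\varphi, $$

    where $\varphi$ is the absolute interferometric phase, $\psi$ the wrapped phase actually observed, $\lambda$ the radar wavelength, and $d_{\mathrm{LOS}}$ the line-of-sight displacement; recovering $\varphi$ from $\psi$ is the unwrapping step the review refers to.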

  13. On run-time exploitation of concurrency

    NARCIS (Netherlands)

    Holzenspies, P.K.F.

    2010-01-01

    The `free' speed-up stemming from ever increasing processor speed is over. Performance increase in computer systems can now only be achieved through parallelism. One of the biggest challenges in computer science is how to map applications onto parallel computers. Concurrency, seen as the set of

  14. Ubuntu Up and Running

    CERN Document Server

    Nixon, Robin

    2010-01-01

    Ubuntu for everyone! This popular Linux-based operating system is perfect for people with little technical background. It's simple to install, and easy to use -- with a strong focus on security. Ubuntu: Up and Running shows you the ins and outs of this system with a complete hands-on tour. You'll learn how Ubuntu works, how to quickly configure and maintain Ubuntu 10.04, and how to use this unique operating system for networking, business, and home entertainment. This book includes a DVD with the complete Ubuntu system and several specialized editions -- including the Mythbuntu multimedia re

  15. Practical data collection : establishing methods and procedures for measuring water clarity and turbidity of storm water run-off from active major highway construction sites.

    Science.gov (United States)

    2014-09-12

    In anticipation of regulation involving a numeric turbidity limit at highway construction sites, research was done into the most appropriate, affordable methods for surface water monitoring. Measuring sediment concentration in streams may be conduc...

  16. A Comparison of Various Forecasting Methods for Autocorrelated Time Series

    Directory of Open Access Journals (Sweden)

    Karin Kandananond

    2012-07-01

    Full Text Available The accuracy of forecasts significantly affects the overall performance of a whole supply chain system. Sometimes, the nature of consumer products might cause difficulties in forecasting for the future demands because of its complicated structure. In this study, two machine learning methods, artificial neural network (ANN) and support vector machine (SVM), and a traditional approach, the autoregressive integrated moving average (ARIMA) model, were utilized to predict the demand for consumer products. The training data used were the actual demand of six different products from a consumer product company in Thailand. Initially, each set of data was analysed using Ljung‐Box‐Q statistics to test for autocorrelation. Afterwards, each method was applied to different sets of data. The results indicated that the SVM method had a better forecast quality (in terms of MAPE) than ANN and ARIMA in every category of products.
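    The autocorrelation screening step mentioned above can be reproduced with the Ljung-Box test available in statsmodels; the demand series below is synthetic and merely stands in for the company data used in the study:

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

# Synthetic, strongly autocorrelated demand series (random walk around 100)
rng = np.random.default_rng(42)
demand = 100 + np.cumsum(rng.standard_normal(120))

result = acorr_ljungbox(demand, lags=[10])
print(result)   # small p-values indicate significant autocorrelation
```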

  17. Simplified scintigraphic methods for measuring gastrointestinal transit times

    DEFF Research Database (Denmark)

    Graff, J; Brinch, K; Madsen, Jan Lysgård

    2000-01-01

    To investigate whether simple transit measurements based on scintigraphy performed only 0, 2, 4 and 24 h after intake of a radiolabelled meal can be used to predict the mean transit time values for the stomach, the small intestine, and the colon, a study was conducted in 16 healthy volunteers....... After ingestion of a meal containing 111indium-labelled water and 99mtechnetium-labelled omelette, imaging was performed at intervals of 30 min until all radioactivity was located in the colon and henceforth at intervals of 24 h until all radioactivity had cleared from the colon. Gastric, small...... intestinal and colonic mean transit times were calculated for both markers and compared with fractional gastric emptying at 2 h, fractional colonic filling at 4 h, and geometric centre of colonic content at 24 h, respectively. Highly significant correlations were found between gastric mean transit time...

  18. Cutibacterium acnes molecular typing: time to standardize the method.

    Science.gov (United States)

    Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S

    2018-03-12

    The Gram-positive, anaerobic/aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin; it is subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher how far specific subgroups of C. acnes are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups: i.e. phylotypes, clonal complexes, and types defined by single-locus sequence typing (SLST). However, as several molecular typing methods have been developed over the last decade, it has become a difficult task to compare the results from one article to another. Based on the scientific literature, the aim of this narrative review is to propose a standardized method to perform molecular typing of C. acnes, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing different typing methods from a critical point of view, emphasizing their advantages and drawbacks, and we identify the most frequently used methods. We propose a consensus algorithm according to the needed phylogeny resolution level. We first propose to use multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogeny investigation including numerous isolates. There is an obvious need to create a consensus about molecular typing methods for C. acnes. This standardization will facilitate the comparison of results between one article and another, and also the interpretation of clinical data. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  19. Running heavy-quark masses in DIS

    International Nuclear Information System (INIS)

    Alekhin, S.; Moch, S.

    2011-07-01

    We report on determinations of the running mass for charm quarks from deep-inelastic scattering reactions. The method provides complementary information on this fundamental parameter from hadronic processes with space-like kinematics. The obtained values are consistent with but systematically lower than the world average as published by the PDG. We also address the consequences of the running mass scheme for heavy-quark parton distributions in global fits to deep-inelastic scattering data. (orig.)
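    For context, at leading order the MS-bar running mass evolves with the strong coupling as

    $$ m(\mu) \;=\; m(\mu_0)\left[\frac{\alpha_s(\mu)}{\alpha_s(\mu_0)}\right]^{\gamma_0/(2\beta_0)}, \qquad \frac{\gamma_0}{2\beta_0} = \frac{12}{25}\ \text{for four active flavours}, $$

    a textbook relation quoted here only as background; the determinations reported in this record rely on higher-order running and scheme details not captured by this one-line formula.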

  20. Luminosity Measurements at LHCb for Run II

    CERN Multimedia

    Coombs, George

    2018-01-01

    A precise measurement of the luminosity is a necessary component of many physics analyses, especially cross-section measurements. At LHCb two different direct measurement methods are used to determine the luminosity: the “van der Meer scan” (VDM) and the “Beam Gas Imaging” (BGI) methods. A combined result from these two methods gave a precision of less than 2% for Run I and efforts are ongoing to provide a similar result for Run II. Fixed target luminosity is determined with an indirect method based on the single electron scattering cross-section.

  1. Simplified scintigraphic methods for measuring gastrointestinal transit times

    DEFF Research Database (Denmark)

    Graff, J; Brinch, K; Madsen, Jan Lysgård

    2000-01-01

    . After ingestion of a meal containing 111indium-labelled water and 99mtechnetium-labelled omelette, imaging was performed at intervals of 30 min until all radioactivity was located in the colon and henceforth at intervals of 24 h until all radioactivity had cleared from the colon. Gastric, small...... intestinal and colonic mean transit times were calculated for both markers and compared with fractional gastric emptying at 2 h, fractional colonic filling at 4 h, and geometric centre of colonic content at 24 h, respectively. Highly significant correlations were found between gastric mean transit time...... and fractional gastric emptying at 2 h (111In: r=0.95, P

  2. SALIVARY ANTIMICROBIAL PROTEIN RESPONSE TO PROLONGED RUNNING

    Directory of Open Access Journals (Sweden)

    Suzanne Schneider

    2013-01-01

    Full Text Available Prolonged exercise may compromise immunity through a reduction of salivary antimicrobial proteins (AMPs). Salivary IgA (IgA) has been extensively studied, but little is known about the effect of acute, prolonged exercise on AMPs including lysozyme (Lys) and lactoferrin (Lac). Objective: To determine the effect of a 50-km trail race on salivary cortisol (Cort), IgA, Lys, and Lac. Methods: 14 subjects (6 females, 8 males) completed a 50-km ultramarathon. Saliva was collected pre, immediately after (post) and 1.5 hrs post race (1.5). Results: Lac concentration was higher at 1.5 hrs post race compared to post exercise (p0.05). IgA concentration, secretion rate, and IgA/Osm were lower 1.5 hrs post compared to pre race (p<0.05). Cort concentration was higher at post compared to 1.5 (p<0.05), but was unaltered from pre race levels. Subjects finished in 7.81 ± 1.2 hrs. Saliva flow rate did not differ between time points. Saliva Osm increased at post (p<0.05) compared to pre race. Conclusions: The intensity could have been too low to alter Lys and Lac secretion rates and thus, may not be as sensitive as IgA to changes in response to prolonged running. Results expand our understanding of the mucosal immune system and may have implications for predicting illness after prolonged running.

  3. Causal Analysis of Railway Running Delays

    DEFF Research Database (Denmark)

    Cerreto, Fabrizio; Nielsen, Otto Anker; Harrod, Steven

    Operating delays and network propagation are inherent characteristics of railway operations. These are traditionally reduced by provision of time supplements or “slack” in railway timetables and operating plans. Supplement allocation policies must trade off reliability in the service commitments...... Denmark (the Danish infrastructure manager). The statistical analysis of the data identifies the minimum running times and the scheduled running time supplements and investigates the evolution of train delays along given train paths. An improved allocation of time supplements would result in smaller...

  4. Three-dimensional impact kinetics with foot-strike manipulations during running

    OpenAIRE

    Andrew D. Nordin; Janet S. Dufek; John A. Mercer

    2017-01-01

    Background: Lack of an observable vertical impact peak in fore/mid-foot running has been suggested as a means of reducing lower extremity impact forces, although it is unclear if impact characteristics exist in other axes. The purpose of the investigation was to compare three-dimensional (3D) impact kinetics among foot-strike conditions in over-ground running using instantaneous loading rate–time profiles. Methods: Impact characteristics were assessed by identifying peak loading rates in e...

  5. Effect of seed collection times and pretreatment methods on ...

    African Journals Online (AJOL)

    STORAGESEVER

    2008-08-18

    Aug 18, 2008 ... Several basic methods are used to overcome seed- coat dormancy in ... The experiment on seed pretreatment were conducted at Forestry. Research ..... applicability to rural areas where these trees are planted may be limited. .... Forestry. Research News: Indicators and Tools for Restoration & Sustainable.

  6. Pharyngeal transit time measured by scintigraphic and biomagnetic method

    International Nuclear Information System (INIS)

    Miquelin, C.A.; Braga, F.J.H.N.; Baffa, O.

    1996-01-01

    A comparative evaluation between the scintigraphic and biomagnetic methods to measure the pharyngeal transit is presented. Three volunteers have been studied. The aliment (yogurt) was labeled with 99mtechnetium for the scintigraphic test and with ferrite for the biomagnetic one. The preliminary results indicate a difference between the values obtained, probably due to the biomagnetic detector resolution

  7. Getting Over Method: Literacy Teaching as Work in "New Times."

    Science.gov (United States)

    Luke, Allan

    1998-01-01

    Shifts the terms of the "great debate" from technical questions about teaching method to questions about how various kinds of literacies work within communities--matters of government cutbacks and institutional downsizing, shrinking resource and taxation bases, and of students, communities, teachers, and schools trying to cope with rapid and…

  8. Time Interval to Initiation of Contraceptive Methods Following ...

    African Journals Online (AJOL)

    Objectives: The objectives of the study were to determine factors affecting the interval between a woman's last childbirth and the initiation of contraception. Materials and Methods: This was a retrospective study. Family planning clinic records of the Barau Dikko Teaching Hospital Kaduna from January 2000 to March 2014 ...

  9. Effect of seed collection times and pretreatment methods on ...

    African Journals Online (AJOL)

    STORAGESEVER

    2008-08-18

    Aug 18, 2008 ... Seeds were subjected to four treatment methods each at four ... were deep-green to brown while second collection was done when all .... discarded and the intact plump seeds were surface sterilized with .... Analysis of variance table for cumulative germination of Terminalia sericea for first seed collection.

  10. A simple time-delayed method to control chaotic systems

    International Nuclear Information System (INIS)

    Chen Maoyin; Zhou Donghua; Shang Yun

    2004-01-01

    Based on the adaptive iterative learning strategy, a simple time-delayed controller is proposed to stabilize unstable periodic orbits (UPOs) embedded in chaotic attractors. This controller includes two parts: one is a linear feedback part; the other is an adaptive iterative learning estimation part. Theoretical analysis and numerical simulation show the effectiveness of this controller
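    A much-reduced relative of the scheme described, plain time-delayed (Pyragas-style) feedback without the adaptive iterative-learning estimator, can be sketched on the logistic map. The gain, gating threshold and map parameter below are illustrative choices, not values from the paper:

```python
import numpy as np

def controlled_logistic(r=3.8, gain=-0.7, steps=5000, threshold=0.05, seed=0.3):
    """Iterate x_{n+1} = r*x_n*(1-x_n) + u_n with delayed feedback
    u_n = gain*(x_{n-1} - x_n), applied only when consecutive values are
    already close (a simple activation gate), so the control vanishes on the
    target orbit and stays small elsewhere."""
    x_prev, x = seed, r * seed * (1 - seed)
    trajectory = [x_prev, x]
    for _ in range(steps):
        u = gain * (x_prev - x) if abs(x_prev - x) < threshold else 0.0
        x_prev, x = x, r * x * (1 - x) + u
        trajectory.append(x)
    return np.array(trajectory)

traj = controlled_logistic()
print(traj[-5:])  # typically settles near the unstable fixed point 1 - 1/r ≈ 0.737
```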

  11. Multi-time-step domain coupling method with energy control

    DEFF Research Database (Denmark)

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...... by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd....

  12. Long-memory time series theory and methods

    CERN Document Server

    Palma, Wilfredo

    2007-01-01

    Wilfredo Palma, PhD, is Chairman and Professor of Statistics in the Department of Statistics at Pontificia Universidad Católica de Chile. Dr. Palma has published several refereed articles and has received over a dozen academic honors and awards. His research interests include time series analysis, prediction theory, state space systems, linear models, and econometrics.

  13. A simple data fusion method for instantaneous travel time estimation

    NARCIS (Netherlands)

    Do, Michael; Pueboobpaphan, R.; Miska, Marc; Kuwahara, Masao; van Arem, Bart; Viegas, J.M.; Macario, R.

    2010-01-01

    Travel time is one of the most understandable parameters to describe traffic condition and an important input to many intelligent transportation systems applications. Direct measurement from Electronic Toll Collection (ETC) system is promising but the data arrives too late, only after the vehicles

  14. An Optimization Method of Time Window Based on Travel Time and Reliability

    OpenAIRE

    Fu, Fengjie; Ma, Dongfang; Wang, Dianhai; Qian, Wei

    2015-01-01

    The dynamic change of urban road travel time was analyzed using video image detector data and showed cyclic variation, so the signal cycle length at the upstream intersection was taken as the basic unit of the time window; there was some evidence of bimodality in the actual travel time distributions, therefore the fitting parameters of the bimodal travel time distribution were estimated using the EM algorithm. Then the weighted average value of the two means was indicated as the travel t...
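    The bimodal fitting step can be illustrated with scikit-learn's EM-based Gaussian mixture model; the travel-time samples below are synthetic and merely stand in for the video-detector data described above:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic travel times within one signal-cycle time window:
# one mode for vehicles stopped by the red phase, one for free-flowing vehicles.
rng = np.random.default_rng(7)
stopped = rng.normal(95.0, 8.0, 120)
free_flow = rng.normal(55.0, 5.0, 180)
travel_times = np.concatenate([stopped, free_flow]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(travel_times)
means = gmm.means_.ravel()
weights = gmm.weights_.ravel()
estimate = float(np.dot(weights, means))   # weighted average of the two means
print(means, weights, estimate)
```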

  15. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  16. Underwater running device

    International Nuclear Information System (INIS)

    Kogure, Sumio; Matsuo, Takashiro; Yoshida, Yoji

    1996-01-01

    An underwater running device for an underwater inspection device for detecting inner surfaces of a reactor or a water vessel has an outer frame and an inner frame, and both of them are connected slidably by an air cylinder and connected rotatably by a shaft. The outer frame has four outer frame legs, and each of the outer frame legs is equipped with a sucker at the top end. The inner frame has four inner frame legs each equipped with a sucker at the top end. The outer frame legs and the inner frame legs are each connected with the outer frame and the inner frame by the air cylinder. The outer and the inner frame legs can be elevated or lowered (or extended or contracted) by the air cylinder. The sucker is connected with a jet pump-type negative pressure generator. The device can run and move by repeating attraction and releasing of the outer frame legs and the inner frame legs alternately while maintaining the posture of the inspection device stably. (I.N.)

  17. Mean platelet volume (MPV) predicts middle distance running performance.

    Science.gov (United States)

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Skafidas, Spyros; Tarperi, Cantor; Guidi, Gian Cesare; Schena, Federico

    2014-01-01

    Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully concluded a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as dependent variable whereas age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running

  18. Mean platelet volume (MPV) predicts middle distance running performance.

    Directory of Open Access Journals (Sweden)

    Giuseppe Lippi

    Full Text Available Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully concluded a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as dependent variable whereas age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running

  19. Optimal control methods for rapidly time-varying Hamiltonians

    International Nuclear Information System (INIS)

    Motzoi, F.; Merkel, S. T.; Wilhelm, F. K.; Gambetta, J. M.

    2011-01-01

    In this article, we develop a numerical method to find optimal control pulses that accounts for the separation of timescales between the variation of the input control fields and the applied Hamiltonian. In traditional numerical optimization methods, these timescales are treated as being the same. While this approximation has had much success, in applications where the input controls are filtered substantially or mixed with a fast carrier, the resulting optimized pulses have little relation to the applied physical fields. Our technique remains numerically efficient in that the dimension of our search space is only dependent on the variation of the input control fields, while our simulation of the quantum evolution is accurate on the timescale of the fast variation in the applied Hamiltonian.

  20. Time Delay Systems Methods, Applications and New Trends

    CERN Document Server

    Vyhlídal, Tomáš; Niculescu, Silviu-Iulian; Pepe, Pierdomenico

    2012-01-01

    This volume is concerned with the control and dynamics of time delay systems; a research field with at least six-decade long history that has been very active especially in the past two decades. In parallel to the new challenges emerging from engineering, physics, mathematics, and economics, the volume covers several new directions including topology induced stability, large-scale interconnected systems, roles of networks in stability, and new trends in predictor-based control and consensus dynamics. The associated applications/problems are described by highly complex models, and require solving inverse problems as well as the development of new theories, mathematical tools, numerically-tractable algorithms for real-time control. The volume, which is targeted to present these developments in this rapidly evolving field, captures a careful selection of the most recent papers contributed by experts and collected under five parts: (i) Methodology: From Retarded to Neutral Continuous Delay Models, (ii) Systems, S...

  1. A new quantum statistical evaluation method for time correlation functions

    International Nuclear Information System (INIS)

    Loss, D.; Schoeller, H.

    1989-01-01

    Considering a system of N identical interacting particles, which obey Fermi-Dirac or Bose-Einstein statistics, the authors derive new formulas for correlation functions of the type $C(t) = \langle \sum_{i=1}^{N} A_i(t) \sum_{j=1}^{N} B_j \rangle$ (where $B_j$ is diagonal in the free-particle states) in the thermodynamic limit. Thereby they apply and extend a superoperator formalism, recently developed for the derivation of long-time tails in semiclassical systems. As an illustrative application, the Boltzmann equation value of the time-integrated correlation function C(t) is derived in a straightforward manner. Due to exchange effects, the obtained t-matrix and the resulting scattering cross section, which occurs in the Boltzmann collision operator, are now functionals of the Fermi-Dirac or Bose-Einstein distribution

  2. Time and data synchronization methods in competition monitoring systems

    OpenAIRE

    Kerys, Julijus

    2005-01-01

    Information synchronization problems are analyzed in this thesis. Two aspects are surveyed: clock synchronization algorithms and their use, and data synchronization together with maintaining software functionality while the connection to the database is broken. Existing products, their uses, pros and cons are reviewed. Models for solving these problems are suggested, which were implemented in the “Distributed basketball competition registration and analysis software system”,...
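
    The thesis abstract mentions clock synchronization algorithms for distributed competition monitoring. As a point of reference, here is a minimal sketch of the standard request/response offset estimate (the NTP-style formula, not necessarily the algorithm used in the thesis), with hypothetical timestamp names.

    ```python
    def clock_offset_and_delay(t0, t1, t2, t3):
        """Estimate client clock offset and round-trip delay from one exchange.

        t0: client send time, t1: server receive time,
        t2: server reply time, t3: client receive time (all in seconds).
        """
        offset = ((t1 - t0) + (t2 - t3)) / 2.0   # how far the client clock lags the server
        delay = (t3 - t0) - (t2 - t1)            # round-trip time excluding server processing
        return offset, delay

    # example: client clock 0.5 s behind the server, 40 ms round trip
    print(clock_offset_and_delay(10.000, 10.520, 10.521, 10.041))
    ```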

  3. Method for Hot Real-Time Sampling of Gasification Products

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot scale plant capable of demonstrating industrially relevant thermochemical technologies for lignocellulosic biomass conversion, including gasification. Gasification creates primarily Syngas (a mixture of Hydrogen and Carbon Monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass derived gasification products are a very complex mixture of chemical components that typically contain Sulfur and Nitrogen species that can act as catalyst poisons for tar reforming and synthesis catalysts. Hot online sampling techniques, such as Molecular Beam Mass Spectrometry (MBMS), and Gas Chromatographs with Sulfur and Nitrogen specific detectors can provide real-time analysis, giving operational indicators of performance. Sampling typically requires coated sampling lines to minimize trace sulfur interactions with steel surfaces. Other materials used inline have also shown conversion of sulfur species into new components and must be minimized. Residence time within the sampling lines must also be kept to a minimum to reduce further reaction chemistries. Solids from ash and char contribute to plugging and must be filtered at temperature. Experience at NREL has shown several key factors to consider when designing and installing an analytical sampling system for biomass gasification products. They include minimizing sampling distance, effective filtering as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.

  4. Using Integration and Autonomy to Teach an Elementary Running Unit

    Science.gov (United States)

    Sluder, J. Brandon; Howard-Shaughnessy, Candice

    2015-01-01

    Cardiovascular fitness is an important aspect of overall fitness, health, and wellness, and running can be an excellent lifetime physical activity. One of the most simple and effective means of exercise, running raises heart rate in a short amount of time and can be done with little to no cost for equipment. There are many benefits to running,…

  5. Barefoot versus shoe running: from the past to the present.

    Science.gov (United States)

    Kaplan, Yonatan

    2014-02-01

    Barefoot running is not a new concept, but relatively few people choose to engage in barefoot running on a regular basis. Despite the technological developments in modern running footwear, as many as 79% of runners are injured every year. Although benefits of barefoot running have been proposed, there are also potential risks associated with it. To review the evidence-based literature concerning barefoot/minimal footwear running and the implications for the practicing physician. Multiple publications were reviewed using an electronic search of databases such as Medline, Cinahl, Embase, PubMed, and Cochrane Database from inception until August 30, 2013 using the search terms barefoot running, barefoot running biomechanics, and shoe vs. barefoot running. Ninety-six relevant articles were found. Most were reviews of biomechanical and kinematic studies. There are notable differences in gait and other parameters between barefoot running and shoe running. Based on these findings and much anecdotal evidence, one could conclude that barefoot runners should have fewer injuries, better performance, or both. Several athletic shoe companies have designed running shoes that attempt to mimic the barefoot condition, and thus garner the purported benefits of barefoot running. Although there is no evidence that confirms or refutes improved performance and reduced injuries in barefoot runners, many of the claimed disadvantages to barefoot running are not supported by the literature. Nonetheless, it seems that barefoot running may be an acceptable training method for athletes and coaches, as it may minimize the risks of injury.

  6. Innovative methods for calculation of freeway travel time using limited data : executive summary report.

    Science.gov (United States)

    2008-08-01

    ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...

  7. The running pattern and its importance in running long-distance gears

    Directory of Open Access Journals (Sweden)

    Jarosław Hoffman

    2017-07-01

    Full Text Available The running pattern is individual for each runner, regardless of distance. It can be characterized as the sum of the runner's data (age, height, training time, etc.) and the parameters of his or her run. Building a proper technique should focus first and foremost on movement coordination and the runner's power. In training the correct running step, tools similar to those used for work on deep (proprioceptive) sensation can be used. The aim of this paper was to define what can be called a running pattern, its influence in long-distance running, and the relationship between technique training and the running pattern. The importance of a running pattern in long-distance racing is immense: the more it is distorted and departs from the norm, the greater the harm its repetition over a long run will cause to the body. Including training exercises that shape the technique is therefore very important and affects the running pattern significantly.

  8. Time-frequency energy density precipitation method for time-of-flight extraction of narrowband Lamb wave detection signals

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y., E-mail: thuzhangyu@foxmail.com; Huang, S. L., E-mail: huangsling@tsinghua.edu.cn; Wang, S.; Zhao, W. [State Key Laboratory of Power Systems, Department of Electrical Engineering, Tsinghua University, Beijing 100084 (China)

    2016-05-15

    The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert–Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.
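
    As a rough sketch of the processing chain described above (short-time Fourier transform, energy density interpolated at the excitation center frequency, least-squares fitting, peak picking), the following uses SciPy; the window length, polynomial fit order, and the choice of returning the dominant peak are assumptions for illustration, not values or details taken from the paper.

    ```python
    import numpy as np
    from scipy.signal import stft

    def tof_energy_density(signal, fs, f_center, nperseg=256, fit_order=8):
        """Estimate the time-of-flight of a narrowband Lamb-wave detection signal."""
        f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
        energy = np.abs(Z) ** 2                       # discrete time-frequency energy density
        # interpolate the energy density at the excitation center frequency
        e_fc = np.array([np.interp(f_center, f, energy[:, i])
                         for i in range(energy.shape[1])])
        # least-squares polynomial fit gives a smooth time-domain energy density curve
        coeffs = np.polyfit(t, e_fc, fit_order)
        curve = np.polyval(coeffs, t)
        # the curve peak (here, the dominant maximum, referenced to t = 0 at the
        # initial pulse) is taken as the time-of-flight
        return t[np.argmax(curve)], t, curve
    ```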

  9. Time-frequency energy density precipitation method for time-of-flight extraction of narrowband Lamb wave detection signals

    International Nuclear Information System (INIS)

    Zhang, Y.; Huang, S. L.; Wang, S.; Zhao, W.

    2016-01-01

    The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert–Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.

  10. Time-of-flight cameras principles, methods and applications

    CERN Document Server

    Hansard, Miles; Choi, Ouk; Horaud, Radu

    2012-01-01

    Time-of-flight (TOF) cameras provide a depth value at each pixel, from which the 3D structure of the scene can be estimated. This new type of active sensor makes it possible to go beyond traditional 2D image processing, directly to depth-based and 3D scene processing. Many computer vision and graphics applications can benefit from TOF data, including 3D reconstruction, activity and gesture recognition, motion capture and face detection. It is already possible to use multiple TOF cameras, in order to increase the scene coverage, and to combine the depth data with images from several colour came
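
    As background to the per-pixel depth idea described above, here is a minimal sketch of how a continuous-wave time-of-flight camera converts a measured phase shift into depth; the modulation frequency in the example is arbitrary and not taken from the book.

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def phase_to_depth(phase_rad, f_mod_hz):
        """Convert per-pixel phase shift of a CW-modulated signal into depth.

        The light travels to the scene and back, so depth = c * phase / (4 * pi * f_mod).
        Depths beyond c / (2 * f_mod) wrap around (phase ambiguity).
        """
        return C * np.asarray(phase_rad) / (4.0 * np.pi * f_mod_hz)

    # example: a 20 MHz modulated camera measuring a phase shift of pi/2
    print(phase_to_depth(np.pi / 2, 20e6))   # ~1.87 m; ambiguity interval ~7.5 m
    ```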

  11. Extraction Method of Driver’s Mental Component Based on Empirical Mode Decomposition and Approximate Entropy Statistic Characteristic in Vehicle Running State

    Directory of Open Access Journals (Sweden)

    Shuan-Feng Zhao

    2017-01-01

    Full Text Available The essence of driver fatigue monitoring technology is to capture and analyze driver behavior information, such as eye, face, heart, and EEG activity during driving. However, ECG and EEG monitoring are limited by the electrodes that must be installed and are not commercially practical. The most common fatigue detection method is the analysis of driver behavior, that is, determining whether the driver is tired by recording and analyzing the behavior characteristics of the steering wheel and brake. The driver usually adjusts his or her actions based on the observed road conditions. Obviously, the road path information is directly contained in the vehicle driving state; to judge the driver's driving behavior from vehicle driving state information, the first task is to remove the road information from the vehicle driving state data. Therefore, this paper proposes an effective intrinsic mode function selection method based on the approximate entropy of empirical mode decomposition, considering the characteristics of the frequency distribution of road and vehicle information and the unsteady and nonlinear characteristics of the driver's closed-loop driving system in the vehicle driving state data. The objective is to extract the effective component of the driving behavior information and to weaken the road information component. Finally, the effectiveness of the proposed method is verified by simulated driving experiments.
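
    The record above describes selecting intrinsic mode functions (IMFs) by their approximate entropy after empirical mode decomposition. A minimal sketch of that selection step follows, assuming the IMFs have already been computed with an EMD implementation; the entropy band used to keep "driver behavior" components is an illustrative assumption, not a threshold from the paper.

    ```python
    import numpy as np

    def approximate_entropy(x, m=2, r_factor=0.2):
        """Approximate entropy (Pincus) of a 1D signal; tolerance r = r_factor * std(x)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()

        def phi(m):
            n = len(x) - m + 1
            # embed the signal into overlapping vectors of length m
            emb = np.array([x[i:i + m] for i in range(n)])
            # fraction of vector pairs within Chebyshev distance r (self-matches included)
            dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            c = (dist <= r).mean(axis=1)
            return np.mean(np.log(c))

        return phi(m) - phi(m + 1)

    def select_behavior_imfs(imfs, low=0.1, high=0.6):
        """Keep IMFs whose approximate entropy lies in an assumed 'driver behavior' band."""
        apen = [approximate_entropy(imf) for imf in imfs]
        keep = [imf for imf, a in zip(imfs, apen) if low <= a <= high]
        selected = np.sum(keep, axis=0) if keep else np.zeros_like(imfs[0])
        return selected, apen
    ```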

  12. Automatic Train Operation Using Autonomic Prediction of Train Runs

    Science.gov (United States)

    Asuka, Masashi; Kataoka, Kenji; Komaya, Kiyotoshi; Nishida, Syogo

    In this paper, we present an automatic train control method adaptable to disturbed train traffic conditions. The proposed method presumes transmission of the detected time of home track clearance to trains approaching the station, using Digital ATC (Automatic Train Control) equipment. Using this information, each train controls its acceleration by a method that consists of two approaches. First, by setting a designated restricted speed, the train controls its running time so as to arrive at the next station in accordance with the predicted delay. Second, the train predicts the time at which it will reach the current braking pattern generated by Digital ATC, along with the time when the braking pattern will move ahead. By comparing them, the train correctly chooses the coasting drive mode in advance to avoid deceleration due to the current braking pattern. We evaluated the effectiveness of the proposed method with respect to driving conditions, energy consumption and reduction of delays by simulation.
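
    A toy sketch of the two decisions described above: picking a restricted speed so the train arrives roughly when the home track clears, and choosing coasting when the train would otherwise reach the current braking pattern before it moves ahead. The kinematics and variable names are illustrative assumptions, not the actual on-board Digital ATC logic.

    ```python
    def restricted_speed(remaining_dist_m, time_until_clear_s, v_max_ms):
        """Choose a speed so the train arrives roughly when the home track becomes clear,
        instead of rushing in and braking to a stop outside the station."""
        if time_until_clear_s <= 0:
            return v_max_ms                      # track already clear: no restriction needed
        return min(remaining_dist_m / time_until_clear_s, v_max_ms)

    def should_coast(t_reach_pattern_s, t_pattern_moves_ahead_s):
        """Coast now if the train would otherwise reach the current braking pattern
        before that pattern is moved ahead (i.e. before the clearance arrives)."""
        return t_reach_pattern_s < t_pattern_moves_ahead_s

    # example: 2 km from the station, track clears in 120 s, 25 m/s line speed
    print(restricted_speed(2000, 120, 25))       # ~16.7 m/s restricted speed
    print(should_coast(45.0, 60.0))              # True -> switch to coasting in advance
    ```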

  13. Can bone marrow edema be seen on STIR images of the ankle and foot after 1 week of running?

    International Nuclear Information System (INIS)

    Trappeniers, L.; Maeseneer, M. de; Ridder, F. de; Machiels, F.; Shahabpour, M.; Tebache, C.; Verhellen, R.; Osteaux, M.

    2003-01-01

    Purpose: To evaluate whether initiation of running in sedentary individuals would lead to bone marrow edema on MR images within the time span of 1 week. Materials and methods: The feet of 10 healthy volunteers were imaged by MR imaging before and after running 30 min a day for 1 week. The images were evaluated by consensus of 2 musculoskeletal radiologists, who graded the presence of bone marrow edema on a 4-point scale. Edema scores and the number of bones involved before and after running were compared statistically. Results: Edema was present on the baseline images in 3 subjects. After running, edema increased or newly appeared in 5 subjects. The changes after running were statistically significant. Bones involved were the talus, calcaneus, navicular bone, cuboid bone, and 5th metatarsal. Conclusion: Edema patterns can be seen in the feet of asymptomatic individuals. During initiation of running, an increase of edema or development of new edema areas can be seen

  14. Barefoot running: biomechanics and implications for running injuries.

    Science.gov (United States)

    Altman, Allison R; Davis, Irene S

    2012-01-01

    Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.

  15. Real-time aircraft continuous descent trajectory optimization with ATC time constraints using direct collocation methods.

    OpenAIRE

    Verhoeven, Ronald; Dalmau Codina, Ramon; Prats Menéndez, Xavier; de Gelder, Nico

    2014-01-01

    In this paper an initial implementation of a real-time aircraft trajectory optimization algorithm is presented. The aircraft trajectory for descent and approach is computed for minimum use of thrust and speed brake in support of a “green” continuous descent and approach flight operation, while complying with ATC time constraints for maintaining runway throughput and co...
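
    The abstract above is truncated, but the core technique it names, direct collocation with a fixed arrival time, can be illustrated on a much simpler system. Below is a minimal sketch under assumed numbers: a double-integrator "descent" whose control effort is minimized subject to a fixed final time standing in for an ATC time constraint; it is not the aircraft model, cost function, or constraint set from the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # toy problem: bring altitude error h from 1000 m to 0 and sink rate v to 0
    # in exactly T seconds (the "ATC time constraint"), minimizing control effort.
    N = 30                   # collocation nodes
    T = 120.0                # fixed arrival time, s
    dt = T / (N - 1)
    h0, v0 = 1000.0, -15.0   # initial altitude error and vertical speed

    def unpack(z):
        return z[:N], z[N:2 * N], z[2 * N:]          # h, v, u

    def objective(z):
        _, _, u = unpack(z)
        return dt * np.sum(u ** 2)                   # approximate integral of u^2

    def defects(z):
        # trapezoidal collocation: state at k+1 must match the integrated dynamics
        h, v, u = unpack(z)
        dh, dv = v, u                                # h' = v, v' = u (commanded acceleration)
        ch = h[1:] - h[:-1] - 0.5 * dt * (dh[1:] + dh[:-1])
        cv = v[1:] - v[:-1] - 0.5 * dt * (dv[1:] + dv[:-1])
        return np.concatenate([ch, cv])

    def boundary(z):
        h, v, _ = unpack(z)
        return np.array([h[0] - h0, v[0] - v0, h[-1], v[-1]])

    z0 = np.concatenate([np.linspace(h0, 0, N), np.linspace(v0, 0, N), np.zeros(N)])
    res = minimize(objective, z0, method="SLSQP",
                   constraints=[{"type": "eq", "fun": defects},
                                {"type": "eq", "fun": boundary}],
                   options={"maxiter": 500})
    h, v, u = unpack(res.x)
    print(res.success, u[:5])
    ```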

  16. Comparison of transfer entropy methods for financial time series

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian

    2017-09-01

    There is a certain relationship between the global financial markets, which creates an interactive network of global finance. Transfer entropy, a measure of information transfer, offers a good way to analyse this relationship. In this paper, we analysed the relationship between 9 stock indices from the U.S., Europe and China (from 1995 to 2015) by using transfer entropy (TE), effective transfer entropy (ETE), Rényi transfer entropy (RTE) and effective Rényi transfer entropy (ERTE). We compared the four methods in terms of their effectiveness in identifying the relationships between stock markets. Two kinds of information flows are given. One reveals that the U.S. takes the leading position in lagged-current cases, but when it comes to the same date, China is the most influential. And ERTE could provide superior results.
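
    As a rough illustration of the baseline quantity compared above, here is a minimal sketch of a histogram-based transfer entropy estimate TE(Y -> X) with equal-frequency binning, plus a shuffle-corrected "effective" variant; the bin count, shuffle count, and correction scheme are assumptions for illustration, not the exact estimators used in the paper.

    ```python
    import numpy as np

    def discretize(x, n_bins=4):
        """Map a series to integer symbols using equal-frequency (quantile) bins."""
        edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(x, edges)

    def transfer_entropy(x, y, n_bins=4):
        """TE(Y -> X) = sum p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ]."""
        xs, ys = discretize(x, n_bins), discretize(y, n_bins)
        x1, x0, y0 = xs[1:], xs[:-1], ys[:-1]
        te = 0.0
        for a in range(n_bins):
            for b in range(n_bins):
                for c in range(n_bins):
                    p_abc = np.mean((x1 == a) & (x0 == b) & (y0 == c))
                    if p_abc == 0:
                        continue
                    p_bc = np.mean((x0 == b) & (y0 == c))
                    p_ab = np.mean((x1 == a) & (x0 == b))
                    p_b = np.mean(x0 == b)
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
        return te

    def effective_transfer_entropy(x, y, n_bins=4, n_shuffles=20, seed=0):
        """ETE: subtract the average TE obtained with the source series shuffled."""
        rng = np.random.default_rng(seed)
        base = transfer_entropy(x, y, n_bins)
        shuffled = np.mean([transfer_entropy(x, rng.permutation(y), n_bins)
                            for _ in range(n_shuffles)])
        return base - shuffled
    ```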

  17. Calf Compression Sleeves Change Biomechanics but Not Performance and Physiological Responses in Trail Running

    Directory of Open Access Journals (Sweden)

    Hugo A. Kerhervé

    2017-04-01

    Full Text Available Introduction: The aim of this study was to determine whether calf compression sleeves (CS) affect physiological and biomechanical parameters, exercise performance, and perceived sensations of muscle fatigue, pain and soreness during prolonged (~2 h 30 min) outdoor trail running. Methods: Fourteen healthy trained males took part in a randomized, cross-over study consisting of two identical 24-km trail running sessions (each including one bout of running at constant rate on moderately flat terrain, and one period of all-out running on hilly terrain) wearing either degressive CS (23 ± 2 mmHg) or control sleeves (CON, <4 mmHg). Running time, heart rate and muscle oxygenation of the medial gastrocnemius muscle (measured using portable near-infrared spectroscopy) were monitored continuously. Muscle functional capabilities (power, stiffness) were determined using 20 s of maximal hopping before and after both sessions. Running biomechanics (kinematics, vertical and leg stiffness) were determined at 12 km·h−1 at the beginning, during, and at the end of both sessions. Exercise-induced Achilles tendon pain and delayed onset calf muscles soreness (DOMS) were assessed using visual analog scales. Results: Muscle oxygenation increased significantly in CS compared to CON at baseline and immediately after exercise (p < 0.05), without any difference in deoxygenation kinetics during the run, and without any significant change in run times. Wearing CS was associated with (i) higher aerial time and leg stiffness in running at constant rate, (ii) lower ground contact time, higher leg stiffness, and higher vertical stiffness in all-out running, and (iii) lower ground contact time in hopping. Significant DOMS were induced in both CS and CON (>6 on a 10-cm scale) with no difference between conditions. However, Achilles tendon pain was significantly lower after the trial in CS than CON (p < 0.05). Discussion: Calf compression did not modify muscle oxygenation during ~2 h 30

  18. Method for Hot Real-Time Sampling of Pyrolysis Vapors

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    Biomass Pyrolysis has been an increasing topic of research, in particular as a replacement for crude oil. This process utilizes moderate temperatures to thermally deconstruct the biomass which is then condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated as additional condensation reactions occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to reduce product composition changes from overheating or partial condensation or plugging of lines from condensed products. Residence times must be kept at a minimum to reduce further reaction chemistries. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines, and must be filtered out at temperature, even with the use of cyclonic separators. A practical approach for considerations and sampling system design, as well as lessons learned are integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.

  19. 30 CFR 48.3 - Training plans; time of submission; where filed; information required; time for approval; method...

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources § 48.3 Training plans; time of submission; where filed; information required; time for approval; method... training plan shall be filed with the District Manager for the area in which the mine is located. (c) Each...

  20. Time-frequency energy density precipitation method for time-of-flight extraction of narrowband Lamb wave detection signals.

    Science.gov (United States)

    Zhang, Y; Huang, S L; Wang, S; Zhao, W

    2016-05-01

    The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert-Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.