WorldWideScience

Sample records for large time steps

  1. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations based on these techniques to use similarly large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
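
    The isokinetic Nosé-Hoover machinery described above is specialized, but the fast/slow force splitting it builds on is the standard reversible multiple time step (r-RESPA) pattern. The sketch below illustrates only that generic splitting, not the authors' resonance-free scheme; the force functions, masses, and step sizes are illustrative placeholders.

```python
# Minimal sketch of a standard reversible multiple time step (r-RESPA) integrator:
# cheap fast forces are integrated with a small inner step, expensive slow forces
# with a larger outer step. Illustrative only; not the isokinetic scheme above.
import numpy as np

def respa_step(x, v, m, fast_force, slow_force, dt_outer, n_inner):
    """One outer step: slow forces at dt_outer, fast forces at dt_outer/n_inner."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * slow_force(x) / m          # half kick from slow forces
    for _ in range(n_inner):                          # inner velocity-Verlet loop on fast forces
        v += 0.5 * dt_inner * fast_force(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * fast_force(x) / m
    v += 0.5 * dt_outer * slow_force(x) / m          # closing half kick from slow forces
    return x, v

# toy usage: stiff harmonic bond (fast) plus weak external spring (slow)
m = 1.0
fast = lambda x: -100.0 * x
slow = lambda x: -0.1 * x
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = respa_step(x, v, m, fast, slow, dt_outer=0.1, n_inner=10)
```

    In the resonance-free variants discussed above, this inner loop is additionally coupled to isokinetic constraints and Nosé-Hoover (Langevin) thermostats, which is what allows the outer step to be pushed far beyond the usual resonance barrier.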

  2. Displacement in the parameter space versus spurious solution of discretization with large time step

    International Nuclear Information System (INIS)

    Mendes, Eduardo; Letellier, Christophe

    2004-01-01

    In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics.

  3. The large discretization step method for time-dependent partial differential equations

    Science.gov (United States)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  4. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length strongly affects the efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be strongly affected by the selected time step length. This study employs the stochastic implicit Euler (SIE) based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations become more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated for by the decrease in computing cost per time step needed to achieve a given accuracy.

  5. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  6. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  7. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  8. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented that is applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.

  9. An adaptive time-stepping strategy for solving the phase field crystal model

    International Nuclear Information System (INIS)

    Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua

    2013-01-01

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can not only resolve the steady state solution but also capture the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
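
    A minimal sketch of an energy-based step selector of the kind described above is given below; the specific formula (smaller steps when |E'(t)| is large, steps approaching Δt_max near steady state) and all constants are illustrative assumptions, not necessarily the authors' choice.

```python
# Hedged sketch of an energy-derivative-based time step selector for a gradient
# flow solved with an unconditionally stable scheme; constants are illustrative.
def adaptive_dt(dE_dt, dt_min=1e-4, dt_max=1e-1, alpha=1e3):
    """Small steps when the energy changes rapidly, large steps near steady state."""
    return max(dt_min, dt_max / (1.0 + alpha * dE_dt**2) ** 0.5)

# usage inside a time loop (``advance`` and ``energy`` are placeholders for the
# unconditionally stable PFC update and the discrete free energy):
#   dt = dt_max
#   while t < T:
#       u_new = advance(u, dt)
#       dEdt = (energy(u_new) - energy(u)) / dt   # finite-difference estimate of E'(t)
#       u, t = u_new, t + dt
#       dt = adaptive_dt(dEdt)
```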

  10. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  11. A parallel nearly implicit time-stepping scheme

    OpenAIRE

    Botchev, Mike A.; van der Vorst, Henk A.

    2001-01-01

    Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large scale problems. One of the major problems here is that implicit time stepping is often difficult to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they are also very well parallelizable. The purpose of this article is to give an overall assessment of the pa...

  12. Symplectic integrators with adaptive time steps

    Science.gov (United States)

    Richardson, A. S.; Finn, J. M.

    2012-01-01

    In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
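
    For reference, the phase-space-dependent case described above amounts to the time reparameterization dt = Δ(q, p) dτ; written out for a Hamiltonian H(q, p), the transformed equations (generally no longer Hamiltonian, as the abstract notes) read:

```latex
% Time reparameterization dt = \Delta(q,p)\,d\tau applied to Hamilton's equations
% for H(q,p); the transformed system is generally no longer Hamiltonian.
\frac{dq}{d\tau} = \Delta(q,p)\,\frac{\partial H}{\partial p}, \qquad
\frac{dp}{d\tau} = -\Delta(q,p)\,\frac{\partial H}{\partial q}, \qquad
\frac{dt}{d\tau} = \Delta(q,p)
```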

  13. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme, based on the direct calculation of the time derivative of the force using Hermite interpolation, is compared to Aarseth's scheme, which uses Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of the integrators is also examined. The Hermite scheme allows a time step twice as large as that of the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed.
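
    For context, the time-step criterion referred to above is commonly written in the N-body literature in the following form; the modified criterion proposed in the paper may differ.

```latex
% Standard Aarseth time-step criterion, as commonly quoted in the N-body
% literature. Here a_i^{(k)} is the k-th time derivative of the acceleration of
% particle i, and \eta (of order 0.01--0.02) is a dimensionless accuracy parameter.
\Delta t_i = \sqrt{ \eta \,
  \frac{ |\mathbf{a}_i| \, |\mathbf{a}_i^{(2)}| + |\mathbf{a}_i^{(1)}|^2 }
       { |\mathbf{a}_i^{(1)}| \, |\mathbf{a}_i^{(3)}| + |\mathbf{a}_i^{(2)}|^2 } }
```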

  14. Time step MOTA thermostat simulation

    International Nuclear Information System (INIS)

    Guthrie, G.L.

    1978-09-01

    The report details the logic, program layout, and operating procedures for the time-step MOTA (Materials Open Test Assembly) thermostat simulation program known as GYRD. It will enable prospective users to understand the operation of the program, run it, and interpret the results. The time-step simulation analysis was the approach chosen to determine the maximum value gain that could be used to minimize steady temperature offset without risking undamped thermal oscillations. The advantage of the GYRD program is that it directly shows hunting, ringing phenomena, and similar events. Programs BITT and CYLB are faster, but do not directly show ringing time.

  15. Multiple time step integrators in ab initio molecular dynamics

    International Nuclear Information System (INIS)

    Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.

    2014-01-01

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.

  16. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.

  17. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
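
    A toy sketch of the level-assignment idea behind LTS follows; the CFL constant, wave speed, and power-of-two level structure are illustrative assumptions, not the implementation described above.

```python
# Illustrative sketch of local time stepping (LTS) level assignment: each element
# is given the largest power-of-two fraction of the coarsest step that still
# satisfies its local CFL limit. Constants and the level layout are assumptions.
import math

def lts_levels(element_sizes, wave_speed, cfl=0.5):
    dt_local = [cfl * h / wave_speed for h in element_sizes]   # per-element CFL limit
    dt_coarse = max(dt_local)
    levels = [math.ceil(math.log2(dt_coarse / dt)) for dt in dt_local]
    # an element at level p is advanced with dt_coarse / 2**p, i.e. 2**p substeps
    return dt_coarse, levels

dt_coarse, levels = lts_levels([1.0, 1.0, 0.01, 0.5], wave_speed=2.0)
```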

  18. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

    The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n × log(n)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms delay. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes; on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes (1 TiB) or 1.3 × 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.

  19. Time step size selection for radiation diffusion calculations

    International Nuclear Information System (INIS)

    Rider, W.J.; Knoll, D.A.

    1999-01-01

    The purpose of this note is to describe a time step control technique as applied to radiation diffusion. Standard practice only provides heuristic criteria related to the relative change in the dependent variables. The authors propose an alternative based on relatively simple physical principles. This time step control applies to methods of solution that are unconditionally stable and that converge the nonlinearities in the governing equations within a time step. Commonly, nonlinearities in the governing equations are evaluated using existing (old time) data. The authors refer to this as the semi-implicit (SI) method. When a method converges the nonlinearities within a time step, the entire governing equation, including all nonlinearities, is self-consistently evaluated using advanced-time data (with appropriate time centering for accuracy).
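
    For contrast, the "standard practice" heuristic that the note argues against can be sketched as below: the step is rescaled so that the relative change of a dependent variable stays near a target fraction per step. All names and constants are illustrative; the physics-based controller proposed by the authors is not reproduced here.

```python
# Sketch of a relative-change time step heuristic for an implicit diffusion solve;
# the target fraction, growth cap, and shrink floor are illustrative assumptions.
import numpy as np

def next_dt(dt, u_new, u_old, target=0.1, grow_cap=2.0, shrink_floor=0.5, floor=1e-30):
    """Rescale dt so the max relative change of u per step stays near `target`."""
    rel_change = np.max(np.abs(u_new - u_old) / np.maximum(np.abs(u_old), floor))
    factor = target / max(rel_change, 1e-12)
    return dt * min(grow_cap, max(shrink_floor, factor))
```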

  20. The effects of age and step length on joint kinematics and kinetics of large out-and-back steps.

    Science.gov (United States)

    Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B

    2008-06-01

    Maximum step length (MSL) is a clinical test that has been shown to correlate with age, various measures of fall risk, and knee and hip joint extension speed, strength, and power capacities, but little is known about the kinematics and kinetics of the large out-and-back step utilized. Body motions and ground reaction forces were recorded for 11 unimpaired younger and 10 older women while attaining maximum step length. Joint kinematics and kinetics were calculated using inverse dynamics. The effects of age group and step length on the biomechanics of these large out-and-back steps were determined. Maximum step length was 40% greater in the younger than in the older women (P<0.0001). Peak knee and hip, but not ankle, angle, velocity, moment, and power were generally greater for younger women and longer steps. After controlling for age group, step length generally explained significant additional variance in hip and torso kinematics and kinetics (incremental R2=0.09-0.37). The young reached their peak knee extension moment immediately after landing of the step out, while the old reached their peak knee extension moment just before the return step liftoff (P=0.03). Maximum step length is strongly associated with hip kinematics and kinetics. Delays in peak knee extension moment that appear to be unrelated to step length, may indicate a reduced ability of older women to rapidly apply force to the ground with the stepping leg and thus arrest the momentum of a fall.

  1. Diffeomorphic image registration with automatic time-step adjustment

    DEFF Research Database (Denmark)

    Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst

    2015-01-01

    In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler's step required to r...... accuracy as a fixed time-step scheme however at a much less computational cost....

  2. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    Science.gov (United States)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then proceed to detail the algorithms embodied in the code EXSHALL, particularly algorithms related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular, we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code with emphasis on the algorithms implemented in the code and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.

  3. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions

  4. An explicit multi-time-stepping algorithm for aerodynamic flows

    OpenAIRE

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows speedups in the order of five with respect to single time stepping are obtained.

  5. The importance of time-stepping errors in ocean models

    Science.gov (United States)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
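
    A minimal sketch of leapfrog time stepping with the RAW filter, in the commonly published form (displacement d = ν/2 (x_{n-1} - 2 x_n + x_{n+1}), with a fraction α applied to the middle level and α - 1 to the newest level; α = 1 recovers the classical RA filter, and values slightly above one half are typically quoted), is given below. The test equation and parameter values are illustrative.

```python
# Hedged sketch of leapfrog time stepping with the Robert-Asselin-Williams (RAW)
# filter for du/dt = f(u); nu and alpha are typical illustrative values
# (alpha = 1 reduces to the classical Robert-Asselin filter).
import numpy as np

def leapfrog_raw(f, u0, dt, nsteps, nu=0.2, alpha=0.53):
    u_prev = u0
    u_curr = u0 + dt * f(u0)                      # start-up step: forward Euler
    for _ in range(nsteps - 1):
        u_next = u_prev + 2.0 * dt * f(u_curr)    # leapfrog step
        d = 0.5 * nu * (u_prev - 2.0 * u_curr + u_next)
        u_curr_f = u_curr + alpha * d             # filter the middle time level...
        u_next_f = u_next + (alpha - 1.0) * d     # ...and the newest level (the RAW part)
        u_prev, u_curr = u_curr_f, u_next_f
    return u_curr

# usage: slowly decaying scalar problem
u_final = leapfrog_raw(lambda u: -0.1 * u, np.array([1.0]), dt=0.1, nsteps=100)
```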

  6. A simple, compact, and rigid piezoelectric step motor with large step size

    Science.gov (United States)

    Wang, Qi; Lu, Qingyou

    2009-08-01

    We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and any direction operability. Although tested in room temperature, it is believed to work in low temperatures, owing to its loose operation conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into almost two halves and clamp holds a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that singly deform the two halves of the piezotube in one direction and recover simultaneously will move the shaft in the opposite direction, and vice versa.

  7. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.

    2009-01-01

    We show that the wave equation solution using a conventional finite‐difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomials terms in the rapid expansion method we can obtain the second order time finite‐difference scheme that is frequently used in more conventional finite‐difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite‐difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post and pre stack migration results.
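
    The key identity behind this construction, for a wave equation ∂_t² p = L p, is the exact two-step relation below; REM approximates the cosine by a Chebyshev expansion with Bessel-function coefficients, while keeping only the first two terms recovers the conventional second-order scheme mentioned in the abstract.

```latex
% Exact two-step relation for \partial_t^2 p = L p; the two-term truncation of
% the cosine gives the standard second-order finite-difference time stepping.
p(t+\Delta t) + p(t-\Delta t) = 2\cos\!\big(\Delta t \sqrt{-L}\big)\, p(t)
  \;\approx\; 2\,p(t) + \Delta t^{2} L\, p(t)
```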

  8. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for

  9. Two Step Acceleration Process of Electrons in the Outer Van Allen Radiation Belt by Time Domain Electric Field Bursts and Large Amplitude Chorus Waves

    Science.gov (United States)

    Agapitov, O. V.; Mozer, F.; Artemyev, A.; Krasnoselskikh, V.; Lejosne, S.

    2014-12-01

    A huge number of different non-linear structures (double layers, electron holes, non-linear whistlers, etc) have been observed by the electric field experiment on the Van Allen Probes in conjunction with relativistic electron acceleration in the Earth's outer radiation belt. These structures, found as short duration (~0.1 msec) quasi-periodic bursts of electric field in the high time resolution electric field waveform, have been called Time Domain Structures (TDS). They can quite effectively interact with radiation belt electrons. Due to the trapping of electrons into these non-linear structures, they are accelerated up to ~10 keV and their pitch angles are changed, especially for low energies (~1 keV). Large amplitude electric field perturbations cause non-linear resonant trapping of electrons into the effective potential of the TDS and these electrons are then accelerated in the non-homogeneous magnetic field. These locally accelerated electrons create the "seed population" of several keV electrons that can be accelerated by coherent, large amplitude, upper band whistler waves to MeV energies in this two step acceleration process. All the elements of this chain acceleration mechanism have been observed by the Van Allen Probes.

  10. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Science.gov (United States)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
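
    The scheduling idea behind hierarchical (block) time steps can be sketched as follows; the power-of-two level layout is the usual convention, but the level assignment, bookkeeping, and kick/drift details of GOTHIC itself are not reproduced here.

```python
# Illustrative block time step scheduling: particle i has dt_i = dt_max / 2**level[i],
# and at a given global substep only the particles whose level is "due" are advanced.
def due_particles(levels, substep, max_level):
    """Indices of particles to advance at this substep of a block of 2**max_level substeps."""
    return [i for i, lev in enumerate(levels)
            if substep % (2 ** (max_level - lev)) == 0]

levels = [0, 2, 1, 2]              # particle 0 steps once per block, particles 1 and 3 four times
max_level = max(levels)
for k in range(2 ** max_level):    # one full block of the largest time step
    active = due_particles(levels, k, max_level)
    # kick/drift only `active` particles here, each with dt_max / 2**levels[i]
```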

  11. Positivity-preserving dual time stepping schemes for gas dynamics

    Science.gov (United States)

    Parent, Bernard

    2018-05-01

    A new approach to discretizing the temporal derivative of the Euler equations, which can be used with dual time stepping, is presented here. The temporal discretization stencil is derived along the lines of the Cauchy-Kowalevski procedure resulting in cross differences in spacetime but with some novel modifications which ensure the positivity of the discretization coefficients. It is then shown that the so-obtained spacetime cross differences result in changes to the wave speeds and can thus be incorporated within Roe or Steger-Warming schemes (with and without reconstruction-evolution) simply by altering the eigenvalues. The proposed approach is advantaged over alternatives in that it is positivity-preserving for the Euler equations. Further, it yields monotone solutions near discontinuities while exhibiting a truncation error in smooth regions less than the one of the second- or third-order accurate backward-difference-formula (BDF) for either small or large time steps. The high resolution and positivity preservation of the proposed discretization stencils are independent of the convergence acceleration technique which can be set to multigrid, preconditioning, Jacobian-free Newton-Krylov, block-implicit, etc. Thus, the current paper also offers the first implicit integration of the time-accurate Euler equations that is positivity-preserving in the strict sense (that is, the density and temperature are guaranteed to remain positive). This is in contrast to all previous positivity-preserving implicit methods which only guaranteed the positivity of the density, not of the temperature or pressure. Several stringent reacting and inert test cases confirm the positivity-preserving property of the proposed method as well as its higher resolution and higher computational efficiency over other second-order and third-order implicit temporal discretization strategies.

  12. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    Directory of Open Access Journals (Sweden)

    Qianghui Zhang

    2016-07-01

    Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) equipped on a near-space platform is more suitable for sustained large-scene imaging compared with its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode that allows the beam of the SAR to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.

  13. Large step-down DC-DC converters with reduced current stress

    International Nuclear Information System (INIS)

    Ismail, Esam H.

    2009-01-01

    In this paper, several DC-DC converters with large voltage step-down ratios are introduced. A simple modification in the output section of the conventional buck and quadratic converters can effectively extend the duty-cycle range. Only two additional components (an inductor and a diode) are necessary for extending the duty-cycle range. The topologies presented in this paper show an improvement in the duty-cycle (about 40%) over the conventional buck and quadratic converters. Consequently, they are well suited for extreme step-down voltage conversion ratio applications. With the extended duty-cycle, the current stress on all components is reduced, leading to a significant reduction in the system losses. The principle of operation, theoretical analysis, and comparison of circuit performance with other step-down converters are discussed with regard to voltage and current stress. Experimental results for one prototype rated at 40 W and operating at 100 kHz are provided in this paper to verify the performance of this new family of converters. The efficiency of the proposed converters is higher than that of the quadratic converters.

  14. Coherent states for the time dependent harmonic oscillator: the step function

    International Nuclear Information System (INIS)

    Moya-Cessa, Hector; Fernandez Guasti, Manuel

    2003-01-01

    We study the time evolution for the quantum harmonic oscillator subjected to a sudden change of frequency. It is based on an approximate analytic solution to the time dependent Ermakov equation for a step function. This approach allows for a continuous treatment that differs from former studies that involve the matching of two time independent solutions at the time when the step occurs.

  15. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    Science.gov (United States)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as the propellant feedline system, are complex processes involving fast phases followed by slow phases. Therefore, their time accurate computation requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback control based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chill down in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
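
    Feedback-based step control is often realized as a proportional-integral (PI) controller acting on a local error estimate; the sketch below shows that generic form with illustrative gains and limits, and is not necessarily the specific algorithm of the paper.

```python
# Generic PI (proportional-integral) step-size controller for feedback-based
# time step adaptation; gains, safety factor, and clamps are illustrative.
def pi_controller(dt, err, err_prev, tol, order, kP=0.075, kI=0.175, safety=0.9):
    """Return the next time step from the current and previous local error estimates.

    Initialize err_prev with tol on the first step.
    """
    if err <= 0.0:
        return 2.0 * dt                                    # error negligible: grow freely
    factor = safety * (tol / err) ** (kI / order) * (err_prev / err) ** (kP / order)
    return dt * min(2.0, max(0.5, factor))                 # limit growth/shrink per step
```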

  16. Large-area gold nanohole arrays fabricated by one-step method for surface plasmon resonance biochemical sensing.

    Science.gov (United States)

    Qi, Huijie; Niu, Lihong; Zhang, Jie; Chen, Jian; Wang, Shujie; Yang, Jingjing; Guo, Siyi; Lawson, Tom; Shi, Bingyang; Song, Chunpeng

    2018-04-01

    Surface plasmon resonance (SPR) nanosensors based on metallic nanohole arrays have been widely reported to detect binding interactions in biological specimens. A simple and effective method for constructing nanoscale arrays is essential for the development of SPR nanosensors. In this work, we report a one-step method to fabricate nanohole arrays by thermal nanoimprinting in the matrix of IPS (Intermediate Polymer Stamp). No additional etching process or supporting substrate is required. The preparation process is simple, time-saving and compatible with roll-to-roll processing, potentially allowing mass production. Moreover, the nanohole arrays were integrated into a detection platform as SPR sensors to investigate different types of biological binding interactions. The results demonstrate that our one-step method can be used to efficiently fabricate large-area and uniform nanohole arrays for biochemical sensing.

  17. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step

  18. [Collaborative application of BEPS at different time steps.

    Science.gov (United States)

    Lu, Wei; Fan, Wen Yi; Tian, Tian

    2016-09-01

    BEPSHourly simulates the ecological and physiological processes of vegetation at hourly time steps; because of its more complex model structure and time-consuming solution process, it is often applied at site scale to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP). However, the daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, as it does not involve many iterative processes. It is therefore suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposes a method for the collaborative application of BEPS at daily and hourly time steps. Firstly, BEPSHourly is used to optimize the main photosynthetic parameters, the maximum rate of carboxylation (Vcmax) and the maximum rate of photosynthetic electron transport (Jmax), at site scale; the two optimized parameters are then introduced into the BEPSDaily model to estimate NPP at regional scale. The results showed that optimization of the main photosynthesis parameters based on the flux data could improve the simulation ability of the model. In 2011, the primary productivity of the different forest types, in descending order, was deciduous broad-leaved forest, mixed forest, and coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study could effectively optimize the main photosynthesis parameters Vcmax and Jmax, simulate the monthly averaged diurnal GPP and NPP, calculate the regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.

  19. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    Science.gov (United States)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and subsequently the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for

  20. Parallel time domain solvers for electrically large transient scattering problems

    KAUST Repository

    Liu, Yang

    2014-09-26

    Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and by implicitly time advancing electric surface current densities through iteratively solving sparse systems of equations at all time steps. Contrary to finite difference and element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementations of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain (PWTD) algorithm on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedup compared to serial implementations, while the distributed parallel implementation is highly scalable to thousands of compute nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.

  1. Methods for growth of relatively large step-free SiC crystal surfaces

    Science.gov (United States)

    Neudeck, Philip G. (Inventor); Powell, J. Anthony (Inventor)

    2002-01-01

    A method for growing arrays of large-area, device-size films of step-free (i.e., atomically flat) SiC surfaces for semiconductor electronic device applications is disclosed. This method utilizes a lateral growth process that better overcomes the effect of extended defects in the seed crystal substrate that limited the step-free area achievable by prior art processes. The step-free SiC surface is particularly suited for the heteroepitaxial growth of 3C (cubic) SiC, AlN, and GaN films used for the fabrication of both surface-sensitive devices (i.e., surface channel field effect transistors such as HEMTs and MOSFETs) as well as high-electric-field devices (pn diodes and other solid-state power switching devices) that are sensitive to extended crystal defects.

  2. Studies on steps affecting tritium residence time in solid blanket

    International Nuclear Information System (INIS)

    Tanaka, Satoru

    1987-01-01

    For the self-sustaining of the CTR fuel cycle, effective tritium recovery from blankets is essential. This means not only that the tritium breeding ratio must be larger than 1.0, but also that a high recovery speed is required so that the residence time of tritium in the blanket is short. A short residence time means that the tritium inventory in the blanket is small. In this paper, the tritium residence time and tritium inventory in a solid blanket are modeled by considering the steps constituting tritium release. Some of these tritium migration processes were evaluated experimentally. The tritium migration steps in a solid blanket using sintered breeding materials consist of diffusion in grains, desorption at grain edges, diffusion and permeation through grain boundaries, desorption at particle edges, diffusion and percolation through interconnected pores to the purging stream, and convective mass transfer to the stream. Corresponding to these steps, diffusive, soluble, adsorbed and trapped tritium inventories and the tritium in the gas phase are conceivable. The code named TTT was written to calculate these tritium inventories and the residence time of tritium. An example of the results of the calculation is shown. The blanket is that of REPUTER-1, the conceptual design of a commercial reversed field pinch fusion reactor studied at the University of Tokyo. The experimental studies on the migration steps of tritium are reported. (Kako, I.)

  3. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In a flexible automatic line, the equipment is complex and the control modes are flexible; realizing information interaction and orderly control among a large number of stepping and servo motors therefore becomes difficult. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. An Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data interaction efficiency of the equipment and provide stable data exchange.

  4. Time dependent theory of two-step absorption of two pulses

    Energy Technology Data Exchange (ETDEWEB)

    Rebane, Inna, E-mail: inna.rebane@ut.ee

    2015-09-25

    A time dependent theory of two-step absorption of two different light pulses with arbitrary duration in an electronic three-level model is proposed. The probability that the third level is excited at the moment t is found as a function of the time delay between the pulses, the spectral widths of the pulses, and the energy relaxation constants of the excited electronic levels. The time dependent perturbation theory is applied without using the “doorway–window” approach. The temporal and spectral behavior of the spectrum is analyzed using a model that is as simple as possible. - Highlights: • A time dependent theory of two-step absorption in the three-level model is proposed. • Two different light pulses with arbitrary duration are considered. • The time dependent perturbation theory is applied without the “doorway–window” approach. • The temporal and spectral behavior of the spectra is analyzed for several cases.

  5. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-01-01

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  6. Large eddy simulation of turbulent premixed combustion flows over backward facing step

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nam Seob [Yuhan University, Bucheon (Korea, Republic of); Ko, Sang Cheol [Jeju National University, Jeju (Korea, Republic of)

    2011-03-15

    Large eddy simulation (LES) of turbulent premixed combustion flows over a backward-facing step has been performed using a dynamic sub-grid G-equation flamelet model. A flamelet model for the premixed flame is combined with a dynamic sub-grid combustion model for the filtered propagation of the flame speed. The objective of this study is to investigate the validity of the dynamic sub-grid G-equation model in a complex turbulent premixed combustion flow. For the purpose of validating the LES combustion model, the LES of the isothermal and reacting shear layer formed at a backward-facing step is carried out. The calculated results are compared with the experimental results, and a good agreement is obtained.

  7. Large eddy simulation of turbulent premixed combustion flows over backward facing step

    International Nuclear Information System (INIS)

    Park, Nam Seob; Ko, Sang Cheol

    2011-01-01

    Large eddy simulation (LES) of turbulent premixed combustion flows over a backward-facing step has been performed using a dynamic sub-grid G-equation flamelet model. A flamelet model for the premixed flame is combined with a dynamic sub-grid combustion model for the filtered propagation of the flame speed. The objective of this study is to investigate the validity of the dynamic sub-grid G-equation model in a complex turbulent premixed combustion flow. For the purpose of validating the LES combustion model, the LES of the isothermal and reacting shear layer formed at a backward-facing step is carried out. The calculated results are compared with the experimental results, and a good agreement is obtained.

  8. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    International Nuclear Information System (INIS)

    Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.

    2017-01-01

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.

  9. Grief: Difficult Times, Simple Steps.

    Science.gov (United States)

    Waszak, Emily Lane

    This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…

  10. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    Science.gov (United States)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
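
    The abstract's central point, adaptive time stepping for a stochastic differential equation, can be illustrated with a small sketch that is unrelated to the actual Fortran 95 module: a scalar Itô SDE is advanced with Euler-Maruyama, and each step is accepted or rejected by comparing one full step against two half steps driven by the same Brownian increments. The drift, diffusion, tolerances and growth factors below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(x):          # hypothetical slowing-down drift
    return -1.0 * x

def diffusion(x):      # hypothetical (constant) diffusion amplitude
    return 0.5

def em_step(x, dt, dw):
    """One Euler-Maruyama step with a prescribed Wiener increment dw."""
    return x + drift(x) * dt + diffusion(x) * dw

def adaptive_em(x0, t_end, dt0=0.1, tol=1e-3, dt_min=1e-6):
    t, x, dt = 0.0, x0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        # Draw two half-step Wiener increments; their sum drives the full step,
        # so coarse and fine solutions see the same Brownian path.
        dw1 = rng.normal(0.0, np.sqrt(dt / 2))
        dw2 = rng.normal(0.0, np.sqrt(dt / 2))
        coarse = em_step(x, dt, dw1 + dw2)
        fine = em_step(em_step(x, dt / 2, dw1), dt / 2, dw2)
        err = abs(fine - coarse)
        if err <= tol or dt <= dt_min:
            t, x = t + dt, fine                  # accept the finer value
            if err < 0.5 * tol:
                dt *= 1.5                        # grow when comfortably accurate
        else:
            dt = max(dt / 2, dt_min)             # reject and retry with a smaller step
    return x

print(adaptive_em(x0=1.0, t_end=5.0))
```

    Careless adaptation, for example reusing a rejected step's increments at a new step size without properly conditioning the Brownian path, distorts the statistics of the noise, which illustrates the kind of pitfall the abstract warns about.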

  11. Solving point reactor kinetic equations by time step-size adaptable numerical methods

    International Nuclear Information System (INIS)

    Liao Chaqing

    2007-01-01

    Based on an analysis of the effects of time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The point reactor kinetics equations (PRKEs) were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, are also studied and discussed. (authors)
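
    As a concrete illustration of step-size adaptation with an embedded Runge-Kutta pair, the sketch below integrates the point reactor kinetics equations with a single delayed-neutron group using SciPy's RK45 solver, whose embedded error estimate plays the role of the control error discussed above. The kinetics parameters and the reactivity step are illustrative values, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-delayed-group point reactor kinetics (illustrative parameters):
#   dn/dt = ((rho - beta)/Lambda) * n + lam * C
#   dC/dt = (beta/Lambda) * n - lam * C
beta, Lambda, lam = 0.0065, 1e-4, 0.08   # delayed fraction, generation time (s), decay constant (1/s)
rho = 0.5 * beta                         # a 0.5$ step reactivity insertion

def prke(t, y):
    n, C = y
    return [((rho - beta) / Lambda) * n + lam * C,
            (beta / Lambda) * n - lam * C]

y0 = [1.0, beta / (Lambda * lam)]        # equilibrium precursor concentration for n = 1

# RK45 is an embedded 4(5) pair: the difference between its two estimates
# drives the step-size adaptation through rtol/atol.
sol = solve_ivp(prke, (0.0, 10.0), y0, method="RK45", rtol=1e-6, atol=1e-9)
print(f"accepted steps: {sol.t.size - 1}, final relative power: {sol.y[0, -1]:.3f}")
```

    Tightening rtol forces small steps during the prompt jump just after the reactivity insertion, while the solver takes much larger steps once the slow, delayed-neutron-dominated transient takes over.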

  12. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Science.gov (United States)

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H., .OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, the use of conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique for calculating radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for 2-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for the modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
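
    The difference between the two approaches can be made concrete with a toy step-by-step (SBS) loop: every species takes a Gaussian diffusion jump each time step, and a pair is considered reacted when it ends a step within a reaction radius. This is a deliberate simplification that ignores the exact Green's-function sampling and partially diffusion-controlled reactions discussed above; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, R = 2.8e-9, 1e-12, 0.5e-9            # diffusion coeff (m^2/s), time step (s), reaction radius (m)

pos = rng.normal(0.0, 2e-9, size=(200, 3))  # initial positions of radicals in a spur
alive = np.ones(len(pos), dtype=bool)

for step in range(1000):
    # diffusion jump: each coordinate moves by N(0, sqrt(2*D*dt))
    pos[alive] += rng.normal(0.0, np.sqrt(2 * D * dt), size=(alive.sum(), 3))
    idx = np.flatnonzero(alive)
    # brute-force pair check; production codes use neighbor lists or the IRT method
    dists = np.linalg.norm(pos[idx, None, :] - pos[None, idx, :], axis=-1)
    i, j = np.where(np.triu(dists < R, k=1))
    alive[idx[i]] = False                    # remove both reaction partners
    alive[idx[j]] = False

print("surviving particles after", 1000 * dt, "s:", alive.sum())
```

    An IRT code would instead sample a reaction time for each pair up front and realize only the earliest ones, which is why it is much faster but cannot return particle positions as a function of time.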

  13. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.

  14. The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays

    Energy Technology Data Exchange (ETDEWEB)

    Gadella, M. [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain); Kuru, Ş. [Department of Physics, Faculty of Science, Ankara University, 06100 Ankara (Turkey); Negro, J., E-mail: jnegro@fta.uva.es [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain)

    2017-04-15

    We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and is of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller is the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.

  15. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    Science.gov (United States)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
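
    One simple way to couple the time steps of neighboring patches, in the spirit of the constraint described above (the exact rule used by the authors may differ), is to cap each patch's locally CFL-limited step at a fixed multiple of its neighbors' steps and sweep until nothing changes, so that the cap propagates outward from the most restrictive patch. The max_ratio value and the patch graph below are assumptions for illustration.

```python
import numpy as np

def constrain_local_steps(dt_local, neighbors, max_ratio=2.0):
    """Cap each patch's time step at max_ratio times the smallest neighboring step.

    dt_local  : locally CFL-limited time step of each patch
    neighbors : dict mapping a patch index to the indices of adjacent patches
    """
    dt = np.asarray(dt_local, dtype=float).copy()
    changed = True
    while changed:                      # repeat until the caps stop changing
        changed = False
        for p, nbrs in neighbors.items():
            cap = max_ratio * min(dt[n] for n in nbrs)
            if dt[p] > cap:
                dt[p] = cap
                changed = True
    return dt

# Four patches in a row; patch 0 contains a strong shock and needs tiny steps.
dt_cfl = [1e-4, 8e-3, 8e-3, 8e-3]
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(constrain_local_steps(dt_cfl, nbrs))   # steps now grow by at most 2x per patch
```

    Without some such constraint, a patch far from the shock could take a step so large that waves launched in the small-step region are not resolved as they cross it, which is the kind of weak CFL violation the paper sets out to eliminate.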

  16. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    Science.gov (United States)

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  17. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    Science.gov (United States)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    Growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with a single-screw system, the open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and the sliding mesh method, considering the effect of the computational time step size and the turbulence model. The validation study has been performed on two sets of contra-rotating propellers developed by the David W Taylor Naval Ship R & D Center. Compared with the experimental data, it shows that RANS with the sliding mesh method and the SST k-ω turbulence model has good precision in the open water performance prediction of contra-rotating propellers, and that a small time step size can improve the level of accuracy for CRPs with the same blade number of forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  18. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    Science.gov (United States)

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  19. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    Directory of Open Access Journals (Sweden)

    Craig Cora L

    2011-06-01

    Full Text Available Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent television watching were estimated using logistic regression for complex samples. Results Girls had a lower median steps/day (10682 versus 11059 for boys) and also a narrower variation in steps/day (interquartile range, 4410 versus 5309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Both boys and girls watched, on average, Discussion Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and therefore can be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by promoting excess energy intake. Conclusions In this large nationally representative sample, pedometer-determined steps/day was associated with reduced odds of being obese (but not overweight) whereas each parent-reported hour spent watching television between school and dinner increased the odds of both overweight and obesity.

  20. Biomechanical influences on balance recovery by stepping.

    Science.gov (United States)

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.

  1. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    Directory of Open Access Journals (Sweden)

    Mona Hassan Aburahma

    2015-07-01

    Full Text Available Like most of the pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in students’ enrollment annually due to the large youth population, accompanied with the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures in relation to student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported feebleness underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints, nevertheless, it can be applied in lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures as well as its limitations and ways to overcome them are presented briefly. The impact of applying this model on students’ engagement and learning is currently being investigated.

  2. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently for one-step prediction, that is, predicting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and uncertainty or error accumulation. The main existing approaches, iterative and independent, either use a one-step model recursively or treat the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented into various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in its component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
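
    The two baseline strategies named above, iterative (recursive one-step) and independent (direct per-horizon) forecasting, are easy to contrast with ordinary least-squares AR models on a synthetic series. The sketch below is not the VLM model itself; the series, AR order and horizon are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic AR(2)-like series standing in for FX rates or temperatures
x = np.zeros(600)
for t in range(2, 600):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.normal(scale=0.1)

p, H = 4, 10                       # AR order and forecast horizon
train, test_start = x[:500], 500

def embed(series, order, shift):
    """Rows of lagged values paired with the target `shift` steps ahead."""
    X = np.column_stack([series[order - k - 1:len(series) - k - shift]
                         for k in range(order)])
    y = series[order + shift - 1:]
    return X, y

# Iterative strategy: fit a one-step model, feed its predictions back in.
X1, y1 = embed(train, p, 1)
w1, *_ = np.linalg.lstsq(np.column_stack([X1, np.ones(len(X1))]), y1, rcond=None)
hist = list(train[-p:][::-1])
iterative = []
for _ in range(H):
    pred = w1[:p] @ np.array(hist[:p]) + w1[p]
    iterative.append(pred)
    hist.insert(0, pred)

# Direct (independent) strategy: fit a separate model for each horizon h.
direct = []
for h in range(1, H + 1):
    Xh, yh = embed(train, p, h)
    wh, *_ = np.linalg.lstsq(np.column_stack([Xh, np.ones(len(Xh))]), yh, rcond=None)
    direct.append(wh[:p] @ train[-p:][::-1] + wh[p])

truth = x[test_start:test_start + H]
print("iterative RMSE:", np.sqrt(np.mean((np.array(iterative) - truth) ** 2)))
print("direct    RMSE:", np.sqrt(np.mean((np.array(direct) - truth) ** 2)))
```

    The iterative model feeds its own predictions back as inputs, so its errors compound with the horizon, whereas each direct model is trained on exactly the horizon it must predict but ignores the dependencies among the predicted points, which is what the VLM approach above aims to preserve.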

  3. Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET

    Directory of Open Access Journals (Sweden)

    B. Ghahraman

    2016-02-01

    .7. Therefore, nine different classes were formed by combination of three crop types and three soil class types. Then, the results of the numerical methods were compared to the analytical solution of the soil moisture differential equation as a datum. Three factors (time step, initial soil water content, and maximum evaporation ETc) were considered as influencing variables. Results and Discussion: It was clearly shown that as the crop becomes more sensitive, the dependency of ETa on ETc increases. The same is true as the soil becomes more fine-textured. The results showed that as water stress progresses during the time step, the relative errors of the ET computed by the different numerical methods did not depend on the initial soil moisture. Overall, and irrespective of soil type, crop type, and numerical method, the relative error increased with increasing time step and/or increasing ETc. Overall, the absolute errors were negative for the implicit Euler and third-order Heun methods, while for the other methods they were positive. There was a systematic trend in the relative error, as it increased with sandier soil and/or crop sensitivity. The absolute errors of the ET computations decreased with consecutive time steps, which ensures the stability of the water balance predictions. It was not possible to prescribe a unique numerical method when considering all variables. For comparing the numerical methods, however, we took the largest relative error, corresponding to a 10-day time step and ETc equal to 12 mm.d-1, while considering soil and crop types as variable. Explicit Euler was unstable and its relative error varied between 40% and 150%. Implicit Euler was robust and its relative error was around 20% for all combinations of soil and crop types. An unstable pattern was observed for the modified Euler method: its relative error was as low as 10% only for two cases, while overall it ranged between 20% and 100%. Although the relative errors of the third-order Heun method were the smallest among all the methods, its robustness was not as good as that of the implicit Euler method. Excluding one large
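
    The sensitivity to method and time step reported above can be reproduced in miniature with a linear root-zone depletion equation, dθ/dt = −k(θ − θ_w), which has an exact exponential solution. This toy problem is not the paper's soil-moisture model; k, θ_w, θ0 and the time steps are invented for illustration.

```python
import numpy as np

# Toy stand-in for root-zone water depletion:
#   d(theta)/dt = -k * (theta - theta_w),  exact solution is exponential decay.
k, theta_w, theta0, T = 0.35, 0.10, 0.30, 10.0   # 1/day, cm3/cm3, cm3/cm3, days

def explicit_euler(dt):
    theta = theta0
    for _ in range(int(round(T / dt))):
        theta += dt * (-k * (theta - theta_w))
    return theta

def implicit_euler(dt):
    theta = theta0
    for _ in range(int(round(T / dt))):
        # linear problem, so the implicit update has a closed form
        theta = (theta + dt * k * theta_w) / (1.0 + dt * k)
    return theta

exact = theta_w + (theta0 - theta_w) * np.exp(-k * T)
for dt in (0.25, 1.0, 2.5, 10.0):                # days; 10 days = one large step
    ee, ie = explicit_euler(dt), implicit_euler(dt)
    print(f"dt={dt:5.2f}  explicit err={100*(ee-exact)/exact:+7.1f}%"
          f"  implicit err={100*(ie-exact)/exact:+7.1f}%")
```

    As in the study, the explicit Euler error grows quickly with the time step (and the scheme becomes unstable once k·Δt exceeds 2), while the implicit Euler update remains bounded and merely loses accuracy at the 10-day step.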

  4. Associations of office workers' objectively assessed occupational sitting, standing and stepping time with musculoskeletal symptoms.

    Science.gov (United States)

    Coenen, Pieter; Healy, Genevieve N; Winkler, Elisabeth A H; Dunstan, David W; Owen, Neville; Moodie, Marj; LaMontagne, Anthony D; Eakin, Elizabeth A; O'Sullivan, Peter B; Straker, Leon M

    2018-04-22

    We examined the association of musculoskeletal symptoms (MSS) with workplace sitting, standing and stepping time, as well as sitting and standing time accumulation (i.e. usual bout duration of these activities), measured objectively with the activPAL3 monitor. Using baseline data from the Stand Up Victoria trial (216 office workers, 14 workplaces), cross-sectional associations of occupational activities with self-reported MSS (low-back, upper and lower extremity symptoms in the last three months) were examined using probit regression, correcting for clustering and adjusting for confounders. Sitting bout duration was significantly (p < 0.05) associated, non-linearly, with MSS, such that those in the middle tertile displayed the highest prevalence of upper extremity symptoms. Other associations were non-significant but sometimes involved large differences in symptom prevalence (e.g. 38%) by activity. Though causation is unclear, these non-linear associations suggest that sitting and its alternatives (i.e. standing and stepping) interact with MSS and this should be considered when designing safe work systems. Practitioner summary: We studied associations of objectively assessed occupational activities with musculoskeletal symptoms in office workers. Workers who accumulated longer sitting bouts reported fewer upper extremity symptoms. Total activity duration was not significantly associated with musculoskeletal symptoms. We underline the importance of considering total volumes and patterns of activity time in musculoskeletal research.

  5. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea

    2014-10-31

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  6. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun

    2014-01-01

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  7. Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    H. Vincent Poor

    2008-05-01

    Full Text Available In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA and angle-of-arrival (AOA, in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by considering a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. The simulation results are presented to analyze the performance of the estimator.
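
    A stripped-down version of the two-step idea, a coarse stage that picks the energy block containing the pulse from low-rate energy samples and a fine stage that searches for the leading edge only inside that block, can be sketched as follows. The synthetic pulse, thresholds and block size are assumptions, and the fine stage here uses a simple threshold crossing rather than the hypothesis-testing detector of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 2e9                                   # sample rate (illustrative)
n = 4000
true_toa = 1517                            # samples
sig = np.zeros(n)
sig[true_toa:true_toa + 8] = [0.3, 1.0, 0.7, -0.5, 0.4, -0.2, 0.1, 0.05]  # fake UWB pulse
rx = sig + rng.normal(0.0, 0.08, n)        # received waveform with noise

# Step 1: coarse TOA from block energies (low-rate processing).
block = 64
energy = (rx[: n - n % block] ** 2).reshape(-1, block).sum(axis=1)
noise_floor = np.median(energy)
coarse_block = np.argmax(energy > 4.0 * noise_floor)     # first block above threshold

# Step 2: fine search for the leading edge inside (and just before) that block.
lo = max(coarse_block * block - block // 2, 0)
hi = min((coarse_block + 1) * block, n)
seg = np.abs(rx[lo:hi])
fine_toa = lo + np.argmax(seg > 5.0 * np.std(rx[:500]))  # leading-edge threshold

print("true:", true_toa, " coarse block start:", coarse_block * block, " fine:", fine_toa)
print("fine error (ns):", (fine_toa - true_toa) / fs * 1e9)
```

    Because the first stage works on block energies rather than on every sample, the expensive fine search is confined to a window of a few tens of samples, which is what makes the two-step estimator fast.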

  8. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Science.gov (United States)

    2010-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... WATER REGULATIONS Control of Lead and Copper § 141.81 Applicability of corrosion control treatment steps...). (ii) A report explaining the test methods used by the water system to evaluate the corrosion control...

  9. Some Comments on the Behavior of the RELAP5 Numerical Scheme at Very Small Time Steps

    International Nuclear Information System (INIS)

    Tiselj, Iztok; Cerne, Gregor

    2000-01-01

    The behavior of the RELAP5 code at very short time steps is described, i.e., δt ≈ 0.01 δx/c. First, the property of the RELAP5 code to trace acoustic waves with 'almost' second-order accuracy is demonstrated. Quasi-second-order accuracy is usually achieved for acoustic waves at very short time steps but can never be achieved for the propagation of nonacoustic temperature and void fraction waves. While this feature may be beneficial for the simulations of fast transients describing pressure waves, it also has an adverse effect: The lack of numerical diffusion at very short time steps can cause typical second-order numerical oscillations near steep pressure jumps. This behavior explains why an automatic halving of the time step, which is used in RELAP5 when numerical difficulties are encountered, in some cases leads to the failure of the simulation. Second, the integration of the stiff interphase exchange terms in RELAP5 is studied. For transients with flashing and/or rapid condensation as the main phenomena, results strongly depend on the time step used. Poor accuracy is achieved with 'normal' time steps (δt ≈ δx/v) because of the very short characteristic timescale of the interphase mass and heat transfer sources. In such cases significantly different results are predicted with very short time steps because of the more accurate integration of the stiff interphase exchange terms

  10. Multi-time-step domain coupling method with energy control

    DEFF Research Database (Denmark)

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...... by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd....

  11. Sharing Steps in the Workplace: Changing Privacy Concerns Over Time

    DEFF Research Database (Denmark)

    Jensen, Nanna Gorm; Shklovski, Irina

    2016-01-01

    study of a Danish workplace participating in a step counting campaign. We find that concerns of employees who choose to participate and those who choose not to differ. Moreover, privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers...

  12. Development of a real time activity monitoring Android application utilizing SmartStep.

    Science.gov (United States)

    Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward

    2016-08-01

    Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented developmental aspects of an insole-based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real-time feedback. The development of the activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform the activities of sitting, standing, walking, and cycling while they wore the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of the machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real-time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.
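
    The classification stage can be illustrated with scikit-learn's multinomial logistic regression, which is closely related to the Multinomial Logistic Discrimination model mentioned above. The window features below are fabricated stand-ins for the SmartStep pressure and motion statistics, and the validation protocol is simplified to k-fold cross validation rather than leave-one-out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
ACTIVITIES = ["sitting", "standing", "walking", "cycling"]

# Fabricated window features standing in for SmartStep sensor statistics:
# each row is (mean, std, range) of a channel over a time window.
def fake_windows(label, n):
    base = {"sitting": 0.2, "standing": 0.5, "walking": 1.5, "cycling": 1.0}[label]
    mean = rng.normal(base, 0.1, n)
    std = rng.normal(base / 2, 0.05, n)
    rng_feat = rng.normal(base, 0.2, n)
    return np.column_stack([mean, std, rng_feat])

X = np.vstack([fake_windows(a, 200) for a in ACTIVITIES])
y = np.repeat(np.arange(len(ACTIVITIES)), 200)

# Multinomial logistic regression, in the spirit of the MLD classifier above.
clf = LogisticRegression(max_iter=2000)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

    In a deployment like the one described, the fitted coefficients can be exported to the phone so that each incoming feature window is scored with a handful of multiply-adds, keeping CPU load and battery drain low.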

  13. The Technique of Changing the Drive Method of Micro Step Drive and Sensorless Drive for Hybrid Stepping Motor

    Science.gov (United States)

    Yoneda, Makoto; Dohmeki, Hideo

    A position control system with the advantages of large torque, low vibration and high resolution can be obtained by applying constant-current micro-step drive to a hybrid stepping motor. However, the losses are large because the current is controlled uniformly regardless of the load torque. Sensorless control, as used for permanent magnet motors, is one technique for realizing a highly efficient position control system, but the control methods proposed so far are intended for speed control. This paper therefore proposes switching the drive method between micro-step drive and sensorless drive. The change of the drive method was verified by simulation and experiment. At no load, it was confirmed that no large speed change is produced at the moment of switching when the electrical angle is set and the integrator is reset to zero. Under load, a large speed change was observed. The proposed system can change the drive method without producing a speed change by setting the initial value of the integrator using the estimated result. With this technique, a low-loss position control system that exploits the advantages of the hybrid stepping motor has been built.

  14. Developing Major Steps for a Feasibility Study for Upgrading I and C Systems in a Large Scale for an Operating Nuclear Power Plant

    International Nuclear Information System (INIS)

    Suh, Yong Suk; Keum, Jong Yong; Kim, Dong Hoon; Kang, Hyeon Tae; Sung, Chan Ho; Lee, Jae Ki; Cho, Chang Hwan

    2009-01-01

    According to the IAEA report as of January 2008, 436 nuclear power reactors are in operation over the world and 368 nuclear power reactors have been operating for more than 20 years. The average life span of I and C equipment is 20 years, whereas the average reactor lifetime is 40 to 60 years. This means that a reactor is faced with I and C equipment obsolescence problems once or twice during its operating years. Usually, I and C equipment is replaced with new equipment only when an obsolescence problem occurs in a nuclear power plant; this is called an equipment-basis upgrade in this paper. This replacement is such a general practice that it occurs only when needed. We can assume that most of the I and C equipment of a plant will meet the obsolescence problem at almost the same time, since it started operating at the same time. Although there will be some difference in the time at which the problems occur among the I and C equipment, replacements will be required in consecutive years. With this assumption, it is advisable to replace, at the same time, all the equipment that will meet the problem at around the same time. This is called a system-basis upgrade in this paper. The system-basis replacement can be achieved on a large scale by coupling systems whose functions are related to each other and replacing them together with a new up-to-date platform. This paper focuses on the large-scale upgrade of I and C systems for existing and operating NPPs. While performing a feasibility study of the large-scale upgrade for the Korea standard nuclear power plants (KSNPs), six major steps were developed for the study. This paper presents what is to be performed in each step

  15. Measuring border delay and crossing times at the US-Mexico border : part II. Step-by-step guidelines for implementing a radio frequency identification (RFID) system to measure border crossing and wait times.

    Science.gov (United States)

    2012-06-01

    The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...

  16. Stepping Stones through Time

    Directory of Open Access Journals (Sweden)

    Emily Lyle

    2012-03-01

    Full Text Available Indo-European mythology is known only through written records but it needs to be understood in terms of the preliterate oral-cultural context in which it was rooted. It is proposed that this world was conceptually organized through a memory-capsule consisting of the current generation and the three before it, and that there was a system of alternate generations with each generation taking a step into the future under the leadership of a white or red king.

  17. Evaluating Web-Scale Discovery Services: A Step-by-Step Guide

    Directory of Open Access Journals (Sweden)

    Joseph Deodato

    2015-06-01

    Full Text Available Selecting a web-scale discovery service is a large and important undertaking that involves a significant investment of time, staff, and resources. Finding the right match begins with a thorough and carefully planned evaluation process. In order to be successful, this process should be inclusive, goal-oriented, data-driven, user-centered, and transparent. The following article offers a step-by-step guide for developing a web-scale discovery evaluation plan rooted in these five key principles based on best practices synthesized from the literature as well as the author’s own experiences coordinating the evaluation process at Rutgers University. The goal is to offer academic libraries that are considering acquiring a web-scale discovery service a blueprint for planning a structured and comprehensive evaluation process.

  18. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  19. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    International Nuclear Information System (INIS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-01-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
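
    The multiple time-step idea underneath this scheme can be shown with a generic r-RESPA loop in which a cheap, stiff force is integrated with a small inner step and an expensive, slowly varying force (standing in for the 3D-RISM-KH solvation force) is evaluated only once per large outer step. The forces, mass and step sizes below are illustrative, and the sketch includes neither the OIN thermostat nor the ASFE extrapolation.

```python
import numpy as np

# Generic r-RESPA multiple time-step integrator for one degree of freedom.
def fast_force(x):      # cheap, stiff bonded-like force
    return -100.0 * x

def slow_force(x):      # expensive, slowly varying force (stand-in for solvation)
    return -0.5 * np.tanh(x)

def respa(x, v, m=1.0, dt_outer=0.05, n_inner=10, n_outer=2000):
    dt_inner = dt_outer / n_inner
    f_slow = slow_force(x)
    for _ in range(n_outer):
        v += 0.5 * dt_outer * f_slow / m           # slow kick (outer half step)
        for _ in range(n_inner):                   # inner velocity-Verlet with the fast force
            v += 0.5 * dt_inner * fast_force(x) / m
            x += dt_inner * v
            v += 0.5 * dt_inner * fast_force(x) / m
        f_slow = slow_force(x)                     # expensive evaluation, once per outer step
        v += 0.5 * dt_outer * f_slow / m           # slow kick (outer half step)
    return x, v

print(respa(x=0.3, v=0.0))
```

    ASFE pushes this idea further: instead of freezing the slow force between outer steps, it extrapolates the effective solvation force at the inner steps from a history of previously converged forces, which is what allows the outer step to be made so large.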

  20. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    International Nuclear Information System (INIS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s 2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
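
    A minimal sketch of the RKL1 idea for 1D diffusion is given below: the stages follow the three-term Legendre recurrence, so one superstep of s stages is stable for a time step up to (s² + s)/2 times the explicit limit. The coefficients used here (μ_j = (2j−1)/j, ν_j = (1−j)/j, μ̃_j = 2μ_j/(s² + s)) reflect my reading of the RKL1 formulation and should be checked against the paper; the grid, diffusion coefficient and boundary treatment are illustrative.

```python
import numpy as np

def laplacian(u, dx):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return lap                                   # boundary values held fixed (Dirichlet)

def rkl1_step(u, D, dx, dt, s):
    """One RKL1 superstep of s stages for u_t = D u_xx (assumed coefficients)."""
    L = lambda w: D * laplacian(w, dx)
    w1 = 2.0 / (s * s + s)
    y_jm2 = u.copy()
    y_jm1 = u + w1 * dt * L(u)                   # stage j = 1
    for j in range(2, s + 1):
        mu, nu = (2 * j - 1) / j, (1 - j) / j
        y_j = mu * y_jm1 + nu * y_jm2 + mu * w1 * dt * L(y_jm1)
        y_jm2, y_jm1 = y_jm1, y_j
    return y_jm1                                 # = Y_s

nx, D = 201, 1.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)                            # decays as exp(-pi^2 * D * t)
dt_expl = 0.5 * dx**2 / D                        # explicit diffusion limit
s = 10
dt_super = 0.9 * dt_expl * (s * s + s) / 2.0     # roughly 50x the explicit step
for _ in range(20):
    u = rkl1_step(u, D, dx, dt_super, s)
t = 20 * dt_super
print("numerical peak:", u[nx // 2], " exact:", np.exp(-np.pi**2 * D * t))
```

    With s = 10 the superstep covers roughly 50 explicit steps at the cost of only s evaluations of the diffusion operator, which is the essential economy of super-time-stepping.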

  1. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.

  2. Step dynamics and terrace-width distribution on flame-annealed gold films: The effect of step-step interaction

    International Nuclear Information System (INIS)

    Shimoni, Nira; Ayal, Shai; Millo, Oded

    2000-01-01

    Dynamics of atomic steps and the terrace-width distribution within step bunches on flame-annealed gold films are studied using scanning tunneling microscopy. The distribution is narrower than commonly observed for vicinal planes and has a Gaussian shape, indicating a short-range repulsive interaction between the steps, with an apparently large interaction constant. The dynamics of the atomic steps, on the other hand, appear to be influenced, in addition to these short-range interactions, also by a longer-range attraction of steps towards step bunches. Both types of interactions promote self-ordering of terrace structures on the surface. When current is driven through the films a step-fingering instability sets in, reminiscent of the Bales-Zangwill instability

  3. One-step electrodeposition process of CuInSe2: Deposition time effect

    Indian Academy of Sciences (India)

    Administrator

    CuInSe2 thin films were prepared by a one-step electrodeposition process using a simplified two-electrode system. ... homojunctions or heterojunctions (Rincon et al 1983). Efficiency of ... deposition times onto indium tin oxide (ITO)-covered.

  4. Age-related differences in lower-limb force-time relation during the push-off in rapid voluntary stepping.

    Science.gov (United States)

    Melzer, I; Krasovsky, T; Oddsson, L I E; Liebermann, D G

    2010-12-01

    This study investigated the force-time relationship during the push-off stage of a rapid voluntary step in young and older healthy adults, to study the assumption that when balance is lost a quick step may preserve stability. The ability to achieve peak propulsive force within a short time is critical for the performance of such a quick powerful step. We hypothesized that older adults would achieve peak force and power in significantly longer times compared to young people, particularly during the push-off preparatory phase. Fifteen young and 15 older volunteers performed rapid forward steps while standing on a force platform. Absolute anteroposterior and body weight normalized vertical forces during the push-off in the preparation and swing phases were used to determine time to peak and peak force, and step power. Two-way analyses of variance ('Group' [young-older] by 'Phase' [preparation-swing]) were used to assess our hypothesis (P ≤ 0.05). Older people exerted lower peak forces (anteroposterior and vertical) than young adults, but not necessarily lower peak power. More significantly, they showed a longer time to peak force, particularly in the vertical direction during the preparation phase. Older adults generate propulsive forces slowly and reach lower magnitudes, mainly during step preparation. The time to achieve a peak force and power, rather than its actual magnitude, may account for failures in quickly performing a preventive action. Such delay may be associated with the inability to react and recruit muscles quickly. Thus, training elderly to step fast in response to relevant cues may be beneficial in the prevention of falls. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Parallel time domain solvers for electrically large transient scattering problems

    KAUST Repository

    Liu, Yang; Yucel, Abdulkadir; Bagcý , Hakan; Michielssen, Eric

    2014-01-01

    scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time advance electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary

  6. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Jonás D.

    2010-04-01

    We investigate the stability of some high-order finite element methods, namely the spectral element method and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation that have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.

  7. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process exhibits little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  8. Effects of walking speed on the step-by-step control of step width.

    Science.gov (United States)

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

    Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as R 2 magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.
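
    The stabilization metric used above, the proportion of step-width variance predicted by the pelvis state, is simply the R² of a multiple linear regression, as the short sketch below shows on synthetic data. The pelvis variables, noise level and coefficients are invented and carry no physiological meaning.

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps = 300

# Synthetic stand-ins for mediolateral pelvis displacement and velocity at a
# given phase of the step, plus the step width actually taken (arbitrary units).
pelvis_pos = rng.normal(0.0, 1.0, n_steps)
pelvis_vel = rng.normal(0.0, 1.0, n_steps)
step_width = 0.8 * pelvis_pos + 0.5 * pelvis_vel + rng.normal(0.0, 0.6, n_steps)

# Multiple linear regression: step width ~ pelvis state; R^2 = explained variance.
X = np.column_stack([pelvis_pos, pelvis_vel, np.ones(n_steps)])
coef, *_ = np.linalg.lstsq(X, step_width, rcond=None)
resid = step_width - X @ coef
r2 = 1.0 - resid.var() / step_width.var()
print(f"proportion of step-width variance predicted by pelvis state: R^2 = {r2:.2f}")
```

    Repeating the regression with the pelvis state taken at successive phases of the step, and at each walking speed, yields the kind of R²-versus-phase comparison the study reports.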

  9. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mather, Barry

    2017-08-24

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91 percent reduction with less than 5 percent error when predicting voltage regulator operations.
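
    The core idea of a variable-time-step QSTS solver, taking large steps while the circuit state changes slowly and refining when it does not, can be sketched with a generic step-doubling loop. The solve_snapshot() stand-in, tolerance and step bounds below are assumptions for illustration; this is not the NREL solver.

        import numpy as np

        def solve_snapshot(t):
            """Hypothetical stand-in for one quasi-static power-flow solution at time t
            (e.g., a feeder voltage in per unit driven by PV and load profiles)."""
            return 1.0 + 0.03 * np.sin(2 * np.pi * t / 86400.0) + 0.01 * np.sin(2 * np.pi * t / 600.0)

        def variable_step_qsts(t_end, dt_min=1.0, dt_max=900.0, tol=5e-4):
            t, dt = 0.0, dt_min
            times, values = [0.0], [solve_snapshot(0.0)]
            while t < t_end - 1e-9:
                dt = min(dt, t_end - t)
                v_full = solve_snapshot(t + dt)
                v_half = solve_snapshot(t + 0.5 * dt)
                # Accept the step if the solution is nearly linear across it,
                # then try a larger step; otherwise halve the step and retry.
                if abs(v_half - 0.5 * (values[-1] + v_full)) < tol or dt <= dt_min:
                    t += dt
                    times.append(t)
                    values.append(v_full)
                    dt = min(dt * 2.0, dt_max)
                else:
                    dt = max(dt * 0.5, dt_min)
            return np.array(times), np.array(values)

        t, v = variable_step_qsts(86400.0)
        print(f"{len(t)} accepted snapshots instead of 86400 one-second snapshots")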

  10. Elderly fallers enhance dynamic stability through anticipatory postural adjustments during a choice stepping reaction time

    Directory of Open Access Journals (Sweden)

    Romain Tisserand

    2016-11-01

    In the case of disequilibrium, the capacity to step quickly is critical for the elderly to avoid falling. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), where elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F elongate their stepping time remain unclear. The purpose of this study is to assess the characteristics of the anticipatory postural adjustments (APA) that elderly F develop in a stepping context and their consequences on dynamic stability. 44 community-dwelling elderly subjects (20 F and 22 NF) performed a CSRT where kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated measures ANOVAs. Results for F compared to NF showed that stepping time is elongated, due to a longer APA phase. During APA, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off, in the respective direction. By elongating their APA, elderly F use a safer balance strategy that prioritizes dynamic stability conditions instead of the objective of the task. Such a choice in balance strategy probably comes from muscular limitations and/or a higher fear of falling and paradoxically indicates an increased risk of falling.
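
    The margin of stability mentioned above is commonly computed from the extrapolated centre of mass, in the sense of Hof and co-workers. A minimal sketch with illustrative numbers for the base-of-support boundary, centre-of-mass state and leg length (not data from the study):

        import numpy as np

        def margin_of_stability(com_pos, com_vel, bos_boundary, leg_length, g=9.81):
            """Mediolateral margin of stability at one instant (e.g., foot-off):
            MoS = base-of-support boundary minus the extrapolated centre of mass
            XcoM = com_pos + com_vel / omega0, with omega0 = sqrt(g / leg_length).
            Positive values mean the XcoM lies inside the base of support."""
            omega0 = np.sqrt(g / leg_length)
            return bos_boundary - (com_pos + com_vel / omega0)

        # Illustrative numbers only (metres, m/s): lateral CoM state relative to the
        # lateral boundary of the stepping-side base of support at foot-off.
        print(margin_of_stability(com_pos=0.02, com_vel=0.10, bos_boundary=0.10, leg_length=0.9))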

  11. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    Science.gov (United States)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

    In summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth of 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering a nearly 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. Stepped leaders of a cloud to ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D location of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.
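
    As a rough illustration of how a reconstructed charge distribution along a leader channel maps to a field change at a ground-based sensor, the sketch below sums the quasi-static vertical field of point charges above a perfectly conducting ground using image charges. It is a simplified stand-in for the time-dependent multi-dipole model cited above, and the channel geometry and charge values are invented.

        import numpy as np

        EPS0 = 8.854e-12  # vacuum permittivity, F/m

        def vertical_field_change(charges, heights, distance):
            """Quasi-static vertical electric field (z up, physics sign convention) at
            ground level, a horizontal distance `distance` from the channel, produced by
            point charges at the given heights above a perfectly conducting ground.
            Each charge contributes together with its image at -height."""
            charges = np.asarray(charges, dtype=float)
            heights = np.asarray(heights, dtype=float)
            r3 = (heights ** 2 + distance ** 2) ** 1.5
            return -np.sum(charges * heights / (2.0 * np.pi * EPS0 * r3))

        # Invented example: -5 C of leader charge spread over 10 points between
        # 5 km and 3 km altitude, sensor 8 km from the channel.
        heights = np.linspace(5000.0, 3000.0, 10)
        charges = np.full(10, -0.5)
        print(f"dE_z = {vertical_field_change(charges, heights, 8000.0):.2f} V/m")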

  12. Implementation of Real-Time Machining Process Control Based on Fuzzy Logic in a New STEP-NC Compatible System

    Directory of Open Access Journals (Sweden)

    Po Hu

    2016-01-01

    Implementing real-time machining process control at the shop floor has great significance for raising the efficiency and quality of product manufacturing. A framework and implementation methods of real-time machining process control based on STEP-NC are presented in this paper. A data model compatible with the ISO 14649 standard is built to transfer high-level real-time machining process control information between CAPP systems and CNC systems, in which the EXPRESS language is used to define new STEP-NC entities. Methods for implementing real-time machining process control at the shop floor are studied and realized on an open STEP-NC controller, which is developed using object-oriented, multithreading, and shared-memory technologies in combination. The cutting force in a specific direction of the machining feature in side milling is chosen as the controlled object, and a fuzzy control algorithm with a self-adjusting factor is designed and embedded in the software CNC kernel of the STEP-NC controller. Experiments are carried out to verify the proposed framework, STEP-NC data model, and implementation methods for real-time machining process control. The results of the experiments prove that real-time machining process control tasks can be interpreted and executed correctly by the STEP-NC controller at the shop floor, where the actual cutting force is kept around the ideal value whether the axial cutting depth changes suddenly or continuously.
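
    To make the rule-based adjustment concrete, the sketch below implements a generic two-input Mamdani-style fuzzy step that maps a normalized force error and its change to a feed-rate correction, with a crude self-adjusting scale factor passed in by the caller. Membership functions, rule table and gains are invented; this is only a schematic stand-in, not the controller embedded in the STEP-NC kernel.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with support [a, c] and peak at b."""
            rising = (x - a) / (b - a) if b > a else 1.0
            falling = (c - x) / (c - b) if c > b else 1.0
            return max(0.0, min(rising, falling))

        def fuzzy_feed_correction(error, d_error, scale=1.0):
            """Map a normalized force error (reference minus measured) and its change
            to a normalized feed-rate correction; `scale` is a crude self-adjusting
            factor that the caller can grow or shrink as errors persist or vanish."""
            e, de = float(np.clip(error, -1, 1)), float(np.clip(d_error, -1, 1))
            sets = {"N": (-1.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.0)}
            out = {"N": -1.0, "Z": 0.0, "P": 1.0}  # output singletons
            rules = {("N", "N"): "N", ("N", "Z"): "N", ("N", "P"): "Z",
                     ("Z", "N"): "N", ("Z", "Z"): "Z", ("Z", "P"): "P",
                     ("P", "N"): "Z", ("P", "Z"): "P", ("P", "P"): "P"}
            num = den = 0.0
            for (le, lde), lout in rules.items():
                w = min(tri(e, *sets[le]), tri(de, *sets[lde]))  # rule firing strength
                num += w * out[lout]
                den += w
            return scale * (num / den if den > 0.0 else 0.0)

        # One control step: measured force below the reference -> increase the feed rate.
        print(fuzzy_feed_correction(error=0.4, d_error=-0.1))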

  13. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2009-01-01

    an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second-order time-stepping scheme.

  14. Testing a stepped care model for binge-eating disorder: a two-step randomized controlled trial.

    Science.gov (United States)

    Tasca, Giorgio A; Koszycki, Diana; Brugnera, Agostino; Chyurlia, Livia; Hammond, Nicole; Francis, Kylie; Ritchie, Kerri; Ivanova, Iryna; Proulx, Genevieve; Wilson, Brian; Beaulac, Julie; Bissada, Hany; Beasley, Erin; Mcquaid, Nancy; Grenon, Renee; Fortin-Langelier, Benjamin; Compare, Angelo; Balfour, Louise

    2018-05-24

    A stepped care approach involves patients first receiving low-intensity treatment followed by higher intensity treatment. This two-step randomized controlled trial investigated the efficacy of a sequential stepped care approach for the psychological treatment of binge-eating disorder (BED). In the first step, all participants with BED (n = 135) received unguided self-help (USH) based on a cognitive-behavioral therapy model. In the second step, participants who remained in the trial were randomized either to 16 weeks of group psychodynamic-interpersonal psychotherapy (GPIP) (n = 39) or to a no-treatment control condition (n = 46). Outcomes were assessed for USH in step 1, and then for step 2 up to 6-months post-treatment using multilevel regression slope discontinuity models. In the first step, USH resulted in large and statistically significant reductions in the frequency of binge eating. Statistically significant moderate to large reductions in eating disorder cognitions were also noted. In the second step, there was no difference in change in frequency of binge eating between GPIP and the control condition. Compared with controls, GPIP resulted in significant and large improvement in attachment avoidance and interpersonal problems. The findings indicated that a second step of a stepped care approach did not significantly reduce binge-eating symptoms beyond the effects of USH alone. The study provided some evidence for the second step potentially to reduce factors known to maintain binge eating in the long run, such as attachment avoidance and interpersonal problems.

  15. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step-prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of recurrent neural networks by a co-evolutionary strategy. The search space is separated into two subspaces, and the individuals are trained in a parallel computational procedure. It can dynamically combine the embedding method with the capability of recurrent neural networks to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated by using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and a real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step-prediction of chaotic time series.
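
    Two of the ingredients named above, phase-space reconstruction by delay embedding and multi-step-ahead prediction by feeding predictions back as inputs, can be sketched without the co-evolutionary recurrent network itself. The sketch below uses a simple nearest-neighbour predictor on a synthetic chaotic series (the logistic map) purely to illustrate the embedding and the iterated prediction; it is not the CERNN of the paper.

        import numpy as np

        def delay_embed(x, dim, tau):
            """Phase-space reconstruction: row j is [x_t, x_{t-tau}, ..., x_{t-(dim-1)tau}]."""
            n = len(x) - (dim - 1) * tau
            cols = [x[i * tau : i * tau + n] for i in range(dim)][::-1]
            return np.column_stack(cols)

        def knn_predict(train_vecs, train_next, query, k=5):
            """One-step prediction: mean successor of the k nearest delay vectors."""
            d = np.linalg.norm(train_vecs - query, axis=1)
            return train_next[np.argsort(d)[:k]].mean()

        # Synthetic chaotic series (logistic map) as a stand-in benchmark.
        x = np.empty(1100)
        x[0] = 0.4
        for i in range(len(x) - 1):
            x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

        dim, tau, horizon, n_train = 3, 1, 10, 1000
        emb = delay_embed(x[:n_train], dim, tau)
        vecs, nxt = emb[:-1], x[(dim - 1) * tau + 1 : n_train]

        # Multi-step-ahead prediction: iterate one-step predictions, feeding them back.
        history = list(x[n_train - (dim - 1) * tau - 1 : n_train])
        preds = []
        for _ in range(horizon):
            q = np.array([history[-1 - i * tau] for i in range(dim)])  # most recent first
            preds.append(knn_predict(vecs, nxt, q))
            history.append(preds[-1])

        print("predicted:", np.round(preds, 3))
        print("actual:   ", np.round(x[n_train : n_train + horizon], 3))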

  16. Rapid Two-Step Procedure for Large-Scale Purification of Pediocin-Like Bacteriocins and Other Cationic Antimicrobial Peptides from Complex Culture Medium

    OpenAIRE

    Uteng, Marianne; Hauge, Håvard Hildeng; Brondz, Ilia; Nissen-Meyer, Jon; Fimland, Gunnar

    2002-01-01

    A rapid and simple two-step procedure suitable for both small- and large-scale purification of pediocin-like bacteriocins and other cationic peptides has been developed. In the first step, the bacterial culture was applied directly on a cation-exchange column (1-ml cation exchanger per 100-ml cell culture). Bacteria and anionic compounds passed through the column, and cationic bacteriocins were subsequently eluted with 1 M NaCl. In the second step, the bacteriocin fraction was applied on a low-pressure, reverse-phase column, and the bacteriocins were detected as major optical density peaks upon elution with propanol.

  17. Integration of FULLSWOF2D and PeanoClaw: Adaptivity and Local Time-Stepping for Complex Overland Flows

    KAUST Repository

    Unterweger, K.

    2015-01-01

    © Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D—Full Shallow Water Overland Flows—is our software of choice here, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex “real-world” scenarios.

  18. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    Science.gov (United States)

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo

    2016-01-01

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
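
    A toy version of the clustering analysis sketched above, grouping sites by annual heterotrophic respiration and a few candidate biotic and abiotic drivers, could look as follows. The site variables are randomly generated placeholders and k-means is just one reasonable algorithm choice; this is not the authors' analysis.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n_sites = 300

        # Placeholder site-level variables (invented units and ranges):
        # mean annual temperature (C), annual precipitation (mm), soil C stock (kg C m-2),
        # and annual heterotrophic respiration HR (g C m-2 yr-1).
        mat = rng.uniform(-5, 28, n_sites)
        precip = rng.uniform(200, 3000, n_sites)
        soil_c = rng.uniform(2, 30, n_sites)
        hr = 50 + 18 * mat + 0.08 * precip + 4 * soil_c + rng.normal(0, 60, n_sites)

        X = StandardScaler().fit_transform(np.column_stack([mat, precip, soil_c, hr]))

        # Group sites into candidate "decomposition functional types".
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
        for label in range(4):
            members = km.labels_ == label
            print(f"cluster {label}: n = {members.sum():3d}, mean HR = {hr[members].mean():7.1f}")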

  19. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation in predicted P loss between the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  20. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Joná s D.; Sen, Mrinal K.

    2010-01-01

    popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular, the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method.

  1. Sharp Penalty Term and Time Step Bounds for the Interior Penalty Discontinuous Galerkin Method for Linear Hyperbolic Problems

    NARCIS (Netherlands)

    Geevers, Sjoerd; van der Vegt, J.J.W.

    2017-01-01

    We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured meshes.

  2. Underground structure pattern and multi AO reaction with step feed concept for upgrading a large wastewater treatment plant

    Science.gov (United States)

    Peng, Yi; Zhang, Jie; Li, Dong

    2018-03-01

    A large wastewater treatment plant (WWTP) in China, built around a US treatment technology, could no longer meet the new demands of the urban environment or the need for reclaimed water. Thus a multi AO reaction process (anaerobic/oxic/anoxic/oxic/anoxic/oxic) WWTP with an underground structure was proposed for the upgrade project. Four main new technologies were applied: (1) multi AO reaction with step feed technology; (2) deodorization; (3) new energy-saving technologies such as a water source heat pump and an optical fiber lighting system; (4) dependable water-quality support measures from the old WWTP during construction of the new one. After construction, the upgraded WWTP saved two thirds of the land occupation, increased treatment capacity by 80% and improved the effluent standard by more than two times. Moreover, it has become a benchmark of an ecological negative capital being turned into a positive one.

  3. Step scaling and the Yang-Mills gradient flow

    International Nuclear Information System (INIS)

    Lüscher, Martin

    2014-01-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0,T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  4. Full-Scale Modeling Explaining Large Spatial Variations of Nitrous Oxide Fluxes in a Step-Feed Plug-Flow Wastewater Treatment Reactor.

    Science.gov (United States)

    Ni, Bing-Jie; Pan, Yuting; van den Akker, Ben; Ye, Liu; Yuan, Zhiguo

    2015-08-04

    Nitrous oxide (N2O) emission data collected from wastewater treatment plants (WWTPs) show huge variations between plants and within one plant (both spatially and temporally). Such variations and the relative contributions of various N2O production pathways are not fully understood. This study applied a previously established N2O model incorporating two currently known N2O production pathways by ammonia-oxidizing bacteria (AOB) (namely the AOB denitrification and the hydroxylamine pathways) and the N2O production pathway by heterotrophic denitrifiers to describe and provide insights into the large spatial variations of N2O fluxes in a step-feed full-scale activated sludge plant. The model was calibrated and validated by comparing simulation results with 40 days of N2O emission monitoring data as well as other water quality parameters from the plant. The model demonstrated that the relatively high biomass specific nitrogen loading rate in the Second Step of the reactor was responsible for the much higher N2O fluxes from this section. The results further revealed that the AOB denitrification pathway decreased and the NH2OH oxidation pathway increased along the path of both Steps due to the increasing dissolved oxygen concentration. The overall N2O emission from this step-feed WWTP would be largely mitigated if 30% of the returned sludge were returned to the Second Step to reduce its biomass nitrogen loading rate.

  5. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    Science.gov (United States)

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
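
    The first ChromAlign step, finding the temporal offset that maximizes the correlation between two chromatographic profiles via fast Fourier transforms, can be sketched generically as below. The synthetic profiles and the helper function are illustrative assumptions; this is not the ChromAlign code.

        import numpy as np

        def fft_offset(reference, sample):
            """Lag k maximizing sum_n reference[n + k] * sample[n], computed with
            FFT-based circular cross-correlation of the mean-centred, zero-padded
            profiles; a negative value means the sample trails the reference."""
            nfft = 1 << (len(reference) + len(sample) - 1).bit_length()
            R = np.fft.rfft(reference - reference.mean(), nfft)
            S = np.fft.rfft(sample - sample.mean(), nfft)
            xcorr = np.fft.irfft(R * np.conj(S), nfft)
            k = int(np.argmax(xcorr))
            return k if k <= nfft // 2 else k - nfft  # map circular index to signed lag

        # Synthetic total-ion-current-like profiles: the sample is the reference
        # delayed by 37 scans plus a little noise.
        t = np.arange(2000)
        reference = np.exp(-0.5 * ((t - 600) / 40.0) ** 2) + np.exp(-0.5 * ((t - 1400) / 60.0) ** 2)
        sample = np.roll(reference, 37) + 0.01 * np.random.default_rng(0).normal(size=t.size)

        print(fft_offset(reference, sample))  # prints -37: the sample trails the reference by 37 scans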

  6. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of the real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  7. Time step size limitation introduced by the BSSN Gamma Driver

    Energy Technology Data Exchange (ETDEWEB)

    Schnetter, Erik, E-mail: schnetter@cct.lsu.ed [Department of Physics and Astronomy, Louisiana State University, LA (United States)

    2010-08-21

    Many mesh refinement simulations currently performed in numerical relativity counteract instabilities near the outer boundary of the simulation domain either by changes to the mesh refinement scheme or by changes to the gauge condition. We point out that the BSSN Gamma Driver gauge condition introduces a time step size limitation in a similar manner as a Courant-Friedrichs-Lewy condition, but which is independent of the spatial resolution. We give a didactic explanation of this issue, show why, especially, mesh refinement simulations suffer from it, and point to a simple remedy. (note)

  8. Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio

    International Nuclear Information System (INIS)

    Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik; Suzuki, Mitsutoshi

    2014-01-01

    The criticality condition of a reactor is one of the important factors for evaluating reactor operation, and the nuclear fuel breeding ratio is another factor indicating nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the evaluated criticality condition of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is based on a day-step analysis, varied from 10 days up to 800 days, and the cycle operation from 1 up to 8 reactor cycles. In addition, the calculation efficiency, in terms of the run time for different numbers of computer processors used for the analysis, has also been investigated. The reactor design analysis, which uses a large fast breeder reactor as the reference case, was performed by adopting the established reactor design code JOINT-FR. The results show that the criticality becomes higher and the breeding ratio becomes lower for smaller burnup steps (days). Some nuclides contribute to better criticality at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the additional effort required for more detailed step calculations, although the calculation time is not directly proportional to the number of subdivisions of the burnup time step.

  9. Computational challenges of large-scale, long-time, first-principles molecular dynamics

    International Nuclear Information System (INIS)

    Kent, P R C

    2008-01-01

    Plane wave density functional calculations have traditionally been able to use the largest available supercomputing resources. We analyze the scalability of modern projector-augmented wave implementations to identify the challenges in performing molecular dynamics calculations of large systems containing many thousands of electrons. Benchmark calculations on the Cray XT4 demonstrate that global linear-algebra operations are the primary reason for limited parallel scalability. Plane-wave related operations can be made sufficiently scalable. Improving parallel linear-algebra performance is an essential step to reaching longer timescales in future large-scale molecular dynamics calculations

  10. Control Software for Piezo Stepping Actuators

    Science.gov (United States)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence to within 10 nanometers for small- and medium-sized reference positions (less than 200 microns) can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.
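
    A toy sketch of the control idea described above: command a burst of steps, sample the encoder once at the end of the burst, update a running estimate of the step scale factor, and use that estimate to size the next burst. All names, noise levels and gains are hypothetical; this is not the SIM flight software.

        import numpy as np

        rng = np.random.default_rng(2)
        true_step_nm = 43.0        # actual (unknown to the controller) displacement per step
        encoder_noise_nm = 3.0

        def move_runner(position_nm, n_steps):
            """Hypothetical plant: the runner moves n_steps with a slightly noisy step size."""
            return position_nm + n_steps * (true_step_nm + rng.normal(0.0, 0.5))

        reference_nm = 150_000.0   # desired runner position
        pos_true = pos_meas = 0.0
        scale_est = 50.0           # initial guess of nanometres per step

        for _ in range(40):
            error = reference_nm - pos_meas
            if abs(error) < 25.0:  # converged to within roughly half a step
                break
            n_steps = int(np.clip(round(error / scale_est), -2000, 2000))
            if n_steps == 0:
                break
            pos_true = move_runner(pos_true, n_steps)
            new_meas = pos_true + rng.normal(0.0, encoder_noise_nm)  # one encoder sample per burst
            # Update the running scale-factor estimate from the measured displacement.
            scale_est = 0.8 * scale_est + 0.2 * (new_meas - pos_meas) / n_steps
            pos_meas = new_meas

        print(f"final measured error ~ {reference_nm - pos_meas:.1f} nm, "
              f"scale estimate ~ {scale_est:.1f} nm/step")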

  11. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    Science.gov (United States)

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination (R²) of the return map, and the step time variability were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
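
    The return-map analysis referred to above plots each step time against the next and summarizes how strongly successive steps are related. A minimal sketch on a synthetic step-time series (illustrative data, not the study's measurements):

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic step-time series (s): mean 0.55 s with weakly correlated fluctuations.
        n = 200
        step_times = 0.55 + np.convolve(rng.normal(0.0, 0.02, n), [0.6, 0.4], mode="same")

        # Step time variability as a coefficient of variation (%).
        cv = 100.0 * step_times.std(ddof=1) / step_times.mean()

        # Return map: step time n+1 against step time n; the R^2 of a linear fit
        # quantifies how strongly one step time predicts the next.
        x, y = step_times[:-1], step_times[1:]
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

        print(f"step time CV = {cv:.2f} %, return-map R^2 = {r2:.3f}")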

  12. Development and evaluation of a real-time one step Reverse-Transcriptase PCR for quantitation of Chandipura Virus

    Directory of Open Access Journals (Sweden)

    Tandale Babasaheb V

    2008-12-01

    Background: Chandipura virus (CHPV), a member of the family Rhabdoviridae, was attributed to an explosive outbreak of acute encephalitis in children in Andhra Pradesh, India in 2003 and a small outbreak among tribal children from Gujarat, Western India in 2004. The case-fatality rate ranged from 55–75%. Considering the rapid progression of the disease and high mortality, a highly sensitive method for quantifying CHPV RNA by real-time one step reverse transcriptase PCR (real-time one step RT-PCR) using TaqMan technology was developed for rapid diagnosis. Methods: Primers and a probe for the P gene were designed and used to standardize the real-time one step RT-PCR assay for CHPV RNA quantitation. Standard RNA was prepared by PCR amplification, TA cloning and run-off transcription. The optimized real-time one step RT-PCR assay was compared with the diagnostic nested RT-PCR and different virus isolation systems [in vivo (mice), in ovo (eggs), in vitro (Vero E6, PS, RD and sand fly cell lines)] for the detection of CHPV. Sensitivity and specificity of the real-time one step RT-PCR assay were evaluated against the diagnostic nested RT-PCR, which is considered the gold standard. Results: Real-time one step RT-PCR was optimized using in vitro transcribed (IVT) RNA. The standard curve showed a linear relationship over the wide range of 10²–10¹⁰ (r² = 0.99), with a maximum coefficient of variation (CV = 5.91%) for IVT RNA. The newly developed real-time RT-PCR was at par with nested RT-PCR in sensitivity and superior to cell lines and other living systems (embryonated eggs and infant mice) used for the isolation of the virus. The detection limit of real-time one step RT-PCR and nested RT-PCR was found to be 1.2 × 10⁰ PFU/ml. RD cells, sand fly cells, infant mice, and embryonated eggs showed almost equal sensitivity (1.2 × 10² PFU/ml). Vero and PS cell lines (1.2 × 10³ PFU/ml) were least sensitive to CHPV infection. Specificity of the assay was found to be 100% when RNA from other viruses or healthy

  13. Tandem mirror next step: remote maintenance

    International Nuclear Information System (INIS)

    Doggett, J.N.; Damm, C.C.; Hanson, C.L.

    1980-01-01

    This study of the next proposed experiment in the Mirror Fusion Program, the Tandem Mirror Next Step (TMNS), has included serious consideration of the maintenance requirements of such a large source of high energy neutrons with its attendant throughput of tritium. Although maintenance will be costly in time and money, our conclusion is that with careful attention to a design for maintenance plan such a device can be reliably operated

  14. Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng

    2014-04-01

    A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.

  15. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    Science.gov (United States)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Further, the model performance is evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows for both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, which is attributed to the reduced noise in the data together with the chosen techniques and training approach, appropriate selection of network architecture, required inputs, and training-testing ratios of the data set. The slightly poorer performance of the distributed data is due to large variations and a smaller number of observed values.

  16. Stepping reaction time and gait adaptability are significantly impaired in people with Parkinson's disease: Implications for fall risk.

    Science.gov (United States)

    Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C

    2018-02-01

    Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection

    Directory of Open Access Journals (Sweden)

    T. La-inchua

    2017-01-01

    We investigate finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in interconnection. Time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and a new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in interconnection is derived. The finite-time stability criteria are delay-dependent and are given in terms of linear matrix inequalities which can be solved by various available algorithms. Numerical examples are given to illustrate the effectiveness of the proposed method.

  18. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that the comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow for beating the best performed methods, which use the similar force splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization to the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improving stability of MTS and allow for achieving larger step sizes in the simulation of complex systems.

  19. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    Science.gov (United States)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

    An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms will considerably reduce the simulation time step due to the stability constraint, which scales with the square of the grid resolution (Δx²). Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10⁴ processors.
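
    The sketch below applies the first-order Runge-Kutta-Legendre recursion (RKL1, following the construction of Meyer, Balsara & Aslam) to a 1D diffusion problem, to illustrate how s cheap stages extend the stable step far beyond the explicit parabolic limit Δt ≤ Δx²/(2κ). The paper uses the second-order RKL2 variant, which adds further coefficients; the RKL1 coefficients below are written from the general construction and should be checked against the original reference before serious use.

        import numpy as np

        def diffusion_rhs(u, dx, kappa):
            """kappa * d2u/dx2 with mirror (zero-flux) boundaries."""
            lap = np.empty_like(u)
            lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
            lap[0] = 2.0 * (u[1] - u[0])
            lap[-1] = 2.0 * (u[-2] - u[-1])
            return kappa * lap / dx**2

        def rkl1_step(u, dt, dx, kappa):
            """One super time step of the first-order Runge-Kutta-Legendre (RKL1) scheme."""
            dt_expl = 0.5 * dx**2 / kappa                     # explicit parabolic limit
            s = max(1, int(np.ceil(0.5 * (np.sqrt(1.0 + 8.0 * dt / dt_expl) - 1.0))))
            w1 = 2.0 / (s**2 + s)
            y_prev, y = u.copy(), u + w1 * dt * diffusion_rhs(u, dx, kappa)   # Y0, Y1
            for j in range(2, s + 1):
                mu, nu, mu_t = (2*j - 1) / j, (1 - j) / j, w1 * (2*j - 1) / j
                y, y_prev = mu * y + nu * y_prev + mu_t * dt * diffusion_rhs(y, dx, kappa), y
            return y

        # Diffuse a hot spot: one RKL1 super step covering 40 explicit steps.
        nx, kappa = 200, 1.0e-3
        dx = 1.0 / nx
        u0 = np.exp(-0.5 * ((np.arange(nx) * dx - 0.5) / 0.05) ** 2)
        dt_expl = 0.5 * dx**2 / kappa
        u_rkl = rkl1_step(u0, 40 * dt_expl, dx, kappa)

        u_ref = u0.copy()                                     # 40 forward-Euler sub-steps
        for _ in range(40):
            u_ref = u_ref + dt_expl * diffusion_rhs(u_ref, dx, kappa)
        print(f"max |RKL1 - sub-stepped explicit| = {np.max(np.abs(u_rkl - u_ref)):.2e}")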

  20. Rapid Two-Step Procedure for Large-Scale Purification of Pediocin-Like Bacteriocins and Other Cationic Antimicrobial Peptides from Complex Culture Medium

    Science.gov (United States)

    Uteng, Marianne; Hauge, Håvard Hildeng; Brondz, Ilia; Nissen-Meyer, Jon; Fimland, Gunnar

    2002-01-01

    A rapid and simple two-step procedure suitable for both small- and large-scale purification of pediocin-like bacteriocins and other cationic peptides has been developed. In the first step, the bacterial culture was applied directly on a cation-exchange column (1-ml cation exchanger per 100-ml cell culture). Bacteria and anionic compounds passed through the column, and cationic bacteriocins were subsequently eluted with 1 M NaCl. In the second step, the bacteriocin fraction was applied on a low-pressure, reverse-phase column and the bacteriocins were detected as major optical density peaks upon elution with propanol. More than 80% of the activity that was initially in the culture supernatant was recovered in both purification steps, and the final bacteriocin preparation was more than 90% pure as judged by analytical reverse-phase chromatography and capillary electrophoresis. PMID:11823243

  1. Microsoft Office professional 2010 step by step

    CERN Document Server

    Cox, Joyce; Frye, Curtis

    2011-01-01

    Teach yourself exactly what you need to know about using Office Professional 2010 - one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: Create attractive documents, publications, and spreadsheets; Manage your e-mail, calendar, meetings, and communications; Put your business data to work; Develop and deliver great presentations; Organize your ideas and notes in one place; Connect, share, and accom

  2. The enhancement of time-stepping procedures in SYVAC A/C

    International Nuclear Information System (INIS)

    Broyd, T.W.

    1986-01-01

    This report summarises the work carried out on SYVAC A/C between February and May 1985 aimed at improving the way in which time-stepping procedures are handled. The majority of the work was concerned with three types of problem, viz: i) long vault release, short geosphere response; ii) short vault release, long geosphere response; iii) short vault release, short geosphere response. The report contains details of changes to the logic and structure of SYVAC A/C, as well as the results of code implementation tests. It has been written primarily for members of the UK SYVAC development team, and should not be used or referred to in isolation. (author)

  3. Performance of an attention-demanding task during treadmill walking shifts the noise qualities of step-to-step variation in step width.

    Science.gov (United States)

    Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R

    2018-06-01

    The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which the fractal scaling of step width, step width, and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials, a control trial of undisturbed walking and a trial during which they performed a mental arithmetic/texting task. Motion capture data were converted to step width time series, the fractal scaling of which was determined from their power spectra. Fractal scaling decreased significantly, by 22%, during the texting condition, and step width and step width variability increased significantly, by 19% and five percent, respectively. The change of the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward the understanding of the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, the sensitivity of the results to progressive increases in cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted. Copyright © 2018 Elsevier B.V. All rights reserved.
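
    Fractal scaling of the kind described above is often estimated from the slope of the power spectrum on log-log axes. A minimal sketch on synthetic series follows; the periodogram-slope estimator is a generic choice and not necessarily the authors' exact procedure.

        import numpy as np

        def spectral_exponent(x):
            """Estimate beta in PSD(f) ~ f^(-beta) from a log-log fit to the periodogram
            (white, uncorrelated noise gives beta near 0)."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            psd = np.abs(np.fft.rfft(x)) ** 2
            f = np.fft.rfftfreq(len(x))
            keep = f > 0
            slope, _ = np.polyfit(np.log(f[keep]), np.log(psd[keep]), 1)
            return -slope

        rng = np.random.default_rng(4)
        white = rng.normal(size=4096)   # uncorrelated fluctuations
        brown = np.cumsum(white)        # integrated white noise, PSD ~ f^-2
        print(f"white noise:      beta ~ {spectral_exponent(white):.2f}")
        print(f"integrated noise: beta ~ {spectral_exponent(brown):.2f}")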

  4. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    Science.gov (United States)

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal ℓ^p-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order α in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
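
    Of the schemes listed, the L1 scheme is the simplest to write down. The sketch below applies it to the scalar fractional relaxation problem D_t^α u = -λu with a Caputo derivative of order α in (0, 1); it is the generic textbook form of the L1 scheme, not code from the paper.

        import math
        import numpy as np

        def l1_fractional_relaxation(alpha, lam, u0, tau, n_steps):
            """Solve the Caputo fractional ODE D_t^alpha u = -lam * u, u(0) = u0, with the
            L1 scheme: tau^(-alpha)/Gamma(2-alpha) * sum_j a_j (u^(n-j) - u^(n-j-1)) = -lam u^n,
            where a_j = (j+1)^(1-alpha) - j^(1-alpha). For alpha -> 1 it reduces to backward Euler."""
            c = tau ** (-alpha) / math.gamma(2.0 - alpha)
            j = np.arange(n_steps, dtype=float)
            a = (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)
            u = np.empty(n_steps + 1)
            u[0] = u0
            for n in range(1, n_steps + 1):
                # History term: sum_{j=1}^{n-1} a_j (u^(n-j) - u^(n-j-1)).
                hist = np.dot(a[1:n], u[n - 1 : 0 : -1] - u[n - 2 :: -1]) if n > 1 else 0.0
                u[n] = c * (u[n - 1] - hist) / (c + lam)
            return u

        u = l1_fractional_relaxation(alpha=0.6, lam=1.0, u0=1.0, tau=0.01, n_steps=500)
        print(u[[1, 10, 100, 500]])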

  5. Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.

    Science.gov (United States)

    van den Tillaar, Roland; Gamble, Paul

    2018-03-26

    This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, contact and flight times of each step were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered for each step under the resisted and assisted sprint conditions (η² ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost every step were different between the 'fast' and 'slow' sub-groups (η² ≥ 0.22). Nevertheless, overall both groups responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective condition. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with possibilities of direct feedback.

  6. Steps of Supercritical Fluid Extraction of Natural Products and Their Characteristic Times

    OpenAIRE

    Sovová, H. (Helena)

    2012-01-01

    Kinetics of supercritical fluid extraction (SFE) from plants is variable due to different micro-structure of plants and their parts, different properties of extracted substances and solvents, and different flow patterns in the extractor. Variety of published mathematical models for SFE of natural products corresponds to this diversification. This study presents simplified equations of extraction curves in terms of characteristic times of four single extraction steps: internal diffusion, exter...

  7. A one-step, real-time PCR assay for rapid detection of rhinovirus.

    Science.gov (United States)

    Do, Duc H; Laus, Stella; Leber, Amy; Marcon, Mario J; Jordan, Jeanne A; Martin, Judith M; Wadowsky, Robert M

    2010-01-01

    One-step, real-time PCR assays for rhinovirus have been developed for a limited number of PCR amplification platforms and chemistries, and some exhibit cross-reactivity with genetically similar enteroviruses. We developed a one-step, real-time PCR assay for rhinovirus by using a sequence detection system (Applied Biosystems; Foster City, CA). The primers were designed to amplify a 120-base target in the noncoding region of picornavirus RNA, and a TaqMan (Applied Biosystems) degenerate probe was designed for the specific detection of rhinovirus amplicons. The PCR assay had no cross-reactivity with a panel of 76 nontarget nucleic acids, which included RNAs from 43 enterovirus strains. Excellent lower limits of detection relative to viral culture were observed for the PCR assay with 38 of 40 rhinovirus reference strains representing different serotypes, and the assay could reproducibly detect rhinovirus serotype 2 in viral transport medium containing 10 to 10,000 TCID(50) (50% tissue culture infectious dose endpoint) units/ml of the virus. However, for rhinovirus serotypes 59 and 69, the PCR assay was less sensitive than culture. Testing of 48 clinical specimens from children with cold-like illnesses for rhinovirus by the PCR and culture assays yielded detection rates of 16.7% and 6.3%, respectively. For a batch of 10 specimens, the entire assay was completed in 4.5 hours. This real-time PCR assay enables detection of many rhinovirus serotypes with the Applied Biosystems reagent-instrument platform.

  8. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    International Nuclear Information System (INIS)

    Finn, John M.

    2015-01-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on transformation to canonical variables for the two split-step Hamiltonian systems. This method is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)].
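
    For reference, the implicit midpoint (IM) rule discussed above takes only a few lines for tracing field lines of a divergence-free field, with each step solving the midpoint equation by fixed-point iteration. The example field below is a simple solenoidal test field, not a plasma equilibrium.

        import numpy as np

        def b_field(x):
            """A simple divergence-free test field: B = (-y, x, 1)."""
            return np.array([-x[1], x[0], 1.0])

        def implicit_midpoint_step(x, h, tol=1e-12, max_iter=50):
            """One step of dx/ds = B(x) with the implicit midpoint rule,
            x_{n+1} = x_n + h * B((x_n + x_{n+1}) / 2), solved by fixed-point iteration."""
            x_new = x + h * b_field(x)            # explicit Euler predictor
            for _ in range(max_iter):
                x_next = x + h * b_field(0.5 * (x + x_new))
                if np.linalg.norm(x_next - x_new) < tol:
                    return x_next
                x_new = x_next
            return x_new

        # Trace one field line; the projection onto the x-y plane should stay on a
        # circle, which the midpoint rule preserves (up to the fixed-point tolerance).
        x = np.array([1.0, 0.0, 0.0])
        for _ in range(1000):
            x = implicit_midpoint_step(x, h=0.1)
        print(f"radius after 1000 steps: {np.hypot(x[0], x[1]):.12f}")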

  9. A step-defined sedentary lifestyle index: <5000 steps/day.

    Science.gov (United States)

    Tudor-Locke, Catrine; Craig, Cora L; Thyfault, John P; Spence, John C

    2013-02-01

    Step counting (using pedometers or accelerometers) is widely accepted by researchers, practitioners, and the general public. Given the mounting evidence of the link between low steps/day and time spent in sedentary behaviours, how few steps/day some populations actually perform, and the growing interest in the potentially deleterious effects of excessive sedentary behaviours on health, an emerging question is "How many steps/day are too few?" This review examines the utility, appropriateness, and limitations of using a reoccurring candidate for a step-defined sedentary lifestyle index: <5000 steps/day, considering cut-points ranging from higher (e.g., 10 000 steps/day) to lower values. A <5000 steps/day step-defined sedentary lifestyle index for adults is appropriate for researchers and practitioners and for communicating with the general public. There is little evidence to advocate any specific value indicative of a step-defined sedentary lifestyle index in children and adolescents.

  10. Comparison of step-by-step kinematics in repeated 30m sprints in female soccer players.

    Science.gov (United States)

    van den Tillaar, Roland

    2018-01-04

    The aim of this study was to compare kinematics in repeated 30m sprints in female soccer players. Seventeen subjects performed seven 30m sprints every 30s in one session. Kinematics were measured with an infrared contact mat and laser gun, and running times with an electronic timing device. The main findings were that sprint times increased in the repeated sprint ability test. The main changes in kinematics during the repeated sprint ability test were increased contact time and decreased step frequency, while no change in step length was observed. The step velocity increased at almost every step until the 14th step, which occurred at around 22 m. After this, the velocity was stable until the last step, when it decreased. This increase in step velocity was mainly caused by the increased step length and decreased contact times. It was concluded that the fatigue induced in repeated 30m sprints in female soccer players resulted in decreased step frequency and increased contact time. Employing this approach in combination with a laser gun and infrared mat for 30m makes it very easy to analyse running kinematics in repeated sprints in training. This extra information gives the athlete, coach and sports scientist the opportunity to give more detailed feedback and help to target these changes in kinematics better to enhance repeated sprint performance.

  11. Traffic safety and step-by-step driving licence for young people

    DEFF Research Database (Denmark)

    Tønning, Charlotte; Agerholm, Niels

    2017-01-01

    Young novice car drivers are much more accident-prone than other drivers - up to 10 times that of their parents' generation. A central solution to improve the traffic safety for this group is implementation of a step-by-step driving licence. A number of countries have introduced such schemes, and this paper presents a review of their safety effects. Most of the investigated schemes consist of a step-by-step driving licence with Step 1) various tests and education, Step 2) a period where driving is only allowed together with an experienced driver and Step 3) driving without a companion is allowed but with various restrictions and, in some cases, additional driving education and tests. In general, a step-by-step driving licence improves traffic safety even though the young people are permitted to drive a car earlier on. The effects from driving with an experienced driver vary.

  12. Time simulation of flutter with large stiffness changes

    Science.gov (United States)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  13. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    Science.gov (United States)

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
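
    As a hedged illustration of the kind of splitting discussed above, the sketch below implements a symmetric kick-drift-thermostat-drift-kick Langevin integrator (often called BAOAB). It is a generic member of this family, not necessarily the exact time-step-rescaled splitting examined in the paper, and the harmonic test force and parameter values are assumptions.

        import numpy as np

        def baoab_step(x, v, force, dt, mass, gamma, kT, rng):
            # One step of a symmetric Langevin splitting: half kick, half drift,
            # exact Ornstein-Uhlenbeck velocity update, half drift, half kick.
            v = v + 0.5 * dt * force(x) / mass
            x = x + 0.5 * dt * v
            c = np.exp(-gamma * dt)
            v = c * v + np.sqrt((1.0 - c**2) * kT / mass) * rng.standard_normal(x.shape)
            x = x + 0.5 * dt * v
            v = v + 0.5 * dt * force(x) / mass
            return x, v

        # Usage on a 1D harmonic oscillator (assumed test potential).
        rng = np.random.default_rng(0)
        force = lambda x: -x
        x, v = np.array([1.0]), np.array([0.0])
        for _ in range(10000):
            x, v = baoab_step(x, v, force, dt=0.01, mass=1.0, gamma=1.0, kT=1.0, rng=rng)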

  14. Fast analysis of wide-band scattering from electrically large targets with time-domain parabolic equation method

    Science.gov (United States)

    He, Zi; Chen, Ru-Shan

    2016-03-01

    An efficient three-dimensional time domain parabolic equation (TDPE) method is proposed to rapidly analyze the narrow-angle wideband EM scattering properties of electrically large targets. The finite-difference (FD) Crank-Nicolson (CN) scheme is the traditional tool used to solve the time-domain parabolic equation. However, a huge computational resource is required when the meshes become dense. Therefore, the alternating direction implicit (ADI) scheme is introduced to discretize the time-domain parabolic equation. In this way, the reduced transient scattered fields can be calculated line by line in each transverse plane for any time step with unconditional stability. As a result, fewer computational resources are required for the proposed ADI-based TDPE method when compared with both the traditional CN-based TDPE method and the finite-difference time-domain (FDTD) method. By employing the rotating TDPE method, the complete bistatic RCS can be obtained with encouraging accuracy for any observed angle. Numerical examples are given to demonstrate the accuracy and efficiency of the proposed method.

  15. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and

  16. Solving large scale structure in ten easy steps with COLA

    Energy Technology Data Exchange (ETDEWEB)

    Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States); Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  17. Large holographic displays for real-time applications

    Science.gov (United States)

    Schwerdtner, A.; Häussler, R.; Leister, N.

    2008-02-01

    Holography is generally accepted as the ultimate approach to displaying three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now, two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from achieving a satisfactory laboratory prototype, not to mention a marketable one: the small cell pitch required for a CGH, which results in a huge number of hologram cells, and the very high computational load for encoding the CGH. These seemingly inevitable technological hurdles have long remained uncleared, limiting the use of holography to special applications such as optical filtering, interference, beam forming, and digital holography for capturing the 3-D shape of objects. SeeReal Technologies has developed a new approach for real-time capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper shows that today's state-of-the-art reconfigurable Spatial Light Modulators (SLM), especially currently feasible LCD panels, are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. To achieve this, the original holographic concept of containing information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution, and thus the computational load, by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches was built and demonstrated at the SID conference and exhibition 2007 and at several other events.

  18. A positive and multi-element conserving time stepping scheme for biogeochemical processes in marine ecosystem models

    Science.gov (United States)

    Radtke, H.; Burchard, H.

    2015-01-01

    In this paper, an unconditionally positive and multi-element conserving time stepping scheme for systems of non-linearly coupled ODEs is presented. These systems of ODEs are used to describe biogeochemical transformation processes in marine ecosystem models. The numerical scheme is a positive-definite modification of the Runge-Kutta method; it can have arbitrarily high order of accuracy and does not require time step adaptation. If the scheme is combined with a modified Patankar-Runge-Kutta method from Burchard et al. (2003), it also gains the ability to solve a certain class of stiff numerical problems, although the accuracy is then restricted to second order. The performance of the new scheme on two test case problems is shown.
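
    As a concrete sketch of the modified Patankar idea referenced above (Burchard et al., 2003), the code below performs one first-order modified Patankar-Euler step for a generic production-destruction system. It illustrates only the positivity and conservation mechanism, not the higher-order scheme proposed in the paper, and the toy nutrient-phytoplankton rates are assumptions.

        import numpy as np

        def modified_patankar_euler(c, dt, P, D):
            # One modified Patankar-Euler step for dc_i/dt = sum_j P_ij(c) - sum_j D_ij(c).
            # Production terms are weighted by c_j^{n+1}/c_j^n and destruction terms by
            # c_i^{n+1}/c_i^n, giving a linear system whose solution stays positive.
            p, d = P(c), D(c)
            n = len(c)
            A = np.eye(n)
            for i in range(n):
                A[i, i] += dt * d[i, :].sum() / c[i]
                for j in range(n):
                    A[i, j] -= dt * p[i, j] / c[j]
            return np.linalg.solve(A, c)

        # Assumed toy exchange: nutrient N -> phytoplankton Ph at rate r*N*Ph, Ph -> N at rate m*Ph.
        r, m = 1.0, 0.3
        P = lambda c: np.array([[0.0, m * c[1]], [r * c[0] * c[1], 0.0]])
        D = lambda c: P(c).T                      # d_ij = p_ji makes the scheme conservative
        c = np.array([9.0, 1.0])
        for _ in range(100):
            c = modified_patankar_euler(c, dt=0.5, P=P, D=D)   # c stays positive, c.sum() is conserved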

  19. Time to pause before the next step

    International Nuclear Information System (INIS)

    Siemon, R.E.

    1998-01-01

    Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than ''demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...'' which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward and the counter arguments are discussed here

  20. Stepping out: dare to step forward, step back, or just stand still and breathe.

    Science.gov (United States)

    Waisman, Mary Sue

    2012-01-01

    It is important to step out and make a difference. We have one of the most unique and diverse professions that allows for diversity in thought and practice, permitting each of us to grow in our unique niches and make significant contributions. I was frightened to 'step out' to go to culinary school at the age of 46, but it changed forever the way I look at my profession and I have since experienced the most enjoyable and innovative career. There are also times when it is important to 'step back' to relish the roots of our profession; to help bring food back into nutrition; to translate all of our wonderful science into a language of food that Canadians understand. We all need to take time to 'just stand still and breathe': to celebrate our accomplishments, reflect on our actions, ensure we are heading toward our vision, keep the profession vibrant and relevant, and cherish one another.

  1. Step training improves reaction time, gait and balance and reduces falls in older people: a systematic review and meta-analysis.

    Science.gov (United States)

    Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R

    2017-04-01

    To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles were searched from inception to March 2015. Randomised (RCT) or clinical controlled trials (CCT) of volitional and reactive stepping interventions that included older (minimum age 60) people and provided data on falls or fall risk factors were included. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68). A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance and timed up and go performance. Overall, stepping interventions reduced falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. CRD42015017357. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  2. Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism

    International Nuclear Information System (INIS)

    Stolterfoht, N.

    1993-01-01

    The semiclassical approximation is applied in second order to describe time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interferences between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. Combining these paths within the independent-particle frozen orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering and the capability for interference are regained, as one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments of single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)].

  3. On the Convexity of Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2016-01-01

    The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an

  4. Timing of the steps in transformation of C3H 10T1/2 cells by X-irradiation

    International Nuclear Information System (INIS)

    Kennedy, A.R.; Cairns, J.; Little, J.B.

    1984-01-01

    Transformation of cells in culture by chemical carcinogens or X-rays seems to require at least two steps. The initial step is a frequent event, occurring, for example, after transient exposure to either methylcholanthrene or X-rays. It has been hypothesized that the second step behaves like a spontaneous mutation, having a constant but small probability of occurring each time an initiated cell divides. We show here that the clone size distribution of transformed cells in growing cultures initiated by X-rays is, indeed, exactly what would be expected on that hypothesis. (author)
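
    As a hedged illustration of this hypothesis (not an analysis from the paper), the Monte Carlo sketch below grows cultures in which each daughter cell produced by a dividing initiated cell becomes transformed with a small constant probability, so that transformed clone sizes follow the distribution the hypothesis predicts. The probability, generation count and culture count are arbitrary illustrative values.

        import numpy as np

        rng = np.random.default_rng(1)
        mu, generations, n_cultures = 1e-4, 14, 2000   # assumed illustrative parameters

        clone_sizes = []
        for _ in range(n_cultures):
            initiated, transformed = 1, 0
            for _ in range(generations):
                # every initiated cell divides; each daughter transforms with probability mu
                daughters = 2 * initiated
                newly_transformed = rng.binomial(daughters, mu)
                transformed = 2 * transformed + newly_transformed   # existing clones also double
                initiated = daughters - newly_transformed
            clone_sizes.append(transformed)

        clone_sizes = np.array(clone_sizes)
        print("fraction of cultures with at least one transformed clone:",
              np.mean(clone_sizes > 0))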

  5. Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential

    International Nuclear Information System (INIS)

    Zhang Ying; Liang Haozhao; Meng Jie

    2009-01-01

    The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus 12C as an example, even with nonlocal potentials, the direct ITS evolution of the Dirac equation still runs into the problem of the Dirac sea. However, following the recipe in our former investigation, this problem can be avoided by applying the ITS evolution to the corresponding Schroedinger-like equation without localization, which gives convergent results identical to those obtained iteratively by the shooting method with localized effective potentials.
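
    For readers unfamiliar with the basic iteration, the sketch below applies imaginary time stepping to a one-dimensional Schroedinger equation with a local harmonic potential, a far simpler setting than the Dirac equation with nonlocal potentials treated in the paper; the grid, potential and step size are illustrative assumptions.

        import numpy as np

        # ITS evolution psi <- (1 - dtau*H) psi with renormalisation converges to the
        # lowest eigenstate of H.  Assumed 1D harmonic-oscillator test problem.
        N, L = 400, 20.0
        x = np.linspace(-L / 2, L / 2, N)
        dx = x[1] - x[0]
        V = 0.5 * x**2

        def apply_H(psi):
            lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
            return -0.5 * lap + V * psi

        psi = np.exp(-x**2)                       # arbitrary positive initial guess
        dtau = 0.5 * dx**2                        # small step keeps the explicit update stable
        for _ in range(20000):
            psi = psi - dtau * apply_H(psi)
            psi /= np.sqrt(np.sum(psi**2) * dx)   # renormalise every step

        E0 = np.sum(psi * apply_H(psi)) * dx
        print("ground-state energy estimate:", E0)   # approaches 0.5 for this potential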

  6. Microsoft® Visual Basic® 2010 Step by Step

    CERN Document Server

    Halvorson, Michael

    2010-01-01

    Your hands-on, step-by-step guide to learning Visual Basic® 2010. Teach yourself the essential tools and techniques for Visual Basic® 2010 - one step at a time. No matter what your skill level, you'll find the practical guidance and examples you need to start building professional applications for Windows® and the Web. Discover how to: work in the Microsoft® Visual Studio® 2010 Integrated Development Environment (IDE); master essential techniques, from managing data and variables to using inheritance and dialog boxes; create professional-looking UIs; add visual effects and print support; build com

  7. Time and frequency domain analyses of the Hualien Large-Scale Seismic Test

    International Nuclear Information System (INIS)

    Kabanda, John; Kwon, Oh-Sung; Kwon, Gunup

    2015-01-01

    Highlights: • Time- and frequency-domain analysis methods are verified against each other. • The two analysis methods are validated against Hualien LSST. • The nonlinear time domain (NLTD) analysis resulted in more realistic response. • The frequency domain (FD) analysis shows amplification at resonant frequencies. • The NLTD analysis requires significant modeling and computing time. - Abstract: In the nuclear industry, the equivalent-linear frequency domain analysis method has been the de facto standard procedure primarily due to the method's computational efficiency. This study explores the feasibility of applying the nonlinear time domain analysis method for the soil–structure-interaction analysis of nuclear power facilities. As a first step, the equivalency of the time and frequency domain analysis methods is verified through a site response analysis of one-dimensional soil, a dynamic impedance analysis of soil–foundation system, and a seismic response analysis of the entire soil–structure system. For the verifications, an idealized elastic soil–structure system is used to minimize variables in the comparison of the two methods. Then, the verified analysis methods are used to develop time and frequency domain models of Hualien Large-Scale Seismic Test. The predicted structural responses are compared against field measurements. The models are also analyzed with an amplified ground motion to evaluate discrepancies of the time and frequency domain analysis methods when the soil–structure system behaves beyond the elastic range. The analysis results show that the equivalent-linear frequency domain analysis method amplifies certain frequency bands and tends to result in higher structural acceleration than the nonlinear time domain analysis method. A comparison with field measurements shows that the nonlinear time domain analysis method better captures the frequency distribution of recorded structural responses than the frequency domain

  8. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming of a molecular dynamics simulation code is carried out with a step-by-step programming technique using the two-phase method. Within a certain range of computing parameters, parallel performance is obtained by parallelizing at the do-loop level, distributing the calculation across processors according to loop indices, on both the vector parallel computer VPP500 and the scalar parallel computer Paragon. It is also found that VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallelization can be reduced to a negligible level by vectorization; the time-consuming parts of the program are then concentrated in fewer sections that can be accelerated by do-loop-level parallelization. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)

  9. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-01

    A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
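
    The general pattern of such an adaptive scheme can be sketched as follows: grow the time step while a convergence criterion is met comfortably, and shrink and retry when it is not. This is only a generic illustration under assumed interfaces, not the algorithm implemented in PHISICS/RELAP5-3D.

        def adaptive_march(advance, t_end, dt0, dt_min, dt_max, tol, grow=2.0, shrink=0.5):
            # Generic adaptive time-step driver.  'advance(t, dt)' attempts one coupled
            # step and returns (converged, error_estimate).  Sketch of the idea only.
            t, dt = 0.0, dt0
            while t < t_end:
                converged, err = advance(t, dt)
                if converged and err < tol:
                    t += dt
                    if err < 0.1 * tol:
                        dt = min(dt * grow, dt_max)    # easy convergence: enlarge the step
                elif dt > dt_min:
                    dt = max(dt * shrink, dt_min)      # poor convergence: retry a smaller step
                else:
                    raise RuntimeError("step rejected at the minimum allowed time step")
            return t

        # Example with a dummy physics step that always converges (assumption for illustration).
        adaptive_march(lambda t, dt: (True, 1e-4 * dt), t_end=10.0, dt0=0.1,
                       dt_min=1e-4, dt_max=1.0, tol=1e-3)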

  10. Implementation of a Large Eddy Simulation Method Applied to Recirculating Flow in a Ventilated Room

    DEFF Research Database (Denmark)

    Davidson, Lars

    In the present work Large Eddy Simulations are presented. The flow in a ventilated enclosure is studied. We use an explicit, two-step time-advancement scheme where the pressure is solved from a Poisson equation.

  11. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    Science.gov (United States)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
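
    A hedged, synthetic illustration of the idea is given below: finite-difference velocities computed over several different lags carry a position-noise contribution that scales as 1/(n*dt)^2, so the lag-independent part of the measured variance can be separated from the noise by a simple fit. The trajectory, noise level and the particular linear fit are assumptions for illustration, not the paper's exact procedure.

        import numpy as np

        rng = np.random.default_rng(2)
        dt, n_samples, sigma_noise = 1e-3, 200000, 5e-3
        t = np.arange(n_samples) * dt
        true_positions = np.sin(2 * np.pi * t)                    # smooth reference track
        measured = true_positions + rng.normal(0.0, sigma_noise, n_samples)

        lags = np.arange(1, 6)
        variances = np.array([((measured[n:] - measured[:-n]) / (n * dt)).var() for n in lags])

        # Fit variance = intercept + slope / (n*dt)^2; the intercept is the noise-corrected estimate.
        A = np.column_stack([np.ones_like(lags, dtype=float), 1.0 / (lags * dt) ** 2])
        intercept, slope = np.linalg.lstsq(A, variances, rcond=None)[0]
        print("noise-corrected velocity variance:", intercept)    # close to (2*pi)^2 / 2 here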

  12. Discrete-time optimal control and games on large intervals

    CERN Document Server

    Zaslavski, Alexander J

    2017-01-01

    Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions that are independent of the length of the interval, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems. Next the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...

  13. Smart Wireless Power Transfer Operated by Time-Modulated Arrays via a Two-Step Procedure

    Directory of Open Access Journals (Sweden)

    Diego Masotti

    2015-01-01

    Full Text Available The paper introduces a novel method for agile and precise wireless power transmission operated by a time-modulated array. The unique, almost real-time reconfiguration capability of these arrays is fully exploited by a two-step procedure: first, a two-element time-modulated subarray is used for localization of tagged sensors to be energized; the entire 16-element TMA then provides the power to the detected tags, by exploiting the fundamental and first-sideband harmonic radiation. An investigation on the best array architecture is carried out, showing the importance of the adopted nonlinear/full-wave computer-aided-design platform. Very promising simulated energy transfer performance of the entire nonlinear radiating system is demonstrated.

  14. The Large Observatory For x-ray Timing

    DEFF Research Database (Denmark)

    Feroci, M.; Herder, J. W. den; Bozzo, E.

    2014-01-01

    The Large Observatory For x-ray Timing (LOFT) was studied within ESA M3 Cosmic Vision framework and participated in the final down-selection for a launch slot in 2022-2024. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument, LOFT will study th...

  15. The association between choice stepping reaction time and falls in older adults--a path analysis model

    NARCIS (Netherlands)

    Pijnappels, M.A.G.M.; Delbaere, K.; Sturnieks, D.L.; Lord, S.R.

    2010-01-01

    Background: choice stepping reaction time (CSRT) is a functional measure that has been shown to significantly discriminate older fallers from non-fallers. Objective: to investigate how physiological and cognitive factors mediate the association between CSRT performance and multiple falls by use of

  16. Intake flow and time step analysis in the modeling of a direct injection Diesel engine

    Energy Technology Data Exchange (ETDEWEB)

    Zancanaro Junior, Flavio V.; Vielmo, Horacio A. [Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Mechanical Engineering Dept.], E-mails: zancanaro@mecanica.ufrgs.br, vielmoh@mecanica.ufrgs.br

    2010-07-01

    This paper discusses the effects of the time step on the turbulent flow structure in the intake and in-cylinder systems of a Diesel engine during the intake process, under the motored condition. The three-dimensional model comprises a reciprocating engine geometry with a bowl-in-piston combustion chamber, a shallow-ramp helical intake port and a conventional exhaust port. The equations are numerically solved, including a transient analysis with valve and piston movements, for an engine speed of 1500 rpm, using a commercial Finite Volume CFD code with parallel computation. For the purpose of examining the in-cylinder turbulence characteristics, two parameters are observed: the discharge coefficient and the swirl ratio. These two parameters quantify the fluid flow characteristics inside the cylinder during the intake stroke, so their study and understanding is very important. Additionally, the evolution of the discharge coefficient and swirl ratio along the crank angle is correlated and compared, with the objective of clarifying the physical mechanisms. Regarding turbulence, computations are performed with the k-ω SST eddy viscosity model, in its Low-Reynolds approach, with standard near-wall treatment. The system of partial differential equations to be solved consists of the Reynolds-averaged compressible Navier-Stokes equations with the constitutive relations for an ideal gas, using a segregated solution algorithm. The enthalpy equation is also solved. A moving hexahedral trimmed mesh independence study is presented, and many convergence tests are performed to establish a secure criterion. The results for the pressure fields are shown in relation to the vertical plane that passes through the valves. Areas of low pressure can be seen in the valve curtain region, due to strong jet flows. Also, divergences between the time steps can be noted, mainly for the smaller time step. (author)

  17. Just-in-time connectivity for large spiking networks.

    Science.gov (United States)

    Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-11-01

    The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
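
    The space-saving core of the approach can be sketched in a few lines: regenerate a presynaptic cell's outgoing connections from a deterministic per-cell seed whenever it fires, instead of storing them. The sketch shows only this JitCon-style regeneration feeding a plain event queue, not the further JitEvent callback optimization; network size, fan-out, delays and weights are assumed values.

        import heapq
        import numpy as np

        N_CELLS, FAN_OUT = 100000, 100            # assumed network statistics

        def outgoing_connections(pre_id):
            # Same seed -> same connections every time, so nothing needs to be stored.
            rng = np.random.default_rng(pre_id)
            targets = rng.integers(0, N_CELLS, FAN_OUT)
            delays = rng.uniform(1.0, 5.0, FAN_OUT)      # ms
            weights = rng.normal(0.5, 0.1, FAN_OUT)
            return targets, delays, weights

        event_queue = []                           # entries: (delivery_time, target_id, weight)

        def on_spike(pre_id, t_spike):
            targets, delays, weights = outgoing_connections(pre_id)   # regenerated just in time
            for tgt, d, w in zip(targets, delays, weights):
                heapq.heappush(event_queue, (t_spike + d, int(tgt), float(w)))

        on_spike(pre_id=42, t_spike=10.0)
        t, target, w = heapq.heappop(event_queue)  # earliest synaptic delivery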

  18. Influence of step complexity and presentation style on step performance of computerized emergency operating procedures

    Energy Technology Data Exchange (ETDEWEB)

    Xu Song [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China); Li Zhizhong [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)], E-mail: zzli@tsinghua.edu.cn; Song Fei; Luo Wei; Zhao Qianyi; Salvendy, Gavriel [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)

    2009-02-15

    With the development of information technology, computerized emergency operating procedures (EOPs) are taking the place of paper-based ones. However, ergonomics issues of computerized EOPs have not been studied adequately since industrial practice is still quite limited. This study examined the influence of step complexity and presentation style of EOPs on step performance. A simulated computerized EOP system was developed in two presentation styles: Style A, a combination of one- and two-dimensional flowcharts; Style B, a combination of a two-dimensional flowchart and a success logic tree. Step complexity was quantified by a complexity measure model based on an entropy concept. Forty subjects participated in an experiment of EOP execution using the simulated system. Analysis of the experimental data indicates that step complexity and presentation style could significantly influence step performance (both step error rate and operation time). Regression models were also developed. The regression analysis results imply that the operation time of a step could be well predicted by step complexity, while step error rate could be only partly predicted by it. The result of a questionnaire investigation implies that step error rate was influenced not only by the operation task itself but also by other human factors. These findings may be useful for the design and assessment of computerized EOPs.

  19. The Non–Symmetric s–Step Lanczos Algorithm: Derivation of Efficient Recurrences and Synchronization–Reducing Variants of BiCG and QMR

    Directory of Open Access Journals (Sweden)

    Feuerriegel Stefan

    2015-12-01

    Full Text Available The Lanczos algorithm is among the most frequently used iterative techniques for computing a few dominant eigenvalues of a large sparse non-symmetric matrix. At the same time, it serves as a building block within biconjugate gradient (BiCG and quasi-minimal residual (QMR methods for solving large sparse non-symmetric systems of linear equations. It is well known that, when implemented on distributed-memory computers with a huge number of processes, the synchronization time spent on computing dot products increasingly limits the parallel scalability. Therefore, we propose synchronization-reducing variants of the Lanczos, as well as BiCG and QMR methods, in an attempt to mitigate these negative performance effects. These so-called s-step algorithms are based on grouping dot products for joint execution and replacing time-consuming matrix operations by efficient vector recurrences. The purpose of this paper is to provide a rigorous derivation of the recurrences for the s-step Lanczos algorithm, introduce s-step BiCG and QMR variants, and compare the parallel performance of these new s-step versions with previous algorithms.

  20. Age-related changes in gait adaptability in response to unpredictable obstacles and stepping targets.

    Science.gov (United States)

    Caetano, Maria Joana D; Lord, Stephen R; Schoene, Daniel; Pelicioni, Paulo H S; Sturnieks, Daina L; Menant, Jasmine C

    2016-05-01

    A large proportion of falls in older people occur when walking. Limitations in gait adaptability might contribute to tripping, a frequently reported cause of falls in this group. The aim was to evaluate age-related changes in gait adaptability in response to obstacles or stepping targets presented at short notice, i.e., approximately two steps ahead. Fifty older adults (aged 74±7 years; 34 females) and 21 young adults (aged 26±4 years; 12 females) completed 3 usual gait speed (baseline) trials. They then completed the following randomly presented gait adaptability trials: obstacle avoidance, short stepping target, long stepping target and no target/obstacle (3 trials of each). Compared with the young adults, the older adults slowed significantly in the no target/obstacle trials relative to the baseline trials. They took more steps and spent more time in double support while approaching the obstacle and stepping targets, demonstrated poorer stepping accuracy and made more stepping errors (failed to hit the stepping targets/avoid the obstacle). The older adults also reduced the velocity of the two preceding steps and shortened the previous step in the long stepping target condition and in the obstacle avoidance condition. Compared with their younger counterparts, the older adults exhibited a more conservative adaptation strategy characterised by slow, short and multiple steps with longer time in double support. Even so, they demonstrated poorer stepping accuracy and made more stepping errors. This reduced gait adaptability may place older adults at increased risk of falling when negotiating unexpected hazards. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Multigrid Reduction in Time for Nonlinear Parabolic Problems

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Univ. of Colorado, Boulder, CO (United States); O' Neill, B. [Univ. of Colorado, Boulder, CO (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-04

    The need for parallel-in-time methods is being driven by changes in computer architectures, where future speed-ups will be available through greater concurrency, but not faster clock speeds, which are stagnant. This leads to a bottleneck for sequential time marching schemes, because they lack parallelism in the time dimension. Multigrid Reduction in Time (MGRIT) is an iterative procedure that allows for temporal parallelism by utilizing multigrid reduction techniques and a multilevel hierarchy of coarse time grids. MGRIT has been shown to be effective for linear problems, with speedups of up to 50 times. The goal of this work is the efficient solution of nonlinear problems with MGRIT, where efficient is defined as achieving similar performance when compared to a corresponding linear problem. As our benchmark, we use the p-Laplacian, where p = 4 corresponds to a well-known nonlinear diffusion equation and p = 2 corresponds to our benchmark linear diffusion problem. When considering linear problems and implicit methods, the use of optimal spatial solvers such as spatial multigrid implies that the cost of one time step evaluation is fixed across temporal levels, which have a large variation in time step sizes. This is not the case for nonlinear problems, where the work required increases dramatically on coarser time grids, where relatively large time steps lead to worse conditioned nonlinear solves and increased nonlinear iteration counts per time step evaluation. This is the key difficulty explored by this paper. We show that by using a variety of strategies, most importantly, spatial coarsening and an alternate initial guess to the nonlinear time-step solver, we can reduce the work per time step evaluation over all temporal levels to a range similar to that of the corresponding linear problem. This allows for parallel scaling behavior comparable to the corresponding linear problem.

  2. Implicit time accurate simulation of unsteady flow

    Science.gov (United States)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
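
    The pseudo-time device mentioned above can be illustrated with a scalar sketch in which each physical Crank-Nicolson step is converged by relaxing in pseudo-time. The explicit pseudo-time relaxation and the linear decay test problem are simplifying assumptions; the paper itself uses a quasi-Newton iteration with a symmetric block Gauss-Seidel linear solver.

        import numpy as np

        def crank_nicolson_step(u, rhs, dt, dtau=0.1, max_inner=200, tol=1e-10):
            # Solve the implicit Crank-Nicolson equations for one physical time step by
            # marching in pseudo-time:  du/dtau = -R(u),
            # with residual R(u) = (u - u_n)/dt - 0.5*(rhs(u) + rhs(u_n)).
            u_n = u.copy()
            f_n = rhs(u_n)
            for _ in range(max_inner):
                R = (u - u_n) / dt - 0.5 * (rhs(u) + f_n)
                u = u - dtau * R                   # explicit pseudo-time relaxation
                if np.linalg.norm(R) < tol:
                    break
            return u

        # Usage on a linear decay problem du/dt = -u (assumed test case).
        u = np.array([1.0])
        for _ in range(100):
            u = crank_nicolson_step(u, rhs=lambda v: -v, dt=0.1)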

  3. Mixed Discretization of the Time Domain MFIE at Low Frequencies

    KAUST Repository

    Ulku, Huseyin Arda

    2017-01-10

    Solution of the magnetic field integral equation (MFIE), which is obtained by the classical marching on-in-time (MOT) scheme, becomes inaccurate when the time step is large, i.e., under low-frequency excitation. It is shown here that the inaccuracy stems from the classical MOT scheme’s failure to predict the correct scaling of the current’s Helmholtz components for large time steps. A recently proposed mixed discretization strategy is used to alleviate the inaccuracy problem by restoring the correct scaling of the current’s Helmholtz components under low-frequency excitation.

  4. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. Here we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time step selection algorithms and their influence on the accuracy and efficiency of the various solution options

  5. Aerial robot intelligent control method based on back-stepping

    Science.gov (United States)

    Zhou, Jian; Xue, Qian

    2018-05-01

    The aerial robot is characterized by strong nonlinearity, high coupling and parameter uncertainty; a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellum Model Articulation Controller neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and fix parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law has the desired attitude tracking performance and good robustness in case of uncertainties and large errors in the model parameters.

  6. Comparison of the Screening Tests for Gestational Diabetes Mellitus between "One-Step" and "Two-Step" Methods among Thai Pregnant Women.

    Science.gov (United States)

    Luewan, Suchaya; Bootchaingam, Phenphan; Tongsong, Theera

    2018-01-01

    To compare the prevalence and pregnancy outcomes of GDM between those screened by the "one-step" (75 gm GTT) and "two-step" (100 gm GTT) methods. A prospective study was conducted on singleton pregnancies at low or average risk of GDM. All were screened between 24 and 28 weeks, using the one-step or two-step method based on patients' preference. The primary outcome was prevalence of GDM, and secondary outcomes included birthweight, gestational age, rates of preterm birth, small/large-for-gestational age, low Apgar scores, cesarean section, and pregnancy-induced hypertension. A total of 648 women were screened: 278 in the one-step group and 370 in the two-step group. The prevalence of GDM was significantly higher in the one-step group; 32.0% versus 10.3%. Baseline characteristics and pregnancy outcomes in both groups were comparable. However, mean birthweight was significantly higher among pregnancies with GDM diagnosed by the two-step approach (3204 ± 555 versus 3009 ± 666 g; p =0.022). Likewise, the rate of large-for-date tended to be higher in the two-step group, but was not significant. The one-step approach is associated with very high prevalence of GDM among Thai population, without clear evidence of better outcomes. Thus, this approach may not be appropriate for screening in a busy antenatal care clinic like our setting or other centers in developing countries.

  7. Time dispersion in large plastic scintillation neutron detectors

    International Nuclear Information System (INIS)

    De, A.; Dasgupta, S.S.; Sen, D.

    1993-01-01

    Time dispersion (TD) has been computed for large neutron detectors using plastic scintillators. It has been shown that TD seen by the PM tube does not necessarily increase with incident neutron energy, a result not fully in agreement with the usual finding

  8. Evidence for Topological Edge States in a Large Energy Gap near the Step Edges on the Surface of ZrTe_{5}

    Directory of Open Access Journals (Sweden)

    R. Wu

    2016-05-01

    Full Text Available Two-dimensional topological insulators with a large bulk band gap are promising for experimental studies of quantum spin Hall effect and for spintronic device applications. Despite considerable theoretical efforts in predicting large-gap two-dimensional topological insulator candidates, none of them have been experimentally demonstrated to have a full gap, which is crucial for quantum spin Hall effect. Here, by combining scanning tunneling microscopy/spectroscopy and angle-resolved photoemission spectroscopy, we reveal that ZrTe_{5} crystal hosts a large full gap of ∼100  meV on the surface and a nearly constant density of states within the entire gap at the monolayer step edge. These features are well reproduced by our first-principles calculations, which point to the topologically nontrivial nature of the edge states.

  9. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment.

    Science.gov (United States)

    Bunce, David; Haynes, Becky I; Lord, Stephen R; Gschwind, Yves J; Kochan, Nicole A; Reppermund, Simone; Brodaty, Henry; Sachdev, Perminder S; Delbaere, Kim

    2017-06-01

    Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI) are at high risk of falling, the association between IIV and prospective falls is unknown. We conducted a longitudinal cohort study in cognitively intact (n = 271) and MCI (n = 154) community-dwelling adults aged 70-90 years. IIV was assessed through a variety of measures including simple and choice hand reaction time and choice stepping reaction time tasks (CSRT), the latter administered as a single task and also with a secondary working memory task. Logistic regression did not show an association between IIV on the hand-held tasks and falls. Greater IIV in both CSRT tasks, however, did significantly increase the risk of future falls. This effect was specific to the MCI group, with a stronger effect in persons exhibiting gait, posture, or physiological impairment. The findings suggest that increased stepping IIV may indicate compromised neural circuitry involved in executive function, gait, and posture in persons with MCI increasing their risk of falling. IIV measures have potential to assess neurobiological disturbance underlying physical and cognitive dysfunction in old age, and aid fall risk assessment and routine care in community and health care settings. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
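
    Intraindividual variability of this kind is typically computed as a within-person dispersion of trial-level reaction times; a minimal sketch under that assumption is shown below (both the raw standard deviation and the coefficient of variation), with made-up stepping reaction times. The exact IIV definition used in the study may differ.

        import numpy as np

        def intraindividual_variability(reaction_times_ms):
            # IIV as the within-person standard deviation of trial-level reaction times;
            # some studies use the coefficient of variation (SD/mean) instead, so both are returned.
            rt = np.asarray(reaction_times_ms, dtype=float)
            sd = rt.std(ddof=1)
            return sd, sd / rt.mean()

        # Illustrative choice-stepping reaction times (ms) for one participant.
        sd, cv = intraindividual_variability([612, 655, 741, 598, 690, 702, 640, 720])
        print("IIV (SD, CV):", sd, cv)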

  10. An explicit marching on-in-time solver for the time domain volume magnetic field integral equation

    KAUST Repository

    Sayed, Sadeed Bin

    2014-07-01

    Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for inversion of a full matrix system at every time step.

  11. An explicit marching on-in-time solver for the time domain volume magnetic field integral equation

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, Huseyin Arda; Bagci, Hakan

    2014-01-01

    Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for inversion of a full matrix system at every time step.

  12. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    Science.gov (United States)

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  13. Data warehousing technologies for large-scale and right-time data

    DEFF Research Database (Denmark)

    Xiufeng, Liu

    heterogeneous sources into a central data warehouse (DW) by Extract-Transform-Load (ETL) at regular time intervals, e.g., monthly, weekly, or daily. But now, it becomes challenging for large-scale data, and hard to meet the near real-time/right-time business decisions. This thesis considers some...

  14. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic, and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. In this paper, we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time-step selection algorithms and their influence on the accuracy and efficiency of the various solution options.
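
    As an illustration of the kind of time-step control discussed above, the hedged sketch below couples a theta-method integrator to a simple amplitude-change criterion on a one-delayed-group point kinetics model. The parameter values and the controller thresholds are assumptions for the example; this is not the DIF3D-K algorithm.

```python
# Hedged sketch: theta-method time stepping for one-group point kinetics with a
# simple amplitude-based time-step controller (halve when the flux amplitude
# changes too much per step, grow when it changes slowly).
import numpy as np

def theta_step(y, A, dt, theta=0.5):
    """One theta-method step for the linear system y' = A y."""
    n = len(y)
    lhs = np.eye(n) - dt * theta * A
    rhs = (np.eye(n) + dt * (1.0 - theta) * A) @ y
    return np.linalg.solve(lhs, rhs)

def run(rho=0.001, beta=0.0065, Lambda=1e-4, lam=0.08,
        t_end=1.0, dt=1e-3, max_rel_change=0.05):
    A = np.array([[(rho - beta) / Lambda, lam],
                  [beta / Lambda,        -lam]])
    y = np.array([1.0, beta / (Lambda * lam)])   # equilibrium precursor level
    t = 0.0
    while t < t_end:
        y_new = theta_step(y, A, dt)
        rel = abs(y_new[0] - y[0]) / abs(y[0])
        if rel > max_rel_change:          # amplitude changed too much: retry smaller
            dt *= 0.5
            continue
        t += dt
        y = y_new
        if rel < 0.25 * max_rel_change:   # amplitude changing slowly: grow the step
            dt *= 1.5
    return y

print(run())
```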

  15. Quantum transport with long-range steps on Watts-Strogatz networks

    Science.gov (United States)

    Wang, Yan; Xu, Xin-Jian

    2016-07-01

    We study transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN), which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using the localized initial condition, we compute the time-averaged occupation probability of the initial site, which is related to the nonlinearity, the long-range steps and rewiring links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ɛ=-1 (1), as long-range interactions are intensified. The structure disorder induced by random rewiring, however, has dual effects for ɛ=-1 and inhibits the self-trapping behavior for ɛ=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring ruled by the discrete linear Schrödinger (DLS) equation. It is found that only the presence of the long-range steps does not affect the efficiency of the coherent exciton transport, while only the allowance of random rewiring enhances the partial localization. If both factors are considered simultaneously, localization is greatly strengthened, and the transport becomes worse.
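
    For the linear (DLS/CTQW) part of the study, the quantity of interest is the time-averaged occupation probability of the initially excited site. The sketch below computes it for a plain nearest-neighbour ring (no long-range steps or rewiring) with illustrative size, coupling and averaging window; it is only a minimal baseline, not the paper's model.

```python
# Hedged sketch: continuous-time quantum walk on a ring, time-averaged return
# probability of the localized initial site. All parameters are illustrative.
import numpy as np
from scipy.linalg import expm

def ring_hamiltonian(n, eps=1.0):
    H = np.zeros((n, n))
    for i in range(n):
        H[i, (i + 1) % n] = H[(i + 1) % n, i] = eps   # nearest-neighbour coupling
    return H

def time_averaged_return_probability(n=20, t_max=50.0, n_t=500):
    H = ring_hamiltonian(n)
    psi0 = np.zeros(n, dtype=complex)
    psi0[0] = 1.0                                      # localized initial condition
    probs = []
    for t in np.linspace(0.0, t_max, n_t):
        psi = expm(-1j * H * t) @ psi0
        probs.append(abs(psi[0]) ** 2)
    return float(np.mean(probs))

print(time_averaged_return_probability())
```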

  16. Impact of first-step potential and time on the vertical growth of ZnO nanorods on ITO substrate by two-step electrochemical deposition

    International Nuclear Information System (INIS)

    Kim, Tae Gyoum; Jang, Jin-Tak; Ryu, Hyukhyun; Lee, Won-Jae

    2013-01-01

    Highlights: •We grew vertical ZnO nanorods on ITO substrate using a two-step continuous potential process. •The nucleation for ZnO nanorod growth was changed by the first-step potential and duration. •The vertical ZnO nanorods were well grown when the first-step potential was −1.2 V for 10 s. -- Abstract: In this study, we analyzed the growth of ZnO nanorods on an ITO (indium doped tin oxide) substrate by electrochemical deposition using a two-step, continuous potential process. We examined the effect of changing the first-step potential as well as the first-step duration on the morphological, structural and optical properties of ZnO nanorods, measured using field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and photoluminescence (PL), respectively. As a result, vertical ZnO nanorods were grown on ITO substrate without the need for a template when the first-step potential was set to −1.2 V for a duration of 10 s, and the second-step potential was set to −0.7 V for a duration of 1190 s. The ZnO nanorods on this sample showed the highest XRD (0 0 2)/(1 0 0) peak intensity ratio and the highest PL near band edge emission to deep level emission peak intensity ratio (NBE/DLE). In this study, the nucleation for vertical ZnO nanorod growth on an ITO substrate was found to be affected by changes in the first-step potential and first-step duration.

  17. Associations between seasonal meteorological conditions and the daily step count of adults in Yokohama, Japan: Results of year-round pedometer measurements in a large population

    Directory of Open Access Journals (Sweden)

    Kimihiro Hino

    2017-12-01

    Full Text Available People's year-round interpersonal step count variations according to meteorological conditions are not fully understood, because complete year-round data from a sufficient sample of the general population are difficult to acquire. This study examined the associations between meteorological conditions and objectively measured step counts using year-round data collected from a large cohort (N=24,625) in Yokohama, Japan from April 2015 to March 2016. Two-piece linear regression analysis was used to examine the associations between the monthly median daily step count and three meteorological indices (mean values of temperature, temperature-humidity index (THI), and net effective temperature (NET)). The number of steps per day peaked at temperatures between 19.4 and 20.7°C. At lower temperatures, the increase in steps per day was between 46.4 and 52.5 steps per 1°C increase. At temperatures higher than those at which step counts peaked, the decrease in steps per day was between 98.0 and 187.9 per 1°C increase. Furthermore, these effects were more obvious in elderly than non-elderly persons in both sexes. A similar tendency was seen when using THI and NET instead of temperature. Among the three meteorological indices, the highest R2 value with step counts was observed with THI in all four groups. Both high and low meteorological indices discourage people from walking and higher values of the indices adversely affect step count more than lower values, particularly among the elderly. Among the three indices assessed, THI best explains the seasonal fluctuations in step counts. Keywords: Elderly, Developed countries, Health policy, Humidity, Linear regression, Physical activity, Temperature
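
    The analysis described above is a two-piece (broken-stick) linear regression of step counts against a meteorological index. The sketch below fits such a model by grid-searching the breakpoint; the data and the candidate breakpoints are synthetic and only illustrate the technique.

```python
# Hedged sketch of a two-piece linear regression: two line segments joined at a
# breakpoint chosen by minimizing the sum of squared errors over a grid.
import numpy as np

def two_piece_fit(x, y, candidates):
    best = None
    for b in candidates:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - b, 0.0, None)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ coef) ** 2)
        if best is None or sse < best[0]:
            best = (sse, b, coef)   # coef = [intercept, slope below b, extra slope above b]
    return best

rng = np.random.default_rng(1)
temp = rng.uniform(0.0, 35.0, 300)
steps = 6000 + 50 * temp - 150 * np.clip(temp - 20.0, 0.0, None) + rng.normal(0.0, 300.0, 300)
sse, breakpoint, coef = two_piece_fit(temp, steps, np.linspace(10.0, 30.0, 81))
print(breakpoint, coef)
```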

  18. Numerical simulation of complex turbulent Flow over a backward-facing step

    International Nuclear Information System (INIS)

    Silveira Neto, A.

    1991-06-01

    A statistical and topological study of a complex turbulent flow over a backward-facing step is carried out by means of Direct and Large-Eddy Simulations. Direct simulations are performed in an isothermal and in a stratified two-dimensional case. In the isothermal case, coherent structures are obtained by the numerical simulation in the mixing layer downstream of the step. In a second stage, a thermal stratification is imposed on this flow. The coherent structures are in this case produced in the immediate vicinity of the step and disappear downstream for increasing stratification. Afterwards, large-eddy simulations are carried out in the three-dimensional case. The subgrid-scale model is a local adaptation to the physical space of the spectral eddy-viscosity concept. The statistics of turbulence are in good agreement with the experimental data, corresponding to a small step configuration. Furthermore, calculations for a higher step configuration show that the eddy structure of the flow presents striking analogies with plane shear layers, with large billows shed behind the step, and intense longitudinal vortices strained between these billows [fr]

  19. Effect of increased exposure times on amount of residual monomer released from single-step self-etch adhesives.

    Science.gov (United States)

    Altunsoy, Mustafa; Botsali, Murat Selim; Tosun, Gonca; Yasar, Ahmet

    2015-10-16

    The aim of this study was to evaluate the effect of increased exposure times on the amount of residual Bis-GMA, TEGDMA, HEMA and UDMA released from single-step self-etch adhesive systems. Two adhesive systems were used. The adhesives were applied to bovine dentin surface according to the manufacturer's instructions and were polymerized using an LED curing unit for 10, 20 and 40 seconds (n = 5). After polymerization, the specimens were stored in 75% ethanol-water solution (6 mL). Residual monomers (Bis-GMA, TEGDMA, UDMA and HEMA) that were eluted from the adhesives (after 10 minutes, 1 hour, 1 day, 7 days and 30 days) were analyzed by high-performance liquid chromatography (HPLC). The data were analyzed using 1-way analysis of variance and Tukey HSD tests. Among the time periods, the highest amount of residual monomers released from the adhesives was observed at the 10th minute. There were statistically significant differences in released Bis-GMA, UDMA, HEMA and TEGDMA between the adhesive systems (p<0.05). There were no significant differences among the 10, 20 and 40 second polymerization times according to their effect on residual monomer release from adhesives (p>0.05). Increasing the polymerization time did not have an effect on residual monomer release from single-step self-etch adhesives.
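
    The reported statistics are a one-way ANOVA followed by Tukey HSD comparisons. A hedged sketch of that pipeline on invented release values (not the study's data) is shown below.

```python
# Hedged sketch of the analysis pipeline above: one-way ANOVA, then Tukey HSD.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
release_10s = rng.normal(1.20, 0.15, 5)   # hypothetical monomer release, 10 s cure
release_20s = rng.normal(1.15, 0.15, 5)   # 20 s cure
release_40s = rng.normal(1.10, 0.15, 5)   # 40 s cure

f_stat, p_value = stats.f_oneway(release_10s, release_20s, release_40s)
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_value:.3f}")

values = np.concatenate([release_10s, release_20s, release_40s])
groups = ["10s"] * 5 + ["20s"] * 5 + ["40s"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```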

  20. STEP flight experiments Large Deployable Reflector (LDR) telescope

    Science.gov (United States)

    Runge, F. C.

    1984-01-01

    Flight testing plans for a large deployable infrared reflector telescope to be tested on a space platform are discussed. Subsystem parts, subassemblies, and whole assemblies are discussed. Assurance of operational deployability, rigidization, alignment, and serviceability will be sought.

  1. Preimages for Step-Reduced SHA-2

    DEFF Research Database (Denmark)

    Aoki, Kazumaro; Guo, Jian; Matusiewicz, Krystian

    2009-01-01

    In this paper, we present preimage attacks on up to 43-step SHA-256 (around 67% of the total 64 steps) and 46-step SHA-512 (around 57.5% of the total 80 steps), which significantly increases the number of attacked steps compared to the best previously published preimage attack working for 24 steps. ... The time complexities are 2^251.9, 2^509 for finding pseudo-preimages and 2^254.9, 2^511.5 compression function operations for full preimages. The memory requirements are modest, around 2^6 words for 43-step SHA-256 and 46-step SHA-512. The pseudo-preimage attack also applies to 43-step SHA-224 and SHA-384...

  2. Microsoft Office SharePoint Designer 2007 Step by Step

    CERN Document Server

    Coventry, Penelope

    2008-01-01

    The smart way to learn Office SharePoint Designer 2007-one step at a time! Work at your own pace through the easy numbered steps, practice files on CD, helpful hints, and troubleshooting tips to master the fundamentals of building customized SharePoint sites and applications. You'll learn how to work with Windows® SharePoint Services 3.0 and Office SharePoint Server 2007 to create Web pages complete with Cascading Style Sheets, Lists, Libraries, and customized Web parts. Then, make your site really work for you by adding data sources, including databases, XML data and Web services, and RSS fe

  3. Effect of One-Step and Multi-Steps Polishing System on Enamel Roughness

    Directory of Open Access Journals (Sweden)

    Cynthia Sumali

    2013-07-01

    Full Text Available The final procedures of orthodontic treatment are bracket debonding and cleaning the remaining adhesive. The multi-step polishing system is the most common method used. The disadvantage of that system is its long working time, because of the number of stages involved. Therefore, dental material manufacturers have improved the system, reducing several stages to a single stage. This new system is known as the one-step polishing system. Objective: To compare the effect of one-step and multi-step polishing systems on enamel roughness after orthodontic bracket debonding. Methods: A randomized controlled trial was conducted on twenty-eight maxillary premolars randomized into two polishing systems: one-step OptraPol (Ivoclar, Vivadent) and multi-step AstroPol (Ivoclar, Vivadent). After bracket debonding, the remaining adhesive in each group was cleaned with the assigned polishing system for ninety seconds using a low-speed handpiece. Enamel roughness was measured with a profilometer, registering two roughness parameters (Ra, Rz). An independent t-test was used to analyze the mean enamel roughness in each group. Results: There was no significant difference in enamel roughness between the one-step and multi-step polishing systems (p>0.005). Conclusion: The one-step polishing system can produce enamel roughness similar to the multi-step polishing system after bracket debonding and adhesive cleaning. DOI: 10.14693/jdi.v19i3.136

  4. Looking at large data sets using binned data plots

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.

    1990-04-01

    This report addresses the monumental challenge of developing exploratory analysis methods for large data sets. The goals of the report are to increase awareness of large-data-set problems and to contribute simple graphical methods that address some of the problems. The graphical methods focus on two- and three-dimensional data and common tasks such as finding outliers and tail structure, assessing central structure and comparing central structures. The methods handle large sample size problems through binning, incorporate information from statistical models and adapt image processing algorithms. Examples demonstrate the application of the methods to a variety of publicly available large data sets. The most novel application addresses the "too many plots to examine" problem by using cognostics, computer guiding diagnostics, to prioritize plots. The particular application prioritizes views of computational fluid dynamics solution sets on the fly. That is, as each time step of a solution set is generated on a parallel processor, the cognostics algorithms assess virtual plots based on the previous time step. Work in such areas is in its infancy and the examples suggest numerous challenges that remain. 35 refs., 15 figs.

  5. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    W. J. Galyean; A. M. Whaley; D. L. Kelly; R. L. Boring

    2011-05-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  6. SPAR-H Step-by-Step Guidance

    International Nuclear Information System (INIS)

    Galyean, W.J.; Whaley, A.M.; Kelly, D.L.; Boring, R.L.

    2011-01-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  7. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley; Dana L. Kelly; Ronald L. Boring; William J. Galyean

    2012-06-01

    Step-by-step guidance was developed recently at Idaho National Laboratory for the US Nuclear Regulatory Commission on the use of the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method for quantifying Human Failure Events (HFEs). This work was done to address SPAR-H user needs, specifically requests for additional guidance on the proper application of various aspects of the methodology. This paper overviews the steps of the SPAR-H analysis process and highlights some of the most important insights gained during the development of the step-by-step directions. This supplemental guidance for analysts is applicable when plant-specific information is available, and goes beyond the general guidance provided in existing SPAR-H documentation. The steps highlighted in this paper are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff.
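
    The arithmetic behind Steps 1-3 above can be sketched as follows. The nominal HEPs (0.01 for diagnosis, 0.001 for action) and the adjustment factor follow NUREG/CR-6883; the PSF multiplier values in the example are placeholders that would normally come from the SPAR-H worksheets, and dependence handling (Step 4) and the minimum-value cutoff (Step 5) are omitted.

```python
# Hedged sketch of the core SPAR-H arithmetic: nominal HEP by task type, multiplied
# by the composite of the PSF multipliers, with the adjustment factor applied when
# three or more PSFs are negative (multiplier > 1).
from math import prod

NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}

def spar_h_hep(task_type, psf_multipliers):
    nhep = NOMINAL_HEP[task_type]
    composite = prod(psf_multipliers)
    negative = sum(1 for m in psf_multipliers if m > 1.0)
    if negative >= 3:
        # adjustment factor keeps the PSF-modified HEP from exceeding 1.0
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return nhep * composite

# Example: diagnosis task with barely adequate time (x10), high stress (x2),
# and nominal values (x1) for the remaining PSFs (illustrative only).
print(spar_h_hep("diagnosis", [10, 2, 1, 1, 1, 1, 1, 1]))
```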

  8. Large Deviations for Two-Time-Scale Diffusions, with Delays

    International Nuclear Information System (INIS)

    Kushner, Harold J.

    2010-01-01

    We consider the problem of large deviations for a two-time-scale reflected diffusion process, possibly with delays in the dynamical terms. The Dupuis-Ellis weak convergence approach is used. It is perhaps the most intuitive and simplest for the problems of concern. The results have applications to the problem of approximating optimal controls for two-time-scale systems via use of the averaged equation.

  9. Effect of the processing steps on compositions of table olive since harvesting time to pasteurization.

    Science.gov (United States)

    Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A

    2013-08-01

    Weight, oil, fatty acid, tocopherol, polyphenol, and sterol properties of 5 olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) during the crude, lye treatment, washing, fermentation, and pasteurization steps were studied. Results showed that oil percent was highest in Ascolana (crude step) and lowest in Fishomi (pasteurization step); during processing, oleic, palmitic, linoleic, and stearic acids were the highest in all cultivars; the greatest changes in saturated and unsaturated fatty acids occurred in the fermentation step; the highest and the lowest ratios of ω3/ω6 were in Ascolana (washing step) and in Zard (pasteurization step), respectively; the highest and the lowest tocopherol contents were in Amigdalolia and Fishomi, respectively, and the major loss occurred in the lye step; the highest and the lowest polyphenol contents were in Ascolana (crude step) and in Zard and Ascolana (pasteurization step), respectively; the major loss across cultivars occurred during the lye step, in which the polyphenol content was reduced to one-tenth of its initial value; sterols did not change during processing. A review of olive patents shows that many fruit composition attributes, such as oil quality and quantity and the fatty acid fraction, can be changed by altering the cultivar and the process.

  10. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    Science.gov (United States)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posterior probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posterior PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method without positivity constraint initially, and then damped to a physically reasonable range. This first-step MAP inversion brings the solution close to the 'true' solution quickly and jumps over local-maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45-degree dip angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
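
    A greatly simplified analogue of the first inversion step (a global search over the nonlinear parameters with an inner linear least-squares solve for the slip-like parameters) is sketched below. SciPy's dual_annealing stands in for ASA, and the forward model is a toy Gaussian-shaped response rather than an elastic dislocation model; everything here is an assumption for illustration only.

```python
# Hedged sketch: outer global optimization over nonlinear parameters, inner least
# squares for the linear parameters, with crude positivity damping.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(3)
x_obs = np.linspace(-10.0, 10.0, 200)

def design_matrix(center, width):
    # two linear "slip" basis functions whose shape depends on nonlinear parameters
    return np.column_stack([np.exp(-((x_obs - center) / width) ** 2),
                            np.exp(-((x_obs - center) / width) ** 4)])

true_G = design_matrix(2.0, 3.0)
data = true_G @ np.array([1.5, 0.7]) + rng.normal(0.0, 0.02, x_obs.size)

def objective(nonlinear_params):
    G = design_matrix(*nonlinear_params)
    slip, *_ = np.linalg.lstsq(G, data, rcond=None)   # inner linear solve
    slip = np.clip(slip, 0.0, None)                   # damp to physically reasonable range
    return np.sum((data - G @ slip) ** 2)

result = dual_annealing(objective, bounds=[(-8.0, 8.0), (0.5, 6.0)], seed=0)
print(result.x)   # recovered nonlinear parameters, close to (2.0, 3.0)
```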

  11. 3D elastic wave modeling using modified high‐order time stepping schemes with improved stability conditions

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.; Seif, Roustam

    2009-01-01

    We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
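
    A hedged one-dimensional analogue of such Lax-Wendroff-type time stepping is shown below: the fourth time derivative in the temporal Taylor expansion is replaced by a repeated application of the spatial operator (u_tttt = c^4 u_xxxx). The grid, wave speed and CFL number are illustrative; the paper's 3D elastic scheme and modified coefficients are not reproduced.

```python
# Hedged 1D scalar-wave illustration of Lax-Wendroff-type (Taylor-expansion-based)
# high-order time stepping with standard coefficients.
import numpy as np

def laplacian_1d(u, dx):
    d2 = np.zeros_like(u)
    d2[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return d2                       # homogeneous Dirichlet ends

nx, c = 400, 1.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.9 * dx / c                   # illustrative CFL number
u_prev = np.exp(-((x - 0.5) / 0.05) ** 2)
u_curr = u_prev.copy()              # zero initial velocity

for _ in range(500):
    lap = laplacian_1d(u_curr, dx)
    lap2 = laplacian_1d(lap, dx)    # approximates u_xxxx
    u_next = (2.0 * u_curr - u_prev
              + dt**2 * c**2 * lap
              + dt**4 / 12.0 * c**4 * lap2)
    u_prev, u_curr = u_curr, u_next

print(float(np.max(np.abs(u_curr))))
```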

  12. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. An interval type-II Takagi–Sugeno fuzzy model is first developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator faults of hypersonic flight vehicles. Finally, simulations are conducted for both the modeling and the fault estimation; the validity and effectiveness of the method are verified by a series of comparisons of numerical simulation results.

  13. Associations between the Objectively Measured Office Environment and Workplace Step Count and Sitting Time: Cross-Sectional Analyses from the Active Buildings Study.

    Science.gov (United States)

    Fisher, Abi; Ucci, Marcella; Smith, Lee; Sawyer, Alexia; Spinney, Richard; Konstantatou, Marina; Marmot, Alexi

    2018-06-01

    Office-based workers spend a large proportion of the day sitting and tend to have low overall activity levels. Despite some evidence that features of the external physical environment are associated with physical activity, little is known about the influence of the spatial layout of the internal environment on movement, and the majority of data use self-report. This study investigated associations between objectively-measured sitting time and activity levels and the spatial layout of office floors in a sample of UK office-based workers. Participants wore activPAL accelerometers for at least three consecutive workdays. Primary outcomes were steps and proportion of sitting time per working hour. Primary exposures were office spatial layout, which was objectively-measured by deriving key spatial variables: 'distance from each workstation to key office destinations', 'distance from participant's workstation to all other workstations', 'visibility of co-workers', and workstation 'closeness'. 131 participants from 10 organisations were included. Fifty-four per cent were female, 81% were white, and the majority had a managerial or professional role (72%) in their organisation. The average proportion of the working hour spent sitting was 0.7 (SD 0.15); participants took on average 444 (SD 210) steps per working hour. Models adjusted for confounders revealed significant negative associations between step count and distance from each workstation to all other office destinations (e.g., B = -4.66, 95% CI: -8.12, -1.12, p < 0.05) and between step count and distance from the participant's workstation to all other workstations (e.g., B = -6.45, 95% CI: -11.88, -0.41, p < 0.05); that is, the further participants' workstations were from key office destinations, the less they walked, suggesting that changing the relative distance between workstations and other destinations on the same floor may not be the most fruitful target for promoting walking and reducing sitting in the workplace. However, reported effect sizes were very small and based on cross-sectional analyses. The approaches developed in this study could be applied to other

  14. Detection and Correction of Step Discontinuities in Kepler Flux Time Series

    Science.gov (United States)

    Kolodziejczak, J. J.; Morris, R. L.

    2011-01-01

    PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets, they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
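
    A generic step-discontinuity detector (not the PDC 8.0 algorithm) can be sketched by comparing robust local levels on either side of each cadence, as below; the window length and threshold are assumptions.

```python
# Hedged sketch: flag cadences where the median flux in the leading window differs
# from the trailing window by more than a few times the local point-to-point scatter.
import numpy as np

def detect_steps(flux, window=20, n_sigma=5.0):
    flux = np.asarray(flux, dtype=float)
    scatter = 1.4826 * np.median(np.abs(np.diff(flux)))   # robust scatter estimate
    hits = []
    for i in range(window, flux.size - window):
        jump = np.median(flux[i:i + window]) - np.median(flux[i - window:i])
        if abs(jump) > n_sigma * scatter:
            hits.append((i, jump))
    return hits

rng = np.random.default_rng(4)
flux = 1.0 + 1e-4 * rng.standard_normal(2000)
flux[1200:] -= 0.005                                      # ~0.5% sensitivity drop
print(detect_steps(flux)[:3])
```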

  15. On an adaptive time stepping strategy for solving nonlinear diffusion equations

    International Nuclear Information System (INIS)

    Chen, K.; Baines, M.J.; Sweby, P.K.

    1993-01-01

    A new time step selection procedure is proposed for solving nonlinear diffusion equations. It has been implemented in the ASWR finite element code of Lorenz and Svoboda [10] for 2D semiconductor process modelling diffusion equations. The strategy is based on equi-distributing the local truncation errors of the numerical scheme. The use of B-splines for interpolation (as well as for the trial space) results in a banded and diagonally dominant matrix. The approximate inverse of such a matrix can be provided to a high degree of accuracy by another banded matrix, which in turn can be used to work out the approximate finite difference scheme corresponding to the ASWR finite element method, and further to calculate estimates of the local truncation errors of the numerical scheme. Numerical experiments on six full simulation problems arising in semiconductor process modelling have been carried out. Results show that our proposed strategy is more efficient and better conserves the total mass. 18 refs., 6 figs., 2 tabs
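
    The idea of selecting the time step by equidistributing local truncation errors can be illustrated with step doubling on a scalar nonlinear decay problem, as in the hedged sketch below; the B-spline/ASWR machinery of the paper is not reproduced, and the tolerance and controller exponent are assumptions.

```python
# Hedged sketch: backward Euler with a step-doubling estimate of the local
# truncation error and a simple accept/reject step-size controller.
import numpy as np

def be_step(u, dt, f, dfdu, iters=20):
    """One backward-Euler step solved by Newton iteration."""
    v = u
    for _ in range(iters):
        v -= (v - u - dt * f(v)) / (1.0 - dt * dfdu(v))
    return v

def integrate(u0, t_end, f, dfdu, dt=1e-2, tol=1e-5):
    t, u = 0.0, u0
    while t < t_end:
        big = be_step(u, dt, f, dfdu)
        half = be_step(be_step(u, dt / 2, f, dfdu), dt / 2, f, dfdu)
        err = abs(big - half)                  # local truncation error estimate
        if err > tol:
            dt *= 0.5                          # reject and retry with a smaller step
            continue
        t, u = t + dt, half
        dt *= min(2.0, max(0.5, (tol / max(err, 1e-15)) ** 0.5))
    return u

f = lambda u: -u**3                            # toy nonlinear decay
dfdu = lambda u: -3.0 * u**2
print(integrate(1.0, 5.0, f, dfdu))            # exact value is 1/sqrt(11) ~ 0.3015
```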

  16. Convenient one-step synthesis of 5-carboxy-seminaphthofluoresceins

    DEFF Research Database (Denmark)

    Hammershøj, Peter; Thyhaug, Erling; Harris, Pernille

    2017-01-01

    The one-step synthesis and characterization of a series of regioisomerically pure 5-carboxy-seminaphthofluoresceins (5-carboxy-SNAFLs) is reported. The optical properties were determined in aqueous buffer at around biological pH, and highly pH sensitive, large Stokes-shift fluorophores with emiss...

  17. The Relaxation of Vicinal (001) with ZigZag [110] Steps

    Science.gov (United States)

    Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.

    2012-02-01

    This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.

  18. Voluntary stepping behavior under single- and dual-task conditions in chronic stroke survivors: A comparison between the involved and uninvolved legs.

    Science.gov (United States)

    Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit

    2010-12-01

    If balance is lost, quick step execution can prevent falls. Research has shown that speed of voluntary stepping was able to predict future falls in old adults. The aim of the study was to investigate voluntary stepping behavior, as well as to compare timing and leg push-off force-time relation parameters of involved and uninvolved legs in stroke survivors during single- and dual-task conditions. We also aimed to compare timing and leg push-off force-time relation parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task and while a separate attention-demanding task was performed simultaneously. Temporal parameters related to the step time were measured including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing the push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had a significantly slower stepping time than uninvolved legs due to increased swing phase duration during both single- and dual-task conditions. For dual compared to single task, the stepping time increased significantly due to a significant increase in the duration of step initiation. In general, the force time parameters were significantly different in both legs of stroke survivors as compared to healthy controls, with no significant effect of dual compared with single-task conditions in both groups. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls to the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force in fast actions. This may be the mechanism of delayed execution

  19. Calculation of neutron die-away times in a large-vehicle portal monitor

    International Nuclear Information System (INIS)

    Lillie, R.A.; Santoro, R.T.; Alsmiller, R.G. Jr.

    1980-05-01

    Monte Carlo methods have been used to calculate neutron die-away times in a large-vehicle portal monitor. These calculations were performed to investigate the adequacy of using neutron die-away time measurements to detect the clandestine movement of shielded nuclear materials. The geometry consisted of a large tunnel lined with 3He proportional counters. The time behavior of the (n,p) capture reaction in these counters was calculated when the tunnel contained a number of different tractor-trailer load configurations. Neutron die-away times obtained from weighted least squares fits to these data were compared. The change in neutron die-away time due to the replacement of cargo in a fully loaded truck with a spherical shell containing 240 kg of borated polyethylene was calculated to be less than 3%. This result together with the overall behavior of neutron die-away time versus mass inside the tunnel strongly suggested that measurements of this type will not provide a reliable means of detecting shielded nuclear materials in a large vehicle. 5 figures, 4 tables

  20. Transmission of laser pulses with high output beam quality using step-index fibers having large cladding

    Science.gov (United States)

    Yalin, Azer P; Joshi, Sachin

    2014-06-03

    An apparatus and method for transmission of laser pulses with high output beam quality using large core step-index silica optical fibers having thick cladding are described. The thick cladding suppresses diffusion of modal power to higher order modes at the core-cladding interface, thereby enabling higher beam quality, M², than is observed for large core, thin cladding optical fibers. For a given NA and core size, the thicker the cladding, the better the output beam quality. The mode coupling coefficient, D, has been found to scale approximately as the inverse square of the cladding dimension and the inverse square root of the wavelength. Output from a 2 m long silica optical fiber having a 100 μm core and a 660 μm cladding was found to be close to single mode, with an M² = 1.6. Another thick cladding fiber (400 μm core and 720 μm clad) was used to transmit 1064 nm pulses of nanosecond duration with high beam quality to form gas sparks at the focused output (focused intensity of >100 GW/cm²), wherein the energy in the core was … and the duration of the laser pulses was about 6 ns. Extending the pulse duration provided the ability to increase the delivered pulse energy (>20 mJ delivered for 50 ns pulses) without damaging the silica fiber.

  1. A theory of the stepped leader in lightning

    International Nuclear Information System (INIS)

    Lowke, J.J.

    1999-01-01

    There is no generally accepted explanation of the stepped leader behaviour in terms of basic physical processes. Existing theories generally involve significant gas heating within the stepped leader. In the present paper, the stepped nature of the leader is proposed to arise due to a combination of two physical phenomena. Electron transport is dominant over ion transport, during the luminous step stage, because electron mobilities are about 100 times larger than ion mobilities, and the streamer front velocity is determined by electron ionization effects. During the dark time between steps, there are only ions and charge transport is very much slower. The second effect leading to stepped behaviour arises because the electric field required for electric breakdown in air prior to a discharge is ∼30kV/cm, and is very much higher than the electric field of 5kV/cm that is required to sustain a glow discharge in air. During the luminous step stage, electrons tend to produce space charges to make a uniform field in the streamer of ∼5kV/cm. During the dark time between steps, there are no electrons but only ions. Time is required for ion drift to produce a space charge sheath of negative ions at the head of the streamer to produce a field of ∼30kV/cm sufficient for electron ionization to produce a new luminous step

  2. Effect of moisture and drying time on the bond strength of the one-step self-etching adhesive system

    Directory of Open Access Journals (Sweden)

    Yoon Lee

    2012-08-01

    Full Text Available Objectives: To investigate the effect of dentin moisture degree and air-drying time on dentin-bond strength of two different one-step self-etching adhesive systems. Materials and Methods: Twenty-four human third molars were used for microtensile bond strength testing of G-Bond and Clearfil S3 Bond. The dentin surface was either blot-dried or air-dried before applying these adhesive agents. After application of the adhesive agent, three different air drying times were evaluated: 1, 5, and 10 sec. Composite resin was built up to 4 mm thickness and light cured for 40 sec in 2 separate layers. Then the tooth was sectioned and trimmed to measure the microtensile bond strength using a universal testing machine. The measured bond strengths were analyzed with three-way ANOVA and regression analysis (p = 0.05). Results: All three factors, material, dentin wetness and air drying time, showed a significant effect on the microtensile bond strength. Clearfil S3 Bond, dry dentin surface and 10 sec air drying time showed higher bond strength. Conclusions: Within the limitations of this experiment, air drying time after the application of the one-step self-etching adhesive agent was the most significant factor affecting the bond strength, followed by the material difference and dentin moisture before applying the adhesive agent.

  3. A training approach to improve stepping automaticity while dual-tasking in Parkinson's disease

    Science.gov (United States)

    Chomiak, Taylor; Watts, Alexander; Meyer, Nicole; Pereira, Fernando V.; Hu, Bin

    2017-01-01

    Abstract Background: Deficits in motor movement automaticity in Parkinson's disease (PD), especially during multitasking, are early and consistent hallmarks of cognitive function decline, which increases fall risk and reduces quality of life. This study aimed to test the feasibility and potential efficacy of a wearable sensor-enabled technological platform designed for an in-home music-contingent stepping-in-place (SIP) training program to improve step automaticity during dual-tasking (DT). Methods: This was a 4-week prospective intervention pilot study. The intervention uses a sensor system and algorithm that runs on the iPod Touch and calculates step height (SH) in real-time. These measurements were then used to trigger auditory (treatment group, music; control group, radio podcast) playback in real-time through wireless headphones upon maintenance of repeated large amplitude stepping. With small steps or shuffling, auditory playback stops, thus allowing participants to use anticipatory motor control to regain positive feedback. Eleven participants were recruited from an ongoing trial (Trial Number: ISRCTN06023392). Fear of falling (FES-I), general cognitive functioning (MoCA), self-reported freezing of gait (FOG-Q), and DT step automaticity were evaluated. Results: While we found no significant effect of training on FES-I, MoCA, or FOG-Q, we did observe a significant group (music vs podcast) by training interaction in DT step automaticity (P < 0.05), supporting the potential of this form of training to increase motor automaticity for people living with PD. The training approach described here can be implemented at home to meet the growing demand for self-management of symptoms by patients. PMID:28151878

  4. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, that is, surface sampling and sampling relaxation, which ensures a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into the rigid-fluid coupling, which gains an obvious speedup compared to the previous method. The experimental results demonstrate the effectiveness of our approach.

  5. One False Step: "Detroit," "Step" and Movies of Rising and Falling

    Science.gov (United States)

    Beck, Bernard

    2018-01-01

    "Detroit" and "Step" are two recent movies in the context of urban riots in protest of police brutality. They refer to time periods separated by half a century, but there are common themes in the two that seem appropriate to both times. The movies are not primarily concerned with the riot events, but the riot is a major…

  6. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    Science.gov (United States)

    Lee, Woochan

    Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes it difficult for iterative solutions to converge quickly. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution
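
    The conventional stability restriction mentioned above (the time step tied to the space step) is the CFL bound. A small illustration for a uniform 3D grid is given below, with cell sizes chosen only as an example; it shows why refining the mesh forces a smaller time step in a standard explicit scheme, the coupling that unconditionally stable explicit methods aim to remove.

```python
# Hedged illustration: the CFL time-step bound for an explicit FDTD-style update
# on a uniform 3D grid, dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)).
import math

def cfl_time_step(dx, dy, dz, c=299_792_458.0):
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

coarse = cfl_time_step(1e-6, 1e-6, 1e-6)   # 1 um cells (illustrative on-chip scale)
fine = cfl_time_step(1e-7, 1e-7, 1e-7)     # 10x finer mesh -> 10x smaller time step
print(coarse, fine)
```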

  7. The large-s field-reversed configuration experiment

    International Nuclear Information System (INIS)

    Hoffman, A.L.; Carey, L.N.; Crawford, E.A.; Harding, D.G.; DeHart, T.E.; McDonald, K.F.; McNeil, J.L.; Milroy, R.D.; Slough, J.T.; Maqueda, R.; Wurden, G.A.

    1993-01-01

    The Large-s Experiment (LSX) was built to study the formation and equilibrium properties of field-reversed configurations (FRCs) as the scale size increases. The dynamic, field-reversed theta-pinch method of FRC creation produces axial and azimuthal deformations and makes formation difficult, especially in large devices with large s (number of internal gyroradii) where it is difficult to achieve initial plasma uniformity. However, with the proper technique, these formation distortions can be minimized and are then observed to decay with time. This suggests that the basic stability and robustness of FRCs formed, and in some cases translated, in smaller devices may also characterize larger FRCs. Elaborate formation controls were included on LSX to provide the initial uniformity and symmetry necessary to minimize formation disturbances, and stable FRCs could be formed up to the design goal of s = 8. For s ≤ 4, the formation distortions decayed away completely, resulting in symmetric equilibrium FRCs with record confinement times up to 0.5 ms, agreeing with previous empirical scaling laws (τ∝sR). Above s = 4, reasonably long-lived (up to 0.3 ms) configurations could still be formed, but the initial formation distortions were so large that they never completely decayed away, and the equilibrium confinement was degraded from the empirical expectations. The LSX was only operational for 1 yr, and it is not known whether s = 4 represents a fundamental limit for good confinement in simple (no ion beam stabilization) FRCs or whether it simply reflects a limit of present formation technology. Ideally, s could be increased through flux buildup from neutral beams. Since the addition of kinetic or beam ions will probably be desirable for heating, sustainment, and further stabilization of magnetohydrodynamic modes at reactor-level s values, neutral beam injection is the next logical step in FRC development. 24 refs., 21 figs., 2 tabs

  8. Step-by-Step Simulation of Radiation Chemistry Using Green Functions for Diffusion-Influenced Reactions

    Science.gov (United States)

    Plante, Ianik; Cucinotta, Francis A.

    2011-01-01

    Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and chemically react with other radiolytic species and neighboring biological molecules, leading to various types of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance to understand how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult, because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators of these Green functions [5], which will allow us to use them in radiation chemistry codes. Moreover, simulating chemistry using the Green functions is computationally very demanding, because the probabilities of reactions between each pair of particles should be evaluated at each time step [2]. This kind of problem is well suited to General Purpose Graphic Processing Units (GPGPU), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes, and to improve the calculation time. This code should be of importance to link radiation track structure simulations and DNA damage models.
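
    One ingredient of such step-by-step chemistry simulations is the probability, for an isolated pair, of reacting within a time step. For the simplest irreversible, diffusion-limited (Smoluchowski) case this has a closed form, sketched below with illustrative nm/ns-scale numbers; the paper's more general Green functions and their exact samplers are not reproduced here.

```python
# Hedged sketch: for a pair initially at separation r0 with a perfectly absorbing
# reaction radius R and relative diffusion coefficient D, the probability of having
# reacted by time dt is W = (R/r0) * erfc((r0 - R) / (2*sqrt(D*dt))) (Smoluchowski).
import math
import random

def reaction_probability(r0, dt, D, R):
    if r0 <= R:
        return 1.0
    return (R / r0) * math.erfc((r0 - R) / (2.0 * math.sqrt(D * dt)))

def step_pair(r0, dt, D, R, rng=random):
    """Return True if the pair is sampled as reacting during this time step."""
    return rng.random() < reaction_probability(r0, dt, D, R)

# Illustrative numbers only (nm, ns, nm^2/ns); not calibrated radiolysis parameters.
print(reaction_probability(r0=1.0, dt=0.01, D=5.0, R=0.5))
```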

  9. Dwell time considerations for large area cold plasma decontamination

    Science.gov (United States)

    Konesky, Gregory

    2009-05-01

    Atmospheric discharge cold plasmas have been shown to be effective in the reduction of pathogenic bacteria and spores and in the decontamination of simulated chemical warfare agents, without the generation of toxic or harmful by-products. Cold plasmas may also be useful in assisting cleanup of radiological "dirty bombs." For practical applications in realistic scenarios, the plasma applicator must have both a large area of coverage, and a reasonably short dwell time. However, the literature contains a wide range of reported dwell times, from a few seconds to several minutes, needed to achieve a given level of reduction. This is largely due to different experimental conditions, and especially, different methods of generating the decontaminating plasma. We consider these different approaches and attempt to draw equivalencies among them, and use this to develop requirements for a practical, field-deployable plasma decontamination system. A plasma applicator with 12 square inches area and integral high voltage, high frequency generator is described.

  10. Urban Freight Management with Stochastic Time-Dependent Travel Times and Application to Large-Scale Transportation Networks

    Directory of Open Access Journals (Sweden)

    Shichao Sun

    2015-01-01

    Full Text Available This paper addressed the vehicle routing problem (VRP) in large-scale urban transportation networks with stochastic time-dependent (STD) travel times. The subproblem of finding the optimal path connecting any pair of customer nodes in an STD network was solved through a robust approach that does not require the probability distributions of link travel times. Based on that, the proposed STD-VRP model can be converted into solving a normal time-dependent VRP (TD-VRP), and algorithms for such TD-VRPs can also be introduced to obtain the solution. Numerical experiments were conducted to address STD-VRPTW instances of practical size on a real-world urban network, demonstrated here on the road network of Shenzhen, China. The stochastic time-dependent link travel times of the network were calibrated from historical floating car data. A route construction algorithm was applied to solve the STD problem in 4 delivery scenarios efficiently. The computational results showed that the proposed STD-VRPTW model can improve the level of customer service by satisfying the time-window constraint under any circumstances. The improvement can be very significant, especially for large-scale network delivery tasks, with no additional cost or environmental impact.
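
    A route-construction heuristic of the general kind mentioned above can be sketched as a greedy nearest-neighbour insertion with a time-window feasibility check and a crude time-dependent travel-time function; all data below are synthetic, and the paper's robust STD shortest-path layer is not included.

```python
# Hedged sketch: greedy route construction with time windows and a simple
# time-dependent congestion factor. Customers 1..n, depot = 0.
import numpy as np

rng = np.random.default_rng(5)
n = 8
base = rng.uniform(5.0, 20.0, (n + 1, n + 1))
np.fill_diagonal(base, 0.0)
windows = [(0.0, 1e9)] + [(rng.uniform(0, 60), rng.uniform(120, 240)) for _ in range(n)]

def travel_time(i, j, depart):
    peak = 1.3 if 60.0 <= depart <= 120.0 else 1.0   # time-dependent congestion
    return base[i, j] * peak

def construct_route():
    route, now, here = [0], 0.0, 0
    unvisited = set(range(1, n + 1))
    while unvisited:
        feasible = [(travel_time(here, j, now), j) for j in unvisited
                    if now + travel_time(here, j, now) <= windows[j][1]]
        if not feasible:
            break                                     # remaining customers need another vehicle
        cost, nxt = min(feasible)
        now = max(now + cost, windows[nxt][0])        # wait if arriving early
        route.append(nxt)
        here = nxt
        unvisited.remove(nxt)
    return route + [0]

print(construct_route())
```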

  11. High resolution time-of-flight measurements in small and large scintillation counters

    International Nuclear Information System (INIS)

    D'Agostini, G.; Marini, G.; Martellotti, G.; Massa, F.; Rambaldi, A.; Sciubba, A.

    1981-01-01

    In a test run, the experimental time-of-flight resolution was measured for several different scintillation counters of small (10 × 5 cm²) and large (100 × 15 cm² and 75 × 25 cm²) area. The design characteristics were decided on the basis of theoretical Monte Carlo calculations. We report results using twisted, fish-tail, and rectangular light-guides and different types of scintillator (NE 114 and PILOT U). Time resolutions up to ≈130-150 ps FWHM for the small counters and up to ≈280-300 ps FWHM for the large counters were obtained. The spatial resolution from time measurements in the large counters is also reported. The results of Monte Carlo calculations on the type of scintillator, the shape and dimensions of the light-guides, and the nature of the external wrapping surfaces - to be used in order to optimize the time resolution - are also summarized. (orig.)

  12. A massively parallel algorithm for the solution of constrained equations of motion with applications to large-scale, long-time molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fijany, A. [Jet Propulsion Lab., Pasadena, CA (United States); Coley, T.R. [Virtual Chemistry, Inc., San Diego, CA (United States); Cagin, T.; Goddard, W.A. III [California Institute of Technology, Pasadena, CA (United States)

    1997-12-31

    Successful molecular dynamics (MD) simulation of large systems (> million atoms) for long times (> nanoseconds) requires the integration of constrained equations of motion (CEOM). Constraints are used to eliminate high frequency degrees of freedom (DOF) and to allow the use of rigid bodies. Solving the CEOM allows for larger integration time-steps and helps focus the simulation on the important collective dynamics of chemical, biological, and materials systems. We explore advances in multibody dynamics which have resulted in O(N) algorithms for propagating the CEOM. However, because of their strictly sequential nature, the computational time required by these algorithms does not scale down with increased numbers of processors. We then present the new constraint force algorithm for solving the CEOM and show that this algorithm is fully parallelizable, leading to a computational cost of O(N/P + log P) for N DOF on P processors.
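
    For context, a classic and strictly sequential way of enforcing constraints after an unconstrained integration step is the SHAKE iteration, sketched below for a single bond-length constraint; this is offered only to illustrate the CEOM problem and is not the constraint force algorithm of the paper.

```python
# Hedged sketch: SHAKE-style iterative correction of positions so that a single
# bond-length constraint is satisfied after an unconstrained (e.g., Verlet) step.
import numpy as np

def shake_pair(r_new, r_old, bond_len, masses, tol=1e-10, max_iter=100):
    """Correct positions of a 2-particle system so |r1 - r2| == bond_len."""
    r = r_new.copy()
    for _ in range(max_iter):
        d = r[0] - r[1]
        diff = d @ d - bond_len**2
        if abs(diff) < tol:
            break
        d_old = r_old[0] - r_old[1]
        g = diff / (2.0 * (d @ d_old) * (1.0 / masses[0] + 1.0 / masses[1]))
        r[0] -= g * d_old / masses[0]        # constraint correction along old bond vector
        r[1] += g * d_old / masses[1]
    return r

r_old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
r_new = r_old + np.array([[0.02, 0.01, 0.0], [-0.03, 0.0, 0.01]])  # after a free step
corrected = shake_pair(r_new, r_old, 1.0, [1.0, 1.0])
print(np.linalg.norm(corrected[0] - corrected[1]))                 # ~1.0
```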

  13. Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, M.; Borm, P.E.M.; Quant, M.

    2014-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.

  14. Step-to-step variability in treadmill walking: influence of rhythmic auditory cueing.

    Directory of Open Access Journals (Sweden)

    Philippe Terrier

    Full Text Available While walking, human beings continuously adjust step length (SpL), step time (SpT), step speed (SpS = SpL/SpT) and step width (SpW) by integrating both feedforward and feedback mechanisms. These motor control processes result in correlations of gait parameters between consecutive strides (statistical persistence). Constraining gait with a speed cue (treadmill) and/or a rhythmic auditory cue (metronome) modifies the statistical persistence to anti-persistence. The objective was to analyze whether the combined effect of treadmill and rhythmic auditory cueing (RAC) modified not only statistical persistence, but also fluctuation magnitude (standard deviation, SD) and stationarity of SpL, SpT, SpS and SpW. Twenty healthy subjects performed 6 × 5 min. walking tests at various imposed speeds on a treadmill instrumented with foot-pressure sensors. Freely-chosen walking cadences were assessed during the first three trials, and then imposed accordingly in the last trials with a metronome. Fluctuation magnitude (SD) of SpT, SpL, SpS and SpW was assessed, as well as the NonStationarity Index (NSI), which estimates the dispersion of local means in the time series (SD of 20 local means over 10 steps). No effect of RAC on fluctuation magnitude (SD) was observed. SpW was not modified by RAC, which is likely evidence that lateral foot placement is regulated separately. Stationarity (NSI) was modified by RAC in the same manner as the persistence pattern: the treadmill induced low NSI in the time series of SpS, and high NSI in SpT and SpL. On the contrary, SpT, SpL and SpS exhibited low NSI under the RAC condition. We used a relatively short sample of consecutive strides (100) as compared to the usual number of strides required to analyze fluctuation dynamics (200 to 1000 strides). Therefore, the responsiveness of the stationarity measure (NSI) to cued walking opens the perspective of performing short walking tests that would be adapted to patients with a reduced gait perimeter.
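
    The NonStationarity Index described above (the SD of 20 local means taken over 10-step blocks) is straightforward to compute; a hedged sketch follows, in which normalizing the series beforehand is an assumption rather than something stated in the record.

```python
# Hedged sketch: NSI = standard deviation of 20 block means (blocks of 10 steps).
import numpy as np

def nsi(series, n_blocks=20, block_len=10):
    x = np.asarray(series[: n_blocks * block_len], dtype=float)
    x = (x - x.mean()) / x.std()                    # normalization (assumed here)
    local_means = x.reshape(n_blocks, block_len).mean(axis=1)
    return local_means.std(ddof=1)

rng = np.random.default_rng(6)
white = rng.standard_normal(200)                    # uncorrelated step times
drift = white + np.linspace(0.0, 2.0, 200)          # slowly drifting series
print(nsi(white), nsi(drift))                       # drift -> larger NSI
```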

  15. Calibration and Evaluation of Different Estimation Models of Daily Solar Radiation in Seasonally and Annual Time Steps in Shiraz Region

    Directory of Open Access Journals (Sweden)

    Hamid Reza Fooladmand

    2017-06-01

    The measured data of years 2006 to 2008 were used for calibrating fourteen models for estimating solar radiation in seasonal and annual time steps, and the measured data of years 2009 and 2010 were used for evaluating the obtained results. The equations used in this study were divided into three groups: (1) equations based only on sunshine hours; (2) equations based only on air temperature; (3) equations based on both sunshine hours and air temperature. A statistical comparison must then be done to select the best equation for estimating solar radiation in seasonal and annual time steps. For this purpose, in the validation stage the combination of statistical equations and linear correlation was used, and the mean square deviation (MSD) was calculated to evaluate the different models in the mentioned time steps. Results and Discussion: The mean values of MSD of the fourteen models for estimating solar radiation were equal to 24.16, 20.42, 4.08 and 16.19 for spring to winter, respectively, and 15.40 for the annual time step. The equations were therefore highly accurate for autumn but had low accuracy for the other seasons, so the annual equations were more appropriate than the seasonal ones. Also, the mean values of MSD of the equations based only on sunshine hours, the equations based only on air temperature, and the equations based on the combination of sunshine hours and air temperature were equal to 14.82, 17.40 and 14.88, respectively. The models based only on air temperature therefore performed worst for estimating solar radiation in the Shiraz region, so using the sunshine hours for estimating solar radiation is necessary. Conclusions: In this study for estimating solar radiation in seasonal and annual time steps in Shiraz region

  16. The Effect of Forward-Facing Steps on Stationary Crossflow Instability Growth and Breakdown

    Science.gov (United States)

    Eppink, Jenna L.

    2018-01-01

    The effect of a forward-facing step on stationary crossflow transition was studied using standard stereo particle image velocimetry (PIV) and time-resolved PIV. Step heights ranging from 53 to 71% of the boundary-layer thickness were studied in detail. The steps above a critical step height of approximately 60% of the boundary-layer thickness had a significant impact on the stationary crossflow growth downstream of the step. For the critical cases, the stationary crossflow amplitude grew suddenly downstream of the step, decayed for a short region, then grew again. The adverse pressure gradient upstream of the step resulted in a region of crossflow reversal. A secondary set of vortices, rotating in the opposite direction to the primary vortices, developed underneath the uplifted primary vortices. The wall-normal velocity disturbance (V') created by these secondary vortices impacted the step, and is believed to feed into the strong vortex that developed downstream of the step. A large but very short negative crossflow region formed for a short region downstream of the step due to a sharp inboard curvature of the streamlines near the wall. For the larger step height cases, a crossflow-reversal region formed just downstream of the strong negative crossflow region. This crossflow reversal region is believed to play an important role in the growth of the stationary crossflow vortices downstream of the step, and may be a good indication of the critical forward-facing step height.

  17. Step out-step in sequencing games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2015-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,

  18. Modeling single-file diffusion with step fractional Brownian motion and a generalized fractional Langevin equation

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    Single-file diffusion behaves as normal diffusion at small time and as subdiffusion at large time. These properties can be described in terms of fractional Brownian motion with variable Hurst exponent or multifractional Brownian motion. We introduce a new stochastic process called Riemann–Liouville step fractional Brownian motion which can be regarded as a special case of multifractional Brownian motion with a step function type of Hurst exponent tailored for single-file diffusion. Such a step fractional Brownian motion can be obtained as a solution of the fractional Langevin equation with zero damping. Various kinds of fractional Langevin equations and their generalizations are then considered in order to decide whether their solutions provide the correct description of the long and short time behaviors of single-file diffusion. The cases where the dissipative memory kernel is a Dirac delta function, a power-law function and a combination of these functions are studied in detail. In addition to the case where the short time behavior of single-file diffusion behaves as normal diffusion, we also consider the possibility of a process that begins as ballistic motion
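
A rough numerical sketch of a Riemann-Liouville-type fractional Brownian motion with a step-function Hurst exponent is given below. The discretization, the crossover time and the two Hurst values are illustrative assumptions, not the authors' construction.

    import numpy as np
    from math import gamma

    def rl_step_fbm(T=10.0, n=2000, t_cross=1.0, h_short=0.5, h_long=0.25, seed=1):
        """Riemann-Liouville fBm with a step Hurst exponent H(t): h_short before
        t_cross (normal-diffusion-like), h_long afterwards (subdiffusive)."""
        rng = np.random.default_rng(seed)
        dt = T / n
        s = np.arange(n) * dt                   # left endpoints of the increments
        dW = rng.normal(0.0, np.sqrt(dt), n)    # Brownian increments on [s_k, s_k + dt]
        t = s + dt                              # evaluation times t_1 .. t_n
        x = np.empty(n)
        for i in range(n):
            H = h_short if t[i] < t_cross else h_long
            kernel = (t[i] - s[:i + 1]) ** (H - 0.5)
            x[i] = kernel @ dW[:i + 1] / gamma(H + 0.5)
        return t, x

    t, x = rl_step_fbm()   # x behaves diffusively at small t and subdiffusively later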

  19. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...

  20. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...

  1. Assessment of radiopacity of restorative composite resins with various target distances and exposure times and a modified aluminum step wedge

    Energy Technology Data Exchange (ETDEWEB)

    Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)

    2012-09-15

    ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins according to various tube-target distances and exposure times. Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) performing at 70 kVp and 7 mA along with a 12-step aluminum stepwedge (1 mm incremental steps) using a PSP digital sensor. Thereafter, the radiopacities measured with Digora for Windows software 2.5 were converted to absorbencies (i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray scale value). Furthermore, the linear regression model of aluminum thickness and absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly with various setups (p<0.001) and between the materials (p<0.001). The best predicted model was obtained for the 30 cm, 0.2 seconds setup (R² = 0.999). Data from the reduced modified stepwedge was remarkable and comparable with the 12-step stepwedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.
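
The gray-value-to-absorbency conversion and the aluminum-equivalence regression described above can be sketched as follows; the log base follows the formula quoted in the abstract, while the gray values and stepwedge readings are made-up numbers.

    import numpy as np

    def absorbency(gray):
        """A = -log10(1 - G/255), with G the measured gray value (must be < 255)."""
        return -np.log10(1.0 - np.asarray(gray, dtype=float) / 255.0)

    # Hypothetical mean gray values for the 12 aluminum steps (1 mm increments).
    al_thickness_mm = np.arange(1, 13)
    al_gray = np.linspace(60, 230, 12)          # made-up stepwedge readings

    # Linear regression of absorbency against aluminum thickness.
    slope, intercept = np.polyfit(al_thickness_mm, absorbency(al_gray), 1)

    def equivalent_al_mm(material_gray):
        """Convert a material's gray value into equivalent aluminum thickness (mm)."""
        return (absorbency(material_gray) - intercept) / slope

    print(equivalent_al_mm(150.0))   # e.g. a 1 mm composite resin sample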

  2. Evolution of robot-assisted orthotopic ileal neobladder formation: a step-by-step update to the University of Southern California (USC) technique.

    Science.gov (United States)

    Chopra, Sameer; de Castro Abreu, Andre Luis; Berger, Andre K; Sehgal, Shuchi; Gill, Inderbir; Aron, Monish; Desai, Mihir M

    2017-01-01

    To describe our step-by-step technique for robotic intracorporeal neobladder formation. The main surgical steps in forming the intracorporeal orthotopic ileal neobladder are: isolation of 65 cm of small bowel; small bowel anastomosis; bowel detubularisation; suture of the posterior wall of the neobladder; neobladder-urethral anastomosis and cross folding of the pouch; and uretero-enteral anastomosis. Improvements have been made to these steps to enhance time efficiency without compromising neobladder configuration. These technical improvements have reduced operative time from 450 to 360 min. We describe an updated step-by-step technique of robot-assisted intracorporeal orthotopic ileal neobladder formation. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.

  3. Two-level systems driven by large-amplitude fields

    International Nuclear Information System (INIS)

    Ashhab, S.; Johansson, J. R.; Zagoskin, A. M.; Nori, Franco

    2007-01-01

    We analyze the dynamics of a two-level system subject to driving by large-amplitude external fields, focusing on the resonance properties in the case of driving around the region of avoided level crossing. In particular, we consider three main questions that characterize resonance dynamics: (1) the resonance condition, (2) the frequency of the resulting oscillations on resonance, and (3) the width of the resonance. We identify the regions of validity of different approximations. In a large region of the parameter space, we use a geometric picture in order to obtain both a simple understanding of the dynamics and quantitative results. The geometric approach is obtained by dividing the evolution into discrete time steps, with each time step described by either a phase shift on the basis states or a coherent mixing process corresponding to a Landau-Zener crossing. We compare the results of the geometric picture with those of a rotating wave approximation. We also comment briefly on the prospects of employing strong driving as a useful tool to manipulate two-level systems.
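
As a generic numerical counterpart to the discrete-time-step picture described above, one can propagate a driven two-level system by splitting the evolution into small steps and applying the matrix exponential of the instantaneous Hamiltonian at each step. This is an illustrative sketch with assumed parameter values (and hbar = 1), not the geometric construction of the paper.

    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def propagate(delta=1.0, eps0=0.0, amp=10.0, omega=2.0, t_max=20.0, n_steps=8000):
        """Piecewise-constant propagation of H(t) = -(delta/2) sx - (eps(t)/2) sz."""
        dt = t_max / n_steps
        psi = np.array([1.0, 0.0], dtype=complex)     # start in the "up" state
        pops = np.empty(n_steps)
        for k in range(n_steps):
            t = (k + 0.5) * dt
            eps = eps0 + amp * np.cos(omega * t)      # large-amplitude drive
            H = -0.5 * delta * sx - 0.5 * eps * sz    # hbar = 1
            psi = expm(-1j * H * dt) @ psi            # one discrete time step
            pops[k] = abs(psi[1]) ** 2                # population of the "down" state
        return pops

    print(propagate().max())   # maximum transfer probability over the run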

  4. Step Detection Robust against the Dynamics of Smartphones

    Science.gov (United States)

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
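
A stripped-down version of the peak-valley idea can be sketched as follows; the fixed amplitude and interval thresholds stand in for the adaptive thresholds of the published algorithm, and the synthetic signal is only for illustration.

    import numpy as np

    def detect_steps(acc_mag, fs=50.0, min_amplitude=1.0, min_interval=0.3):
        """Count steps as peak-valley pairs in the acceleration-magnitude signal."""
        steps, last_step_t, last_peak = [], -np.inf, None
        for i in range(1, len(acc_mag) - 1):
            t = i / fs
            if acc_mag[i] > acc_mag[i - 1] and acc_mag[i] >= acc_mag[i + 1]:
                if last_peak is None or acc_mag[i] > last_peak[1]:
                    last_peak = (t, acc_mag[i])               # highest peak since last step
            elif acc_mag[i] < acc_mag[i - 1] and acc_mag[i] <= acc_mag[i + 1]:
                if last_peak is not None:                     # candidate valley
                    peak_t, peak_v = last_peak
                    if (peak_v - acc_mag[i] >= min_amplitude
                            and peak_t - last_step_t >= min_interval):
                        steps.append(peak_t)                  # accept one step
                        last_step_t, last_peak = peak_t, None
        return steps

    # Synthetic walking-like signal: ~2 Hz oscillation around gravity plus noise.
    fs = 50.0
    t = np.arange(0, 10, 1 / fs)
    acc = (9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t)
           + 0.2 * np.random.default_rng(0).standard_normal(t.size))
    print(len(detect_steps(acc, fs)))   # roughly 20 steps over 10 s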

  5. Generalized Runge-Kutta method for two- and three-dimensional space-time diffusion equations with a variable time step

    International Nuclear Information System (INIS)

    Aboanber, A.E.; Hamada, Y.M.

    2008-01-01

    An extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, and that requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems as well as static and transient problems, demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods
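
The core mechanism, an embedded pair that gives both a solution and a local error estimate for automatic time-step control, can be illustrated with the standard Bogacki-Shampine 3(2) pair below. This is a generic sketch on a scalar test problem, not the generalized Runge-Kutta scheme developed in the paper.

    import numpy as np

    def rk23_adaptive(f, t0, y0, t_end, h=1e-2, rtol=1e-6, atol=1e-9):
        """Bogacki-Shampine 3(2): 3rd-order step with embedded 2nd-order error estimate."""
        t, y = t0, np.asarray(y0, dtype=float)
        ts, ys = [t], [y.copy()]
        while t < t_end:
            h = min(h, t_end - t)
            k1 = f(t, y)
            k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
            k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
            y3 = y + h * (2.0 * k1 + 3.0 * k2 + 4.0 * k3) / 9.0        # 3rd-order solution
            k4 = f(t + h, y3)
            y2 = y + h * (7.0 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)    # embedded 2nd-order
            err = np.linalg.norm(y3 - y2) / (atol + rtol * np.linalg.norm(y3))
            if err <= 1.0:                        # accept the step
                t, y = t + h, y3
                ts.append(t)
                ys.append(y.copy())
            # grow or shrink the next step from the local error estimate
            h *= min(5.0, max(0.2, 0.9 * err ** (-1.0 / 3.0))) if err > 0 else 5.0
        return np.array(ts), np.array(ys)

    # Stiff-ish test problem: y' = -50 (y - cos t), y(0) = 0.
    ts, ys = rk23_adaptive(lambda t, y: -50.0 * (y - np.cos(t)), 0.0, [0.0], 10.0)
    print(len(ts), ys[-1])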

  6. Detection of Tomato black ring virus by real-time one-step RT-PCR.

    Science.gov (United States)

    Harper, Scott J; Delmiglio, Catia; Ward, Lisa I; Clover, Gerard R G

    2011-01-01

    A TaqMan-based real-time one-step RT-PCR assay was developed for the rapid detection of Tomato black ring virus (TBRV), a significant plant pathogen which infects a wide range of economically important crops. Primers and a probe were designed against existing genomic sequences to amplify a 72 bp fragment from RNA-2. The assay amplified all isolates of TBRV tested, but no amplification was observed from the RNA of other nepovirus species or healthy host plants. The detection limit of the assay was estimated to be around nine copies of the TBRV target region in total RNA. A comparison with conventional RT-PCR and ELISA, indicated that ELISA, the current standard test method, lacked specificity and reacted to all nepovirus species tested, while conventional RT-PCR was approximately ten-fold less sensitive than the real-time RT-PCR assay. Finally, the real-time RT-PCR assay was tested using five different RT-PCR reagent kits and was found to be robust and reliable, with no significant differences in sensitivity being found. The development of this rapid assay should aid in quarantine and post-border surveys for regulatory agencies. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. Two-level systems driven by large-amplitude fields

    Science.gov (United States)

    Nori, F.; Ashhab, S.; Johansson, J. R.; Zagoskin, A. M.

    2009-03-01

    We analyze the dynamics of a two-level system subject to driving by large-amplitude external fields, focusing on the resonance properties in the case of driving around the region of avoided level crossing. In particular, we consider three main questions that characterize resonance dynamics: (1) the resonance condition, (2) the frequency of the resulting oscillations on resonance, and (3) the width of the resonance. We identify the regions of validity of different approximations. In a large region of the parameter space, we use a geometric picture in order to obtain both a simple understanding of the dynamics and quantitative results. The geometric approach is obtained by dividing the evolution into discrete time steps, with each time step described by either a phase shift on the basis states or a coherent mixing process corresponding to a Landau-Zener crossing. We compare the results of the geometric picture with those of a rotating wave approximation. We also comment briefly on the prospects of employing strong driving as a useful tool to manipulate two-level systems. S. Ashhab, J.R. Johansson, A.M. Zagoskin, F. Nori, Two-level systems driven by large-amplitude fields, Phys. Rev. A 75, 063414 (2007). S. Ashhab et al, unpublished.

  8. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species.

    Science.gov (United States)

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A

    2017-01-01

    Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction and (b) optimization of PCR against various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second step (species-specific PCR) the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources, including bacteria, viruses, yeasts, other Aspergillus spp., other fungal species and human DNA, and showed no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.

  9. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.

  10. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which—as shown on the contact process—provides a significant improvement of the large deviation function estimators compared to the standard one.
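
A generic way to use such scalings is to measure the estimator at several simulation times T and population sizes N, fit a scaling form, and read off the extrapolated limit. The sketch below assumes a simple psi(T, N) ~ psi_inf + a/T + b/N form on synthetic numbers; the precise scaling derived in the paper may differ.

    import numpy as np

    def extrapolate(estimates, times, sizes):
        """Least-squares fit of psi(T, N) = psi_inf + a/T + b/N; returns psi_inf."""
        times, sizes = np.asarray(times, float), np.asarray(sizes, float)
        A = np.column_stack([np.ones_like(times), 1.0 / times, 1.0 / sizes])
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(estimates, float), rcond=None)
        return coeffs[0]                       # infinite-time, infinite-size limit

    # Synthetic estimates built around psi_inf = -0.25 with 1/T and 1/N corrections.
    T = np.array([100, 100, 200, 200, 400, 400, 800, 800], dtype=float)
    N = np.array([50, 200, 50, 200, 50, 200, 50, 200], dtype=float)
    psi = -0.25 + 3.0 / T - 1.5 / N + 1e-4 * np.random.default_rng(0).standard_normal(T.size)
    print(extrapolate(psi, T, N))              # close to -0.25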

  11. Iteratively improving Hi-C experiments one step at a time.

    Science.gov (United States)

    Golloshi, Rosela; Sanders, Jacob T; McCord, Rachel Patton

    2018-04-30

    The 3D organization of eukaryotic chromosomes affects key processes such as gene expression, DNA replication, cell division, and response to DNA damage. The genome-wide chromosome conformation capture (Hi-C) approach can characterize the landscape of 3D genome organization by measuring interaction frequencies between all genomic regions. Hi-C protocol improvements and rapid advances in DNA sequencing power have made Hi-C useful to study diverse biological systems, not only to elucidate the role of 3D genome structure in proper cellular function, but also to characterize genomic rearrangements, assemble new genomes, and consider chromatin interactions as potential biomarkers for diseases. Yet, the Hi-C protocol is still complex and subject to variations at numerous steps that can affect the resulting data. Thus, there is still a need for better understanding and control of factors that contribute to Hi-C experiment success and data quality. Here, we evaluate recently proposed Hi-C protocol modifications as well as often overlooked variables in sample preparation and examine their effects on Hi-C data quality. We examine artifacts that can occur during Hi-C library preparation, including microhomology-based artificial template copying and chimera formation that can add noise to the downstream data. Exploring the mechanisms underlying Hi-C artifacts pinpoints steps that should be further optimized in the future. To improve the utility of Hi-C in characterizing the 3D genome of specialized populations of cells or small samples of primary tissue, we identify steps prone to DNA loss which should be considered to adapt Hi-C to lower cell numbers. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition

    Science.gov (United States)

    Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti

    2017-05-01

    Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of controller software, based on a technique called a queued state machine, for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller, running in a LabVIEW environment, interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.

  13. Comparison of single-step and two-step purified coagulants from Moringa oleifera seed for turbidity and DOC removal.

    Science.gov (United States)

    Sánchez-Martín, J; Ghebremichael, K; Beltrán-Heredia, J

    2010-08-01

    The coagulant proteins from Moringa oleifera purified with single-step and two-step ion-exchange processes were used for the coagulation of surface water from Meuse river in The Netherlands. The performances of the two purified coagulants and the crude extract were assessed in terms of turbidity and DOC removal. The results indicated that the optimum dosage of the single-step purified coagulant was more than two times higher compared to the two-step purified coagulant in terms of turbidity removal. And the residual DOC in the two-step purified coagulant was lower than in single-step purified coagulant or crude extract. (c) 2010 Elsevier Ltd. All rights reserved.

  14. Percutaneous Cystgastrostomy as a Single-Step Procedure

    International Nuclear Information System (INIS)

    Curry, L.; Sookur, P.; Low, D.; Bhattacharya, S.; Fotheringham, T.

    2009-01-01

    The purpose of this study was to evaluate the success of percutaneous transgastric cystgastrostomy as a single-step procedure. We performed a retrospective analysis of single-step percutaneous transgastric cystgastrostomy carried out in 12 patients (8 male, 4 female; mean age 44 years; range 21-70 years), between 2002 and 2007, with large symptomatic pancreatic pseudocysts for whom up to 1-year follow-up data (mean 10 months) were available. All pseudocysts were drained by single-step percutaneous cystgastrostomy with the placement of either one or two stents. The procedure was completed successfully in all 12 patients. The pseudocysts showed complete resolution on further imaging in 7 of 12 patients with either enteric passage of the stent or stent removal by endoscopy. In 2 of 12 patients, the pseudocysts showed complete resolution on imaging, with the stents still noted in situ. In 2 of 12 patients, the pseudocysts became infected after 1 month and required surgical intervention. In 1 of 12 patients, the pseudocyst showed partial resolution on imaging, but subsequently reaccumulated and later required external drainage. In our experience, percutaneous cystgastrostomy as a single-step procedure has a high success rate and good short-term outcomes over 1-year follow-up and should be considered in the treatment of large symptomatic cysts.

  15. Detection of Listeria monocytogenes in ready-to-eat food by Step One real-time polymerase chain reaction.

    Science.gov (United States)

    Pochop, Jaroslav; Kačániová, Miroslava; Hleba, Lukáš; Lopasovský, L'ubomír; Bobková, Alica; Zeleňáková, Lucia; Stričík, Michal

    2012-01-01

    The aim of this study was to follow contamination of ready-to-eat food with Listeria monocytogenes by using the Step One real-time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the MicroSEQ® Listeria monocytogenes Detection Kit for the real-time PCR performance. Among 30 samples of ready-to-eat milk and meat products analyzed without incubation, strains of Listeria monocytogenes were detected in five samples (swabs). The internal positive control (IPC) was positive in all samples. Our results indicated that the real-time PCR assay developed in this study could sensitively detect Listeria monocytogenes in ready-to-eat food without incubation.

  16. Large Time Behavior of the Vlasov-Poisson-Boltzmann System

    Directory of Open Access Journals (Sweden)

    Li Li

    2013-01-01

    Full Text Available The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large time stability of the VPB system. To be precise, we prove that when time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t^{-∞}), by using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ used in the work of Li (2008).
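
For clarity, the rate O(t^{-∞}) quoted above means convergence faster than any inverse power of time; in symbols (a paraphrase of the statement, not the paper's exact norm):

    % Convergence to the global Maxwellian M faster than any polynomial rate:
    \forall\, k > 0 \;\; \exists\, C_k > 0 : \qquad
      \bigl\| f(t) - M \bigr\| \;\le\; C_k\, t^{-k}
      \qquad \text{as } t \to \infty ,

where M denotes the global Maxwellian and the norm is the one used in the paper.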

  17. Structural comparison of anodic nanoporous-titania fabricated from single-step and three-step of anodization using two paralleled-electrodes anodizing cell

    Directory of Open Access Journals (Sweden)

    Mallika Thabuot

    2016-02-01

    Full Text Available Anodization of a Ti sheet in an ethylene glycol electrolyte containing 0.38 wt% NH4F with the addition of 1.79 wt% H2O at room temperature was studied. Applied potentials of 10-60 V and anodizing times of 1-3 h were used for single-step and three-step anodization within a two paralleled-electrodes anodizing cell. The structural and textural properties were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). After annealing at 600°C in an air furnace for 3 h, the TiO2 nanotubes were transformed to a higher proportion of the anatase crystal phase. Crystallization of the anatase phase was also enhanced as the duration of the final anodization step increased. With single-step anodization, the pore texture of the oxide film started to appear at an applied potential of 30 V. A better-ordered arrangement of the TiO2-nanotube array with larger pore size was obtained as the applied potential increased. An applied potential of 60 V was selected for the three-step anodization with anodizing times of 1-3 h. The results showed that smooth surface coverage with a higher density of porous TiO2 was achieved by prolonging the first and second anodization steps; however, tubes discontinuous in length were produced instead of long vertical tubes. The layer thickness of the anodic oxide film depended on the anodizing time of the last anodization step. A better-ordered arrangement of nanostructured TiO2 was produced using three-step anodization at 60 V with 3 h for each step.

  18. Continuous versus step-by-step scanning mode of a novel 3D scanner for CyberKnife measurements

    International Nuclear Information System (INIS)

    Al Kafi, M Abdullah; Mwidu, Umar; Moftah, Belal

    2015-01-01

    The purpose of the study is to investigate the continuous versus step-by-step scanning mode of a commercial circular 3D scanner for commissioning measurements of a robotic stereotactic radiosurgery system. The 3D scanner was used for profile measurements in step-by-step and continuous modes with the intent of comparing the two scanning modes for consistency. Profile measurements in the in-plane, cross-plane, 15-degree, and 105-degree directions were performed for both fixed cones and Iris collimators at the depth of maximum dose and at 10 cm depth. For CyberKnife field size, penumbra, flatness and symmetry analysis, it was observed that the measurements with continuous mode, which can be up to 6 times faster than step-by-step mode, are comparable and produce scans nearly identical to step-by-step mode. When compared with centered step-by-step mode data, fully processed continuous-mode data give rise to a maximum symmetry and flatness difference of 0.50% and 0.60%, respectively, for all the fixed cones and Iris collimators studied. - Highlights: • 3D scanner for CyberKnife beam data measurements. • Beam data analysis for continuous and step-by-step scan modes. • Faster continuous scanning data are comparable to step-by-step mode scan data.
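
Profile analysis of this kind typically reduces each scan to field size, flatness and symmetry numbers. The sketch below uses simple point-based definitions over the central 80% of the field; these conventions and the synthetic profile are assumptions for illustration, not necessarily the definitions used in the study.

    import numpy as np

    def profile_metrics(x_mm, dose):
        """Field size (FWHM), flatness and symmetry of a 1-D beam profile centred on x = 0."""
        x_mm, dose = np.asarray(x_mm, float), np.asarray(dose, float)
        half = 0.5 * dose.max()
        above = x_mm[dose >= half]
        field_size = above.max() - above.min()          # FWHM-based field size

        core = np.abs(x_mm) <= 0.4 * field_size         # central 80% of the field
        d_core = dose[core]
        flatness = 100.0 * (d_core.max() - d_core.min()) / (d_core.max() + d_core.min())

        mirrored = np.interp(-x_mm[core], x_mm, dose)   # compare with the mirror image
        symmetry = 100.0 * np.max(np.abs(d_core - mirrored)) / dose.max()
        return field_size, flatness, symmetry

    # Synthetic ~20 mm wide profile with a soft penumbra, sampled every 0.2 mm.
    x = np.arange(-30.0, 30.0, 0.2)
    profile = 1.0 / (1.0 + np.exp((np.abs(x) - 10.0) / 1.5))
    print(profile_metrics(x, profile))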

  19. Time series clustering in large data sets

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2011-01-01

    Full Text Available The clustering of time series is a widely researched area. There are many methods for dealing with this task. We are currently using the Self-organizing map (SOM) with an unsupervised learning algorithm for clustering of time series. After the first experiment (Fejfar, Weinlichová, Šťastný, 2009) it seems that the whole concept of the clustering algorithm is correct, but that we have to perform time series clustering on a much larger dataset to obtain more accurate results and to find the correlation between configured parameters and results more precisely. The second requirement arose from the need for a well-defined evaluation of results. It seems useful to use sound recordings as instances of time series again. There are many recordings to use in digital libraries, and many interesting features and patterns can be found in this area. In this experiment we are searching for recordings with a similar development of information density. This can be used for musical form investigation, cover song detection and many other applications. The objective of the presented paper is to compare clustering results obtained with different parameters of the feature vectors and of the SOM itself. We describe the time series in a simplistic way, evaluating standard deviations for separated parts of the recordings. The resulting feature vectors are clustered with the SOM in batch training mode with different topologies varying from a few neurons to large maps. Other algorithms usable for finding similarities between time series are also discussed, and finally conclusions for further research are presented. We also present an overview of the related current literature and projects.
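
The feature extraction described above (standard deviations over separated parts of each recording) is straightforward to reproduce. The sketch below turns each recording into a fixed-length feature vector ready for clustering; the segment count and the synthetic signals are assumptions for illustration.

    import numpy as np

    def sd_feature_vector(signal, n_segments=16):
        """Describe a recording by the standard deviation of each of n_segments parts."""
        signal = np.asarray(signal, dtype=float)
        usable = (len(signal) // n_segments) * n_segments
        return signal[:usable].reshape(n_segments, -1).std(axis=1)

    # Two synthetic "recordings" with different loudness envelopes.
    rng = np.random.default_rng(0)
    quiet_then_loud = np.concatenate([0.1 * rng.standard_normal(8000),
                                      1.0 * rng.standard_normal(8000)])
    steady = 0.5 * rng.standard_normal(16000)

    features = np.vstack([sd_feature_vector(quiet_then_loud), sd_feature_vector(steady)])
    print(features.round(2))   # rows are feature vectors that a SOM (or any clusterer) can take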

  20. Compensatory stepping responses in individuals with stroke: a pilot study.

    Science.gov (United States)

    Lakhani, Bimal; Mansfield, Avril; Inness, Elizabeth L; McIlroy, William E

    2011-05-01

    Impaired postural control and a high incidence of falls are commonly observed following stroke. Compensatory stepping responses are critical to reactive balance control. We hypothesize that, following a stroke, individuals with unilateral limb dyscontrol will be faced with the unique challenge of controlling such rapid stepping reactions that may eventually be linked to the high rate of falling. The objectives of this exploratory pilot study were to investigate compensatory stepping in individuals poststroke with regard to: (1) choice of initial stepping limb (paretic or non-paretic); (2) step characteristics; and (3) differences in step characteristics when the initial step is taken with the paretic vs. the non-paretic limb. Four subjects following stroke (38-165 days post) and 11 healthy young adults were recruited. Anterior and posterior perturbations were delivered by using a weight drop system. Force plates recorded centre-of-pressure excursion prior to the onset of stepping and step timing. Of the four subjects, three only attempted to step with their non-paretic limb and one stepped with either limb. Time to foot-off was generally slow, whereas step onset time and swing time were comparable to healthy controls. Two of the four subjects executed multistep responses in every trial, and attempts to force stepping with the paretic limb were unsuccessful in three of the four subjects. Despite high clinical balance scores, these individuals with stroke demonstrated impaired compensatory stepping responses, suggesting that current clinical evaluations might not accurately reflect reactive balance control in this population.

  1. Large Break LOCA Accident Management Strategies for Accidents With Large Containment Leaks

    International Nuclear Information System (INIS)

    Sdouz, Gert

    2006-01-01

    The goal of this work is the investigation of the influence of different accident management strategies on the thermal-hydraulics in the containment during a Large Break Loss of Coolant Accident with a large containment leak from the beginning of the accident. The increasing relevance of terrorism suggests a closer look at this kind of severe accident. Normally the course of severe accidents and their associated phenomena is investigated under the assumption of an intact containment from the beginning of the accident. This intact containment has the ability to retain a large part of the radioactive inventory. In these cases there is only a release via a very small leakage, due to the un-tightness of the containment, up to cavity bottom melt-through. This paper represents the last part of a comprehensive study on the influence of accident management strategies on the source term of VVER-1000 reactors. Basically two different accident sequences were investigated: the 'Station Blackout' sequence and the 'Large Break LOCA'. In a first step the source term calculations were performed assuming an intact containment from the beginning of the accident and no accident management action. In a further step the influence of different accident management strategies was studied. The last part of the project was a repetition of the calculations with the assumption of a damaged containment from the beginning of the accident. This paper concentrates on the last step in the case of a Large Break LOCA. To be able to compare the results with calculations performed years ago, the calculations were performed using the Source Term Code Package (STCP); hydrogen explosions are not considered. In this study four different scenarios have been investigated. The main parameter was the switch-on time of the spray systems. One of the results is the influence of different accident management strategies on the source term. In the comparison with the sequence with intact containment it was

  2. Small Town Energy Program (STEP) Final Report revised

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Charles (Chuck) T.

    2014-01-02

    University Park, Maryland (“UP”) is a small town of 2,540 residents, 919 homes, 2 churches, 1 school, 1 town hall, and 1 breakthrough community energy efficiency initiative: the Small Town Energy Program (“STEP”). STEP was developed with a mission to “create a model community energy transformation program that serves as a roadmap for other small towns across the U.S.” STEP first launched in January 2011 in UP and expanded in July 2012 to the neighboring communities of Hyattsville, Riverdale Park, and College Heights Estates, MD. STEP, which concluded in July 2013, was generously supported by a grant from the U.S. Department of Energy (DOE). The STEP model was designed for replication in other resource-constrained small towns similar to University Park - a sector largely neglected to date in federal and state energy efficiency programs. STEP provided a full suite of activities for replication, including: energy audits and retrofits for residential buildings, financial incentives, a community-based social marketing backbone and local community delivery partners. STEP also included the highly innovative use of an “Energy Coach” who worked one-on-one with clients throughout the program. Please see www.smalltownenergy.org for more information. In less than three years, STEP achieved the following results in University Park: • 30% of community households participated voluntarily in STEP; • 25% of homes received a Home Performance with ENERGY STAR assessment; • 16% of households made energy efficiency improvements to their home; • 64% of households proceeded with an upgrade after their assessment; • 9 Full Time Equivalent jobs were created or retained, and 39 contractors worked on STEP over the course of the project. Estimated energy savings (program totals): 204,407 kWh of electricity; 24,800 therms of natural gas; 2,581 gallons of oil; 5,474 MMBTU of total estimated source energy saved; and $61,343 in total estimated annual energy cost savings. STEP clients who

  3. Recovery of forward stepping in spinal cord injured patients does not transfer to untrained backward stepping.

    Science.gov (United States)

    Grasso, Renato; Ivanenko, Yuri P; Zago, Myrka; Molinari, Marco; Scivoletto, Giorgio; Lacquaniti, Francesco

    2004-08-01

    Six spinal cord injured (SCI) patients were trained to step on a treadmill with body-weight support for 1.5-3 months. At the end of training, foot motion recovered the shape and the step-by-step reproducibility that characterize normal gait. They were then asked to step backward on the treadmill belt that moved in the opposite direction relative to standard forward training. In contrast to healthy subjects, who can immediately reverse the direction of walking by time-reversing the kinematic waveforms, patients were unable to step backward. Similarly patients were unable to perform another untrained locomotor task, namely stepping in place on the idle treadmill. Two patients who were trained to step backward for 2-3 weeks were able to develop control of foot motion appropriate for this task. The results show that locomotor improvement does not transfer to untrained tasks, thus supporting the idea of task-dependent plasticity in human locomotor networks.

  4. Solution of large nonlinear time-dependent problems using reduced coordinates

    International Nuclear Information System (INIS)

    Mish, K.D.

    1987-01-01

    This research is concerned with the idea of reducing a large time-dependent problem, such as one obtained from a finite-element discretization, down to a more manageable size while preserving the most-important physical behavior of the solution. This reduction process is motivated by the concept of a projection operator on a Hilbert Space, and leads to the Lanczos Algorithm for generation of approximate eigenvectors of a large symmetric matrix. The Lanczos Algorithm is then used to develop a reduced form of the spatial component of a time-dependent problem. The solution of the remaining temporal part of the problem is considered from the standpoint of numerical-integration schemes in the time domain. All of these theoretical results are combined to motivate the proposed reduced coordinate algorithm. This algorithm is then developed, discussed, and compared to related methods from the mechanics literature. The proposed reduced coordinate method is then applied to the solution of some representative problems in mechanics. The results of these problems are discussed, conclusions are drawn, and suggestions are made for related future research
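
The reduction idea, projecting a large symmetric problem onto a Lanczos-generated basis and solving the small projected problem, is illustrated below on a static linear system; the matrix, right-hand side and basis size are assumptions chosen to keep the sketch short, and the time-integration part is omitted.

    import numpy as np

    def lanczos_basis(A, b, m):
        """Orthonormal basis Q (n x m) for the Krylov subspace K_m(A, b), A symmetric."""
        n = len(b)
        Q = np.zeros((n, m))
        q, q_prev, beta = b / np.linalg.norm(b), np.zeros(n), 0.0
        for j in range(m):
            Q[:, j] = q
            w = A @ q - beta * q_prev
            w -= (q @ w) * q
            w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization (cheap here)
            beta = np.linalg.norm(w)
            if beta == 0.0:
                return Q[:, :j + 1]
            q_prev, q = q, w / beta
        return Q

    # Reduced solve of A x = b for a large symmetric positive definite A.
    rng = np.random.default_rng(0)
    n = 500
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)                # SPD test matrix
    b = rng.standard_normal(n)

    Q = lanczos_basis(A, b, m=30)
    x_red = Q @ np.linalg.solve(Q.T @ A @ Q, Q.T @ b)           # solve in reduced coordinates
    print(np.linalg.norm(A @ x_red - b) / np.linalg.norm(b))    # small relative residual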

  5. Step driven competitive epitaxial and self-limited growth of graphene on copper surface

    Directory of Open Access Journals (Sweden)

    Lili Fan

    2011-09-01

    Full Text Available The existence of surface steps was found to have a significant influence on the growth of graphene on copper via chemical vapor deposition. The two typical growth modes involved were found to be influenced by the step morphologies on the copper surface, which led to our proposed step-driven competitive growth mechanism. We also discovered a protective role of graphene in preserving steps on the copper surface. Our results showed that wide, high steps promoted epitaxial growth and yielded multilayer graphene domains with regular shapes, while dense, low steps favored self-limited growth and led to large-area monolayer graphene films. We have demonstrated that controllable growth of graphene domains of specific shape and of large-area continuous graphene films is feasible.

  6. Next Step Spherical Torus Design Studies

    International Nuclear Information System (INIS)

    Neumeyer, C.; Heitzenroeder, P.; Kessel, C.; Ono, M.; Peng, M.; Schmidt, J.; Woolley, R.; Zatz, I.

    2002-01-01

    Studies are underway to identify and characterize a design point for a Next Step Spherical Torus (NSST) experiment. This would be a 'Proof of Performance' device which would follow and build upon the successes of the National Spherical Torus Experiment (NSTX), a 'Proof of Principle' device which has operated at PPPL since 1999. With the Decontamination and Decommissioning (D&D) of the Tokamak Fusion Test Reactor (TFTR) nearly completed, the TFTR test cell and facility will soon be available for a device such as NSST. By utilizing the TFTR test cell, NSST can be constructed at relatively low cost on a short time scale. In addition, while furthering spherical torus (ST) research, this device could achieve modest fusion power gain for short pulse lengths, a significant step toward future large burning plasma devices now under discussion in the fusion community. The selected design point is Q = 2 at HH = 1.4, P_fusion = 60 MW, 5-second pulse, with R_0 = 1.5 m, A = 1.6, I_p = 10 MA, B_t = 2.6 T, and CS flux = 16 Wb. Most of the research would be conducted in D-D, with a limited D-T campaign during the last years of the program.

  7. How many steps/day are enough? for adults

    Directory of Open Access Journals (Sweden)

    Rowe David A

    2011-07-01

    Full Text Available Abstract Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding value), but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. Computed steps/day translations of time in

  8. Irregular Morphing for Real-Time Rendering of Large Terrain

    Directory of Open Access Journals (Sweden)

    S. Kalem

    2016-06-01

    Full Text Available The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves, on the fly, the distribution of the density of triangles inside the tile after selecting the appropriate Level-Of-Detail by adaptive sampling. The proposed approach organizes the heightmap into a QuadTree of tiles that are processed independently. This technique combines the benefits of both the Triangular Irregular Network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile into a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile size and real-time processing while guaranteeing an upper bound on the screen-space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of the D2WT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-Spline wavelet, well known for its localization properties and compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture for supporting interactive, high-quality remote visualization of very large terrains.

  9. Comparing the efficacy of metronome beeps and stepping stones to adjust gait: steps to follow!

    Science.gov (United States)

    Bank, Paulina J M; Roerdink, Melvyn; Peper, C E

    2011-03-01

    Acoustic metronomes and visual targets have been used in rehabilitation practice to improve pathological gait. In addition, they may be instrumental in evaluating and training instantaneous gait adjustments. The aim of this study was to compare the efficacy of two cue types in inducing gait adjustments, viz. acoustic temporal cues in the form of metronome beeps and visual spatial cues in the form of projected stepping stones. Twenty healthy elderly (aged 63.2 ± 3.6 years) were recruited to walk on an instrumented treadmill at preferred speed and cadence, paced by either metronome beeps or projected stepping stones. Gait adaptations were induced using two manipulations: by perturbing the sequence of cues and by imposing switches from one cueing type to the other. Responses to these manipulations were quantified in terms of step-length and step-time adjustments, the percentage correction achieved over subsequent steps, and the number of steps required to restore the relation between gait and the beeps or stepping stones. The results showed that perturbations in a sequence of stepping stones were overcome faster than those in a sequence of metronome beeps. In switching trials, switching from metronome beeps to stepping stones was achieved faster than vice versa, indicating that gait was influenced more strongly by the stepping stones than the metronome beeps. Together these results revealed that, in healthy elderly, the stepping stones induced gait adjustments more effectively than did the metronome beeps. Potential implications for the use of metronome beeps and stepping stones in gait rehabilitation practice are discussed.

  10. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-01-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.

  11. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory; Morel, Jim E [TEXAS A& M UNIV

    2008-01-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.

  12. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    Science.gov (United States)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
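
The difference between the semi-implicit and fully implicit discretizations discussed above can be made concrete on a toy relaxation equation dT/dt = -k(T)(T - T_eq): the SI update freezes the coefficient at the beginning-of-step temperature (one linear solve), while the FI update evaluates it at the end-of-step temperature (a nonlinear solve per step). The equation, the coefficient k(T) and the parameter values below are illustrative assumptions, not the Fokker-Planck system itself.

    import numpy as np

    T_EQ = 1.0

    def k(T):
        """Toy temperature-dependent coefficient."""
        return 2.0 + 5.0 * T ** 2

    def si_step(T_old, dt):
        """Semi-implicit backward Euler: coefficient frozen at the old temperature,
        so (T_new - T_old)/dt = -k(T_old) (T_new - T_EQ) is linear in T_new."""
        kk = k(T_old)
        return (T_old + dt * kk * T_EQ) / (1.0 + dt * kk)

    def fi_step(T_old, dt, iters=50):
        """Fully implicit backward Euler: coefficient at the new temperature,
        solved here by simple fixed-point iteration."""
        T_new = T_old
        for _ in range(iters):
            kk = k(T_new)
            T_new = (T_old + dt * kk * T_EQ) / (1.0 + dt * kk)
        return T_new

    def integrate(step, T0=5.0, dt=0.5, n=20):
        T, out = T0, [T0]
        for _ in range(n):
            T = step(T, dt)
            out.append(T)
        return np.array(out)

    # With a large time step the two schemes can give noticeably different histories.
    print(integrate(si_step)[:5])
    print(integrate(fi_step)[:5])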

  13. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.

  14. When a Step Is Not a Step! Specificity Analysis of Five Physical Activity Monitors.

    Science.gov (United States)

    O'Connell, Sandra; ÓLaighin, Gearóid; Quinlan, Leo R

    2017-01-01

    Physical activity is an essential aspect of a healthy lifestyle for both physical and mental health states. As step count is one of the most utilized measures for quantifying physical activity, it is important that activity-monitoring devices be both sensitive and specific in recording actual steps taken and disregard non-stepping body movements. The objective of this study was to assess the specificity of five activity monitors during a variety of prescribed non-stepping activities. Participants wore five activity monitors simultaneously for a variety of prescribed activities including deskwork, taking an elevator, taking a bus journey, automobile driving, washing and drying dishes; functional reaching task; indoor cycling; outdoor cycling; and indoor rowing. Each task was carried out for either a specific duration of time or over a specific distance. Activity monitors tested were the ActivPAL micro™, NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, Fitbit One™ and Jawbone UP™. Participants were video-recorded while carrying out the prescribed activities and the false positive step count registered on each activity monitor was obtained and compared to the video. All activity monitors registered a significant number of false positive steps per minute during one or more of the prescribed activities. The Withings™ activity monitor performed best, registering a significant number of false positive steps per minute during the outdoor cycling activity only (P = 0.025). The Jawbone™ registered a significant number of false positive steps during the functional reaching task and while washing and drying dishes, which involved arm and hand movement (P < 0.01 for both). The ActivPAL™ registered a significant number of false positive steps during the cycling exercises (P < 0.001 for both). As a number of false positive steps were registered on the activity monitors during the non-stepping activities, the authors conclude that non-stepping physical activities can result in the false detection of steps. This can negatively affect the quantification of physical activity.

  15. RankExplorer: Visualization of Ranking Changes in Large Time Series Data.

    Science.gov (United States)

    Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin

    2012-12-01

    For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.

  16. Computing the real-time Green's Functions of large Hamiltonian matrices

    OpenAIRE

    Iitaka, Toshiaki

    1998-01-01

    A numerical method is developed for calculating the real time Green's functions of very large sparse Hamiltonian matrices, which exploits the numerical solution of the inhomogeneous time-dependent Schroedinger equation. The method has a clear-cut structure reflecting the most naive definition of the Green's functions, and is very suitable to parallel and vector supercomputers. The effectiveness of the method is illustrated by applying it to simple lattice models. An application of this method...

  17. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    Science.gov (United States)

    Hoepfer, Matthias

    This work presents a co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.

  18. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers (ITBs). Since the elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  19. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers (ITBs). Since the elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  20. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers (ITBs). Since the elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  1. Numerical Simulation of Air Entrainment for Flat-Sloped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Bentalha Chakib

    2015-03-01

    Full Text Available Stepped spillway is a good hydraulic structure for energy dissipation because of the large value of the surface roughness. The performance of the stepped spillway is enhanced by the presence of air, which can prevent or reduce cavitation damage. Chanson developed a method to determine the position of the start of air entrainment, called the inception point. Within this work the inception point is determined by using the Fluent computational fluid dynamics (CFD) code, where the volume of fluid (VOF) model is used as a tool to simulate the air-water interaction on the free surface, and the turbulence closure is provided by the standard k-ε turbulence model; at the same time, the one-sixth power-law distribution of the velocity profile is verified. Also the pressure contours and velocity vectors at the bed surface are determined. The numerical results agree well with experimental results.
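
    For reference, the one-sixth power-law velocity distribution mentioned above is commonly written as follows, with u the local velocity, u_max the maximum (free-surface) velocity, y the distance from the pseudo-bottom and delta the flow depth or boundary-layer thickness; these symbols are generic and may differ from those used in the paper:

      \frac{u}{u_{\max}} = \left( \frac{y}{\delta} \right)^{1/6}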

  2. Free Modal Algebras Revisited: The Step-by-Step Method

    NARCIS (Netherlands)

    Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka

    2012-01-01

    We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond

  3. Unexpected perturbations training improves balance control and voluntary stepping times in older adults - a double blind randomized control trial.

    Science.gov (United States)

    Kurz, Ilan; Gimmon, Yoav; Shapiro, Amir; Debi, Ronen; Snir, Yoram; Melzer, Itshak

    2016-03-04

    Falls are common among the elderly, and most of them occur while slipping or tripping during walking. We aimed to explore whether a training program that incorporates unexpected loss of balance during walking is able to improve risk factors for falls. In a double-blind randomized controlled trial, 53 community-dwelling older adults (age 80.1±5.6 years) were recruited and randomly allocated to an intervention group (n = 27) or a control group (n = 26). The intervention group received 24 training sessions over 3 months that included unexpected perturbation of balance exercises during treadmill walking. The control group performed treadmill walking with no perturbations. The primary outcome measures were the voluntary step execution times, traditional postural sway parameters and Stabilogram-Diffusion Analysis. The secondary outcome measures were the Falls Efficacy Scale (FES), self-reported late life function (LLFDI), and Performance-Oriented Mobility Assessment (POMA). Compared to control, participation in the intervention program that includes unexpected loss of balance during walking led to faster Voluntary Step Execution Times under single (p = 0.002; effect size [ES] = 0.75) and dual task (p = 0.003; [ES] = 0.89) conditions; intervention group subjects showed improvement in Short-term Effective diffusion coefficients in the mediolateral direction of the Stabilogram-Diffusion Analysis under eyes closed conditions (p = 0.012, [ES] = 0.92). Compared to control, there were no significant changes in FES, LLFDI, and POMA. An intervention program that includes unexpected loss of balance during walking can improve voluntary stepping times and balance control, both previously reported as risk factors for falls. This, however, did not transfer to a change in self-reported function or FES. ClinicalTrials.gov NCT01439451.

  4. SYSTEMATIZATION OF THE BASIC STEPS OF THE STEP-AEROBICS

    Directory of Open Access Journals (Sweden)

    Darinka Korovljev

    2011-03-01

    Full Text Available Following the development of the powerful sport industry, many new opportunities have appeared for creating new programmes of exercising with certain requisites. One such programme is certainly step-aerobics. Step-aerobics can be defined as a type of aerobics consisting of the basic aerobic steps (basic steps) applied in exercising on a stepper (step bench), with a possibility to regulate its height. Step-aerobics itself can be divided into several groups, depending on the type of music, the working methods and the adopted knowledge of the attendants. In this work, a systematization of the basic steps of step-aerobics was made on the basis of the following criteria: the origin of the step, the number of leg motions in stepping, and the body support at the end of the step. The systematization of the basic steps of step-aerobics is quite significant for providing a concrete review of the existing basic steps, thus making the creation of a step-aerobics lesson easier.

  5. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    Science.gov (United States)

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

    Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, 21-74y) performed the CSRT, sensorimotor, balance and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26). The CSRT test thus showed good predictive validity for falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.

  6. Time-sliced perturbation theory for large scale structure I: general formalism

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey [Theory Division, CERN, CH-1211 Genève 23 (Switzerland); Ivanov, Mikhail M., E-mail: diego.blas@cern.ch, E-mail: mathias.garny@cern.ch, E-mail: mikhail.ivanov@cern.ch, E-mail: sergey.sibiryakov@cern.ch [FSB/ITP/LPPC, École Polytechnique Fédérale de Lausanne, CH-1015, Lausanne (Switzerland)

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  7. Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation

    International Nuclear Information System (INIS)

    Boer, J. de; Dannhaueser, G.

    1982-01-01

    The way in which the multiple E2 steps in the Coulomb-excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays for the excitation of a given final state is measured by a quantity named ''importance function''. It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to similar deflection functions and cross sections as those obtained from a symmetrization procedure approximately accounting for the transfer of angular momentum and energy. (Auth.)

  8. Comparison of the Danish step test and the watt-max test for estimation of maximal oxygen uptake

    DEFF Research Database (Denmark)

    Aadahl, Mette; Zacho, Morten; Linneberg, Allan René

    2013-01-01

    Introduction: There is a need for simple and feasible methods for estimation of cardiorespiratory fitness (CRF) in large study populations, as existing methods for valid estimation of maximal oxygen consumption are generally time consuming and relatively expensive to administer. The Danish step ... Altogether, 795 eligible participants (response rate 35.8%) performed the watt-max and the Danish step test. Correlation and agreement between the two VO(2max) test results were explored by Pearson's rho, Bland-Altman plots, Kappa(w), and gamma coefficients. Results: The correlation between VO(2max) (ml ...
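
    The record does not reproduce the numerical agreement results, but the type of analysis is easy to sketch. Assuming two arrays of VO2max estimates (the numbers below are made up, not data from the study), a Bland-Altman style summary gives the bias and 95% limits of agreement alongside the correlation:

      # Minimal Bland-Altman style agreement summary for two VO2max estimates.
      # The values are hypothetical; they are not data from the study.
      import numpy as np

      watt_max  = np.array([38.1, 42.5, 35.0, 47.3, 40.2])   # ml O2/kg/min
      step_test = np.array([36.8, 43.9, 33.5, 45.0, 41.1])

      diff = step_test - watt_max
      bias = diff.mean()
      loa  = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits of agreement
      r = np.corrcoef(watt_max, step_test)[0, 1]
      print(f"bias = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}], r = {r:.2f}")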

  9. Maximizing Efficiency in Two-step Solar-thermochemical Fuel Production

    Energy Technology Data Exchange (ETDEWEB)

    Ermanoski, I. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-05-01

    Widespread solar fuel production depends on its economic viability, largely driven by the solar-to-fuel conversion efficiency. In this paper, the material and energy requirements in two-step solar-thermochemical cycles are considered. The need for advanced redox-active materials is demonstrated by considering the oxide mass flow requirements at a large scale. Two approaches are also identified for maximizing the efficiency: optimizing reaction temperatures, and minimizing the pressure in the thermal reduction step by staged thermal reduction. The results show that each approach individually, and especially the two in conjunction, results in significant efficiency gains.

  10. Comparison between time-step-integration and probabilistic methods in seismic analysis of a linear structure

    International Nuclear Information System (INIS)

    Schneeberger, B.; Breuleux, R.

    1977-01-01

    Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step-integration using earthquake time histories (TH). A given structure was analysed both by PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors and these were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)
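
    To make the contrast concrete, here is a minimal sketch (ours, not the analysis in the paper) of the PSD route for a single-degree-of-freedom oscillator: the response PSD is |H(w)|^2 times the ground-acceleration PSD, and its integral gives the mean-square response, whereas the TH route would integrate the equation of motion for each time history. A flat (white-noise) input PSD and arbitrary oscillator parameters are assumed purely for illustration.

      # SDOF response via the PSD route: S_x(w) = |H(w)|^2 * S_g(w).
      # White-noise input PSD and the parameter values are assumed for illustration.
      import numpy as np

      wn, zeta = 2.0 * np.pi * 5.0, 0.05     # natural frequency (rad/s) and damping ratio
      S0 = 1e-3                              # flat ground-acceleration PSD

      w = np.linspace(0.01, 200.0, 20000)    # frequency grid (rad/s)
      H2 = 1.0 / ((wn**2 - w**2)**2 + (2.0 * zeta * wn * w)**2)   # |H(w)|^2, displacement response
      Sx = H2 * S0

      rms_disp = np.sqrt(np.trapz(Sx, w))    # RMS relative displacement
      print(f"RMS displacement = {rms_disp:.3e}")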

  11. Some features of stepped and dart-stepped leaders near the ground in natural negative cloud-to-ground lightning discharges

    Directory of Open Access Journals (Sweden)

    X. Qie

    2002-06-01

    Full Text Available Characteristics of the electric fields produced by stepped and dart-stepped leaders during the 200 µs just prior to the return strokes in natural negative cloud-to-ground (CG) lightning discharges have been analyzed by using data from a broad-band slow antenna system with 0.08 µs time resolution in southeastern China. It has been found that the electric field changes between the last stepped leader and the first return stroke could be classified into three categories. The first type is characterized by a small pulse superimposed on the abrupt beginning of the return stroke, and accounts for 42% of all the cases. The second type accounts for 33.3% and is characterized by relatively smooth electric field changes between the last leader pulse and the following return stroke. The third type accounts for 24.7%, and is characterized by small pulses between the last recognizable leader pulse and the following return stroke. On average, the time interval between the successive leader pulses prior to the first return strokes and subsequent return strokes was 15.8 µs and 9.4 µs, respectively. The distribution of time intervals between successive stepped leader pulses is quite similar to a Gaussian distribution, while that for dart-stepped leader pulses is more similar to a log-normal distribution. Other discharge features, such as the average time interval between the last leader step and the first return stroke peak, and the ratio of the last leader pulse peak to the return stroke amplitude, are also discussed in the paper. Key words. Meteorology and atmospheric dynamics (atmospheric electricity; lightning) – Radio science (electromagnetic noise and interference)

  12. Some features of stepped and dart-stepped leaders near the ground in natural negative cloud-to-ground lightning discharges

    Directory of Open Access Journals (Sweden)

    X. Qie

    Full Text Available Characteristics of the electric fields produced by stepped and dart-stepped leaders during the 200 µs just prior to the return strokes in natural negative cloud-to-ground (CG) lightning discharges have been analyzed by using data from a broad-band slow antenna system with 0.08 µs time resolution in southeastern China. It has been found that the electric field changes between the last stepped leader and the first return stroke could be classified into three categories. The first type is characterized by a small pulse superimposed on the abrupt beginning of the return stroke, and accounts for 42% of all the cases. The second type accounts for 33.3% and is characterized by relatively smooth electric field changes between the last leader pulse and the following return stroke. The third type accounts for 24.7%, and is characterized by small pulses between the last recognizable leader pulse and the following return stroke. On average, the time interval between the successive leader pulses prior to the first return strokes and subsequent return strokes was 15.8 µs and 9.4 µs, respectively. The distribution of time intervals between successive stepped leader pulses is quite similar to a Gaussian distribution, while that for dart-stepped leader pulses is more similar to a log-normal distribution. Other discharge features, such as the average time interval between the last leader step and the first return stroke peak, and the ratio of the last leader pulse peak to the return stroke amplitude, are also discussed in the paper.

    Key words. Meteorology and atmospheric dynamics (atmospheric electricity; lightning) – Radio science (electromagnetic noise and interference)

  13. A robust and high-performance queue management controller for large round trip time networks

    Science.gov (United States)

    Khoshnevisan, Ladan; Salmasi, Farzad R.

    2016-05-01

    Congestion management for transmission control protocol is of utmost importance to prevent packet loss within a network. This necessitates strategies for active queue management. The most applied active queue management strategies have their inherent disadvantages which lead to suboptimal performance and even instability in the case of large round trip time and/or external disturbance. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of large and small round trip time and parameter variations in the queue management. Conventional approaches such as proportional integral and random early detection procedures lead to unstable behaviour due to large delay. Moreover, internal model control-Smith scheme suffers from large oscillations due to the large round trip time. On the other hand, other schemes such as internal model control-proportional integral and derivative show excessive sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection, simultaneously. Simulation results based on Matlab/Simulink and also Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.

  14. Effect of different air-drying time on the microleakage of single-step self-etch adhesives

    Directory of Open Access Journals (Sweden)

    Horieh Moosavi

    2013-05-01

    Full Text Available Objectives This study evaluated the effect of three different air-drying times on microleakage of three self-etch adhesive systems. Materials and Methods Class I cavities were prepared for 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB), and Bond Force (BF). Each main group was divided into three subgroups regarding the air-drying time: without application of an air stream, following the manufacturer's instruction, or for 10 sec more than the manufacturer's instruction. After completion of restorations, specimens were thermocycled and then connected to a fluid filtration system to evaluate microleakage. The data were statistically analyzed using two-way ANOVA and the Tukey test (α = 0.05). Results The microleakage of all adhesives decreased when the air-drying time increased from 0 sec to the manufacturer's instruction (p < 0.001). The microleakage of BF reached its lowest values after increasing the drying time to 10 sec more than the manufacturer's instruction (p < 0.001). Microleakage of OBAO and CSB was significantly lower compared to BF in all three drying times (p < 0.001). Conclusions Increasing the air-drying time of the adhesive layer in one-step self-etch adhesives caused a reduction of microleakage, but the amount of this reduction may be dependent on the adhesive components of the self-etch adhesives.

  15. A Novel Molten Salt Reactor Concept to Implement the Multi-Step Time-Scheduled Transmutation Strategy

    International Nuclear Information System (INIS)

    Csom, Gyula; Feher, Sandor; Szieberthj, Mate

    2002-01-01

    Nowadays the molten salt reactor (MSR) concept seems to revive as one of the most promising systems for the realization of transmutation. In the molten salt reactors and subcritical systems the fuel and material to be transmuted circulate dissolved in some molten salt. The main advantage of this reactor type is the possibility of the continuous feed and reprocessing of the fuel. In the present paper a novel molten salt reactor concept is introduced and its transmutation capabilities are studied. The goal is the development of a transmutation technique along with a device implementing it, which yield higher transmutation efficiencies than that of the known procedures and thus results in radioactive waste whose load on the environment is reduced both in magnitude and time length. The procedure is the multi-step time-scheduled transmutation, in which transformation is done in several consecutive steps of different neutron flux and spectrum. In the new MSR concept, named 'multi-region' MSR (MRMSR), the primary circuit is made up of a few separate loops, in which salt-fuel mixtures of different compositions are circulated. The loop sections constituting the core region are only neutronically and thermally coupled. This new concept makes possible the utilization of the spatial dependence of spectrum as well as the advantageous features of liquid fuel such as the possibility of continuous chemical processing etc. In order to compare a 'conventional' MSR and a proposed MRMSR in terms of efficiency, preliminary calculational results are shown. Further calculations in order to find the optimal implementation of this new concept and to emphasize its other advantageous features are going on. (authors)

  16. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    Recent research on the modeling and control of a large nuclear reactor, presenting a three-time-scale approach and written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.

  17. Are women positive for the One Step but negative for the Two Step screening tests for gestational diabetes at higher risk for adverse outcomes?

    Science.gov (United States)

    Caissutti, Claudia; Khalifeh, Adeeb; Saccone, Gabriele; Berghella, Vincenzo

    2018-02-01

    The aim of this study was to evaluate if women meeting criteria for gestational diabetes mellitus (GDM) by the One Step test as per International Association of the Diabetes and Pregnancy Study Groups (IADPSG) criteria but not by other less strict criteria have adverse pregnancy outcomes compared with GDM-negative controls. The primary outcome was the incidence of macrosomia, defined as birthweight > 4000 g. Electronic databases were searched from their inception until May 2017. All studies identifying pregnant women negative at the Two Step test, but positive at the One Step test for IADPSG criteria were included. We excluded studies that randomized women to the One Step vs. the Two Step tests; studies that compared different criteria within the same screening method; randomized studies comparing treatments for GDM; and studies comparing incidence of GDM in women doing the One Step test vs. the Two Step test. Eight retrospective cohort studies, including 29 983 women, were included. Five study groups and four control groups were identified. The heterogeneity between the studies was high. Gestational hypertension, preeclampsia and large for gestational age, as well as in some analyses cesarean delivery, macrosomia and preterm birth, were significantly more frequent, and small for gestational age in some analyses significantly less frequent, in women GDM-positive by the One Step, but not the Two Step. Women meeting criteria for GDM by IADPSG criteria but not by other less strict criteria have an increased risk of adverse pregnancy outcomes such as gestational hypertension, preeclampsia and large for gestational age, compared with GDM-negative controls. Based on these findings, and evidence from other studies that treatment decreases these adverse outcomes, we suggest screening for GDM using the One Step IADPSG criteria. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  18. When a Step Is Not a Step! Specificity Analysis of Five Physical Activity Monitors.

    Directory of Open Access Journals (Sweden)

    Sandra O'Connell

    Full Text Available Physical activity is an essential aspect of a healthy lifestyle for both physical and mental health states. As step count is one of the most utilized measures for quantifying physical activity, it is important that activity-monitoring devices be both sensitive and specific in recording actual steps taken and disregard non-stepping body movements. The objective of this study was to assess the specificity of five activity monitors during a variety of prescribed non-stepping activities. Participants wore five activity monitors simultaneously for a variety of prescribed activities including deskwork, taking an elevator, taking a bus journey, automobile driving, washing and drying dishes; functional reaching task; indoor cycling; outdoor cycling; and indoor rowing. Each task was carried out for either a specific duration of time or over a specific distance. Activity monitors tested were the ActivPAL micro™, NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, Fitbit One™ and Jawbone UP™. Participants were video-recorded while carrying out the prescribed activities and the false positive step count registered on each activity monitor was obtained and compared to the video. All activity monitors registered a significant number of false positive steps per minute during one or more of the prescribed activities. The Withings™ activity monitor performed best, registering a significant number of false positive steps per minute during the outdoor cycling activity only (P = 0.025). The Jawbone™ registered a significant number of false positive steps during the functional reaching task and while washing and drying dishes, which involved arm and hand movement (P < 0.01 for both). The ActivPAL™ registered a significant number of false positive steps during the cycling exercises (P < 0.001 for both). As a number of false positive steps were registered on the activity monitors during the non-stepping activities, the authors conclude that non-stepping physical activities can result in the false detection of steps. This can negatively affect the quantification of physical activity.

  19. s-Step Krylov Subspace Methods as Bottom Solvers for Geometric Multigrid

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lijewski, Mike [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Almgren, Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Straalen, Brian Van [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carson, Erin [Univ. of California, Berkeley, CA (United States); Knight, Nicholas [Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)

    2014-08-14

    Geometric multigrid solvers within adaptive mesh refinement (AMR) applications often reach a point where further coarsening of the grid becomes impractical as individual subdomain sizes approach unity. At this point the most common solution is to use a bottom solver, such as BiCGStab, to reduce the residual by a fixed factor at the coarsest level. Each iteration of BiCGStab requires multiple global reductions (MPI collectives). As the number of BiCGStab iterations required for convergence grows with problem size, and the time for each collective operation increases with machine scale, bottom solves in large-scale applications can constitute a significant fraction of the overall multigrid solve time. In this paper, we implement, evaluate, and optimize a communication-avoiding s-step formulation of BiCGStab (CABiCGStab for short) as a high-performance, distributed-memory bottom solver for geometric multigrid solvers. This is the first time s-step Krylov subspace methods have been leveraged to improve multigrid bottom solver performance. We use a synthetic benchmark for detailed analysis and integrate the best implementation into BoxLib in order to evaluate the benefit of an s-step Krylov subspace method on the multigrid solves found in the applications LMC and Nyx on up to 32,768 cores on the Cray XE6 at NERSC. Overall, we see bottom solver improvements of up to 4.2x on synthetic problems and up to 2.7x in real applications. This results in as much as a 1.5x improvement in solver performance in real applications.
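
    For context, a plain (non-communication-avoiding) BiCGStab iteration is sketched below; the dot products marked in the comments are the operations that become global MPI reductions at scale, which is what the s-step (CA) reformulation evaluated in the paper reorganizes so that several iterations' worth can be fused. This is a generic textbook BiCGStab on a small dense matrix, not the CABiCGStab variant or the BoxLib implementation.

      # Textbook unpreconditioned BiCGStab; each np.dot below would be a global
      # reduction in a distributed-memory setting.  Illustrative only.
      import numpy as np

      def bicgstab(A, b, tol=1e-8, maxiter=200):
          x = np.zeros_like(b)
          r = b - A @ x
          rhat = r.copy()
          rho_old, alpha, omega = 1.0, 1.0, 1.0
          v = np.zeros_like(b)
          p = np.zeros_like(b)
          for _ in range(maxiter):
              rho = np.dot(rhat, r)                    # global reduction
              beta = (rho / rho_old) * (alpha / omega)
              p = r + beta * (p - omega * v)
              v = A @ p
              alpha = rho / np.dot(rhat, v)            # global reduction
              s = r - alpha * v
              t = A @ s
              omega = np.dot(t, s) / np.dot(t, t)      # two global reductions
              x = x + alpha * p + omega * s
              r = s - omega * t
              if np.linalg.norm(r) < tol * np.linalg.norm(b):   # another reduction
                  break
              rho_old = rho
          return x

      A = np.diag(np.arange(1.0, 6.0))
      b = np.ones(5)
      print(bicgstab(A, b))    # expect approximately [1, 1/2, 1/3, 1/4, 1/5]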

  20. Microsoft® SQL Server® 2008 MDX Step by Step

    CERN Document Server

    Smith, Bryan; Consulting, Hitachi

    2009-01-01

    Teach yourself the Multidimensional Expressions (MDX) query language, one step at a time. With this practical, learn-by-doing tutorial, you'll build the core techniques for using MDX with Analysis Services to deliver high-performance business intelligence solutions. Discover how to: construct and execute MDX queries; work with tuples, sets, and expressions; build complex sets to retrieve the exact data users need; perform aggregation functions and navigate data hierarchies; assemble time-based business metrics; customize an Analysis Services cube through the MDX script; implement dynamic security to cont...

  1. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet to largely avoid redundant frequency information to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and step-length formulas on the multiscale FWI through several numerical tests. The investigations of up to eight versions of the nonlinear CG method with and without Gaussian white noise make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate result, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method ( Direct), the parabolic search method ( Search), and the two-point quadratic interpolation method ( Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and the complexity of the model. When the initial velocity model deviates far from the real model or the
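
    For readers unfamiliar with the naming, the update parameter is the scalar beta_k in the search-direction recurrence d_{k+1} = -g_{k+1} + beta_k d_k, where g denotes the gradient of the misfit. The standard definitions of the versions mentioned above, quoted from the general nonlinear-CG literature in our own notation (with y_k = g_{k+1} - g_k), are:

      \beta_k^{HS}  = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k}, \qquad
      \beta_k^{PRP} = \frac{g_{k+1}^{T} y_k}{g_k^{T} g_k}, \qquad
      \beta_k^{CD}  = -\frac{g_{k+1}^{T} g_{k+1}}{d_k^{T} g_k}, \qquad
      \beta_k^{DY}  = \frac{g_{k+1}^{T} g_{k+1}}{d_k^{T} y_k}.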

  2. Explicit solution of Calderon preconditioned time domain integral equations

    KAUST Repository

    Ulku, Huseyin Arda

    2013-07-01

    An explicit marching on-in-time (MOT) scheme for solving Calderon-preconditioned time domain integral equations is proposed. The scheme uses Rao-Wilton-Glisson and Buffa-Christiansen functions to discretize the domain and range of the integral operators and a PE(CE)m type linear multistep to march on in time. Unlike its implicit counterpart, the proposed explicit solver requires the solution of an MOT system with a Gram matrix that is sparse and well-conditioned independent of the time step size. Numerical results demonstrate that the explicit solver maintains its accuracy and stability even when the time step size is chosen as large as that typically used by an implicit solver. © 2013 IEEE.

  3. Implementation of a variable-step integration technique for nonlinear structural dynamic analysis

    International Nuclear Information System (INIS)

    Underwood, P.; Park, K.C.

    1977-01-01

    The paper presents the implementation of a recently developed unconditionally stable implicit time integration method into a production computer code for the transient response analysis of nonlinear structural dynamic systems. The time integrator is packaged with two significant features: a variable step size that is automatically determined, and this is accomplished without additional matrix refactorizations. The equations of motion solved by the time integrator must be cast in the pseudo-force form, and this provides the mechanism for controlling the step size. Step size control is accomplished by extrapolating the pseudo-force to the next time (the predicted pseudo-force), then performing the integration step and then recomputing the pseudo-force based on the current solution (the corrected pseudo-force); from this data an error norm is constructed, the value of which determines the step size for the next step. To avoid refactoring the required matrix with each step size change, a matrix scaling technique is employed, which allows step sizes to change by a factor of 100 without refactoring. If during a computer run the integrator determines it can run with a step size larger than 100 times the original minimum step size, the matrix is refactored to take advantage of the larger step size. The strategies for effecting these features are discussed in detail. (Auth.)
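
    A schematic of the step-size control loop described above is given below; the advance() and pseudo_force() callables, the error norm and all tolerances are placeholders of ours, not the production-code routines, so this is only a sketch of the predictor/corrector logic.

      # Schematic pseudo-force step-size control: extrapolate the pseudo-force,
      # take the step, recompute the force from the new state, and let the
      # predictor/corrector mismatch set the next step size.
      import numpy as np

      def controlled_step(u, f_prev, f_curr, dt, advance, pseudo_force,
                          tol=1e-3, safety=0.8, grow=2.0, shrink=0.5):
          f_pred = 2.0 * f_curr - f_prev                  # predicted pseudo-force (extrapolation)
          u_new = advance(u, f_pred, dt)                  # implicit step using the predicted force
          f_corr = pseudo_force(u_new)                    # corrected pseudo-force from the new state
          err = np.linalg.norm(f_corr - f_pred) / max(np.linalg.norm(f_corr), 1e-30)
          factor = safety * (tol / max(err, 1e-30)) ** 0.5
          dt_next = dt * min(grow, max(shrink, factor))   # bounded change; matrix rescaled, not refactored
          return u_new, f_corr, dt_next, err <= tol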

  4. Step-Up DC-DC converters

    DEFF Research Database (Denmark)

    Forouzesh, Mojtaba; Siwakoti, Yam P.; Gorji, Saman A.

    2017-01-01

    DC-DC converters with voltage boost capability are widely used in a large number of power conversion applications, from fraction-of-volt to tens of thousands of volts at power levels from milliwatts to megawatts. The literature has reported on various voltage-boosting techniques, in which ... Based on the general law and framework of the development of next-generation step-up dc-dc converters, this paper aims to comprehensively review and classify various step-up dc-dc converters based on their characteristics and voltage-boosting techniques. In addition, the advantages and disadvantages of these voltage-boosting techniques and associated converters are discussed in detail. Finally, broad applications of dc-dc converters are presented and summarized with a comparative study of different voltage-boosting techniques.
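
    As a reminder of the baseline that these techniques improve upon (a standard textbook relation, not a result from the paper), an ideal boost converter operating in continuous conduction mode steps the voltage up according to its duty cycle D:

      \frac{V_{out}}{V_{in}} = \frac{1}{1 - D}, \qquad 0 \le D < 1,

    so large gains require duty cycles close to unity, which is one motivation for the alternative voltage-boosting techniques reviewed in the paper.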

  5. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    OpenAIRE

    Tudor-Locke, Catrine; Craig, Cora L; Cameron, Christine; Griffiths, Joseph M

    2011-01-01

    Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents...

  6. The LOFT (Large Observatory for X-ray Timing) background simulations

    DEFF Research Database (Denmark)

    Campana, R.; Feroci, M.; Del Monte, E.

    2012-01-01

    The Large Observatory For X-ray Timing (LOFT) is an innovative medium-class mission selected for an assessment phase in the framework of the ESA M3 Cosmic Vision call. LOFT is intended to answer fundamental questions about the behavior of matter in the very strong gravitational and magnetic fields...

  7. Real-time gigabit DMT transmission over plastic optical fibre

    NARCIS (Netherlands)

    Lee, S.C.J.; Breyer, F.; Cárdenas, D.; Randel, S.; Koonen, A.M.J.

    2009-01-01

    For the first time, a real-time 1.25 Gbit/s discrete multitone (DMT) transmitter implemented in a field-programmable gate array is demonstrated for use in low-cost, standard 1 mm step-index plastic optical fibre applications based on a commercial resonant-cavity LED and a large-diameter

  8. Sex ratio and time to pregnancy: analysis of four large European population surveys

    DEFF Research Database (Denmark)

    Joffe, Mike; Bennett, James; Best, Nicky

    2007-01-01

    To test whether the secondary sex ratio (proportion of male births) is associated with time to pregnancy, a marker of fertility. Design Analysis of four large population surveys. Setting Denmark and the United Kingdom. Participants 49 506 pregnancies.

  9. Extraction of Human Stepping Pattern Using Acceleration Sensors

    Directory of Open Access Journals (Sweden)

    Toyohira Takayuki

    2017-01-01

    Full Text Available Gait analysis plays an important role in characterizing individuals and their conditions, and gait analysis systems have been developed using various devices and instruments. However, most systems do not capture the synchronous stepping actions of the right and left feet. To obtain a precise gait pattern, a synchronous walking sensing system was developed, in which a pair of acceleration and angular velocity sensors are attached to the left and right shoes of a walking person and their data are transmitted to a PC through a wireless channel. Walking data from 19 persons aged 14 to 20 were acquired for walking analysis. Stepping time diagrams are extracted from the acquired data of right- and left-foot actions of stepping off and on the ground; the time diagrams distinguish between an ordinary person and a person injured on the left leg, and the stepping recovery process of the injured person is shown. Synchronous sensing of the stepping action between the right foot and the left foot contributes to obtaining precise stepping patterns.
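
    The record does not spell out the step-extraction algorithm, so the sketch below is only a generic illustration: detect step events as upward threshold crossings of the smoothed acceleration magnitude from one foot-mounted sensor; pairing right- and left-foot event times then yields a stepping time diagram. The sampling rate, threshold and smoothing window are assumed values.

      # Generic threshold-crossing step detector for one foot-mounted accelerometer.
      # Sampling rate, threshold and smoothing window are assumed, not from the paper.
      import numpy as np

      def detect_steps(acc_xyz, fs=100.0, threshold=1.5, window=0.05):
          """acc_xyz: (N, 3) acceleration in g; returns step event times in seconds."""
          mag = np.linalg.norm(acc_xyz, axis=1)
          n = max(1, int(window * fs))
          smooth = np.convolve(mag, np.ones(n) / n, mode="same")   # moving-average filter
          above = smooth > threshold
          onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1      # upward crossings
          return onsets / fs

      # Synthetic example: two impact-like bursts riding on 1 g of gravity.
      t = np.arange(0.0, 2.0, 0.01)
      burst = np.where((np.abs(t - 0.5) < 0.05) | (np.abs(t - 1.2) < 0.05), 1.0, 0.0)
      acc = np.column_stack([np.zeros_like(t), np.zeros_like(t), 1.0 + burst])
      print(detect_steps(acc))   # roughly one event near 0.45 s and one near 1.15 s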

  10. 2-Step IMAT and 2-Step IMRT in three dimensions

    International Nuclear Information System (INIS)

    Bratengeier, Klaus

    2005-01-01

    In two dimensions, 2-Step Intensity Modulated Arc Therapy (2-Step IMAT) and 2-Step Intensity Modulated Radiation Therapy (IMRT) were shown to be powerful methods for the optimization of plans with organs at risk (OAR) (partially) surrounded by a target volume (PTV). In three dimensions, some additional boundary conditions have to be considered to establish 2-Step IMAT as an optimization method. A further aim was to create rules for ad hoc adaptations of an IMRT plan to a daily changing PTV-OAR constellation. As a test model, a cylindrically symmetric PTV-OAR combination was used. The centrally placed OAR can adapt arbitrary diameters with different gap widths toward the PTV. Along the rotation axis the OAR diameter can vary, the OAR can even vanish at some axis positions, leaving a circular PTV. The width and weight of the second segment were the free parameters to optimize. The objective function f to minimize was the root of the integral of the squared difference of the dose in the target volume and a reference dose. For the problem, two local minima exist. Therefore, as a secondary criteria, the magnitude of hot and cold spots were taken into account. As a result, the solution with a larger segment width was recommended. From plane to plane for varying radii of PTV and OAR and for different gaps between them, different sets of weights and widths were optimal. Because only one weight for one segment shall be used for all planes (respectively leaf pairs), a strategy for complex three-dimensional (3-D) cases was established to choose a global weight. In a second step, a suitable segment width was chosen, minimizing f for this global weight. The concept was demonstrated in a planning study for a cylindrically symmetric example with a large range of different radii of an OAR along the patient axis. The method is discussed for some classes of tumor/organ at risk combinations. Noncylindrically symmetric cases were treated exemplarily. The product of width and weight of
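
    Written out, the objective function described above is (using D(r) for the delivered dose, D_ref for the reference dose and V_PTV for the target volume; the symbols are ours and may differ from the paper's notation):

      f = \sqrt{ \int_{V_{PTV}} \bigl( D(\mathbf{r}) - D_{ref} \bigr)^{2} \, dV }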

  11. Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.

    Science.gov (United States)

    Rogers, Mark W; Mille, Marie-Laure

    2016-08-15

    Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  12. STEP - Product Model Data Sharing and Exchange

    DEFF Research Database (Denmark)

    Kroszynski, Uri

    1998-01-01

    During the last fifteen years, a very large effort to standardize the product models employed in product design, manufacturing and other life-cycle phases has been undertaken. This effort has the acronym STEP, and resulted in the International Standard ISO-10303 "Industrial Automation Systems...... - Product Data Representation and Exchange", featuring at present some 30 released parts, and growing continuously. Many of the parts are Application Protocols (AP). This article presents an overview of STEP, based upon years of involvement in three ESPRIT projects, which contributed to the development...

  13. The step complexity measure for emergency operating procedures: measure verification

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea; Ha, Jaejoo; Park, Changkue

    2002-01-01

    In complex systems, such as nuclear power plants (NPPs) or airplane control systems, human errors play a major role in many accidents. Therefore, to prevent an occurrence of accidents or to ensure system safety, extensive effort has been made to identify significant factors that can cause human errors. According to related studies, written manuals or operating procedures are revealed as one of the most important factors, and the understandability is pointed out as one of the major reasons for procedure-related human errors. Many qualitative checklists are suggested to evaluate emergency operating procedures (EOPs) of NPPs. However, since qualitative evaluations using checklists have some drawbacks, a quantitative measure that can quantify the complexity of EOPs is very necessary to compensate for them. In order to quantify the complexity of steps included in EOPs, Park et al. suggested the step complexity (SC) measure. In addition, to ascertain the appropriateness of the SC measure, averaged step performance time data obtained from emergency training records for the loss of coolant accident and the excess steam dump event were compared with estimated SC scores. Although averaged step performance time data show good correlation with estimated SC scores, conclusions for some important issues that have to be clarified to ensure the appropriateness of the SC measure were not properly drawn because of a lack of backup data. In this paper, to clarify remaining issues, additional activities to verify the appropriateness of the SC measure are performed using averaged step performance time data obtained from emergency training records. The total number of available records is 36, and the training scenarios are the steam generator tube rupture and the loss of all feedwater, with 18 records for each scenario. From these emergency training records, averaged step performance time data for 30 steps are retrieved. As a result, the SC measure shows statistically meaningful
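
    The verification described above amounts to correlating estimated SC scores with averaged step performance times. A minimal sketch of such a check is shown below; the numbers are made up for illustration and are not the training-record data.

      # Correlating step complexity (SC) scores with averaged step performance times.
      # Hypothetical values only; not the emergency-training-record data.
      import numpy as np
      from scipy import stats

      sc_score  = np.array([2.1, 2.8, 3.0, 3.6, 4.2, 4.9, 5.3])          # hypothetical SC scores
      perf_time = np.array([18.0, 25.0, 24.0, 33.0, 41.0, 47.0, 55.0])   # hypothetical mean times (s)

      r, p = stats.pearsonr(sc_score, perf_time)
      fit = stats.linregress(sc_score, perf_time)
      print(f"Pearson r = {r:.2f} (p = {p:.3g}); time ~ {fit.slope:.1f} * SC + {fit.intercept:.1f}")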

  14. Focal cryotherapy: step by step technique description

    Directory of Open Access Journals (Sweden)

    Cristina Redondo

    Full Text Available ABSTRACT Introduction and objective: Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. Materials and methods: We present the case of a 68 year-old man with localized PCa in the anterior aspect of the prostate. Results: The procedure is performed under general anesthesia, with the patient in the lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and a urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors under ultrasound guidance. A cystoscopy confirms the correct positioning of the needles, and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated one time. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70 months of follow-up, incontinence rates between 0-3.6% and erectile dysfunction rates between 0-42% (1–5). Conclusions: Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment.

  15. Improving stability of stabilized and multiscale formulations in flow simulations at small time steps

    KAUST Repository

    Hsu, Ming-Chen

    2010-02-01

    The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395. © 2009 Elsevier B.V. All rights reserved.

  16. Stepping movement analysis of control rod drive mechanism

    International Nuclear Information System (INIS)

    Xu Yantao; Zu Hongbiao

    2013-01-01

    Background: The control rod drive mechanism (CRDM) is one of the important safety-related equipment items for nuclear power plants. Purpose: The operating parameters of the stepping movement, including lifting loads, step distance and step velocity, are all critical design targets. Methods: FEA and numerical simulation are used to analyze the stepping movement separately. Results: The motion equations of the movable magnet in the stepping movement are established by load analysis. Gravitation, magnetic force, fluid resistance and spring force are all considered in the load analysis. The operating parameters of the stepping movement are given. Conclusions: The results, including time-history curves of force, speed, etc., can be used directly in the design of the CRDM. (authors)

  17. Large deviations of a long-time average in the Ehrenfest urn model

    Science.gov (United States)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN. For a long observation time a Donsker–Varadhan large deviation principle holds, with a rate function that also depends on additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results in the large-N limit. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the large-deviation probability.
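
    To make the quantity being studied concrete, here is a minimal Python sketch of the non-interacting continuous-time EUM: each ball jumps at unit rate to a uniformly chosen other urn, and the time average of the occupancy fraction of one urn is accumulated along a Gillespie-style trajectory. The parameter values, the unit jump rate and the sampling loop are illustrative assumptions, not the authors' setup.

      import numpy as np

      def time_averaged_occupancy(K=2, N=20, T=50.0, rng=None):
          """One continuous-time EUM trajectory (non-interacting balls):
          each ball jumps at unit rate to a uniformly chosen other urn.
          Returns the time average of n_1(t)/N over [0, T]."""
          rng = rng or np.random.default_rng()
          urn = rng.integers(K, size=N)          # urn index of each ball
          t, acc = 0.0, 0.0
          n1 = int(np.count_nonzero(urn == 0))
          while t < T:
              dt = rng.exponential(1.0 / N)      # total jump rate is N
              dt = min(dt, T - t)
              acc += n1 * dt                      # state is constant over this interval
              t += dt
              if t >= T:
                  break
              b = rng.integers(N)                # ball that jumps
              new = (urn[b] + 1 + rng.integers(K - 1)) % K   # any other urn, uniformly
              n1 += int(new == 0) - int(urn[b] == 0)
              urn[b] = new
          return acc / (T * N)

      rng = np.random.default_rng(1)
      samples = np.array([time_averaged_occupancy(rng=rng) for _ in range(1000)])
      print("mean a =", samples.mean(), "  P(a > 0.6) ~", np.mean(samples > 0.6))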

  18. The Effect of Stepped Austempering on Phase Composition and Mechanical Properties of Nanostructured X37CrMoV5-1 Steel

    Directory of Open Access Journals (Sweden)

    Marciniak S.

    2015-04-01

    Full Text Available This paper presents the results of studies of X37CrMoV5-1 steel subjected to quenching processes with a one-step and a two-step isothermal annealing. TEM observation revealed that the steel after the one-step treatment is composed of carbide-free bainite with ferrite plates of nanometric thickness and a high volume fraction of retained austenite in the form of thin layers or large blocks. In order to improve the strength parameters, an attempt was made to reduce the austenite content by use of quenching with the two-step isothermal annealing. The temperature and time of each step were designed on the basis of dilatometric measurements. It was shown that the two-step heat treatment increased the bainitic ferrite content and improved the steel's strength with no loss of ductility.

  19. Avoid the tsunami of the Dirac sea in the imaginary time step method

    International Nuclear Information System (INIS)

    Zhang, Ying; Liang, Haozhao; Meng, Jie

    2010-01-01

    The discrete single-particle spectra in both the Fermi and Dirac sea have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation after avoiding the "tsunami" of the Dirac sea, i.e. the diving behavior of the single-particle level into the Dirac sea in the direct application of the ITS method for the Dirac equation. It is found that by the transform from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from the positive to the negative infinity, can be separately obtained by the ITS evolution in either the Fermi sea or the Dirac sea. Identical results with those in the conventional shooting method have been obtained via the ITS evolution for the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels the doubts on the ITS method in the relativistic system. (author)

  20. Webinar Presentation: Environmental Exposures and Health Risks in California Child Care Facilities: First Steps to Improve Environmental Health where Children Spend Time

    Science.gov (United States)

    This presentation, Environmental Exposures and Health Risks in California Child Care Facilities: First Steps to Improve Environmental Health where Children Spend Time, was given at the NIEHS/EPA Children's Centers 2016 Webinar Series: Exposome.

  1. Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2016-01-01

    Highlights: • Discretization of time in coupled MC codes determines the results’ accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time step size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence with no major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in the BGCore code and subsequently used to analyze a series of test cases. The results indicate that a computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
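
    The idea behind the sub-step method can be illustrated with a toy, single-nuclide depletion example: within one MC time step, the reaction rate is re-evaluated at every sub-step as a function of the current nuclide density instead of being frozen at its beginning-of-step value, and no additional transport solutions are needed. This is a hedged sketch only; the constant-absorption-rate renormalization, the nuclear data and the function names below are illustrative assumptions, not the BGCore implementation.

      import numpy as np

      def deplete_frozen_rate(N0, sigma_phi0, dt):
          """Conventional scheme: reaction rate frozen at its beginning-of-step value."""
          return N0 * np.exp(-sigma_phi0 * dt)

      def deplete_substeps(N0, sigma, phi0, dt, n_sub=10):
          """Sub-step scheme: within the MC time step the flux is re-estimated at every
          sub-step so that the absorption rate sigma*phi*N stays at its beginning-of-step
          value (a crude stand-in for a constant-power constraint); no extra transport
          solutions are performed."""
          N = N0
          rate = sigma * phi0 * N0           # quantity held fixed between transport solves
          h = dt / n_sub
          for _ in range(n_sub):
              phi = rate / (sigma * N)       # flux re-estimated from the current density
              N *= np.exp(-sigma * phi * h)  # analytic depletion over the sub-step
          return N

      # illustrative numbers: a strong absorber over a 30-day step
      N0, sigma, phi0, dt = 1.0e21, 1.0e-21, 1.0e14, 30 * 86400.0
      print("frozen-rate end-of-step density:", deplete_frozen_rate(N0, sigma * phi0, dt))
      print("sub-step    end-of-step density:", deplete_substeps(N0, sigma, phi0, dt))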

  2. Vibration amplitude rule study for rotor under large time scale

    International Nuclear Information System (INIS)

    Yang Xuan; Zuo Jianli; Duan Changcheng

    2014-01-01

    The rotor is an important part of the rotating machinery; its vibration performance is one of the important factors affecting the service life. This paper presents both theoretical analyses and experimental demonstrations of the vibration rule of the rotor under large time scales. The rule can be used for the service life estimation of the rotor. (authors)

  3. Performance analysis in stepped solar still for effluent desalination

    Energy Technology Data Exchange (ETDEWEB)

    Velmurugan, V. [Department of Mechanical Engineering, Infant Jesus College of Engineering, Thoothukudi, Tamil Nadu 628 851 (India); Naveen Kumar, K.J.; Noorul Haq, T.; Srithar, K. [Department of Mechanical Engineering, Thiagarajar College of Engineering, Madurai, Tamil Nadu 625 015 (India)

    2009-09-15

    In this work, a stepped solar still and an effluent settling tank are fabricated and tested for desalinating textile effluent. The effluent is purified in an effluent settling tank, in which large and fine solid particles are settled and clarified. The settled effluent is used as raw water in the stepped solar still. For better performance, the stepped solar still consists of 50 trays with two different depths: the first 25 trays are 10 mm deep and the next 25 trays are 5 mm deep. Fins, sponges, pebbles and combinations of these are used to enhance the productivity of the stepped solar still. A maximum increase in productivity of 98% occurs when fins, sponges and pebbles are used together in the basin. Theoretical analysis agrees well with the experimental results. (author)

  4. Comparative analysis of single-step and two-step biodiesel production using supercritical methanol on laboratory-scale

    International Nuclear Information System (INIS)

    Micic, Radoslav D.; Tomić, Milan D.; Kiss, Ferenc E.; Martinovic, Ferenc L.; Simikić, Mirko Ð.; Molnar, Tibor T.

    2016-01-01

    Highlights: • Single-step supercritical transesterification compared to the two-step process. • Two-step process: oil hydrolysis and subsequent supercritical methyl esterification. • Experiments were conducted in a laboratory-scale batch reactor. • Higher biodiesel yields in two-step process at milder reaction conditions. • Two-step process has potential to be cost-competitive with the single-step process. - Abstract: Single-step supercritical transesterification and two-step biodiesel production process consisting of oil hydrolysis and subsequent supercritical methyl esterification were studied and compared. For this purpose, comparative experiments were conducted in a laboratory-scale batch reactor and optimal reaction conditions (temperature, pressure, molar ratio and time) were determined. Results indicate that in comparison to a single-step transesterification, methyl esterification (second step of the two-step process) produces higher biodiesel yields (95 wt% vs. 91 wt%) at lower temperatures (270 °C vs. 350 °C), pressures (8 MPa vs. 12 MPa) and methanol to oil molar ratios (1:20 vs. 1:42). This can be explained by the fact that the reaction system consisting of free fatty acid (FFA) and methanol achieves supercritical condition at milder reaction conditions. Furthermore, the dissolved FFA increases the acidity of supercritical methanol and acts as an acid catalyst that increases the reaction rate. There is a direct correlation between FFA content of the product obtained in hydrolysis and biodiesel yields in methyl esterification. Therefore, the reaction parameters of hydrolysis were optimized to yield the highest FFA content at 12 MPa, 250 °C and 1:20 oil to water molar ratio. Results of direct material and energy costs comparison suggest that the process based on the two-step reaction has the potential to be cost-competitive with the process based on single-step supercritical transesterification. Higher biodiesel yields, similar or lower energy

  5. Biological effect of pulsed dose rate brachytherapy with stepping sources if short half-times of repair are present in tissues

    International Nuclear Information System (INIS)

    Fowler, Jack F.; Limbergen, Erik F.M. van

    1997-01-01

    Purpose: To explore the possible increase of radiation effect in tissues irradiated by pulsed brachytherapy (PDR) for local tissue dose rates between those 'averaged over the whole pulse' and the instantaneous high dose rates close to the dwell positions. Increased effect is more likely for tissues with short half-times of repair of the order of a few minutes, similar to pulse durations. Methods and Materials: Calculations were done assuming the linear quadratic formula for radiation damage, in which only the dose-squared term is subject to exponential repair. The situation with two components of T1/2 is addressed. A constant overall time of 140 h and a constant total dose of 70 Gy were assumed throughout, the continuous low dose rate of 0.5 Gy/h (CLDR) providing the unitary standard effects for each PDR condition. Effects of dose rates ranging from 4 Gy/h to 120 Gy/h (HDR at 2 Gy/min) were studied, covering the gap in an earlier publication. Four schedules were examined: doses per pulse of 0.5, 1, 1.5, and 2 Gy given at repetition frequencies of 1, 2, 3, and 4 h, respectively, each with a range of assumed half-times of repair of 4 min to 1.5 h. Results are presented for late-responding tissues, the differences from CLDR being two or three times greater than for early-responding tissues and most tumors. Results: Curves are presented relating the ratio of increased biological effect (proportional to log cell kill) calculated for PDR relative to CLDR. Ratios as high as 1.5 can be found for large doses per pulse (2 Gy) if the half-time of repair in tissues is as short as a few minutes. The major influences on effect are dose per pulse, half-time of repair in tissue, and, when T1/2 is short, the instantaneous dose rate. Maximum ratios of PDR/CLDR occur when the dose rate is such that the pulse duration is approximately equal to T1/2. As dose rate in the pulse is increased, a plateau of effect is reached, for most values of T1/2, above 10 to 20 Gy/h, which is
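
    The kind of calculation described can be reproduced with the linear-quadratic model and the standard Lea-Catcheside dose-protraction factor G, assuming mono-exponential repair with half-time T1/2. The sketch below compares one PDR schedule against CLDR (70 Gy in 140 h at 0.5 Gy/h) for a late-responding tissue; the alpha/beta value, pulse dose rate and repair half-time are illustrative choices, not necessarily those used by the authors.

      import numpy as np

      def G_constant(mu, T):
          """Lea-Catcheside factor for continuous irradiation at a constant rate over T hours."""
          x = mu * T
          return (2.0 / x) * (1.0 - (1.0 - np.exp(-x)) / x)

      def G_pulses(mu, n_pulses, pulse_dose, pulse_rate, interval):
          """Lea-Catcheside factor for n identical rectangular pulses, one every 'interval' hours.
          Each pulse delivers pulse_dose (Gy) at pulse_rate (Gy/h); mu = ln2 / T_half (1/h)."""
          tau = pulse_dose / pulse_rate                    # pulse duration (h)
          D = n_pulses * pulse_dose
          f = (1.0 - np.exp(-mu * tau)) / mu               # integral of exp(-mu*t) over one pulse
          total = 0.0
          for i in range(n_pulses):
              total += pulse_rate ** 2 / mu * (tau - f)    # within-pulse term (exact)
              for j in range(i):
                  gap = (i - j) * interval                 # cross term, attenuated by repair
                  total += pulse_rate ** 2 * f * f * np.exp(-mu * (gap - tau))
          return 2.0 * total / D ** 2

      def effect(D, G, alpha_beta=3.0):
          """Effect proportional to log cell kill, E = alpha*D + beta*G*D^2, in units of alpha."""
          return D + G * D ** 2 / alpha_beta

      mu = np.log(2.0) / (10.0 / 60.0)                     # repair half-time of 10 min (illustrative)
      D_total = 70.0
      E_cldr = effect(D_total, G_constant(mu, 140.0))      # 0.5 Gy/h for 140 h
      E_pdr = effect(D_total, G_pulses(mu, n_pulses=70, pulse_dose=1.0,
                                       pulse_rate=120.0, interval=2.0))   # 1 Gy every 2 h
      print("PDR/CLDR effect ratio for a late-responding tissue:", round(E_pdr / E_cldr, 3))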

  6. Numerical characterisation of one-step and three-step solar air heating collectors used for cocoa bean solar drying.

    Science.gov (United States)

    Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel; La Madrid, Raúl

    2017-12-01

    In the northern coastal and jungle areas of Peru, cocoa beans are dried using artisan methods, such as direct exposure to sunlight. This traditional process is time intensive, leading to a reduction in productivity and, therefore, delays in delivery times. The present study was intended to numerically characterise the thermal behaviour of three configurations of solar air heating collectors in order to determine which demonstrated the best thermal performance under several controlled operating conditions. For this purpose, a computational fluid dynamics model was developed to describe the simultaneous convective and radiative heat transfer phenomena under several operating conditions. The constructed computational fluid dynamics model was first validated through comparison with data measurements of a one-step solar air heating collector. We then simulated two further three-step solar air heating collectors in order to identify which demonstrated the best thermal performance in terms of outlet air temperature and thermal efficiency. The numerical results show that, under the same solar irradiation area of exposition and operating conditions, the three-step solar air heating collector with the collector plate mounted between the second and third channels was 67% more thermally efficient than the one-step solar air heating collector. This is because the air's exposure to the collector plate surface in this three-step device is twice that in the one-step solar air heating collector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    Science.gov (United States)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
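
    The pass/fail logic can be summarized in a short sketch: the root-mean-square difference between a test configuration and the trusted configuration is compared against the difference produced by a known time-step refinement of the trusted model. Everything below (array shapes, the margin factor, the synthetic data) is an illustrative assumption, not the TSC1.0 implementation.

      import numpy as np

      def ensemble_rmsd(a, b):
          """Root-mean-square difference between two ensembles of short runs.
          a, b: arrays of shape (n_members, n_state_vars) with end-of-run state variables."""
          return np.sqrt(np.mean((a - b) ** 2))

      def tsc_test(test_runs, reference_runs, refined_dt_runs, margin=1.0):
          """FAIL when the test configuration differs from the reference by more than the
          reference model's own time-step sensitivity (scaled by a safety margin)."""
          threshold = margin * ensemble_rmsd(reference_runs, refined_dt_runs)
          diff = ensemble_rmsd(test_runs, reference_runs)
          return ("PASS" if diff <= threshold else "FAIL"), diff, threshold

      # synthetic stand-in for short ensemble simulations
      rng = np.random.default_rng(0)
      ref      = rng.normal(size=(12, 100))                      # trusted configuration
      dt_half  = ref + rng.normal(scale=1e-3, size=ref.shape)    # known time-step sensitivity
      rounding = ref + rng.normal(scale=1e-6, size=ref.shape)    # rounding-level change
      compiler = ref + rng.normal(scale=1e-2, size=ref.shape)    # climate-changing change
      print(tsc_test(rounding, ref, dt_half)[0])                 # expected: PASS
      print(tsc_test(compiler, ref, dt_half)[0])                 # expected: FAIL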

  8. Physiological and cognitive mediators for the association between self-reported depressed mood and impaired choice stepping reaction time in older people.

    NARCIS (Netherlands)

    Kvelde, T.; Pijnappels, M.A.G.M.; Delbaere, K.; Close, J.C.; Lord, S.R.

    2010-01-01

    Background. The aim of the study was to use path analysis to test a theoretical model proposing that the relationship between self-reported depressed mood and choice stepping reaction time (CSRT) is mediated by psychoactive medication use, physiological performance, and cognitive ability.A total of

  9. Steps and dislocations in cubic lyotropic crystals

    International Nuclear Information System (INIS)

    Leroy, S; Pieranski, P

    2006-01-01

    It has been shown recently that lyotropic systems are convenient for studies of faceting, growth or anisotropic surface melting of crystals. All these phenomena imply the active contribution of surface steps and bulk dislocations. We show here that steps can be observed in situ and in real time by means of a new method combining hygroscopy with phase contrast. First results raise interesting issues about the consequences of bicontinuous topology on the structure and dynamical behaviour of steps and dislocations

  10. Two-step rapid sulfur capture. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The primary goal of this program was to test the technical and economic feasibility of a novel dry sorbent injection process called the Two-Step Rapid Sulfur Capture process for several advanced coal utilization systems. The Two-Step Rapid Sulfur Capture process consists of limestone activation in a high temperature auxiliary burner for short times followed by sorbent quenching in a lower temperature, sulfur-containing coal combustion gas. The Two-Step Rapid Sulfur Capture process is based on the Non-Equilibrium Sulfur Capture process developed by the Energy Technology Office of Textron Defense Systems (ETO/TDS). Based on the Non-Equilibrium Sulfur Capture studies, the range of conditions for optimum sorbent activation was thought to be: activation temperature > 2,200 K for activation times in the range of 10-30 ms. Therefore, the aim of the Two-Step process is to create a very active sorbent (under conditions similar to the bomb reactor) and complete the sulfur reaction under thermodynamically favorable conditions. A flow facility was designed and assembled to simulate the temperature, time, stoichiometry, and sulfur gas concentration prevalent in advanced coal utilization systems such as gasifiers, fluidized bed combustors, mixed-metal oxide desulfurization systems, diesel engines, and gas turbines.

  11. Constructing an exposure chart: step by step (based on standard procedures)

    International Nuclear Information System (INIS)

    David, Jocelyn L; Cansino, Percedita T.; Taguibao, Angileo P.

    2000-01-01

    An exposure chart is very important in conducting radiographic inspection of materials. By using an accurate exposure chart, an inspector is able to avoid trial-and-error determination of the correct time to expose a specimen, thereby producing a radiograph that has an acceptable density based on a standard. The chart gives the following information: x-ray machine model and brand, distance of the x-ray tube from the film, type and thickness of intensifying screens, film type, radiograph density, and film processing conditions. Methods for preparing an exposure chart are available in existing radiographic testing manuals. These methods are presented as step by step procedures covering the actual laboratory set-up, data gathering, computations, and transformation of the derived data into a characteristic curve and an exposure chart.
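
    Once such a chart exists, reading an exposure off it is straightforward: exposure versus thickness is roughly linear on a semi-log plot, so intermediate thicknesses can be handled by log-linear interpolation. The chart values, tube current and film parameters in the sketch below are made up for illustration and are not taken from the paper.

      import numpy as np

      # hypothetical exposure-chart points for one kV setting, film type and target density:
      # steel thickness (mm) -> exposure (mA·min)
      thickness_mm = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
      exposure_mA_min = np.array([2.0, 4.5, 10.0, 22.0, 50.0])

      def exposure_for(t_mm):
          """Log-linear interpolation of the exposure chart."""
          return float(np.exp(np.interp(t_mm, thickness_mm, np.log(exposure_mA_min))))

      current_mA = 5.0
      t = 12.0                                    # specimen thickness in mm
      E = exposure_for(t)                         # required exposure in mA·min
      print(f"exposure {E:.1f} mA·min -> {60 * E / current_mA:.0f} s at {current_mA} mA")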

  12. A two-step lyssavirus real-time polymerase chain reaction using degenerate primers with superior sensitivity to the fluorescent antigen test.

    Science.gov (United States)

    Suin, Vanessa; Nazé, Florence; Francart, Aurélie; Lamoral, Sophie; De Craeye, Stéphane; Kalai, Michael; Van Gucht, Steven

    2014-01-01

    A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% of degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤ 1 50% tissue culture infectious dose. The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible.

  13. A training approach to improve stepping automaticity while dual-tasking in Parkinson's disease: A prospective pilot study.

    Science.gov (United States)

    Chomiak, Taylor; Watts, Alexander; Meyer, Nicole; Pereira, Fernando V; Hu, Bin

    2017-02-01

    Deficits in motor movement automaticity in Parkinson's disease (PD), especially during multitasking, are early and consistent hallmarks of cognitive function decline, which increases fall risk and reduces quality of life. This study aimed to test the feasibility and potential efficacy of a wearable sensor-enabled technological platform designed for an in-home music-contingent stepping-in-place (SIP) training program to improve step automaticity during dual-tasking (DT). This was a 4-week prospective intervention pilot study. The intervention uses a sensor system and algorithm that run on an iPod Touch and calculate step height (SH) in real time. These measurements were then used to trigger auditory (treatment group, music; control group, radio podcast) playback in real time through wireless headphones upon maintenance of repeated large-amplitude stepping. With small steps or shuffling, auditory playback stops, thus allowing participants to use anticipatory motor control to regain positive feedback. Eleven participants were recruited from an ongoing trial (Trial Number: ISRCTN06023392). Fear of falling (FES-I), general cognitive functioning (MoCA), self-reported freezing of gait (FOG-Q), and DT step automaticity were evaluated. While we found no significant effect of training on FES-I, MoCA, or FOG-Q, we did observe a significant group (music vs podcast) by training interaction in DT step automaticity, supporting the potential of this training approach to increase motor automaticity for people living with PD. The training approach described here can be implemented at home to meet the growing demand for self-management of symptoms by patients.
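
    The music-contingent feedback loop can be sketched as follows: playback continues only while the last few detected steps exceed a target step height, and it pauses on small steps or shuffling. The threshold, window length, class names and the toy audio player are placeholders; the study's actual sensing and triggering algorithm is not reproduced here.

      from collections import deque

      class MusicContingentTrainer:
          """Toy feedback loop: keep audio playing while recent step heights stay large."""

          def __init__(self, player, height_threshold_cm=30.0, window=3):
              self.player = player                      # any object with play()/pause()
              self.threshold = height_threshold_cm      # placeholder target step height
              self.recent = deque(maxlen=window)        # last few step heights

          def on_step(self, step_height_cm):
              """Called by the sensor pipeline each time a step is detected."""
              self.recent.append(step_height_cm)
              if len(self.recent) == self.recent.maxlen and \
                 all(h >= self.threshold for h in self.recent):
                  self.player.play()                    # reward sustained large steps
              else:
                  self.player.pause()                   # small steps/shuffling stop playback

      class PrintPlayer:
          def play(self):  print("music on")
          def pause(self): print("music off")

      trainer = MusicContingentTrainer(PrintPlayer())
      for h in [35, 34, 36, 20, 33, 35, 36]:            # simulated step heights (cm)
          trainer.on_step(h)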

  14. Dynamic balance and stepping versus tai chi training to improve balance and stepping in at-risk older adults.

    Science.gov (United States)

    Nnodim, Joseph O; Strasburg, Debra; Nabozny, Martina; Nyquist, Linda; Galecki, Andrzej; Chen, Shu; Alexander, Neil B

    2006-12-01

    To compare the effect of two 10-week balance training programs, Combined Balance and Step Training (CBST) versus tai chi (TC), on balance and stepping measures. Prospective intervention trial. Local senior centers and congregate housing facilities. Adults aged 65 and older with at least mild impairment in the ability to perform unipedal stance and tandem walk. Participants were allocated to TC (n = 107, mean age 78) or CBST, an intervention focused on improving dynamic balance and stepping (n = 106, mean age 78). At baseline and 10 weeks, participants were tested on their static balance (Unipedal Stance and Tandem Stance (TS)), stepping (Maximum Step Length, Rapid Step Test), and Timed Up and Go (TUG). Performance improved more with CBST than with TC, by 5% to 10% for the stepping tests (Maximum Step Length and Rapid Step Test) and 9% for TUG. The improvement in TUG represented an improvement of more than 1 second. Greater improvements were also seen in static balance ability (in TS) with CBST than with TC. Of the two training programs, variants of which have each been shown to reduce falls, CBST results in modest improvements in balance, stepping, and functional mobility versus TC over a 10-week period. Future research should include a prospective comparison of fall rates in response to these two balance training programs.

  15. Large lateral photovoltaic effect with ultrafast relaxation time in SnSe/Si junction

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xianjie; Zhao, Xiaofeng; Hu, Chang; Zhang, Yang; Song, Bingqian; Zhang, Lingli; Liu, Weilong; Lv, Zhe; Zhang, Yu; Sui, Yu, E-mail: suiyu@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Tang, Jinke [Department of Physics and Astronomy, University of Wyoming, Laramie, Wyoming 82071 (United States); Song, Bo, E-mail: songbo@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Academy of Fundamental and Interdisciplinary Sciences, Harbin Institute of Technology, Harbin 150001 (China)

    2016-07-11

    In this paper, we report a large lateral photovoltaic effect (LPE) with ultrafast relaxation time in SnSe/p-Si junctions. The LPE shows a linear dependence on the position of the laser spot, and the position sensitivity is as high as 250 mV mm{sup −1}. The optical response time and the relaxation time of the LPE are about 100 ns and 2 μs, respectively. The current-voltage curve on the surface of the SnSe film indicates the formation of an inversion layer at the SnSe/p-Si interface. Our results clearly suggest that most of the excited-electrons diffuse laterally in the inversion layer at the SnSe/p-Si interface, which results in a large LPE with ultrafast relaxation time. The high positional sensitivity and ultrafast relaxation time of the LPE make the SnSe/p-Si junction a promising candidate for a wide range of optoelectronic applications.

  16. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  17. Variable Neighborhood Search for Parallel Machines Scheduling Problem with Step Deteriorating Jobs

    Directory of Open Access Journals (Sweden)

    Wenming Cheng

    2012-01-01

    Full Text Available In many real scheduling environments, a job processed later needs a longer time than the same job when it starts earlier. This phenomenon, known as scheduling with deteriorating jobs, arises in many industrial applications. In this paper, we study a scheduling problem of minimizing the total completion time on identical parallel machines where the processing time of a job is a step function of its starting time and a deteriorating date that is individual to each job. First, a mixed integer programming model is presented for the problem. Then, a modified weight-combination search algorithm and a variable neighborhood search are employed to yield optimal or near-optimal schedules. To evaluate the performance of the proposed algorithms, computational experiments are performed on randomly generated test instances. Finally, the computational results show that the proposed approaches obtain near-optimal solutions in a reasonable computational time even for large-sized problems.
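
    The objective function is easy to state in code under the usual step-deterioration model: job j takes its base time a_j if it starts no later than its deteriorating date d_j, and a_j + b_j otherwise. The sketch below pairs that objective with a plain relocate-move local search on identical parallel machines; it is a simplified stand-in, not the authors' modified weight-combination search or full variable neighborhood search.

      import random

      def total_completion_time(schedule, jobs):
          """schedule: one list of job indices per machine, in processing order.
          jobs[j] = (a, b, d): base time, step penalty, deteriorating date."""
          total = 0.0
          for machine in schedule:
              t = 0.0
              for j in machine:
                  a, b, d = jobs[j]
                  t += a if t <= d else a + b        # step-deteriorating processing time
                  total += t
          return total

      def local_search(jobs, m, iters=20000, seed=0):
          """Plain relocate-move local search (a stand-in for the VNS of the paper)."""
          rng = random.Random(seed)
          schedule = [list(range(i, len(jobs), m)) for i in range(m)]   # round-robin start
          best = total_completion_time(schedule, jobs)
          for _ in range(iters):
              src = rng.randrange(m)
              if not schedule[src]:
                  continue
              idx = rng.randrange(len(schedule[src]))
              job = schedule[src].pop(idx)
              dst = rng.randrange(m)
              pos = rng.randrange(len(schedule[dst]) + 1)
              schedule[dst].insert(pos, job)
              cost = total_completion_time(schedule, jobs)
              if cost <= best:
                  best = cost
              else:                                   # undo the move if it did not improve
                  schedule[dst].pop(pos)
                  schedule[src].insert(idx, job)
          return best, schedule

      rng = random.Random(42)
      jobs = [(rng.uniform(1, 10), rng.uniform(1, 10), rng.uniform(5, 40)) for _ in range(30)]
      best, _ = local_search(jobs, m=3)
      print("total completion time:", round(best, 2))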

  18. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    Science.gov (United States)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 of the time and about 78% for 19/20 of the time when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.

  19. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  20. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  1. 150 Mb/s wifi transmission over 50m large core diameter step index POF

    NARCIS (Netherlands)

    Shi, Y.; Nieto Munoz, M.; Okonkwo, C.M.; Boom, van den H.P.A.; Tangdiongga, E.; Koonen, A.M.J.

    2011-01-01

    We demonstrate successful transmission of WiFi signals over 50m step-index plastic optical fibre with 1mm core diameter employing an eye-safe resonant cavity light emitting diode and an avalanche photodetector. The EVM performance of 4.1% at signal data rate of 150Mb/s is achieved.

  2. Monte Carlo steps per spin vs. time in the master equation II: Glauber kinetics for the infinite-range Ising model in a static magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Suhk Kun [Chungbuk National University, Chungbuk (Korea, Republic of)

    2006-01-15

    As an extension of our previous work on the relationship between time in Monte Carlo simulation and time in the continuous master equation for the infinite-range Glauber kinetic Ising model in the absence of any magnetic field, we explored the same model in the presence of a static magnetic field. Monte Carlo steps per spin as time in the MC simulations again turns out to be proportional to time in the master equation for the model in relatively larger static magnetic fields at any temperature. At and near the critical point in a relatively smaller magnetic field, the model exhibits a significant finite-size dependence, and the solution to the Suzuki-Kubo differential equation stemming from the master equation needs to be re-scaled to fit the Monte Carlo steps per spin for the system with different numbers of spins.
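
    A quick way to see the correspondence is to integrate the mean-field (Suzuki-Kubo) relaxation equation dm/dt = -m + tanh((Jm + h)/T) and run heat-bath (Glauber) Monte Carlo for the infinite-range model side by side, with one Monte Carlo step per spin taken as one unit of time. The parameter values, system size and the simple Euler integration below are illustrative assumptions, not the paper's setup.

      import numpy as np

      def glauber_mc(N=2000, T=1.5, h=0.3, steps=60, m0=-0.9, seed=1):
          """Infinite-range heat-bath (Glauber) dynamics; 1 MC step per spin = 1 time unit."""
          rng = np.random.default_rng(seed)
          n_up = int(round(N * (1 + m0) / 2))
          m_hist = [2.0 * n_up / N - 1.0]
          for _ in range(steps):
              for _ in range(N):                         # N single-spin updates = 1 MCSS
                  s = 1 if rng.random() < n_up / N else -1   # state of a randomly chosen spin
                  m = 2.0 * n_up / N - 1.0
                  p_up = 0.5 * (1.0 + np.tanh((m + h) / T))  # J = 1 mean field plus h
                  new_s = 1 if rng.random() < p_up else -1
                  n_up += (new_s - s) // 2
              m_hist.append(2.0 * n_up / N - 1.0)
          return np.array(m_hist)

      def suzuki_kubo(T=1.5, h=0.3, t_max=60.0, dt=0.01, m0=-0.9):
          """Mean-field relaxation equation dm/dt = -m + tanh((m + h)/T), with J = 1."""
          ts = np.arange(0.0, t_max + dt, dt)
          m = np.empty_like(ts)
          m[0] = m0
          for k in range(1, len(ts)):
              m[k] = m[k - 1] + dt * (-m[k - 1] + np.tanh((m[k - 1] + h) / T))
          return ts, m

      mc = glauber_mc()
      ts, m_ode = suzuki_kubo()
      for t_probe in (2, 5, 20):
          print(f"t = {t_probe:2d} MCSS:  MC m = {mc[t_probe]:+.3f}"
                f"   Suzuki-Kubo m = {m_ode[int(t_probe / 0.01)]:+.3f}")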

  3. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location

    Science.gov (United States)

    Bancroft, Matthew J.; Day, Brian L.

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body’s momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait. PMID:28066208

  4. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location.

    Science.gov (United States)

    Bancroft, Matthew J; Day, Brian L

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body's momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait.

  5. Large-time asymptotic behaviour of solutions of non-linear Sobolev-type equations

    International Nuclear Information System (INIS)

    Kaikina, Elena I; Naumkin, Pavel I; Shishmarev, Il'ya A

    2009-01-01

    The large-time asymptotic behaviour of solutions of the Cauchy problem is investigated for a non-linear Sobolev-type equation with dissipation. For small initial data the approach taken is based on a detailed analysis of the Green's function of the linear problem and the use of the contraction mapping method. The case of large initial data is also closely considered. In the supercritical case the asymptotic formulae are quasi-linear. The asymptotic behaviour of solutions of a non-linear Sobolev-type equation with a critical non-linearity of the non-convective kind differs by a logarithmic correction term from the behaviour of solutions of the corresponding linear equation. For a critical convective non-linearity, as well as for a subcritical non-convective non-linearity it is proved that the leading term of the asymptotic expression for large times is a self-similar solution. For Sobolev equations with convective non-linearity the asymptotic behaviour of solutions in the subcritical case is the product of a rarefaction wave and a shock wave. Bibliography: 84 titles.

  6. Time delay effects on large-scale MR damper based semi-active control strategies

    International Nuclear Information System (INIS)

    Cha, Y-J; Agrawal, A K; Dyke, S J

    2013-01-01

    This paper presents a detailed investigation on the robustness of large-scale 200 kN MR damper based semi-active control strategies in the presence of time delays in the control system. Although the effects of time delay on stability and performance degradation of an actively controlled system have been investigated extensively by many researchers, degradation in the performance of semi-active systems due to time delay has yet to be investigated. Since semi-active systems are inherently stable, instability problems due to time delay are unlikely to arise. This paper investigates the effects of time delay on the performance of a building with a large-scale MR damper, using numerical simulations of near- and far-field earthquakes. The MR damper is considered to be controlled by four different semi-active control algorithms, namely (i) clipped-optimal control (COC), (ii) decentralized output feedback polynomial control (DOFPC), (iii) Lyapunov control, and (iv) simple-passive control (SPC). It is observed that all controllers except for the COC are significantly robust with respect to time delay. On the other hand, the clipped-optimal controller should be integrated with a compensator to improve the performance in the presence of time delay. (paper)

  7. Time dispersion in large plastic scintillation neutron detector [Paper No.:B3

    International Nuclear Information System (INIS)

    De, A.; Dasgupta, S.S.; Sen, D.

    1993-01-01

    The time dispersion seen by the photomultiplier (PM) tube in a large plastic scintillation neutron detector and the light collection mechanism of the detector have been computed. The results show that this time dispersion (TD) does not necessarily increase with increasing incident neutron energy, in contrast to the usual finding that TD increases with increasing energy. (author). 8 refs., 4 figs

  8. Accessory stimulus modulates executive function during stepping task.

    Science.gov (United States)

    Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo; Nojima, Ippei

    2015-07-01

    When multiple sensory modalities are simultaneously presented, reaction time can be reduced while interference enlarges. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli simultaneously presented with visual imperative stimuli on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, the anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition and the reaction times were shorter in trials with accessory stimuli in all the task conditions. Analyses after division of trials according to whether anticipatory postural adjustment error occurred or not revealed that the reaction times of trials with anticipatory postural adjustment errors were reduced more than those of trials without anticipatory postural adjustment errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering decision threshold and exclusively under spatial incompatibility facilitate automatic response activation. The present findings advance the knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. Copyright © 2015 the American Physiological Society.

  9. Performance of the Seven-step Procedure in Problem-based Hospitality Management Education

    Directory of Open Access Journals (Sweden)

    Wichard Zwaal

    2016-12-01

    Full Text Available The study focuses on the seven-step procedure (SSP) in problem-based learning (PBL). The way students apply the seven-step procedure will help us understand how students work in a problem-based learning curriculum. So far, little is known about how students rate the performance and importance of the different steps, the amount of time they spend on each step and the perceived quality of execution of the procedure. A survey was administered to a sample of 101 students enrolled in a problem-based hospitality management program. Results show that students consider step 6 (Collect additional information outside the group) to be the most important. The highest performance rating is for step two (Define the problem) and the lowest for step four (Draw a systematic inventory of explanations from step three). Step seven is rated low in performance and high in importance, indicating that it needs urgent attention. The average amount of time spent on the seven steps is 133 minutes, with the largest part of the time spent on self-study outside the group (42 minutes). The assessment of the execution of a set of specific guidelines (the Blue Card) did not completely match the overall performance ratings for the seven steps. The SSP could be improved by reducing the number of steps and by paying more attention to group dynamics.

  10. Step Sizes for Strong Stability Preservation with Downwind-Biased Operators

    KAUST Repository

    Ketcheson, David I.

    2011-08-04

    Strong stability preserving (SSP) integrators for initial value ODEs preserve temporal monotonicity solution properties in arbitrary norms. All existing SSP methods, including implicit methods, either require small step sizes or achieve only first order accuracy. It is possible to achieve more relaxed step size restrictions in the discretization of hyperbolic PDEs through the use of both upwind- and downwind-biased semidiscretizations. We investigate bounds on the maximum SSP step size for methods that include negative coefficients and downwind-biased semi-discretizations. We prove that the downwind SSP coefficient for linear multistep methods of order greater than one is at most equal to two, while the downwind SSP coefficient for explicit Runge–Kutta methods is at most equal to the number of stages of the method. In contrast, the maximal downwind SSP coefficient for second order Runge–Kutta methods is shown to be unbounded. We present a class of such methods with arbitrarily large SSP coefficient and demonstrate that they achieve second order accuracy for large CFL number.

  11. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were carried out in three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. The use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
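
    The flavor of a matrix-free, iteration-on-data solver can be conveyed with a generic preconditioned conjugate gradient loop in which the coefficient matrix is never stored and the matrix-vector product streams over the data records. The three-step reordering of that product described in the paper is specific to the mixed-model equations and is not reproduced here; the toy operator, Jacobi preconditioner and data below are illustrative assumptions.

      import numpy as np

      def pcg(matvec, b, M_inv_diag, tol=1e-8, max_iter=1000):
          """Preconditioned conjugate gradients with a matrix-free operator.
          matvec(v)  : returns A @ v, typically by streaming over the data records
          M_inv_diag : inverse of a diagonal (Jacobi) preconditioner."""
          x = np.zeros_like(b)
          r = b - matvec(x)
          z = M_inv_diag * r
          p = z.copy()
          rz = r @ z
          for it in range(max_iter):
              Ap = matvec(p)
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  break
              z = M_inv_diag * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, it

      # toy "iteration on data": A = X'X + lambda*I applied one record at a time
      rng = np.random.default_rng(3)
      records = rng.normal(size=(500, 40))           # 500 data records, 40 effects
      lam = 2.0
      def matvec(v):
          out = lam * v
          for row in records:                        # stream over the data, never form A
              out += row * (row @ v)
          return out

      b = matvec(rng.normal(size=40))                # consistent right-hand side
      M_inv = 1.0 / (lam + np.einsum('ij,ij->j', records, records))   # Jacobi preconditioner
      x, iters = pcg(matvec, b, M_inv)
      print("converged in", iters + 1, "iterations; residual",
            np.linalg.norm(matvec(x) - b))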

  12. Systematic identification and robust control design for uncertain time delay processes

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2011-01-01

    A systematic procedure is proposed to handle the standard process control problem. The considered standard problem involves infrequent step disturbances to processes with large delays and measurement noise. The process is modeled as an ARX model and extended with a suitable noise model in order...... to reject unmeasured step disturbances and unavoidable model errors. This controller is illustrated to perform well for both set point tracking and disturbance rejection on a SISO process example of a furnace, which has a time delay significantly longer than the dominating time constant.
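
    As a minimal illustration of the modeling step, the sketch below fits a first-order ARX model y[t] = -a*y[t-1] + b*u[t-k] + e[t] to simulated step-response data by least squares. The model order, delay, noise level and data are invented for illustration, and the noise-model extension and controller design described in the paper are not included.

      import numpy as np

      def fit_arx_1(u, y, delay):
          """Least-squares fit of y[t] = -a*y[t-1] + b*u[t-delay] + e[t]."""
          t0 = max(1, delay)
          Phi = np.column_stack([-y[t0 - 1:-1], u[t0 - delay:len(u) - delay]])
          theta, *_ = np.linalg.lstsq(Phi, y[t0:], rcond=None)
          return theta                       # [a, b]

      # simulate a delayed first-order process with measurement noise and a step input
      rng = np.random.default_rng(0)
      a_true, b_true, delay = -0.9, 0.5, 8   # y[t] = 0.9*y[t-1] + 0.5*u[t-8]
      u = np.zeros(400); u[50:] = 1.0        # step test signal / disturbance
      y = np.zeros(400)
      for t in range(1, 400):
          y[t] = -a_true * y[t - 1] + b_true * (u[t - delay] if t >= delay else 0.0)
      y += rng.normal(scale=0.02, size=400)
      print("estimated [a, b]:", fit_arx_1(u, y, delay))   # should be close to [-0.9, 0.5]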

  13. Long-Time Plasma Membrane Imaging Based on a Two-Step Synergistic Cell Surface Modification Strategy.

    Science.gov (United States)

    Jia, Hao-Ran; Wang, Hong-Yin; Yu, Zhi-Wu; Chen, Zhan; Wu, Fu-Gen

    2016-03-16

    Long-time stable plasma membrane imaging is difficult due to the fast cellular internalization of fluorescent dyes and the quick detachment of the dyes from the membrane. In this study, we developed a two-step synergistic cell surface modification and labeling strategy to realize long-time plasma membrane imaging. Initially, a multisite plasma membrane anchoring reagent, glycol chitosan-10% PEG2000 cholesterol-10% biotin (abbreviated as "GC-Chol-Biotin"), was incubated with cells to modify the plasma membranes with biotin groups with the assistance of the membrane anchoring ability of cholesterol moieties. Fluorescein isothiocyanate (FITC)-conjugated avidin was then introduced to achieve the fluorescence-labeled plasma membranes based on the supramolecular recognition between biotin and avidin. This strategy achieved stable plasma membrane imaging for up to 8 h without substantial internalization of the dyes, and avoided the quick fluorescence loss caused by the detachment of dyes from plasma membranes. We have also demonstrated that the imaging performance of our staining strategy far surpassed that of current commercial plasma membrane imaging reagents such as DiD and CellMask. Furthermore, the photodynamic damage of plasma membranes caused by a photosensitizer, Chlorin e6 (Ce6), was tracked in real time for 5 h during continuous laser irradiation. Plasma membrane behaviors including cell shrinkage, membrane blebbing, and plasma membrane vesiculation could be dynamically recorded. Therefore, the imaging strategy developed in this work may provide a novel platform to investigate plasma membrane behaviors over a relatively long time period.

  14. Mixed Discretization of the Time Domain MFIE at Low Frequencies

    KAUST Repository

    Ulku, Huseyin Arda; Bogaert, Ignace; Cools, Kristof; Andriulli, Francesco Paolo; Bagci, Hakan

    2017-01-01

    stems from the classical MOT scheme’s failure to predict the correct scaling of the current’s Helmholtz components for large time steps. A recently proposed mixed discretization strategy is used to alleviate the inaccuracy problem by restoring

  15. Robust Detection of Stepping-Stone Attacks

    National Research Council Canada - National Science Library

    He, Ting; Tong, Lang

    2006-01-01

    The detection of encrypted stepping-stone attack is considered. Besides encryption and padding, the attacker is capable of inserting chaff packets and perturbing packet timing and transmission order...

  16. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    Science.gov (United States)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_λ ≥ 7.0 and M_λ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_σ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_λ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_λ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_λ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_λ ≥ 7.0 in each catalog and
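
    The natural-time step of the analysis can be sketched as follows: count the number of small (M ≥ 5.1) events between successive large (M ≥ 7.0) events and fit a Weibull distribution to those counts. The synthetic Gutenberg-Richter catalog and the use of scipy's weibull_min fit below are our own illustrative choices, not the authors' fitting procedure.

      import numpy as np
      from scipy.stats import weibull_min

      def interevent_counts(magnitudes, m_small=5.1, m_large=7.0):
          """Number of M >= m_small events between successive M >= m_large events (natural time)."""
          counts, n, started = [], 0, False
          for m in magnitudes:                 # magnitudes in chronological order
              if m >= m_large:
                  if started:
                      counts.append(n)
                  started, n = True, 0
              elif m >= m_small and started:
                  n += 1
          return np.array(counts)

      # synthetic Gutenberg-Richter catalog (b = 1), purely for illustration
      rng = np.random.default_rng(7)
      mags = 5.1 + rng.exponential(scale=1.0 / np.log(10), size=200000)
      counts = interevent_counts(mags)
      beta, loc, scale = weibull_min.fit(counts[counts > 0], floc=0)
      print(f"{len(counts)} interevent counts, Weibull shape beta = {beta:.2f}")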

  17. Compact Two-step Laser Time-of-Flight Mass Spectrometer for in Situ Analyses of Aromatic Organics on Planetary Missions

    Science.gov (United States)

    Getty, Stephanie; Brickerhoff, William; Cornish, Timothy; Ecelberger, Scott; Floyd, Melissa

    2012-01-01

    RATIONALE A miniature time-of-flight mass spectrometer has been adapted to demonstrate two-step laser desorption-ionization (LDI) in a compact instrument package for enhanced organics detection. Two-step LDI decouples the desorption and ionization processes, relative to traditional laser ionization-desorption, in order to produce low-fragmentation conditions for complex organic analytes. Tuning the UV ionization laser energy allowed control of the degree of fragmentation, which may enable better identification of constituent species. METHODS A reflectron time-of-flight mass spectrometer prototype measuring 20 cm in length was adapted to a two-laser configuration, with IR (1064 nm) desorption followed by UV (266 nm) postionization. A relatively low ion extraction voltage of 5 kV was applied at the sample inlet. Instrument capabilities and performance were demonstrated with analysis of a model polycyclic aromatic hydrocarbon, representing a class of compounds important to the fields of Earth and planetary science. RESULTS L2MS analysis of a model PAH standard, pyrene, has been demonstrated, including parent mass identification and the onset of tunable fragmentation as a function of ionizing laser energy. Mass resolution m/Δm = 380 at full width at half-maximum was achieved, which is notable for gas-phase ionization of desorbed neutrals in a highly compact mass analyzer. CONCLUSIONS Achieving two-step laser mass spectrometry (L2MS) in a highly miniature instrument enables a powerful approach to the detection and characterization of aromatic organics in remote terrestrial and planetary applications. Tunable detection of parent and fragment ions with high mass resolution, diagnostic of molecular structure, is possible on such a compact L2MS instrument. Selectivity of L2MS against low-mass inorganic salt interferences is a key advantage when working with unprocessed, natural samples, and a mechanism for the observed selectivity is presented.

  18. Incipient multiple fault diagnosis in real time with applications to large-scale systems

    International Nuclear Information System (INIS)

    Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.

    1994-01-01

By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed so as to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results and an explanation capability for the diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leaks, breaks, or throttling. The method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and its result is reported to show satisfactory performance of the method for incipient multi-fault diagnosis of such a large-scale system in a real-time manner.
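
To make the SDG idea concrete, the toy sketch below propagates a +1/-1 deviation through a small signed directed graph by breadth-first search. The node names and edge signs are hypothetical placeholders, not the pressurizer model used in the paper.

```python
from collections import deque

# Hypothetical signed directed graph: edges[(src, dst)] = +1 or -1
edges = {
    ("heater_power", "pressurizer_pressure"): +1,
    ("spray_flow", "pressurizer_pressure"): -1,
    ("pressurizer_pressure", "relief_valve_flow"): +1,
    ("relief_valve_flow", "pressurizer_level"): -1,
}

def propagate(root, deviation):
    """Breadth-first propagation of a +1/-1 deviation through the SDG."""
    state = {root: deviation}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for (src, dst), sign in edges.items():
            if src == node and dst not in state:
                state[dst] = state[node] * sign
                queue.append(dst)
    return state

print(propagate("heater_power", +1))
# e.g. {'heater_power': 1, 'pressurizer_pressure': 1,
#       'relief_valve_flow': 1, 'pressurizer_level': -1}
```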

  19. EPFM verification by a large scale test

    International Nuclear Information System (INIS)

    Okamura, H.; Yagawa, G.; Hidaka, T.; Sato, M.; Urabe, Y.; Iida, M.

    1993-01-01

A step B test was carried out as one of the elastic plastic fracture mechanics (EPFM) studies in the Japanese PTS integrity research project. In the step B test, a bending load was applied to a large flat specimen under thermal shock, while the tensile load was kept constant during the test. In the previous analysis, the estimated stable crack growth at the deepest point of the crack was 3 times larger than the experimental value. In order to diminish the difference between them from the standpoint of FEM modeling, a more precise FEM mesh was introduced. According to the new analysis, the difference decreased considerably. That is, the stable crack growth evaluation was improved by adopting a precise FEM model near the crack tip, and the difference was of almost the same order as that in the NKS4-1 test analysis by MPA. 8 refs., 17 figs., 5 tabs

  20. Development and validation of a local time stepping-based PaSR solver for combustion and radiation modeling

    DEFF Research Database (Denmark)

    Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad

    2013-01-01

    In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in default assembly i...... library in the edcSimpleFoam solver which was introduced during the 6th OpenFOAM workshop is modified and coupled with the current solver. One of the main amendments made is the integration of soot radiation submodel since this is significant in rich flames where soot particles are formed. The new solver...

  1. A Two-Step Lyssavirus Real-Time Polymerase Chain Reaction Using Degenerate Primers with Superior Sensitivity to the Fluorescent Antigen Test

    Directory of Open Access Journals (Sweden)

    Vanessa Suin

    2014-01-01

Full Text Available A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤1 TCID50 (50% tissue culture infectious dose). The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible.

  2. Robotic-assisted laparoscopic radical nephrectomy using the Da Vinci Si system: how to improve surgeon autonomy. Our step-by-step technique.

    Science.gov (United States)

    Davila, Hugo H; Storey, Raul E; Rose, Marc C

    2016-09-01

    Herein, we describe several steps to improve surgeon autonomy during a Left Robotic-Assisted Laparoscopic Radical Nephrectomy (RALRN), using the Da Vinci Si system. Our kidney cancer program is based on 2 community hospitals. We use the Da Vinci Si system. Access is obtained with the following trocars: Two 8 mm robotic, one 8 mm robotic, bariatric length (arm 3), 15 mm for the assistant and 12 mm for the camera. We use curved monopolar scissors in robotic arm 1, Bipolar Maryland in arm 2, Prograsp Forceps in arm 3, and we alternate throughout the surgery with EndoWrist clip appliers and the vessel sealer. Here, we described three steps and the use of 3 robotic instruments to improve surgeon autonomy. Step 1: the lower pole of the kidney was dissected and this was retracted upwards and laterally. This maneuver was performed using the 3rd robotic arm with the Prograsp Forceps. Step 2: the monopolar scissors was replaced (robotic arm 1) with the robotic EndoWrist clip applier, 10 mm Hem-o-Lok. The renal artery and vein were controlled and transected by the main surgeon. Step 3: the superior, posterolateral dissection and all bleeders were carefully coagulated by the surgeon with the EndoWrist one vessel sealer. We have now performed 15 RALRN following these steps. Our results were: blood loss 300 cc, console time 140 min, operating room time 200 min, anesthesia time 180 min, hospital stay 2.5 days, 1 incisional hernia, pathology: (13) RCC clear cell, (1) chromophobe and (1) papillary type 1. Tumor Stage: (5) T1b, (8) T2a, (2) T2b. We provide a concise, step-by-step technique for radical nephrectomy (RN) using the Da Vinci Si robotic system that may provide more autonomy to the surgeon, while maintaining surgical outcome equivalent to standard laparoscopic RN.

  3. Microsoft® SQL Server® 2008 Step by Step

    CERN Document Server

    Hotek, Mike

    2009-01-01

Teach yourself SQL Server 2008, one step at a time. Get the practical guidance you need to build database solutions that solve real-world business problems. Learn to integrate SQL Server data in your applications, write queries, develop reports, and employ powerful business intelligence systems. Discover how to: install and work with core components and tools; create tables and index structures; manipulate and retrieve data; secure, manage, back up, and recover databases; apply tuning and optimization techniques to generate high-performing database applications; optimize availability through clustering, d

  4. The Satellite Test of the Equivalence Principle (STEP)

    Science.gov (United States)

    2004-01-01

STEP will carry concentric test masses to Earth orbit to test a fundamental assumption underlying Einstein's theory of general relativity: that gravitational mass is equivalent to inertial mass. STEP is a 21st-century version of the test that Galileo is said to have performed by dropping a cannon ball and a musket ball simultaneously from the top of the Leaning Tower of Pisa to compare their accelerations. During the STEP experiment, four pairs of test masses will be falling around the Earth, and their accelerations will be measured by superconducting quantum interference devices (SQUIDs). The extended time sensitivity of the instruments will allow the measurements to be a million times more accurate than those made in modern ground-based tests.

  5. One-step lowrank wave extrapolation

    KAUST Repository

    Sindi, Ghada Atif

    2014-01-01

Wavefield extrapolation is at the heart of modeling, imaging, and full waveform inversion. Spectral methods have gained well-deserved attention due to their dispersion-free solutions and their natural handling of anisotropic media. We propose a modified one-step lowrank wave extrapolation scheme using the Shanks transform in isotropic and anisotropic media. Specifically, we utilize a velocity gradient term to add to the accuracy of the phase approximation function in the spectral implementation. With the higher accuracy, we can utilize larger time steps and make the extrapolation more efficient. Applications to models with strong inhomogeneity and considerable anisotropy demonstrate the utility of the approach.
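
The passage above relies on one-step spectral extrapolation admitting large time steps. The sketch below shows only the plain constant-velocity phase-shift special case of such a one-step update, not the lowrank or Shanks-transform scheme of the authors; the grid spacing, velocity, and Gaussian initial wavefield are illustrative.

```python
import numpy as np

def one_step_extrapolate(p, v, dt, dx):
    """One time step of constant-velocity wavefield extrapolation in the
    wavenumber domain: P(k, t+dt) = P(k, t) * exp(i * v * |k| * dt).
    This is the plain phase-shift special case, not the lowrank scheme."""
    n = p.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # spatial wavenumbers
    P = np.fft.fft(p)
    return np.fft.ifft(P * np.exp(1j * v * np.abs(k) * dt))

# toy example: a Gaussian pulse propagated with a fairly large time step
x = np.linspace(0.0, 2000.0, 512)               # metres
p0 = np.exp(-((x - 1000.0) / 30.0) ** 2).astype(complex)
p1 = one_step_extrapolate(p0, v=2000.0, dt=4e-3, dx=x[1] - x[0])
```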

  6. Detection of SYT-SSX mutant transcripts in formalin-fixed paraffin-embedded sarcoma tissues using one-step reverse transcriptase real-time PCR.

    Science.gov (United States)

    Norlelawati, A T; Mohd Danial, G; Nora, H; Nadia, O; Zatur Rawihah, K; Nor Zamzila, A; Naznin, M

    2016-04-01

    Synovial sarcoma (SS) is a rare cancer and accounts for 5-10% of adult soft tissue sarcomas. Making an accurate diagnosis is difficult due to the overlapping histological features of SS with other types of sarcomas and the non-specific immunohistochemistry profile findings. Molecular testing is thus considered necessary to confirm the diagnosis since more than 90% of SS cases carry the transcript of t(X;18)(p11.2;q11.2). The purpose of this study is to diagnose SS at molecular level by testing for t(X;18) fusion-transcript expression through One-step reverse transcriptase real-time Polymerase Chain Reaction (PCR). Formalin-fixed paraffin-embedded tissue blocks of 23 cases of soft tissue sarcomas, which included 5 and 8 cases reported as SS as the primary diagnosis and differential diagnosis respectively, were retrieved from the Department of Pathology, Tengku Ampuan Afzan Hospital, Kuantan, Pahang. RNA was purified from the tissue block sections and then subjected to One-step reverse transcriptase real-time PCR using sequence specific hydrolysis probes for simultaneous detection of either SYT-SSX1 or SYT-SSX2 fusion transcript. Of the 23 cases, 4 cases were found to be positive for SYT-SSX fusion transcript in which 2 were diagnosed as SS whereas in the 2 other cases, SS was the differential diagnosis. Three cases were excluded due to failure of both amplification assays SYT-SSX and control β-2-microglobulin. The remaining 16 cases were negative for the fusion transcript. This study has shown that the application of One-Step reverse transcriptase real time PCR for the detection SYT-SSX transcript is feasible as an aid in confirming the diagnosis of synovial sarcoma.

  7. The Effect of Phosphoric Acid Pre-etching Times on Bonding Performance and Surface Free Energy with Single-step Self-etch Adhesives.

    Science.gov (United States)

    Tsujimoto, A; Barkmeier, W W; Takamizawa, T; Latta, M A; Miyazaki, M

    2016-01-01

    The purpose of this study was to evaluate the effect of phosphoric acid pre-etching times on shear bond strength (SBS) and surface free energy (SFE) with single-step self-etch adhesives. The three single-step self-etch adhesives used were: 1) Scotchbond Universal Adhesive (3M ESPE), 2) Clearfil tri-S Bond (Kuraray Noritake Dental), and 3) G-Bond Plus (GC). Two no pre-etching groups, 1) untreated enamel and 2) enamel surfaces after ultrasonic cleaning with distilled water for 30 seconds to remove the smear layer, were prepared. There were four pre-etching groups: 1) enamel surfaces were pre-etched with phosphoric acid (Etchant, 3M ESPE) for 3 seconds, 2) enamel surfaces were pre-etched for 5 seconds, 3) enamel surfaces were pre-etched for 10 seconds, and 4) enamel surfaces were pre-etched for 15 seconds. Resin composite was bonded to the treated enamel surface to determine SBS. The SFEs of treated enamel surfaces were determined by measuring the contact angles of three test liquids. Scanning electron microscopy was used to examine the enamel surfaces and enamel-adhesive interface. The specimens with phosphoric acid pre-etching showed significantly higher SBS and SFEs than the specimens without phosphoric acid pre-etching regardless of the adhesive system used. SBS and SFEs did not increase for phosphoric acid pre-etching times over 3 seconds. There were no significant differences in SBS and SFEs between the specimens with and without a smear layer. The data suggest that phosphoric acid pre-etching of ground enamel improves the bonding performance of single-step self-etch adhesives, but these bonding properties do not increase for phosphoric acid pre-etching times over 3 seconds.

  8. Special Properties of Coherence Scanning Interferometers for large Measurement Volumes

    International Nuclear Information System (INIS)

    Bauer, W

    2011-01-01

In contrast to many other optical methods, the uncertainty of a Coherence Scanning Interferometer (CSI) in the vertical direction is independent of the field of view. CSIs are therefore ideal instruments for measuring 3D profiles of larger areas (e.g., 36 × 28 mm²) with high precision. This is advantageous for the determination of form parameters such as flatness, parallelism and step heights within a short time. In addition, using a telecentric beam path allows measurements of deep-lying surfaces (< 70 mm) and the determination of form parameters with large step heights. The lateral and spatial resolution, however, are reduced. In this presentation, different metrological characteristics together with their potential errors are analyzed for large-scale measuring CSIs. These instruments are therefore ideal tools for good/bad selection in quality control. The consequences for practical use in industry and for standardization are discussed using examples of workpieces from automotive suppliers and the steel industry.

  9. Effectiveness of a step-by-step oral recount before a practical simulation of fracture fixation.

    Science.gov (United States)

    Abagge, Marcelo; Uliana, Christiano Saliba; Fischer, Sergei Taggesell; Kojima, Kodi Edson

    2017-10-01

    To evaluate the effectiveness of a step-by-step oral recount by residents before the final execution of a practical exercise simulating a surgical fixation of a radial diaphyseal fracture. The study included 10 residents of orthopaedics and traumatology (four second- year and six first-year residents) divided into two groups with five residents each. All participants initially gathered in a room in which a video was presented demonstrating the practical exercise to be performed. One group (Group A) was referred directly to the practical exercise room. The other group (Group B) attended an extra session before the practical exercise, in which they were invited by instructors to recount all the steps that they would perform during the practical exercise. During this session, the instructors corrected the residents if any errors in the step-by-step recount were identified, and clarified questions from them. After this session, both Groups A and B gathered in a room in which they proceeded to the practical exercise, while being video recorded and evaluated using a 20-point checklist. Group A achieved a 57% accuracy, with results in this group ranging from 7 to 15 points out of a total of a possible 20 points. Group B achieved an 89% accuracy, with results in this group ranging from 15 to 20 points out of 20. An oral step-by-step recount by the residents before the final execution of a practical simulation exercise of surgical fixation of a diaphyseal radial fracture improved the technique and reduced the execution time of the exercise. © 2017 Elsevier Ltd. All rights reserved.

  10. Time-Sliced Perturbation Theory for Large Scale Structure I: General Formalism

    CERN Document Server

    Blas, Diego; Ivanov, Mikhail M.; Sibiryakov, Sergey

    2016-01-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein--de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This pave...

  11. Astronomical sketching a step-by-step introduction

    CERN Document Server

    Handy, Richard; Perez, Jeremy; Rix, Erika; Robbins, Sol

    2007-01-01

    This book presents the amateur with fine examples of astronomical sketches and step-by-step tutorials in each medium, from pencil to computer graphics programs. This unique book can teach almost anyone to create beautiful sketches of celestial objects.

  12. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
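
The abstract mentions estimating an initial event size from peak ground displacement (PGD) through scaling relationships. The sketch below inverts a generic scaling law of the form log10(PGD) = a + b·M + c·M·log10(R) for magnitude; the coefficients are placeholders for illustration only, not the values used with GEONET or the California Real Time Network.

```python
import numpy as np

def magnitude_from_pgd(pgd_cm, dist_km, a=-5.0, b=1.0, c=-0.14):
    """Invert a generic PGD scaling law log10(PGD) = a + b*M + c*M*log10(R)
    for magnitude M. Coefficients a, b, c are illustrative placeholders."""
    # log10(PGD) = a + M * (b + c*log10(R))  =>  M = (log10(PGD) - a) / (b + c*log10(R))
    return (np.log10(pgd_cm) - a) / (b + c * np.log10(dist_km))

# Hypothetical observation: 20 cm of peak displacement at 100 km epicentral distance
print(round(magnitude_from_pgd(pgd_cm=20.0, dist_km=100.0), 2))
```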

  13. A Short Proof of the Large Time Energy Growth for the Boussinesq System

    Science.gov (United States)

    Brandolese, Lorenzo; Mouzouni, Charafeddine

    2017-10-01

We give a direct proof of the fact that the L^p-norms of global solutions of the Boussinesq system in R^3 grow large as t → ∞ for solutions defined on R^+ × R^3. In particular, the kinetic energy blows up as ‖u(t)‖_2^2 ~ ct^{1/2} for large time. This contrasts with the case of the Navier-Stokes equations.

  14. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    Science.gov (United States)

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency with a high possibility of cost-effective fabrication and certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature and short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, a perovskite film with an average domain size of 1 μm was obtained, owing to fast solvent evaporation. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells, and may also be applicable to several other material systems for more widespread practical deployment.

  15. Design and fabrication of a chitosan hydrogel with gradient structures via a step-by-step cross-linking process.

    Science.gov (United States)

    Xu, Yongxiang; Yuan, Shenpo; Han, Jianmin; Lin, Hong; Zhang, Xuehui

    2017-11-15

The development of scaffolds to mimic the gradient structure of natural tissue is an important consideration for effective tissue engineering. In the present study, a physically cross-linked chitosan hydrogel with gradient structures was fabricated via a step-by-step cross-linking process using sodium tripolyphosphate and sodium hydroxide as sequential cross-linkers. Chitosan hydrogels with different structures (single, double, and triple layers) were prepared by modifying the gelling process. The properties of the hydrogels were further adjusted by varying the gelling conditions, such as gelling time, pH, and composition of the cross-linking solution. Slight cytotoxicity was shown in the MTT assay for hydrogels containing uncross-linked chitosan solution, and no cytotoxicity was shown for the other hydrogels. The results suggest that step-by-step cross-linking represents a practicable method to fabricate scaffolds with gradient structures. Copyright © 2017. Published by Elsevier Ltd.

  16. One-step process of hydrothermal and alkaline treatment of wheat straw for improving the enzymatic saccharification.

    Science.gov (United States)

    Sun, Shaolong; Zhang, Lidan; Liu, Fang; Fan, Xiaolin; Sun, Run-Cang

    2018-01-01

To increase the production of bioethanol, a two-step process based on hydrothermal and dilute alkaline treatment was applied to reduce the natural resistance of biomass. However, the process required a large amount of water and a long operation time due to the solid/liquid separation before the alkaline treatment, which decreased the net economic profit of bioethanol production. Therefore, four one-step processes based on the order of hydrothermal and alkaline treatment have been developed to enhance the glucose concentration obtained from wheat straw by enzymatic saccharification. The aim of the present study was to systematically evaluate the effects of the different one-step processes by analyzing the physicochemical properties (composition, structural change, crystallinity, surface morphology, and BET surface area) and enzymatic saccharification of the treated substrates. In this study, hemicelluloses and lignins were removed from wheat straw and the morphologic structures were destroyed to various extents during the four one-step processes, which was favorable for cellulase adsorption on cellulose. A positive correlation was also observed between the crystallinity and the enzymatic saccharification rate of the substrate under the conditions given. The surface area of the substrate was positively related to the concentration of glucose in this study. Compared to the control (3.0 g/L) and the treated substrates (11.2-14.6 g/L) obtained by the other three one-step processes, the substrate treated by the one-step process based on successive hydrothermal and alkaline treatment had a maximum glucose concentration of 18.6 g/L, which was due to the high cellulose content and surface area of the substrate, accompanied by the removal of large amounts of lignins and hemicelluloses. The present study demonstrated that the order of hydrothermal and alkaline treatment had significant effects on the physicochemical properties and enzymatic saccharification of wheat straw. The one-step

  17. Stepped piezoresistive microcantilever designs for biosensors

    International Nuclear Information System (INIS)

    Ansari, Mohd Zahid; Cho, Chongdu; Urban, Gerald

    2012-01-01

    The sensitivity of a piezoresistive microcantilever biosensor strongly depends on its ability to convert the surface stress-induced deflections into large resistance change. To improve the sensitivity, we present stepped microcantilever biosensor designs that show significant resistance change compared with commonly used rectangular designs. The cantilever is made of silicon dioxide with a u-shaped silicon piezoresistor. The surface stress-induced deflections, bimorph deflection, fundamental resonant frequency and self-heating properties of the cantilever are studied using the FEM software. The surface stress-induced deflections are compared against the analytical model derived in this work. Results show that stepped designs have better signal-to-noise ratio than the rectangular ones and cantilevers with l/L between 0.5 and 0.75 are better designs for improving sensitivity. (paper)

  18. One step beyond: Different step-to-step transitions exist during continuous contact brachiation in siamangs

    Directory of Open Access Journals (Sweden)

    Fana Michilsens

    2012-02-01

In brachiation, two main gaits are distinguished: ricochetal brachiation and continuous contact brachiation. During ricochetal brachiation, a flight phase exists and the body centre of mass (bCOM) describes a parabolic trajectory. For continuous contact brachiation, where at least one hand is always in contact with the substrate, we showed in an earlier paper that four step-to-step transition types occur. We referred to these as a ‘point’, a ‘loop’, a ‘backward pendulum’ and a ‘parabolic’ transition. Only the first two transition types have previously been mentioned in the existing literature on gibbon brachiation. In the current study, we used three-dimensional video and force analysis to describe and characterize these four step-to-step transition types. Results show that, although individual preference occurs, the brachiation strides characterized by each transition type are mainly associated with speed. Yet, these four transitions seem to form a continuum rather than four distinct types. Energy recovery and collision fraction are used as estimators of mechanical efficiency of brachiation and, remarkably, these parameters do not differ between strides with different transition types. All strides show high energy recoveries (mean = 70 ± 11.4%) and low collision fractions (mean = 0.2 ± 0.13), regardless of the step-to-step transition type used. We conclude that siamangs have efficient means of modifying locomotor speed during continuous contact brachiation by choosing particular step-to-step transition types, which all minimize collision fraction and enhance energy recovery.

  19. An Improved Split-Step Wavelet Transform Method for Anomalous Radio Wave Propagation Modelling

    Directory of Open Access Journals (Sweden)

    A. Iqbal

    2014-12-01

Full Text Available Anomalous tropospheric propagation caused by the ducting phenomenon is a major problem in wireless communication. Thus, it is important to study the behavior of radio wave propagation in tropospheric ducts. The Parabolic Wave Equation (PWE) method is considered the most reliable way to model anomalous radio wave propagation. In this work, an improved Split-Step Wavelet transform Method (SSWM) is presented to solve the PWE for the modeling of tropospheric propagation over finite and infinite conductive surfaces. A large number of numerical experiments are carried out to validate the performance of the proposed algorithm. The developed algorithm is compared with previously published techniques: the Wavelet Galerkin Method (WGM) and the Split-Step Fourier transform Method (SSFM). A very good agreement is found between the SSWM and the published techniques. It is also observed that the proposed algorithm is about 18 times faster than the WGM and provides more details of propagation effects as compared to the SSFM.
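
For reference, the split-step idea behind the SSFM baseline mentioned above alternates a diffraction step in the transform domain with a refraction (phase-screen) step in the spatial domain. The sketch below is a generic narrow-angle Fourier split-step update, not the wavelet-based SSWM of the paper; the grid, wavenumber, and refractivity profile are placeholders.

```python
import numpy as np

def split_step_fourier(u, dx, dz, k0, m_profile):
    """One range step dz of a standard narrow-angle Fourier split-step
    solution of the parabolic wave equation. 'm_profile' is the refractivity
    term (placeholder), u is the field sampled on a vertical grid."""
    kx = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    # diffraction step in the spectral domain
    u = np.fft.ifft(np.fft.fft(u) * np.exp(-1j * kx**2 * dz / (2.0 * k0)))
    # refraction step (phase screen) in the spatial domain
    u = u * np.exp(1j * k0 * m_profile * dz)
    return u

# toy usage with an illustrative Gaussian initial field and a flat profile
z = np.linspace(0.0, 300.0, 1024)                 # height grid, m
u = np.exp(-((z - 50.0) / 10.0) ** 2).astype(complex)
u = split_step_fourier(u, dx=z[1] - z[0], dz=100.0,
                       k0=2 * np.pi / 0.03, m_profile=np.zeros_like(z))
```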

  20. A single-step method for rapid extraction of total lipids from green microalgae.

    Directory of Open Access Journals (Sweden)

    Martin Axelsson

Full Text Available Microalgae produce a wide range of lipid compounds of potential commercial interest. Total lipid extraction performed by conventional extraction methods, relying on the chloroform-methanol solvent system, is too laborious and time consuming for screening large numbers of samples. In this study, three previous extraction methods devised by Folch et al. (1957), Bligh and Dyer (1959) and Selstam and Öquist (1985) were compared, and a faster single-step procedure was developed for extraction of total lipids from green microalgae. In the single-step procedure, 8 ml of a 2:1 chloroform-methanol (v/v) mixture was added to fresh or frozen microalgal paste or pulverized dry algal biomass contained in a glass centrifuge tube. The biomass was manually suspended by vigorously shaking the tube for a few seconds and 2 ml of a 0.73% NaCl water solution was added. Phase separation was facilitated by 2 min of centrifugation at 350 g and the lower phase was recovered for analysis. An uncharacterized microalgal polyculture and the green microalgae Scenedesmus dimorphus, Selenastrum minutum, and Chlorella protothecoides were subjected to the different extraction methods and various techniques of biomass homogenization. The less labour-intensive single-step procedure presented here allowed simultaneous recovery of total lipid extracts from multiple samples of green microalgae with quantitative yields and fatty acid profiles comparable to those of the previous methods. While the single-step procedure is highly correlated in lipid extractability (r² = 0.985) with the previous method of Folch et al. (1957), it allowed at least five times higher sample throughput.

  1. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    Science.gov (United States)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
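
The coupled SDEs described above are typically integrated with a simple stochastic scheme. The sketch below applies Euler-Maruyama to a toy chain of terrace widths with nearest-neighbour relaxation and Gaussian white noise; the linear coupling and constants are placeholders, not the BCF-derived equations or the noise choices analyzed in the paper.

```python
import numpy as np

# Euler-Maruyama integration of a toy system of coupled terrace-width SDEs:
# dw_i = A * (w_{i+1} - 2 w_i + w_{i-1}) dt + sqrt(2 D) dW_i  (periodic chain).
# The linear relaxation term and constants are placeholders, not the BCF model.
rng = np.random.default_rng(1)
n_steps, n_terraces = 20_000, 128
dt, A, D = 1e-3, 1.0, 0.05

w = np.ones(n_terraces)            # start from a uniform vicinal step train
for _ in range(n_steps):
    coupling = np.roll(w, -1) - 2.0 * w + np.roll(w, 1)
    w += A * coupling * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_terraces)

print("terrace-width mean/std:", w.mean().round(3), w.std().round(3))
```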

  2. Application of stepping motor

    International Nuclear Information System (INIS)

    1980-10-01

    This book is divided into three parts, which is about practical using of stepping motor. The first part has six chapters. The contents of the first part are about stepping motor, classification of stepping motor, basic theory og stepping motor, characteristic and basic words, types and characteristic of stepping motor in hybrid type and basic control of stepping motor. The second part deals with application of stepping motor with hardware of stepping motor control, stepping motor control by microcomputer and software of stepping motor control. The last part mentions choice of stepping motor system, examples of stepping motor, measurement of stepping motor and practical cases of application of stepping motor.

  3. Internship guide : Work placements step by step

    NARCIS (Netherlands)

    Haag, Esther

    2013-01-01

    Internship Guide: Work Placements Step by Step has been written from the practical perspective of a placement coordinator. This book addresses the following questions : what problems do students encounter when they start thinking about the jobs their degree programme prepares them for? How do you

  4. Tax-Optimal Step-Up and Imperfect Loss Offset

    Directory of Open Access Journals (Sweden)

    Markus Diller

    2012-05-01

    Full Text Available In the field of mergers and acquisitions, German and international tax law allow for several opportunities to step up a firm's assets, i.e., to revaluate the assets at fair market values. When a step-up is performed the taxpayer recognizes a taxable gain, but also obtains tax benefits in the form of higher future depreciation allowances associated with stepping up the tax base of the assets. This tax-planning problem is well known in taxation literature and can also be applied to firm valuation in the presence of taxation. However, the known models usually assume a perfect loss offset. If this assumption is abandoned, the depreciation allowances may lose value as they become tax effective at a later point in time, or even never if there are not enough cash flows to be offset against. This aspect is especiallyrelevant if future cash flows are assumed to be uncertain. This paper shows that a step-up may be disadvantageous or a firm overvalued if these aspects are not integrated into the basic calculus. Compared to the standard approach, assets should be stepped up only in a few cases and - under specific conditions - at a later point in time. Firm values may be considerably lower under imperfect loss offset.

  5. Incorporating Real-time Earthquake Information into Large Enrollment Natural Disaster Course Learning

    Science.gov (United States)

    Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.

    2010-12-01

    Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials into the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news-cycle where all but the most devastating events are quickly out of the public eye, the shelf life for an event is quite limited. To maximize the learning potential of these events requires that both authoritative information be available and course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, and thus one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation of providing no warning, but where context is critical to student learning. Attempting to implement real-time materials into large enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS’s National Earthquake Information Center (NEIC) to develop efficient means to incorporate their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute current information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground

  6. An adaptive spatio-temporal smoothing model for estimating trends and step changes in disease risk

    OpenAIRE

    Rushworth, Alastair; Lee, Duncan; Sarran, Christophe

    2014-01-01

Statistical models used to estimate the spatio-temporal pattern in disease risk from areal unit data represent the risk surface for each time period with known covariates and a set of spatially smooth random effects. The latter act as a proxy for unmeasured spatial confounding, whose spatial structure is often characterised by a spatially smooth evolution between some pairs of adjacent areal units while other pairs exhibit large step changes. This spatial heterogeneity is not c...

  7. High temperature superconducting Josephson transmission lines for pulse and step sharpening

    International Nuclear Information System (INIS)

    Martens, J.S.; Wendt, J.R.; Hietala, V.M.; Ginley, D.S.; Ashby, C.I.H.; Plut, T.A.; Vawter, G.A.; Tigges, C.P.; Siegal, M.P.; Hou, S.Y.; Phillips, J.M.; Hohenwarter, G.K.G.

    1992-01-01

    An increasing number of high speed digital and other circuit applications require very narrow impulses or rapid pulse edge transitions. Shock wave transmission lines using series or shunt Josephson junctions are one way to generate these signals. Using two different high temperature superconducting Josephson junction processes (step-edge and electron beam defined nanobridges), such transmission lines have been constructed and tested at 77 K. Shock wave lines with approximately 60 YBaCuO nanobridges, have generated steps with fall times of about 10 ps. With step-edge junctions (with higher figures of merit but lower uniformity), step transition times have been reduced to an estimated 1 ps

  8. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    Science.gov (United States)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  9. New simulation capabilities of electron clouds in ion beams with large tune depression

    International Nuclear Information System (INIS)

    Vay, J.L.; Furman, M.A.; Seidl, P.A.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Kireeff-Covo, M.; Molvik, A.W.; Stoltz, P.H.; Veitzer, S.; Verboncoeur, J.P.

    2006-01-01

    The authors have developed a new, comprehensive set of simulation tools aimed at modeling the interaction of intense ion beams and electron clouds (e-clouds). The set contains the 3-D accelerator PIC code WARP and the 2-D ''slice'' e-cloud code POSINST, as well as a merger of the two, augmented by new modules for impact ionization and neutral gas generation. The new capability runs on workstations or parallel supercomputers and contains advanced features such as mesh refinement, disparate adaptive time stepping, and a new ''drift-Lorentz'' particle mover for tracking charged particles in magnetic fields using large time steps. It is being applied to the modeling of ion beams (1 MeV, 180 mA, K+) for heavy ion inertial fusion and warm dense matter studies, as they interact with electron clouds in the High-Current Experiment (HCX). They describe the capabilities and present recent simulation results with detailed comparisons against the HCX experiment, as well as their application (in a different regime) to the modeling of e-clouds in the Large Hadron Collider (LHC)

  10. A permeation theory for single-file ion channels: one- and two-step models.

    Science.gov (United States)

    Nelson, Peter Hugo

    2011-04-28

    How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no

  11. Self-sustained oscillations with acoustic feedback in flows over a backward-facing step with a small upstream step

    Science.gov (United States)

    Yokoyama, Hiroshi; Tsukamoto, Yuichi; Kato, Chisachi; Iida, Akiyoshi

    2007-10-01

    Self-sustained oscillations with acoustic feedback take place in a flow over a two-dimensional two-step configuration: a small forward-backward facing step, which we hereafter call a bump, and a relatively large backward-facing step (backstep). These oscillations can radiate intense tonal sound and fatigue nearby components of industrial products. We clarify the mechanism of these oscillations by directly solving the compressible Navier-Stokes equations. The results show that vortices are shed from the leading edge of the bump and acoustic waves are radiated when these vortices pass the trailing edge of the backstep. The radiated compression waves shed new vortices by stretching the vortex formed by the flow separation at the leading edge of the bump, thereby forming a feedback loop. We propose a formula based on a detailed investigation of the phase relationship between the vortices and the acoustic waves for predicting the frequencies of the tonal sound. The frequencies predicted by this formula are in good agreement with those measured in the experiments we performed.

  12. Decrease of the tunneling time and violation of the Hartman effect for large barriers

    International Nuclear Information System (INIS)

    Olkhovsky, V.S.; Zaichenko, A.K.; Petrillo, V.

    2004-01-01

    The explicit formulation of the initial conditions of the definition of the wave-packet tunneling time is proposed. This formulation takes adequately into account the irreversibility of the wave-packet space-time spreading. Moreover, it explains the violations of the Hartman effect, leading to a strong decrease of the tunneling times up to negative values for wave packets with large momentum spreads due to strong wave-packet time spreading

  13. Time-Efficient Cloning Attacks Identification in Large-Scale RFID Systems

    Directory of Open Access Journals (Sweden)

    Ju-min Zhao

    2017-01-01

Full Text Available Radio Frequency Identification (RFID) is an emerging technology for electronic labeling of objects for the purpose of automatically identifying, categorizing, locating, and tracking the objects. However, in their current form, RFID systems are susceptible to cloning attacks that seriously threaten RFID applications but are hard to prevent. Existing protocols aim at detecting whether there are cloning attacks in single-reader RFID systems. In this paper, we investigate cloning attack identification in the multireader scenario and propose a time-efficient protocol, called the time-efficient Cloning Attacks Identification Protocol (CAIP), to identify all cloned tags in multireader RFID systems. We evaluate the performance of CAIP through extensive simulations. The results show that CAIP can identify all the cloned tags in large-scale RFID systems fairly fast with the required accuracy.

  14. The stepping behavior analysis of pedestrians from different age groups via a single-file experiment

    Science.gov (United States)

    Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang

    2018-03-01

The stepping behavior of pedestrians with different age compositions in a single-file experiment is investigated in this paper. The relations between step length, step width and stepping time are analyzed by using the step measurement method based on the calculation of the curvature of the trajectory. The relations of velocity-step width, velocity-step length and velocity-stepping time for different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the dynamics of pedestrian movement. Meanwhile, it offers experimental data to develop a microscopic model of pedestrian movement by considering stepping behavior.
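
The measurement method above extracts steps from the curvature of the trajectory. The sketch below computes the curvature of a 2D trajectory numerically and takes local curvature extrema as candidate stepping events; the synthetic swaying trajectory and the crude extremum criterion are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

def curvature(x, y, t):
    """Signed curvature of a planar trajectory sampled at times t."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def step_times(x, y, t):
    """Very rough sketch: take local extrema of curvature as candidate
    stepping events (real analyses need smoothing and thresholds)."""
    k = curvature(x, y, t)
    idx = [i for i in range(1, len(k) - 1)
           if (k[i] - k[i - 1]) * (k[i + 1] - k[i]) < 0]
    return t[idx]

# synthetic swaying trajectory: forward walk with lateral oscillation per step
t = np.linspace(0.0, 10.0, 1000)
x = 1.2 * t                                # forward progress, m
y = 0.03 * np.sin(2 * np.pi * 1.8 * t)     # lateral sway at ~1.8 steps/s
print(len(step_times(x, y, t)), "candidate stepping events")
```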

  15. Using memory-efficient algorithm for large-scale time-domain modeling of surface plasmon polaritons propagation in organic light emitting diodes

    Science.gov (United States)

    Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2017-10-01

    We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
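
For context, the conventional FDTD baseline that the LRnLA/DiamondTorre scheme is compared against is the Yee leapfrog update. The sketch below shows a minimal 1D vacuum version in normalized units; it is only the textbook scheme, not the memory-efficient GPU algorithm of the paper.

```python
import numpy as np

# Minimal 1-D vacuum Yee/FDTD update in normalized units (c = 1, impedance
# absorbed into the fields); the conventional baseline, not DiamondTorre.
nx, nt = 400, 600
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                      # respects the Courant limit
ez = np.zeros(nx)
hy = np.zeros(nx - 1)

for n in range(nt):
    hy += (dt / dx) * (ez[1:] - ez[:-1])            # update H from curl E
    ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])      # update E from curl H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source
```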

  16. The way to collisions, step by step

    CERN Multimedia

    2009-01-01

    While the LHC sectors cool down and reach the cryogenic operating temperature, spirits are warming up as we all eagerly await the first collisions. No reason to hurry, though. Making particles collide involves the complex manoeuvring of thousands of delicate components. The experts will make it happen using a step-by-step approach.

  17. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    Directory of Open Access Journals (Sweden)

    Anthony Chan

    2008-01-01

Full Text Available A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the tracefiles at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
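
The key property claimed above is access to an arbitrary time window in time roughly proportional to the number of events it contains. The sketch below illustrates that property in the simplest possible way, with a sorted in-memory timestamp index and binary search; the event tuples are made up and this is not the paper's hierarchical file layout.

```python
import bisect

# Generic illustration of interval access over time-stamped events; the
# event tuples and field names are made up, not the paper's file format.
events = [(0.01 * i, f"event_{i}") for i in range(100_000)]   # sorted by time
timestamps = [t for t, _ in events]

def events_in_window(t0, t1):
    """Return all events with t0 <= t < t1 in O(log N + k) time,
    i.e. roughly proportional to the number k of events in the window."""
    lo = bisect.bisect_left(timestamps, t0)
    hi = bisect.bisect_left(timestamps, t1)
    return events[lo:hi]

print(len(events_in_window(123.0, 124.0)), "events in the requested window")
```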

  18. The ATP hydrolysis and phosphate release steps control the time course of force development in rabbit skeletal muscle.

    Science.gov (United States)

    Sleep, John; Irving, Malcolm; Burton, Kevin

    2005-03-15

    The time course of isometric force development following photolytic release of ATP in the presence of Ca(2+) was characterized in single skinned fibres from rabbit psoas muscle. Pre-photolysis force was minimized using apyrase to remove contaminating ATP and ADP. After the initial force rise induced by ATP release, a rapid shortening ramp terminated by a step stretch to the original length was imposed, and the time course of the subsequent force redevelopment was again characterized. Force development after ATP release was accurately described by a lag phase followed by one or two exponential components. At 20 degrees C, the lag was 5.6 +/- 0.4 ms (s.e.m., n = 11), and the force rise was well fitted by a single exponential with rate constant 71 +/- 4 s(-1). Force redevelopment after shortening-restretch began from about half the plateau force level, and its single-exponential rate constant was 68 +/- 3 s(-1), very similar to that following ATP release. When fibres were activated by the addition of Ca(2+) in ATP-containing solution, force developed more slowly, and the rate constant for force redevelopment following shortening-restretch reached a maximum value of 38 +/- 4 s(-1) (n = 6) after about 6 s of activation. This lower value may be associated with progressive sarcomere disorder at elevated temperature. Force development following ATP release was much slower at 5 degrees C than at 20 degrees C. The rate constant of a single-exponential fit to the force rise was 4.3 +/- 0.4 s(-1) (n = 22), and this was again similar to that after shortening-restretch in the same activation at this temperature, 3.8 +/- 0.2 s(-1). We conclude that force development after ATP release and shortening-restretch are controlled by the same steps in the actin-myosin ATPase cycle. The present results and much previous work on mechanical-chemical coupling in muscle can be explained by a kinetic scheme in which force is generated by a rapid conformational change bracketed by two
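
The rate constants quoted above come from fitting a lag phase followed by a single exponential to the force traces. The sketch below performs such a fit on synthetic data with scipy.optimize.curve_fit; the functional form and the noise level are illustrative assumptions, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def force_model(t, f_max, k, t_lag):
    """Lag phase followed by a single-exponential rise (illustrative form)."""
    return np.where(t < t_lag, 0.0, f_max * (1.0 - np.exp(-k * (t - t_lag))))

# synthetic 'measured' trace roughly mimicking the reported 20 degC values
t = np.linspace(0.0, 0.2, 400)                       # seconds
f_obs = force_model(t, 1.0, 71.0, 0.0056) + np.random.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(force_model, t, f_obs, p0=[1.0, 50.0, 0.005])
print("F_max=%.2f, k=%.1f 1/s, lag=%.1f ms" % (popt[0], popt[1], popt[2] * 1e3))
```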

  19. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
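
As a small concrete example of the Newton-Krylov class of solvers surveyed above, the sketch below uses SciPy's Jacobian-free newton_krylov routine on a discretized 1D nonlinear boundary-value problem; the problem itself is a toy stand-in, unrelated to the SciDAC applications mentioned.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    """Discrete residual of -u'' + u**3 = 1 on (0, 1) with u(0) = u(1) = 0."""
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0

# Jacobian-free Newton-Krylov: only residual evaluations are required.
u0 = np.zeros(n)
sol = newton_krylov(residual, u0, f_tol=1e-10)
print("max u =", sol.max())
```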

  20. Asymptotic description of two metastable processes of solidification for the case of large relaxation time

    International Nuclear Information System (INIS)

    Omel'yanov, G.A.

    1995-07-01

    The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2,3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. The asymptotic solutions describing two metastable processes are constructed and justified. The soliton type solution describes the first stage of separation in alloy, when a set of ''superheated liquid'' appears inside the ''solid'' part. The Van der Waals type solution describes the free interface dynamics for large time. The smoothness of temperature is established for large time and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs
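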

  1. Effect of selenization time on the structural and morphological properties of Cu(In,Ga)Se2 thin films absorber layers using two step growth process

    Science.gov (United States)

    Korir, Peter C.; Dejene, Francis B.

    2018-04-01

    In this work a two-step growth process was used to prepare Cu(In,Ga)Se2 thin films for solar cell applications. The first step involves deposition of Cu-In-Ga precursor films, followed by selenization under vacuum using elemental selenium vapor to form the Cu(In,Ga)Se2 film. The growth was carried out at a fixed temperature of 515 °C for 45, 60 and 90 min to control film thickness and gallium incorporation into the absorber layer. The X-ray diffraction (XRD) patterns confirm single-phase Cu(In,Ga)Se2 for all three samples, and no secondary phases were observed. A shift of the diffraction peaks to higher 2θ values is observed for the thin films compared to pure CuInSe2. The surface morphology of the film grown for 60 min was characterized by uniform, large-grained particles, which are typical of device-quality material. Photoluminescence spectra show a shift of the emission peaks to higher energies for longer selenization durations, attributed to the incorporation of more gallium into the CuInSe2 crystal structure. Electron probe microanalysis (EPMA) revealed a uniform distribution of the elements across the surface of the film. The elemental ratios Cu/(In + Ga) and Se/(Cu + In + Ga) depend strongly on the selenization time. The Cu/(In + Ga) ratio for the 60 min film is 0.88, which lies in the range of values (0.75-0.98) associated with the best solar cell device performance.

  2. Effective image differencing with convolutional neural networks for real-time transient hunting

    Science.gov (United States)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

    Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with a varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than the individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
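
    To make the "all steps in one network" idea concrete, here is a minimal, hedged sketch (not the authors' architecture): a tiny fully-convolutional model that takes the reference and science cutouts as two channels and emits a per-pixel transient score map, implicitly standing in for registration, PSF matching and subtraction once trained on labelled examples.

    ```python
    # Minimal two-channel fully-convolutional "differencing" network (illustrative).
    import torch
    import torch.nn as nn

    class TransientNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=5, padding=2),   # per-pixel score map
            )

        def forward(self, reference, science):
            x = torch.cat([reference, science], dim=1)  # stack along channel axis
            return self.net(x)

    model = TransientNet()
    ref = torch.randn(1, 1, 128, 128)    # placeholder cutouts
    sci = torch.randn(1, 1, 128, 128)
    score_map = model(ref, sci)          # would be trained against known transients
    print(score_map.shape)               # torch.Size([1, 1, 128, 128])
    ```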

  3. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Science.gov (United States)

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude and heading reference system (AHRS) under magnetic distortion, a novel linear Kalman filter suitable for nonlinear attitude estimation is proposed in this paper. The new algorithm combines the two-step geometrically-intuitive correction (TGIC) with a Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current pitch/roll estimate immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable to attitude estimation under various dynamic conditions.
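
    The structural point of the abstract - a quaternion measurement keeps the Kalman filter linear - can be sketched as follows. This is a generic quaternion filter skeleton under our own assumptions, not the paper's TGIC algorithm; the corrected measurement quaternion q_meas is a placeholder for whatever the two-step correction would produce from accelerometer/magnetometer data.

    ```python
    # Generic linear quaternion Kalman filter skeleton (illustrative only).
    import numpy as np

    def omega(w):
        # quaternion kinematics matrix for q_dot = 0.5 * Omega(w) * q, q = [w, x, y, z]
        wx, wy, wz = w
        return np.array([[0.0, -wx, -wy, -wz],
                         [wx,  0.0,  wz, -wy],
                         [wy, -wz,  0.0,  wx],
                         [wz,  wy, -wx,  0.0]])

    def kf_step(q, P, gyro, q_meas, dt, Q, R):
        # Predict: first-order integration of the gyro-driven kinematics.
        F = np.eye(4) + 0.5 * dt * omega(gyro)
        q = F @ q
        P = F @ P @ F.T + Q
        # Update: the measurement is itself a quaternion, so H = I and the filter
        # stays linear (quaternion sign disambiguation is omitted in this sketch).
        K = P @ np.linalg.inv(P + R)
        q = q + K @ (q_meas - q)
        P = (np.eye(4) - K) @ P
        return q / np.linalg.norm(q), P   # renormalise to unit length

    q, P = np.array([1.0, 0.0, 0.0, 0.0]), np.eye(4) * 1e-2
    Q, R = np.eye(4) * 1e-5, np.eye(4) * 1e-3
    gyro = np.array([0.01, -0.02, 0.005])            # rad/s, placeholder reading
    q_meas = np.array([0.9999, 0.001, -0.002, 0.0])  # placeholder corrected quaternion
    q, P = kf_step(q, P, gyro, q_meas, 0.01, Q, R)
    print(q)
    ```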

  4. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
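
    The kind of estimate the abstract alludes to can be illustrated with a simple Gaussian tail calculation (our own toy numbers, not the paper's): if the effective set-up/hold time fluctuates with standard deviation sigma around its nominal value, the per-gate, per-clock probability of a timing violation is the tail of the distribution beyond the designed margin, and the circuit-level error rate follows from the number of gate decisions taken.

    ```python
    # Toy Gaussian-tail model of timing-margin violations (illustrative values).
    import math

    def violation_probability(margin_ps, sigma_ps):
        # probability that the fluctuation exceeds the designed timing margin
        return 0.5 * math.erfc(margin_ps / (math.sqrt(2.0) * sigma_ps))

    def circuit_error_rate(margin_ps, sigma_ps, n_gates, n_cycles):
        p = violation_probability(margin_ps, sigma_ps)
        # probability that at least one of n_gates * n_cycles decisions fails
        return 1.0 - (1.0 - p) ** (n_gates * n_cycles)

    print(violation_probability(6.0, 1.0))             # single-gate tail probability
    print(circuit_error_rate(6.0, 1.0, 10**6, 10**3))  # e.g. a 1-Mbit shift register
    ```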

  5. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  6. Stepped-to-dart Leaders in Cloud-to-ground Lightning

    Science.gov (United States)

    Stolzenburg, M.; Marshall, T. C.; Karunarathne, S.; Karunarathna, N.; Warner, T.; Orville, R. E.

    2013-12-01

    Using time-correlated high-speed video (50,000 frames per second) and fast electric field change (5 megasamples per second) data for lightning flashes in East-central Florida, we describe an apparently rare type of subsequent leader: a stepped leader that finds and follows a previously used channel. The observed 'stepped-to-dart leaders' occur in three natural negative ground flashes. Stepped-to-dart leader connection altitudes are 3.3, 1.6 and 0.7 km above ground in the three cases. Prior to the stepped-to-dart connection, the advancing leaders have properties typical of stepped leaders. After the connection, the behavior changes almost immediately (within 40-60 μs) to dart or dart-stepped leader, with larger amplitude E-change pulses and faster average propagation speeds. In this presentation, we will also describe the upward luminosity after the connection in the prior return stroke channel and in the stepped leader path, along with properties of the return strokes and other leaders in the three flashes.

  7. Representative elements: A step to large-scale fracture system simulation

    International Nuclear Information System (INIS)

    Clemo, T.M.

    1987-01-01

    Large-scale simulation of flow and transport in fractured media requires the development of a technique to represent the effect of a large number of fractures. Representative elements are used as a tool to model a subset of a fracture system as a single distributed entity. Representative elements are part of a modeling concept called dual permeability. Dual permeability modeling combines discrete fracture simulation of the most important fractures with the distributed modeling of the less important fracture of a fracture system. This study investigates the use of stochastic analysis to determine properties of representative elements. Given an assumption of fully developed laminar flow, the net fracture conductivities and hence flow velocities can be determined from descriptive statistics of fracture spacing, orientation, aperture, and extent. The distribution of physical characteristics about their mean leads to a distribution of the associated conductivities. The variance of hydraulic conductivity induces dispersion into the transport process. Simple fracture systems are treated to demonstrate the usefulness of stochastic analysis. Explicit equations for conductivity of an element are developed and the dispersion characteristics are shown. Explicit formulation of the hydraulic conductivity and transport dispersion reveals the dependence of these important characteristics on the parameters used to describe the fracture system. Understanding these dependencies will help to focus efforts to identify the characteristics of fracture systems. Simulations of stochastically generated fracture sets do not provide this explicit functional dependence on the fracture system parameters. 12 refs., 6 figs
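
    A minimal numerical counterpart of the stochastic argument (our own parameter choices, using the parallel-plate cubic-law idealisation implied by the fully developed laminar-flow assumption) looks like this: sample fracture spacing and aperture from assumed distributions and accumulate the equivalent conductivity of one representative element, so that the spread across realizations is the quantity that feeds transport dispersion.

    ```python
    # Monte Carlo sketch of a representative element's equivalent conductivity.
    import numpy as np

    rng = np.random.default_rng(0)
    g, nu = 9.81, 1.0e-6       # gravity (m/s^2), kinematic viscosity of water (m^2/s)
    block = 10.0               # representative element size (m)

    def element_conductivity():
        spacing = rng.lognormal(mean=0.0, sigma=0.5)     # mean fracture spacing ~1 m
        n_frac = max(1, int(block / spacing))
        apertures = rng.lognormal(mean=np.log(1e-4), sigma=0.4, size=n_frac)  # ~0.1 mm
        # cubic law: each fracture contributes g*b^3/(12*nu) of transmissivity
        return np.sum(g * apertures**3 / (12.0 * nu)) / block   # m/s

    K = np.array([element_conductivity() for _ in range(10000)])
    print(K.mean(), K.std())   # the spread of K is what induces dispersion
    ```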

  8. [The investigation of control mechanisms of stepping rhythm in human in the air-stepping conditions during passive and voluntary leg movements].

    Science.gov (United States)

    Solopova, I A; Selionon, V A; Grishin, A A

    2010-01-01

    Under body-weight unloading, the degree of activation of the central stepping program during passive leg movements was investigated in healthy subjects, together with the excitability of spinal motoneurons during passive and voluntary stepping movements. Passive stepping movements, with characteristics matched as closely as possible to those of voluntary stepping, were imposed by the experimenter, and the bursts of muscle activity during voluntary and imposed movements were compared. In addition, the influence of an artificially created load on the foot on the leg movement characteristics was analyzed. Spinal motoneuron excitability was estimated from the amplitude modulation of the soleus H-reflex, and changes in the H-reflex with the knee or hip joint fixed were also studied. In the majority of subjects the passive movements were accompanied by bursts of EMG activity in hip muscles (and sometimes in knee muscles) whose timing within the step cycle coincided with the burst timing of the voluntary step cycle. In many cases the EMG bursts during passive movements exceeded the activity of the homonymous muscles during voluntary stepping. Simulated foot loading had a substantial influence on the distal parts of the moving limb during both voluntary and passive movements, expressed as the appearance of ankle-joint movements accompanied by the emergence and growth of phasic EMG activity in the shank muscles. Motoneuron excitability was greater during passive movements than during voluntary ones. The changes and modulation of the H-reflex throughout the step cycle were similar without restriction of joint mobility and with hip-joint mobility excluded; fixation of the knee joint had a greater influence. It is suggested that imposed movements activate the same rhythm-generation mechanisms as supraspinal commands do during voluntary movements. In the conditions of passive movements the presynaptic inhibition depends on afferent

  9. Single-crossover recombination in discrete time.

    Science.gov (United States)

    von Wangenheim, Ute; Baake, Ellen; Baake, Michael

    2010-05-01

    Modelling the process of recombination leads to a large coupled nonlinear dynamical system. Here, we consider a particular case of recombination in discrete time, allowing only for single crossovers. While the analogous dynamics in continuous time admits a closed solution (Baake and Baake in Can J Math 55:3-41, 2003), this no longer works for discrete time. A more general model (i.e. without the restriction to single crossovers) has been studied before (Bennett in Ann Hum Genet 18:311-317, 1954; Dawson in Theor Popul Biol 58:1-20, 2000; Linear Algebra Appl 348:115-137, 2002) and was solved algorithmically by means of Haldane linearisation. Using the special formalism introduced by Baake and Baake (Can J Math 55:3-41, 2003), we obtain further insight into the single-crossover dynamics and the particular difficulties that arise in discrete time. We then transform the equations to a solvable system in a two-step procedure: linearisation followed by diagonalisation. Still, the coefficients of the second step must be determined in a recursive manner, but once this is done for a given system, they allow for an explicit solution valid for all times.

  10. The effect of large decoherence on mixing time in continuous-time quantum walks on long-range interacting cycles

    Energy Technology Data Exchange (ETDEWEB)

    Salimi, S; Radgohar, R, E-mail: shsalimi@uok.ac.i, E-mail: r.radgohar@uok.ac.i [Faculty of Science, Department of Physics, University of Kurdistan, Pasdaran Ave, Sanandaj (Iran, Islamic Republic of)

    2010-01-28

    In this paper, we consider decoherence in continuous-time quantum walks on long-range interacting cycles (LRICs), which are the extensions of the cycle graphs. For this purpose, we use Gurvitz's model and assume that every node is monitored by the corresponding point-contact induced by the decoherence process. Then, we focus on large rates of decoherence and calculate the probability distribution analytically and obtain the lower and upper bounds of the mixing time. Our results prove that the mixing time is proportional to the rate of decoherence and the inverse of the square of the distance parameter (m). This shows that the mixing time decreases with increasing range of interaction. Also, what we obtain for m = 0 is in agreement with Fedichkin, Solenov and Tamon's results [48] for cycle, and we see that the mixing time of CTQWs on cycle improves with adding interacting edges.

  11. Stepping in Place While Voluntarily Turning Around Produces a Long-Lasting Posteffect Consisting in Inadvertent Turning While Stepping Eyes Closed

    Directory of Open Access Journals (Sweden)

    Stefania Sozzi

    2016-01-01

    Full Text Available Training subjects to step in place on a rotating platform while maintaining a fixed body orientation in space produces a posteffect consisting in inadvertent turning around while stepping in place eyes closed (podokinetic after-rotation, PKAR. We tested the hypothesis that voluntary turning around while stepping in place also produces a posteffect similar to PKAR. Sixteen subjects performed 12 min of voluntary turning while stepping around their vertical axis eyes closed and 12 min of stepping in place eyes open on the center of a platform rotating at 60°/s (pretests. Then, subjects continued stepping in place eyes closed for at least 10 min (posteffect. We recorded the positions of markers fixed to head, shoulder, and feet. The posteffect of voluntary turning shared all features of PKAR. Time decay of angular velocity, stepping cadence, head acceleration, and ratio of angular velocity after to angular velocity before were similar between both protocols. Both postrotations took place inadvertently. The posteffects are possibly dependent on the repeated voluntary contraction of leg and foot intrarotating pelvic muscles that rotate the trunk over the stance foot, a synergy common to both protocols. We propose that stepping in place and voluntary turning can be a scheme ancillary to the rotating platform for training body segment coordination in patients with impairment of turning synergies of various origin.

  12. Evidence-based practice, step by step: critical appraisal of the evidence: part II: digging deeper--examining the "keeper" studies.

    Science.gov (United States)

    Fineout-Overholt, Ellen; Melnyk, Bernadette Mazurek; Stillwell, Susan B; Williamson, Kathleen M

    2010-09-01

    This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

  13. On the Pricing of Step-Up Bonds in the European Telecom Sector

    DEFF Research Database (Denmark)

    Lando, David; Mortensen, Allan

    This paper investigates the pricing of step-up bonds, i.e. corporate bonds with provisions stating that the coupon payments increase as the credit rating level of the issuer declines. To assess the risk-neutral rating transition probabilities necessary to price these bonds, we introduce ... a new calibration method within the reduced-form rating-based model of Jarrow, Lando, and Turnbull (1997). We also treat split ratings and adjust for rating outlook. Step-up bonds have been issued in large amounts in the European telecom sector, and we find that, through most of the sample, step-up bonds issued...

  14. Overweight, obesity, steps, and moderate to vigorous physical activity in children

    Directory of Open Access Journals (Sweden)

    Luis Carlos Oliveira

    Full Text Available ABSTRACT OBJECTIVE The objective of this study is to establish cutoff points for the number of steps/day and minutes/day of moderate to vigorous physical activity in relation to the risk of childhood overweight and obesity and their respective associations. In addition, we aim to identify the amount of steps/day needed to achieve the recommendation of moderate to vigorous physical activity in children from São Caetano do Sul. METHODS In total, 494 children have used an accelerometer to monitor steps/day and the intensity of physical activity (min/day. The moderate to vigorous physical activity has been categorized according to the public health recommendation (≤ 60 versus > 60 min/day. Overweight or obesity is defined as body mass index > +1 SD, based on reference data from the World Health Organization. The data on family income, education of parents, screen time, diet pattern, and sedentary time have been collected by questionnaires. Logistic regression and Receiver Operating Characteristic curves have been constructed. RESULTS On average, boys walked more steps/day (1,850 and performed more min/day of moderate to vigorous physical activity (23.1 than girls. Overall, 51.4% of the children have been classified as eutrophic and 48.6% as overweight or obese. Eutrophic boys walked 1,525 steps/day and performed 18.6 minutes/day more of moderate to vigorous physical activity than those with overweight/obesity (p 0.05. The cutoff points to prevent overweight and obesity in boys and girls were 10,500 and 8,500 steps/day and 66 and 46 min/day of moderate to vigorous physical activity, respectively. The walking of 9,700 steps/day for boys and 9,400 steps/day for girls ensures the scope of the recommendation of moderate to vigorous physical activity. CONCLUSIONS In boys, steps/day and moderate to vigorous physical activity have been negatively associated with body mass index, regardless of race, family income, education of parents, screen time, diet

  15. A method for real-time memory efficient implementation of blob detection in large images

    Directory of Open Access Journals (Sweden)

    Petrović Vladimir L.

    2017-01-01

    Full Text Available In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on specialized parallel hardware such as multi-core platforms, FPGAs and ASICs, and uses parallelism to speed up the blob detection. The input image is divided into blocks of equal size to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the use of multiresolution analysis for the detection of large blobs that are not detected by processing the small blocks. This method can find its place in many applications such as medical imaging, text recognition, video surveillance and wide area motion imagery (WAMI). We also explored the possibility of using the detected blobs for feature-based image alignment. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state-of-the-art hardware implementation of MSER.
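
    The block-wise scheme can be approximated in a few lines with OpenCV (a sequential sketch under our own assumptions; a real implementation would run the blocks in parallel on multi-core/FPGA hardware and add the multiresolution pass for blobs larger than a block).

    ```python
    # Block-wise MSER blob detection sketch using OpenCV.
    import cv2
    import numpy as np

    def blockwise_mser(gray, block=256):
        mser = cv2.MSER_create()
        h, w = gray.shape
        blobs = []
        for row in range(0, h, block):
            for col in range(0, w, block):
                tile = gray[row:row + block, col:col + block]
                regions, _ = mser.detectRegions(tile)
                for r in regions:
                    blobs.append(r + np.array([col, row]))  # back to image coords (x, y)
        return blobs

    image = np.random.randint(0, 255, (1024, 1024), dtype=np.uint8)  # placeholder image
    print(len(blockwise_mser(image)))
    ```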

  16. Identification of Dobrava, Hantaan, Seoul, and Puumala viruses by one-step real-time RT-PCR.

    Science.gov (United States)

    Aitichou, Mohamed; Saleh, Sharron S; McElroy, Anita K; Schmaljohn, C; Ibrahim, M Sofi

    2005-03-01

    We developed four assays for specifically identifying Dobrava (DOB), Hantaan (HTN), Puumala (PUU), and Seoul (SEO) viruses. The assays are based on the real-time one-step reverse transcriptase polymerase chain reaction (RT-PCR) with the small segment used as the target sequence. The detection limits of DOB, HTN, PUU, and SEO assays were 25, 25, 25, and 12.5 plaque-forming units, respectively. The assays were evaluated in blinded experiments, each with 100 samples that contained Andes, Black Creek Canal, Crimean-Congo hemorrhagic fever, Rift Valley fever and Sin Nombre viruses in addition to DOB, HTN, PUU and SEO viruses. The sensitivity levels of the DOB, HTN, PUU, and SEO assays were 98%, 96%, 92% and 94%, respectively. The specificity of DOB, HTN and SEO assays was 100% and the specificity of the PUU assay was 98%. Because of the high levels of sensitivity, specificity, and reproducibility, we believe that these assays can be useful for diagnosing and differentiating these four Old-World hantaviruses.

  17. Process evaluation of treatment times in a large radiotherapy department

    International Nuclear Information System (INIS)

    Beech, R.; Burgess, K.; Stratford, J.

    2016-01-01

    Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnosis. There is a lack of ‘real-time’ data regarding daily demand of a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects time needed for RT delivery, which would be valuable in highlighting current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique and patient mobility status. Data was analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses validity of current treatment times in this department, and can be modified and repeated as necessary. - Highlights: • A process evaluation examined room-occupancy for various radiotherapy techniques. • Appointment lengths

  18. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting.

    Science.gov (United States)

    Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10^3 IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA of 3.71 × 10^3 IU/ml. The one-step method showed a strong linear correlation with the two-step PCR method (r = 0.89).

  19. Hypoattenuation on CTA images with large vessel occlusion: timing affects conspicuity

    Energy Technology Data Exchange (ETDEWEB)

    Dave, Prasham [University of Ottawa, MD Program, Faculty of Medicine, Ottawa, ON (Canada); Lum, Cheemun; Thornhill, Rebecca; Chakraborty, Santanu [University of Ottawa, Department of Radiology, Ottawa, ON (Canada); Ottawa Hospital Research Institute, Ottawa, ON (Canada); Dowlatshahi, Dar [Ottawa Hospital Research Institute, Ottawa, ON (Canada); University of Ottawa, Division of Neurology, Department of Medicine, Ottawa, ON (Canada)

    2017-05-15

    Parenchymal hypoattenuation distal to occlusions on CTA source images (CTASI) is perceived because of the differences in tissue contrast compared to normally perfused tissue. This difference in conspicuity can be measured objectively. We evaluated the effect of contrast timing on the conspicuity of ischemic areas. We collected consecutive patients, retrospectively, between 2012 and 2014 with large vessel occlusions that had dynamic multiphase CT angiography (CTA) and CT perfusion (CTP). We identified areas of low cerebral blood volume on CTP maps and drew the region of interest (ROI) on the corresponding CTASI. A second ROI was placed in an area of normally perfused tissue. We evaluated conspicuity by comparing the absolute and relative change in attenuation between ischemic and normally perfused tissue over seven time points. The median absolute and relative conspicuity was greatest at the peak arterial (8.6 HU (IQR 5.1-13.9); 1.15 (1.09-1.26)), notch (9.4 HU (5.8-14.9); 1.17 (1.10-1.27)), and peak venous phases (7.0 HU (3.1-12.7); 1.13 (1.05-1.23)) compared to other portions of the time-attenuation curve (TAC). There was a significant effect of phase on the TAC for the conspicuity of ischemic vs normally perfused areas (P < 0.00001). The conspicuity of ischemic areas distal to a large artery occlusion in acute stroke is dependent on the phase of contrast arrival with dynamic CTASI and is objectively greatest in the mid-phase of the TAC. (orig.)

  20. Marching on-in-time solution of the time domain magnetic field integral equation using a predictor-corrector scheme

    KAUST Repository

    Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric

    2013-01-01

    An explicit marching on-in-time (MOT) scheme for solving the time-domain magnetic field integral equation (TD-MFIE) is presented. The proposed MOT-TD-MFIE solver uses Rao-Wilton-Glisson basis functions for spatial discretization and a PE(CE)m-type linear multistep method for time marching. Unlike previous explicit MOT-TD-MFIE solvers, the time step size can be chosen as large as that of the implicit MOT-TD-MFIE solvers without adversely affecting accuracy or stability. An algebraic stability analysis demonstrates the stability of the proposed explicit solver; its accuracy and efficiency are established via numerical examples. © 1963-2012 IEEE.
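
    For readers unfamiliar with PE(CE)^m time marching, the following generic sketch shows the pattern on an ordinary differential equation (an Adams-Bashforth-2 predictor with a trapezoidal corrector, chosen by us for brevity); it illustrates the predictor-corrector structure only and is not the TD-MFIE solver itself.

    ```python
    # Generic PE(CE)^m predictor-corrector time marching (illustrative).
    import numpy as np

    def pece_march(f, u0, dt, n_steps, m=1):
        u = [u0, u0 + dt * f(u0)]            # bootstrap the two-step method with Euler
        for n in range(1, n_steps):
            fn, fnm1 = f(u[n]), f(u[n - 1])
            u_next = u[n] + dt * (1.5 * fn - 0.5 * fnm1)       # P: predict
            for _ in range(m):                                  # (CE)^m: evaluate, correct
                u_next = u[n] + 0.5 * dt * (fn + f(u_next))
            u.append(u_next)
        return np.array(u)

    # usage: lightly damped oscillator du/dt = A u
    A = np.array([[0.0, 1.0], [-1.0, -0.1]])
    traj = pece_march(lambda u: A @ u, np.array([1.0, 0.0]), dt=0.05, n_steps=200, m=2)
    print(traj[-1])
    ```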

  1. Marching on-in-time solution of the time domain magnetic field integral equation using a predictor-corrector scheme

    KAUST Repository

    Ulku, Huseyin Arda

    2013-08-01

    An explicit marching on-in-time (MOT) scheme for solving the time-domain magnetic field integral equation (TD-MFIE) is presented. The proposed MOT-TD-MFIE solver uses Rao-Wilton-Glisson basis functions for spatial discretization and a PE(CE)m-type linear multistep method for time marching. Unlike previous explicit MOT-TD-MFIE solvers, the time step size can be chosen as large as that of the implicit MOT-TD-MFIE solvers without adversely affecting accuracy or stability. An algebraic stability analysis demonstrates the stability of the proposed explicit solver; its accuracy and efficiency are established via numerical examples. © 1963-2012 IEEE.

  2. From a large-deviations principle to the Wasserstein gradient flow : a new micro-macro passage

    NARCIS (Netherlands)

    Adams, S.; Dirr, N.; Peletier, M.A.; Zimmer, J.

    2011-01-01

    We study the connection between a system of many independent Brownian particles on one hand and the deterministic diffusion equation on the other. For a fixed time step h > 0, a large-deviations rate functional J h characterizes the behaviour of the particle system at t = h in terms of the initial
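
    For context, the fixed-time-step variational scheme that underlies the Wasserstein gradient-flow side of this connection is, in its standard (Jordan-Kinderlehrer-Otto) minimizing-movement form - written here for orientation and not quoted from the paper -

    \[
    \rho_{k+1} \;\in\; \operatorname*{arg\,min}_{\rho}\;
    \Big\{ \frac{1}{2h}\, W_2^{2}(\rho,\rho_k) \;+\; \int \rho \log \rho \,\mathrm{d}x \Big\},
    \]

    where h > 0 is the time step and W_2 the quadratic Wasserstein distance; as h → 0 the iterates converge to the solution of the diffusion equation ∂_t ρ = Δρ.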

  3. Seven steps to raise world security. Op-Ed, published in the Finanical Times

    International Nuclear Information System (INIS)

    ElBaradei, M.

    2005-01-01

    In recent years, three phenomena have radically altered the security landscape. They are the emergence of a nuclear black market, the determined efforts by more countries to acquire technology to produce the fissile material usable in nuclear weapons, and the clear desire of terrorists to acquire weapons of mass destruction. The IAEA has been trying to solve these new problems with existing tools. But for every step forward, we have exposed vulnerabilities in the system. The system itself - the regime that implements the non-proliferation treaty (NPT) - needs reinforcement. Some of the necessary remedies can be taken in New York at the Meeting to be held in May, but only if governments are ready to act. With seven straightforward steps, and without amending the treaty, this conference could reach a milestone in strengthening world security. The first step: put a five-year hold on additional facilities for uranium enrichment and plutonium separation. Second, speed up existing efforts, led by the US global threat reduction initiative and others, to modify the research reactors worldwide operating with highly enriched uranium - particularly those with metal fuel that could be readily employed as bomb material. Third, raise the bar for inspection standards by establishing the 'additional protocol' as the norm for verifying compliance with the NPT. Fourth, call on the United Nations Security Council to act swiftly and decisively in the case of any country that withdraws from the NPT, in terms of the threat the withdrawal poses to international peace and security. Fifth, urge states to act on the Security Council's recent resolution 1540, to pursue and prosecute any illicit trading in nuclear material and technology. Sixth, call on the five nuclear weapon states party to the NPT to accelerate implementation of their 'unequivocal commitment' to nuclear disarmament, building on efforts such as the 2002 Moscow treaty between Russia and the US. Last, acknowledge the volatility of

  4. Optimizing the number of steps in learning tasks for complex skills.

    Science.gov (United States)

    Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G

    2005-06-01

    Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.

  5. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    Science.gov (United States)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to
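
    As background to the time-stepping approach (a generic statement of the adjoint identity, not a formula taken from the cited papers): if J is the drag objective and u† the adjoint velocity field obtained by marching the adjoint equations backwards in time, the first-order response of J to a small control force δf is

    \[
    \delta J \;=\; \int_0^{T}\!\!\int_{\Omega} \mathbf{u}^{\dagger}(\mathbf{x},t)\cdot\delta\mathbf{f}(\mathbf{x},t)\,\mathrm{d}\mathbf{x}\,\mathrm{d}t,
    \]

    so for a steady force the sensitivity map reduces to the time-averaged adjoint field, which is why the sensitive regions can be identified without computing any of the controlled states.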

  6. The design of a real-time formative evaluation of the implementation process of lifestyle interventions at two worksites using a 7-step strategy (BRAVO@Work).

    Science.gov (United States)

    Wierenga, Debbie; Engbers, Luuk H; van Empelen, Pepijn; Hildebrandt, Vincent H; van Mechelen, Willem

    2012-08-07

    Worksite health promotion programs (WHPPs) offer an attractive opportunity to improve the lifestyle of employees. Nevertheless, broad scale and successful implementation of WHPPs in daily practice often fails. In the present study, called BRAVO@Work, a 7-step implementation strategy was used to develop, implement and embed a WHPP in two different worksites with a focus on multiple lifestyle interventions. This article describes the design and framework for the formative evaluation of this 7-step strategy under real-time conditions by an embedded scientist, with the purpose of gaining insight into whether this 7-step strategy is a useful and effective implementation strategy. Furthermore, we aim to gain insight into factors that either facilitate or hamper the implementation process, the quality of the implemented lifestyle interventions and the degree of adoption, implementation and continuation of these interventions. This study is a formative evaluation within two different worksites with an embedded scientist on site to continuously monitor the implementation process. Each worksite (i.e. a University of Applied Sciences and an Academic Hospital) will assign a participating faculty or department to implement a WHPP focusing on lifestyle interventions using the 7-step strategy. The primary focus will be to describe the natural course of development, implementation and maintenance of a WHPP by studying [a] the use and adherence to the 7-step strategy, [b] barriers and facilitators that influence the natural course of adoption, implementation and maintenance, and [c] the implementation process of the lifestyle interventions. All data will be collected using qualitative (i.e. real-time monitoring and semi-structured interviews) and quantitative methods (i.e. process evaluation questionnaires), applying data triangulation. Except for the real-time monitoring, the data collection will take place at baseline and after 6, 12 and 18 months. This is one of the few

  7. The design of a real-time formative evaluation of the implementation process of lifestyle interventions at two worksites using a 7-step strategy (BRAVO@Work

    Directory of Open Access Journals (Sweden)

    Wierenga Debbie

    2012-08-01

    Full Text Available Abstract Background Worksite health promotion programs (WHPPs) offer an attractive opportunity to improve the lifestyle of employees. Nevertheless, broad scale and successful implementation of WHPPs in daily practice often fails. In the present study, called BRAVO@Work, a 7-step implementation strategy was used to develop, implement and embed a WHPP in two different worksites with a focus on multiple lifestyle interventions. This article describes the design and framework for the formative evaluation of this 7-step strategy under real-time conditions by an embedded scientist, with the purpose of gaining insight into whether this 7-step strategy is a useful and effective implementation strategy. Furthermore, we aim to gain insight into factors that either facilitate or hamper the implementation process, the quality of the implemented lifestyle interventions and the degree of adoption, implementation and continuation of these interventions. Methods and design This study is a formative evaluation within two different worksites with an embedded scientist on site to continuously monitor the implementation process. Each worksite (i.e. a University of Applied Sciences and an Academic Hospital) will assign a participating faculty or department to implement a WHPP focusing on lifestyle interventions using the 7-step strategy. The primary focus will be to describe the natural course of development, implementation and maintenance of a WHPP by studying [a] the use and adherence to the 7-step strategy, [b] barriers and facilitators that influence the natural course of adoption, implementation and maintenance, and [c] the implementation process of the lifestyle interventions. All data will be collected using qualitative (i.e. real-time monitoring and semi-structured interviews) and quantitative methods (i.e. process evaluation questionnaires), applying data triangulation. Except for the real-time monitoring, the data collection will take place at baseline and

  8. Step-height standards based on the rapid formation of monolayer steps on the surface of layered crystals

    Energy Technology Data Exchange (ETDEWEB)

    Komonov, A.I. [Rzhanov Institute of Semiconductor Physics, Siberian Branch of the Russian Academy of Sciences (ISP SBRAS), pr. Lavrentieva 13, Novosibirsk 630090 (Russian Federation); Prinz, V.Ya., E-mail: prinz@isp.nsc.ru [Rzhanov Institute of Semiconductor Physics, Siberian Branch of the Russian Academy of Sciences (ISP SBRAS), pr. Lavrentieva 13, Novosibirsk 630090 (Russian Federation); Seleznev, V.A. [Rzhanov Institute of Semiconductor Physics, Siberian Branch of the Russian Academy of Sciences (ISP SBRAS), pr. Lavrentieva 13, Novosibirsk 630090 (Russian Federation); Kokh, K.A. [Sobolev Institute of Geology and Mineralogy, Siberian Branch of the Russian Academy of Sciences (IGM SB RAS), pr. Koptyuga 3, Novosibirsk 630090 (Russian Federation); Shlegel, V.N. [Nikolaev Institute of Inorganic Chemistry, Siberian Branch of the Russian Academy of Sciences (NIIC SB RAS), pr. Lavrentieva 3, Novosibirsk 630090 (Russian Federation)

    2017-07-15

    Highlights: • An easily reproducible step-height standard for SPM calibration was proposed. • The step-height standard is a monolayer step on the surface of a layered single crystal. • The long-term change in the surface morphology of Bi2Se3 and ZnWO4 was investigated. • The conducting surface of Bi2Se3 crystals is appropriate for calibrating STMs. • The ability to perform robust SPM calibrations under ambient conditions was demonstrated. - Abstract: Metrology is essential for nanotechnology, especially for structures and devices with feature sizes going down to the nanometer scale. Scanning probe microscopes (SPMs) permit measurement of nanometer- and subnanometer-scale objects. The accuracy of size measurements performed using SPMs is largely defined by the accuracy of the calibration standards used. In the present publication, we demonstrate that monolayer-step height standards (∼1 and ∼0.6 nm) can be easily prepared by cleaving Bi2Se3 and ZnWO4 layered single crystals. It was shown that the conducting surface of Bi2Se3 crystals offers a height standard appropriate for calibrating STMs and for testing conductive SPM probes. Our AFM study of the morphology of freshly cleaved (0001) Bi2Se3 surfaces showed that such surfaces remained atomically smooth for a period of at least half a year. The (010) surfaces of ZnWO4 crystals remained atomically smooth for one day, but two days later an additional nanorelief of ∼0.3 nm amplitude appeared on those surfaces. This relief, however, did not grow further in height and did not hamper the calibration. The simplicity and speed of fabrication of these step-height standards, as well as their high stability, make them accessible to the large and steadily growing number of users involved in 3D printing activities.

  9. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses considerably more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
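
    The event-driven trick mentioned above can be illustrated generically (this is not the SpiNNaker BCPNN code): rather than decaying every synaptic state variable at every simulation time step, store the time of the last update and apply the closed-form exponential decay lazily when the next spike arrives.

    ```python
    # Lazy, event-driven update of an exponentially decaying synaptic trace.
    import math

    class LazyTrace:
        def __init__(self, tau):
            self.tau, self.value, self.t_last = tau, 0.0, 0.0

        def on_spike(self, t, increment=1.0):
            # bring the trace up to date analytically, then add the spike's contribution
            self.value *= math.exp(-(t - self.t_last) / self.tau)
            self.value += increment
            self.t_last = t
            return self.value

    trace = LazyTrace(tau=20.0)              # ms, illustrative time constant
    for t in [5.0, 7.5, 40.0, 41.0]:         # presynaptic spike times (ms)
        print(t, trace.on_spike(t))
    ```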

  10. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems ... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...

  11. Microsoft® SQL Server® 2008 Analysis Services Step by Step

    CERN Document Server

    Cameron, Scott

    2009-01-01

    Teach yourself to use SQL Server 2008 Analysis Services for business intelligence-one step at a time. You'll start by building your understanding of the business intelligence platform enabled by SQL Server and the Microsoft Office System, highlighting the role of Analysis Services. Then, you'll create a simple multidimensional OLAP cube and progressively add features to help improve, secure, deploy, and maintain an Analysis Services database. You'll explore core Analysis Services 2008 features and capabilities, including dimension, cube, and aggregation design wizards; a new attribute relatio

  12. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi real time have been conducted by the rapid response group at the CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating finite fault models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at the CSN; all of these algorithms run automatically and are triggered by the W-phase point-source inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses, and for each of them we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community as well as with run-up observations in the field.
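
    The "predefined dimensions" step can be sketched as follows; the scaling-law coefficients below are placeholders only (the published subduction-zone coefficients should be substituted), while the moment-magnitude and average-slip relations are standard.

    ```python
    # Sketch: fault dimensions and average slip from Mw (placeholder scaling coefficients).
    def fault_dimensions(mw, a_len=-2.4, b_len=0.6, a_wid=-1.8, b_wid=0.45):
        L = 10 ** (a_len + b_len * mw)   # rupture length, km (hypothetical coefficients)
        W = 10 ** (a_wid + b_wid * mw)   # rupture width,  km (hypothetical coefficients)
        return L, W

    def average_slip(mw, L_km, W_km, mu=3.0e10):
        m0 = 10 ** (1.5 * mw + 9.1)                      # seismic moment, N*m
        return m0 / (mu * (L_km * 1e3) * (W_km * 1e3))   # average slip, m

    L, W = fault_dimensions(8.3)
    print(L, W, average_slip(8.3, L, W))   # inputs to a finite-fault and run-up estimate
    ```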

  13. pH-Controlled Two-Step Uncoating of Influenza Virus

    Science.gov (United States)

    Li, Sai; Sieben, Christian; Ludwig, Kai; Höfer, Chris T.; Chiantia, Salvatore; Herrmann, Andreas; Eghiaian, Frederic; Schaap, Iwan A.T.

    2014-01-01

    Upon endocytosis in its cellular host, influenza A virus transits via early to late endosomes. To efficiently release its genome, the composite viral shell must undergo significant structural rearrangement, but the exact sequence of events leading to viral uncoating remains largely speculative. In addition, no change in viral structure has ever been identified at the level of early endosomes, raising a question about their role. We performed AFM indentation on single viruses in conjunction with cellular assays under conditions that mimicked gradual acidification from early to late endosomes. We found that the release of the influenza genome requires sequential exposure to the pH of both early and late endosomes, with each step corresponding to changes in the virus mechanical response. Step 1 (pH 7.5–6) involves a modification of both hemagglutinin and the viral lumen and is reversible, whereas Step 2 (pH pH step or blocking the envelope proton channel M2 precludes proper genome release and efficient infection, illustrating the importance of viral lumen acidification during the early endosomal residence for influenza virus infection. PMID:24703306

  14. Interactive exploration of large-scale time-varying data using dynamic tracking graphs

    KAUST Repository

    Widanagamaachchi, W.

    2012-10-01

    Exploring and analyzing the temporal evolution of features in large-scale time-varying datasets is a common problem in many areas of science and engineering. One natural representation of such data is tracking graphs, i.e., constrained graph layouts that use one spatial dimension to indicate time and show the "tracks" of each feature as it evolves, merges or disappears. However, for practical data sets creating the corresponding optimal graph layouts that minimize the number of intersections can take hours to compute with existing techniques. Furthermore, the resulting graphs are often unmanageably large and complex even with an ideal layout. Finally, due to the cost of the layout, changing the feature definition, e.g. by changing an iso-value, or analyzing properly adjusted sub-graphs is infeasible. To address these challenges, this paper presents a new framework that couples hierarchical feature definitions with progressive graph layout algorithms to provide an interactive exploration of dynamically constructed tracking graphs. Our system enables users to change feature definitions on-the-fly and filter features using arbitrary attributes while providing an interactive view of the resulting tracking graphs. Furthermore, the graph display is integrated into a linked view system that provides a traditional 3D view of the current set of features and allows a cross-linked selection to enable a fully flexible spatio-temporal exploration of data. We demonstrate the utility of our approach with several large-scale scientific simulations from combustion science. © 2012 IEEE.
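
    A stripped-down version of the data structure (not the paper's hierarchical, progressively laid-out graphs) simply connects features at consecutive time steps whenever they overlap spatially; the sketch below assumes features are given as sets of cell indices.

    ```python
    # Minimal tracking-graph construction with networkx (illustrative).
    import networkx as nx

    def build_tracking_graph(features_per_step):
        """features_per_step: list over time of {feature_id: set_of_cell_indices}."""
        g = nx.DiGraph()
        for t, feats in enumerate(features_per_step):
            for fid, cells in feats.items():
                g.add_node((t, fid), size=len(cells))
            if t == 0:
                continue
            for fid_prev, cells_prev in features_per_step[t - 1].items():
                for fid, cells in feats.items():
                    if cells_prev & cells:              # spatial overlap => track edge
                        g.add_edge((t - 1, fid_prev), (t, fid))
        return g

    steps = [{0: {1, 2, 3}, 1: {10, 11}},
             {0: {2, 3, 4}, 1: {10, 12}, 2: {20}},
             {0: {3, 4, 5, 10, 12}}]                    # a merge occurs at the last step
    g = build_tracking_graph(steps)
    print(g.number_of_nodes(), g.number_of_edges())
    ```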

  15. The Value of Step-by-Step Risk Assessment for Unmanned Aircraft

    DEFF Research Database (Denmark)

    La Cour-Harbo, Anders

    2018-01-01

    The new European legislation expected in 2018 or 2019 will introduce a step-by-step process for conducting risk assessments for unmanned aircraft flight operations. This is a relatively simple approach to a very complex challenge. This work compares this step-by-step process to high-fidelity risk modeling, and shows that, at least for a series of example flight missions, there is reasonable agreement between the two very different methods.

  16. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.
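
    To show how a step-size ratio enters the coefficients of an explicit two-step method, here is a generic variable-step Adams-Bashforth-2 integrator (our own illustration; it is not the SSP-optimised method of the paper, which additionally restricts the admissible step sizes).

    ```python
    # Variable-step explicit two-step (Adams-Bashforth 2) integrator, illustrative.
    import numpy as np

    def vs_ab2(f, u0, steps):
        """steps: list of step sizes h_0, h_1, ..."""
        u = [u0, u0 + steps[0] * f(u0)]                 # bootstrap with forward Euler
        for n in range(1, len(steps)):
            r = steps[n] / steps[n - 1]                 # step-size ratio
            u.append(u[n] + steps[n] * ((1 + 0.5 * r) * f(u[n]) - 0.5 * r * f(u[n - 1])))
        return np.array(u)

    # usage on u' = -u with alternating step sizes
    h = [0.1, 0.05] * 20
    sol = vs_ab2(lambda u: -u, np.array([1.0]), h)
    print(sol[-1], np.exp(-sum(h)))                     # compare with the exact decay
    ```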

  17. Asymptotics for Large Time of Global Solutions to the Generalized Kadomtsev-Petviashvili Equation

    Science.gov (United States)

    Hayashi, Nakao; Naumkin, Pavel I.; Saut, Jean-Claude

    We study the large-time asymptotic behavior of solutions to the generalized Kadomtsev-Petviashvili (KP) equations, where σ = 1 or σ = -1. When ρ = 2 and σ = -1, (KP) is known as the KPI equation, while ρ = 2, σ = +1 corresponds to the KPII equation. The KP equation models the propagation along the x-axis of nonlinear dispersive long waves on the surface of a fluid, when the variation along the y-axis proceeds slowly [10]. The case ρ = 3, σ = -1 has been found in the modeling of sound waves in antiferromagnets [15]. We prove that if ρ ≥ 3 is an integer and the initial data are sufficiently small, then the solution u of (KP) satisfies decay estimates for all t ∈ R, with κ = 1 if ρ = 3 and κ = 0 if ρ ≥ 4. We also find the large-time asymptotics for the solution.

  18. Step-to-step reproducibility and asymmetry to study gait auto-optimization in healthy and cerebral palsied subjects.

    Science.gov (United States)

    Descatoire, A; Femery, V; Potdevin, F; Moretto, P

    2009-05-01

    The purpose of our study was to compare plantar pressure asymmetry and step-to-step reproducibility in able-bodied persons and two groups of hemiplegics. The relevance of the research was to determine the usefulness of asymmetry and reproducibility as indices for diagnosis and rehabilitation. This study comprised 31 healthy young subjects and 20 young subjects suffering from cerebral palsy hemiplegia, assigned to two groups of 10 subjects according to the severity of their musculoskeletal disorders. The peaks of plantar pressure and the time to peak pressure were recorded with an in-shoe measurement system. The intra-individual coefficient of variability was calculated to indicate the consistency of plantar pressure during walking and to define gait stability. The effect size was computed to quantify the asymmetry, and measurements were conducted at eight footprint locations. Results indicated few differences in step-to-step reproducibility between the healthy group and the less spastic group, while the most affected group showed a more asymmetrical and unstable gait. In terms of the concept of self-optimisation, and depending on the neuromotor disorders, the organism could set priorities based on pain, mobility, stability or energy expenditure to achieve the best gait auto-optimisation.
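
    The two summary statistics used here are simple to compute; the sketch below (with made-up numbers) shows the intra-individual coefficient of variability for step-to-step reproducibility and a pooled-SD effect size (Cohen's d) for asymmetry, which we assume is the kind of effect size intended.

    ```python
    # Coefficient of variability and pooled-SD effect size (illustrative data).
    import numpy as np

    def coefficient_of_variability(x):
        return 100.0 * np.std(x, ddof=1) / np.mean(x)    # %, step-to-step reproducibility

    def effect_size(a, b):
        # Cohen's d with pooled standard deviation, used here to quantify asymmetry
        pooled = np.sqrt(((len(a) - 1) * np.var(a, ddof=1) +
                          (len(b) - 1) * np.var(b, ddof=1)) / (len(a) + len(b) - 2))
        return (np.mean(a) - np.mean(b)) / pooled

    left = np.array([310.0, 295.0, 305.0, 320.0, 300.0])    # kPa, one side
    right = np.array([260.0, 270.0, 255.0, 265.0, 275.0])   # kPa, other side
    print(coefficient_of_variability(left), effect_size(left, right))
    ```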

  19. Near-Real-Time Monitoring of Insect Defoliation Using Landsat Time Series

    Directory of Open Access Journals (Sweden)

    Valerie J. Pasquarella

    2017-07-01

    Full Text Available Introduced insects and pathogens impact millions of acres of forested land in the United States each year, and large-scale monitoring efforts are essential for tracking the spread of outbreaks and quantifying the extent of damage. However, monitoring the impacts of defoliating insects presents a significant challenge due to the ephemeral nature of defoliation events. Using the 2016 gypsy moth (Lymantria dispar) outbreak in Southern New England as a case study, we present a new approach for near-real-time defoliation monitoring using synthetic images produced from Landsat time series. By comparing predicted and observed images, we assessed changes in vegetation condition multiple times over the course of an outbreak. Initial measures can be made as imagery becomes available, and season-integrated products provide a wall-to-wall assessment of potential defoliation at 30 m resolution. Qualitative and quantitative comparisons suggest our Landsat Time Series (LTS) products improve identification of defoliation events relative to existing products and provide a repeatable metric of change in condition. Our synthetic-image approach is an important step toward using the full temporal potential of the Landsat archive for operational monitoring of forest health over large extents, and provides an important new tool for understanding spatial and temporal dynamics of insect defoliators.

  20. New simulation capabilities of electron clouds in ion beams with large tune depression

    International Nuclear Information System (INIS)

    Vay, J.-L.; Furman, M.A.; Seidl, P.A.

    2007-01-01

    We have developed a new, comprehensive set of simulation tools aimed at modeling the interaction of intense ion beams and electron clouds (e-clouds). The set contains the 3-D accelerator PIC code WARP and the 2-D 'slice' e-cloud code POSINST [M. Furman, this workshop, paper TUAX05], as well as a merger of the two, augmented by new modules for impact ionization and neutral gas generation. The new capability runs on workstations or parallel supercomputers and contains advanced features such as mesh refinement, disparate adaptive time stepping, and a new 'drift-Lorentz' particle mover for tracking charged particles in magnetic fields using large time steps. It is being applied to the modeling of ion beams (1 MeV, 180 mA, K+) for heavy ion inertial fusion and warm dense matter studies, as they interact with electron clouds in the High-Current Experiment (HCX) [experimental results discussed by A. Molvik, this workshop, paper THAW02]. We describe the capabilities and present recent simulation results with detailed comparisons against the HCX experiment, as well as their application (in a different regime) to the modeling of e-clouds in the Large Hadron Collider (LHC). (author)

  2. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-11-01

    The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, which is then used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles, and often poorly known model error statistics, leading to a crude approximation of the forecast covariance. This strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information in the estimation process in order to mitigate the sub-optimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of the EnKF-OSA, based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation behind this is to avoid the observation perturbations of the EnKF in order to improve the scheme's behavior when assimilating big data sets with small ensembles. The new SEIK-OSA scheme is implemented and its efficiency is demonstrated
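
    For orientation, the baseline analysis step that both the stochastic EnKF and its OSA variants build on can be written in a few lines. The sketch below is a generic stochastic EnKF update with perturbed observations on a toy linear observation operator; it is not the SEIK-OSA scheme of the abstract, whose point is precisely to avoid these perturbations and to reuse each observation once for smoothing and once for the analysis. All dimensions and values are arbitrary.

        # Generic stochastic EnKF analysis step with perturbed observations (toy sketch).
        # The SEIK-OSA variant described above avoids the perturbations and applies the
        # observation twice (smoothing, then analysis); that logic is not reproduced here.
        import numpy as np

        rng = np.random.default_rng(0)

        def enkf_analysis(Xf, y, H, R):
            """Xf: n x N forecast ensemble, y: observation (m,), H: m x n, R: m x m."""
            N = Xf.shape[1]
            A = Xf - Xf.mean(axis=1, keepdims=True)              # ensemble anomalies
            Pf = A @ A.T / (N - 1)                               # sample forecast covariance
            K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)       # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
            return Xf + K @ (Y - H @ Xf)                         # analysis ensemble

        # toy example: 3 state variables, 20 members, the first two components observed
        Xf = rng.normal(size=(3, 20))
        H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        R = 0.1 * np.eye(2)
        y = np.array([0.5, -0.2])
        Xa = enkf_analysis(Xf, y, H, R)
        print("forecast mean:", Xf.mean(axis=1).round(2), "analysis mean:", Xa.mean(axis=1).round(2))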

  3. One-step large-scale deposition of salt-free DNA origami nanostructures

    Science.gov (United States)

    Linko, Veikko; Shen, Boxuan; Tapio, Kosti; Toppari, J. Jussi; Kostiainen, Mauri A.; Tuukkanen, Sampo

    2015-01-01

    DNA origami nanostructures have tremendous potential to serve as versatile platforms in self-assembly -based nanofabrication and in highly parallel nanoscale patterning. However, uniform deposition and reliable anchoring of DNA nanostructures often requires specific conditions, such as pre-treatment of the chosen substrate or a fine-tuned salt concentration for the deposition buffer. In addition, currently available deposition techniques are suitable merely for small scales. In this article, we exploit a spray-coating technique in order to resolve the aforementioned issues in the deposition of different 2D and 3D DNA origami nanostructures. We show that purified DNA origamis can be controllably deposited on silicon and glass substrates by the proposed method. The results are verified using either atomic force microscopy or fluorescence microscopy depending on the shape of the DNA origami. DNA origamis are successfully deposited onto untreated substrates with surface coverage of about 4 objects/mm2. Further, the DNA nanostructures maintain their shape even if the salt residues are removed from the DNA origami fabrication buffer after the folding procedure. We believe that the presented one-step spray-coating method will find use in various fields of material sciences, especially in the development of DNA biochips and in the fabrication of metamaterials and plasmonic devices through DNA metallisation. PMID:26492833

  4. Changes in step-width during dual-task walking predicts falls.

    Science.gov (United States)

    Nordin, E; Moe-Nilssen, R; Ramnemark, A; Lundin-Olsson, L

    2010-05-01

    The aim was to evaluate whether gait pattern changes between single- and dual-task conditions were associated with risk of falling in older people. Dual-task cost (DTC) of 230 community living, physically independent people, 75 years or older, was determined with an electronic walkway. Participants were followed up each month for 1 year to record falls. Mean and variability measures of gait characteristics for 5 dual-task conditions were compared to single-task walking for each participant. Almost half (48%) of the participants fell at least once during follow-up. Risk of falling increased in individuals where DTC for performing a subtraction task demonstrated change in mean step-width compared to single-task walking. Risk of falling decreased in individuals where DTC for carrying a cup and saucer demonstrated change compared to single-task walking in mean step-width, mean step-time, and step-length variability. Degree of change in gait characteristics related to a change in risk of falling differed between measures. Prognostic guidance for fall risk was found for the above DTCs in mean step-width with a negative likelihood ratio of 0.5 and a positive likelihood ratio of 2.3, respectively. Findings suggest that changes in step-width, step-time, and step-length with dual tasking may be related to future risk of falling. Depending on the nature of the second task, DTC may indicate either an increased risk of falling, or a protective strategy to avoid falling. Copyright 2010. Published by Elsevier B.V.

  5. CT-Guided Percutaneous Step-by-Step Radiofrequency Ablation for the Treatment of Carcinoma in the Caudate Lobe

    Science.gov (United States)

    Dong, Jun; Li, Wang; Zeng, Qi; Li, Sheng; Gong, Xiao; Shen, Lujun; Mao, Siyue; Dong, Annan; Wu, Peihong

    2015-01-01

    Abstract The location of the caudate lobe and its complex anatomy make caudate lobectomy and radiofrequency ablation (RFA) under ultrasound guidance technically challenging. The objective of the exploratory study was to introduce a novel modality of treatment of lesions in caudate lobe and discuss all details with our experiences to make this novel treatment modality repeatable and educational. The study enrolled 39 patients with liver caudate lobe tumor first diagnosed by computerized tomography (CT) or magnetic resonance imaging (MRI). After consultation of multi-disciplinary team, 7 patients with hepatic caudate lobe lesions were enrolled and accepted CT-guided percutaneous step-by-step RFA treatment. A total of 8 caudate lobe lesions of the 7 patients were treated by RFA in 6 cases and RFA combined with percutaneous ethanol injection (PEI) in 1 case. Median tumor diameter was 29 mm (range, 18–69 mm). A right approach was selected for 6 patients and a dorsal approach for 1 patient. Median operative time was 64 min (range, 59–102 min). Median blood loss was 10 mL (range, 8-16 mL) and mainly due to puncture injury. Median hospitalization time was 4 days (range, 2–5 days). All lesions were completely ablated (8/8; 100%) and no recurrence at the site of previous RFA was observed during median 8 months follow-up (range 3–11 months). No major or life-threatening complications or deaths occurred. In conclusion, percutaneous step-by-step RFA under CT guidance is a novel and effective minimally invasive therapy for hepatic caudate lobe lesions with well repeatability. PMID:26426638

  6. Large deviation estimates for exceedance times of perpetuity sequences and their dual processes

    DEFF Research Database (Denmark)

    Buraczewski, Dariusz; Collamore, Jeffrey F.; Damek, Ewa

    2016-01-01

    In a variety of problems in pure and applied probability, it is relevant to study the large exceedance probabilities of the perpetuity sequence $Y_n := B_1 + A_1 B_2 + \cdots + (A_1 \cdots A_{n-1}) B_n$, where $(A_i,B_i) \subset (0,\infty) \times \mathbb{R}$. Estimates for the stationary tail distribution of $\{ Y_n \}$ have been developed in the seminal papers of Kesten (1973) and Goldie (1991). Specifically, it is well known that if $M := \sup_n Y_n$, then ${\mathbb P} \left\{ M > u \right\} \sim {\cal C}_M u^{-\xi}$ as $u \to \infty$. While much attention has been focused on extending ... -time exceedance probabilities of $\{ M_n^\ast \}$, yielding a new result concerning the convergence of $\{ M_n^\ast \}$ to its stationary distribution.

  7. Real-Time Track Reallocation for Emergency Incidents at Large Railway Stations

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2015-01-01

    Full Text Available After track capacity breakdowns at a railway station, train dispatchers need to generate appropriate track reallocation plans to recover the impacted train schedule and minimize the expected total train delay time under stochastic scenarios. This paper focuses on the real-time track reallocation problem when tracks break down at large railway stations. To represent these cases, virtual trains are introduced and activated to occupy the accident tracks. A mathematical programming model is developed, which aims at minimizing the total occupation time of station bottleneck sections to avoid train delays. In addition, a hybrid algorithm combining the genetic algorithm and the simulated annealing algorithm is designed. The case study from the Baoji railway station in China verifies the efficiency of the proposed model and the algorithm. Numerical results indicate that, during a daily and shift transport plan from 8:00 to 8:30, if five tracks break down simultaneously, this will disturb train schedules (resulting in train arrival and departure delays).
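
    As a purely illustrative companion to the hybrid algorithm mentioned above, the sketch below shows the simulated-annealing half of such a hybrid: a Metropolis acceptance loop over a permutation that stands in for a track-assignment order. The cost function, cooling schedule, and problem size are invented, and the genetic-algorithm part is omitted.

        # Minimal simulated-annealing skeleton over a permutation (a stand-in for a
        # track-assignment order). The cost below is hypothetical; the paper's hybrid
        # GA/SA and its bottleneck-occupation objective are not reproduced here.
        import math
        import random

        random.seed(1)
        TRACK_DELAY = [3, 1, 4, 2, 5, 2, 3, 1]          # hypothetical per-track weights

        def occupation_time(order):
            # toy cost: later positions on "slower" tracks contribute more
            return sum((pos + 1) * TRACK_DELAY[track] for pos, track in enumerate(order))

        def anneal(n_tracks=8, temp=50.0, cooling=0.95, iters=2000):
            current = list(range(n_tracks))
            random.shuffle(current)
            best = list(current)
            for _ in range(iters):
                cand = list(current)
                i, j = random.sample(range(n_tracks), 2)
                cand[i], cand[j] = cand[j], cand[i]      # swap two assignments
                delta = occupation_time(cand) - occupation_time(current)
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    current = cand                       # Metropolis acceptance rule
                if occupation_time(current) < occupation_time(best):
                    best = list(current)
                temp *= cooling
            return best, occupation_time(best)

        print(anneal())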

  8. A Study of Low-Reynolds Number Effects in Backward-Facing Step Flow Using Large Eddy Simulations

    DEFF Research Database (Denmark)

    Davidson, Lars; Nielsen, Peter V.

    The flow in ventilated rooms is often not fully turbulent, but in some regions the flow can be laminar. Problems have been encountered when simulating this type of flow using RANS (Reynolds Averaged Navier-Stokes) methods. Restivo carried out experiments on the flow after a backward-facing step

  9. Uncertainty analysis as essential step in the establishment of the dynamic Design Space of primary drying during freeze-drying

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Van Bockstal, Pieter-Jan; Corver, Jos

    2016-01-01

    Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, the ... for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to less optimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step gives the opportunity to determine the optimal dynamic ...

  10. Step-by-Step Visual Manuals: Design and Development

    Science.gov (United States)

    Urata, Toshiyuki

    2004-01-01

    The types of handouts and manuals that are used in technology training vary. Some describe procedures in a narrative way without graphics; some employ step-by-step instructions with screen captures. According to Thirlway (1994), a training manual should be like a tutor that permits a student to learn at his own pace and gives him confidence for…

  11. Assigning probability gain for precursors of four large Chinese earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Cao, T.; Aki, K.

    1983-03-10

    We extend the concept of probability gain associated with a precursor (Aki, 1981) to a set of precursors which may be mutually dependent. Making use of a new formula, we derive a criterion for selecting precursors from a given data set in order to calculate the probability gain. The probabilities per unit time immediately before four large Chinese earthquakes are calculated. They are approximately 0.09, 0.09, 0.07 and 0.08 per day for 1975 Haicheng (M = 7.3), 1976 Tangshan (M = 7.8), 1976 Longling (M = 7.6), and Songpan (M = 7.2) earthquakes, respectively. These results are encouraging because they suggest that the investigated precursory phenomena may have included the complete information for earthquake prediction, at least for the above earthquakes. With this method, the step-by-step approach to prediction used in China may be quantified in terms of the probability of earthquake occurrence. The ln P versus t curve (where P is the probability of earthquake occurrence at time t) shows that ln P does not increase with t linearly but more rapidly as the time of earthquake approaches.
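
    For readers unfamiliar with the quantity being computed, the simplest version of the idea (the independence approximation, which the paper refines for mutually dependent precursors) multiplies the background probability by the gain of each observed precursor. The toy numbers below are invented and are not taken from the four Chinese earthquakes.

        # Toy illustration of combining precursor probability gains with a background
        # rate under the independence approximation, P ~ P0 * g1 * g2 * ... .
        # The paper's contribution is a formula for mutually dependent precursors,
        # which is not reproduced here; all numbers below are hypothetical.
        p0 = 1.0e-4                    # assumed background probability per day
        gains = [12.0, 8.0, 5.0]       # assumed gains of three observed precursors

        p = p0
        for g in gains:
            p *= g
        print(f"combined probability of occurrence per day: {p:.3f}")   # 0.048 here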

  12. A Two-Step Resume Information Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2018-01-01

    Full Text Available With the rapid growth of Internet-based recruiting, there are a great number of personal resumes among recruiting systems. To gain more attention from the recruiters, most resumes are written in diverse formats, including varying font size, font colour, and table cells. However, the diversity of format is harmful to data mining, such as resume information extraction, automatic job matching, and candidates ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they strongly rely on hierarchical structure information and large amounts of labelled data, which are hard to collect in reality. In this paper, we propose a two-step resume information extraction approach. In the first step, raw text of resume is identified as different resume blocks. To achieve the goal, we design a novel feature, Writing Style, to model sentence syntax information. Besides word index and punctuation index, word lexical attribute and prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.
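
    A compact sketch of the general two-step pattern described above is given below: a text classifier first assigns each raw line to a resume block, and simple per-block extractors then pull out attributes. The features, labels, and regular expressions are minimal stand-ins and do not implement the paper's Writing Style feature or its classifier ensemble.

        # Toy two-step pipeline: (1) classify resume lines into blocks, (2) extract
        # attributes within each block. Stand-in features and rules only; not the
        # paper's Writing Style feature set or multi-classifier second step.
        import re
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        train_lines = ["John Smith", "Phone: 555-0100", "B.Sc. Computer Science, 2015",
                       "Software engineer at Acme Corp", "Jane Doe", "M.Sc. Physics, 2012"]
        train_labels = ["personal", "personal", "education", "experience", "personal", "education"]

        block_clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                                  LogisticRegression(max_iter=1000))
        block_clf.fit(train_lines, train_labels)           # step 1: block identification

        def extract_attributes(line, block):
            # step 2: simple per-block extractors (regexes as placeholders)
            if block == "personal":
                phone = re.search(r"\d{3}-\d{4}", line)
                return {"phone": phone.group()} if phone else {"name": line}
            if block == "education":
                year = re.search(r"(19|20)\d{2}", line)
                return {"degree_line": line, "year": year.group() if year else None}
            return {"text": line}

        for line in ["Phone: 555-0199", "B.Eng. Electronics, 2018"]:
            block = block_clf.predict([line])[0]
            print(line, "->", block, extract_attributes(line, block))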

  13. A clinical measure of maximal and rapid stepping in older women.

    Science.gov (United States)

    Medell, J L; Alexander, N B

    2000-08-01

    In older adults, clinical measures have been used to assess fall risk based on the ability to maintain stance or to complete a functional task. However, in an impending fall situation, a stepping response is often used when strategies to maintain stance are inadequate. We examined how maximal and rapid stepping performance might differ among healthy young, healthy older, and balance-impaired older adults, and how this stepping performance related to other measures of balance and fall risk. Young (Y; n = 12; mean age, 21 years), unimpaired older (UO; n = 12; mean age, 69 years), and balance-impaired older women (IO; n = 10; mean age, 77 years) were tested in their ability to take a maximal step (Maximum Step Length or MSL) and in their ability to take rapid steps in three directions (front, side, and back), termed the Rapid Step Test (RST). Time to complete the RST and stepping errors occurring during the RST were noted. The IO group, compared with the Y and UO groups, demonstrated significantly poorer balance and higher fall risk, based on performance on tasks such as unipedal stance. Mean MSL was significantly higher (by 16%) in the Y than in the UO group and in the UO (by 30%) than in the IO group. Mean RST time was significantly faster in the Y group versus the UO group (by 24%) and in the UO group versus the IO group (by 15%). Mean RST errors tended to be higher in the UO than in the Y group, but were significantly higher only in the UO versus the IO group. Both MSL and RST time correlated strongly (0.5 to 0.8) with other measures of balance and fall risk including unipedal stance, tandem walk, leg strength, and the Activities-Specific Balance Confidence (ABC) scale. We found substantial declines in the ability of both unimpaired and balance-impaired older adults to step maximally and to step rapidly. Stepping performance is closely related to other measures of balance and fall risk and might be considered in future studies as a predictor of falls and fall

  14. Bio-inspired step-climbing in a hexapod robot

    International Nuclear Information System (INIS)

    Chou, Ya-Cheng; Yu, Wei-Shun; Huang, Ke-Jung; Lin, Pei-Chun

    2012-01-01

    Inspired by the observation that the cockroach changes from a tripod gait to a different gait for climbing high steps, we report on the design and implementation of a novel, fully autonomous step-climbing maneuver, which enables a RHex-style hexapod robot to reliably climb a step up to 230% higher than the length of its leg. Similar to the climbing strategy most used by cockroaches, the proposed maneuver is composed of two stages. The first stage is the ‘rearing stage,’ inclining the body so the front side of the body is raised and it is easier for the front legs to catch the top of the step, followed by the ‘rising stage,’ maneuvering the body's center of mass to the top of the step. Two infrared range sensors are installed on the front of the robot to detect the presence of the step and its orientation relative to the robot's heading, so that the robot can perform automatic gait transition, from walking to step-climbing, as well as correct its initial tilt approaching posture. An inclinometer is utilized to measure body inclination and to compute step height, thus enabling the robot to adjust its gait automatically, in real time, and to climb steps of different heights and depths successfully. The algorithm is applicable for the robot to climb various rectangular obstacles, including a narrow bar, a bar and a step (i.e. a bar of infinite width). The performance of the algorithm is evaluated experimentally, and the comparison of climbing strategies and climbing behaviors in biological and robotic systems is discussed. (paper)

  15. Large-time behavior of solutions to a reaction-diffusion system with distributed microstructure

    NARCIS (Netherlands)

    Muntean, A.

    2009-01-01

    Abstract We study the large-time behavior of a class of reaction-diffusion systems with constant distributed microstructure arising when modeling diffusion and reaction in structured porous media. The main result of this Note is the following: As t → ∞ the macroscopic concentration vanishes, while

  16. CAN LARGE TIME DELAYS OBSERVED IN LIGHT CURVES OF CORONAL LOOPS BE EXPLAINED IN IMPULSIVE HEATING?

    International Nuclear Information System (INIS)

    Lionello, Roberto; Linker, Jon A.; Mikić, Zoran; Alexander, Caroline E.; Winebarger, Amy R.

    2016-01-01

    The light curves of solar coronal loops often peak first in channels associated with higher temperatures and then in those associated with lower temperatures. The delay times between the different narrowband EUV channels have been measured for many individual loops and recently for every pixel of an active region observation. The time delays between channels for an active region exhibit a wide range of values. The maximum time delay in each channel pair can be quite large, i.e., >5000 s. These large time delays make up 3%–26% (depending on the channel pair) of the pixels where a trustworthy, positive time delay is measured. It has been suggested that these time delays can be explained by simple impulsive heating, i.e., a short burst of energy that heats the plasma to a high temperature, after which the plasma is allowed to cool through radiation and conduction back to its original state. In this paper, we investigate whether the largest observed time delays can be explained by this hypothesis by simulating a series of coronal loops with different heating rates, loop lengths, abundances, and geometries to determine the range of expected time delays between a set of four EUV channels. We find that impulsive heating cannot address the largest time delays observed in two of the channel pairs and that the majority of the large time delays can only be explained by long, expanding loops with photospheric abundances. Additional observations may rule out these simulations as an explanation for the long time delays. We suggest that either the time delays found in this manner may not be representative of real loop evolution, or that the impulsive heating and cooling scenario may be too simple to explain the observations, and other potential heating scenarios must be explored.

  17. Dipole moments associated with edge atoms; a comparative study on stepped Pt, Au and W surfaces

    International Nuclear Information System (INIS)

    Besocke, K.; Krahl-Urban, B.; Wagner, H.

    1977-01-01

    Work function measurements have been performed on stepped Pt and Au surfaces with (111) terraces and on W surfaces with (110) terraces. In each case the work function decreases linearly with increasing step density and depends on the step orientation. The work function changes are attributed to dipole moments associated with the step edges. The dipole moments per unit step length are larger for open edge structures than for densely packed ones. The dipole moments for Pt are about twice as large as for Au and W. (Auth.)

  18. Steps that count! A feasibility study of a pedometer-based, health ...

    African Journals Online (AJOL)

    a large gap between the development of effective pedometer-based interventions and their ... [13] Using more cost-effective intervention strategies, such as ...

  19. Ensemble Kalman filtering with one-step-ahead smoothing

    KAUST Repository

    Raboudi, Naila F.

    2018-01-11

    The ensemble Kalman filter (EnKF) is widely used for sequential data assimilation. It operates as a succession of forecast and analysis steps. In realistic large-scale applications, EnKFs are implemented with small ensembles and poorly known model error statistics. This limits their representativeness of the background error covariances and, thus, their performance. This work explores the efficiency of the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem to enhance the data assimilation performance of EnKFs. Filtering with OSA smoothing introduces an updated step with future observations, conditioning the ensemble sampling with more information. This should provide an improved background ensemble in the analysis step, which may help to mitigate the suboptimal character of EnKF-based methods. Here, the authors demonstrate the efficiency of a stochastic EnKF with OSA smoothing for state estimation. They then introduce a deterministic-like EnKF-OSA based on the singular evolutive interpolated ensemble Kalman (SEIK) filter. The authors show that the proposed SEIK-OSA outperforms both SEIK, as it efficiently exploits the data twice, and the stochastic EnKF-OSA, as it avoids observational error undersampling. They present extensive assimilation results from numerical experiments conducted with the Lorenz-96 model to demonstrate SEIK-OSA’s capabilities.

  20. Mining Outlier Data in Mobile Internet-Based Large Real-Time Databases

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2018-01-01

    Full Text Available Mining outlier data guarantees access security and data scheduling of parallel databases and maintains high-performance operation of real-time databases. Traditional mining methods generate abundant interference data with reduced accuracy, efficiency, and stability, causing severe deficiencies. This paper proposes a new mining outlier data method, which is used to analyze real-time data features, obtain magnitude spectra models of outlier data, establish a decisional-tree information chain transmission model for outlier data in mobile Internet, obtain the information flow of internal outlier data in the information chain of a large real-time database, and cluster data. Upon local characteristic time scale parameters of information flow, the phase position features of the outlier data before filtering are obtained; the decision-tree outlier-classification feature-filtering algorithm is adopted to acquire signals for analysis and instant amplitude and to achieve the phase-frequency characteristics of outlier data. Wavelet transform threshold denoising is combined with signal denoising to analyze data offset, to correct formed detection filter model, and to realize outlier data mining. The simulation suggests that the method detects the characteristic outlier data feature response distribution, reduces response time, iteration frequency, and mining error rate, improves mining adaptation and coverage, and shows good mining outcomes.

  1. Self-consistent predictor/corrector algorithms for stable and efficient integration of the time-dependent Kohn-Sham equation

    Science.gov (United States)

    Zhu, Ying; Herbert, John M.

    2018-01-01

    The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.

  2. Attention demanding tasks during treadmill walking reduce step width variability in young adults

    Directory of Open Access Journals (Sweden)

    Troy Karen L

    2005-08-01

    Full Text Available Abstract Background The variability of step time and step width is associated with falls by older adults. Further, step time is significantly influenced when performing attention demanding tasks while walking. Without exception, step time variability has been reported to increase in normal and pathologically aging older adults. Because of the role of step width in managing frontal plane dynamic stability, documenting the influence of attention-demanding tasks on step width variability may provide insight to events that can disturb dynamic stability during locomotion and increase fall risk. Preliminary evidence suggests performance of an attention demanding task significantly decreases step width variability of young adults walking on a treadmill. The purpose of the present study was to confirm or refute this finding by characterizing the extent and direction of the effects of a widely used attention demanding task (Stroop test on the step width variability of young adults walking on a motorized treadmill. Methods Fifteen healthy young adults walked on a motorized treadmill at a self-selected velocity for 10 minutes under two conditions; without performing an attention demanding task and while performing the Stroop test. Step width of continuous and consecutive steps during the collection was derived from the data recorded using a motion capture system. Step width variability was computed as the standard deviation of all recorded steps. Results Step width decreased four percent during performance of the Stroop test but the effect was not significant (p = 0.10. In contrast, the 16 percent decrease in step width variability during the Stroop test condition was significant (p = 0.029. Conclusion The results support those of our previous work in which a different attention demanding task also decreased step width variability of young subjects while walking on a treadmill. The decreased step width variability observed while performing an attention

  3. Stability of one-step methods in transient nonlinear heat conduction

    International Nuclear Information System (INIS)

    Hughes, J.R.

    1977-01-01

    The purpose of the present work is to ascertain practical stability conditions for one-step methods commonly used in transient nonlinear heat conduction analyses. The class of problems considered is governed by a temporally continuous, spatially discrete system involving the capacity matrix C, conductivity matrix K, heat supply vector, temperature vector and time differentiation. In the linear case, in which K and C are constant, the stability behavior of one-step methods is well known. But in this paper the concepts of stability, appropriate to the nonlinear problem, are thoroughly discussed. They of course reduce to the usual stability criterion for the linear, constant coefficient case. However, for nonlinear problems there are differences and these ideas are of key importance in obtaining practical stability conditions. Of particular importance is a recent result which indicates that, in a sense, the trapezoidal and midpoint families are equivalent. Thus, stability results for one family may be translated into a result for the other. The main results obtained are summarized as follows. The stability behavior of the explicit Euler method in the nonlinear regime is analogous to that for linear problems. In particular, an a priori step size restriction may be determined for each time step. The precise time step restriction on implicit conditionally stable members of the trapezoidal and midpoint families is shown not to be determinable a priori. Of considerable practical significance, unconditionally stable members of the trapezoidal and midpoint families are identified.
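
    For the linear constant-coefficient case that the nonlinear analysis reduces to, the a priori restriction for the explicit Euler member is dt <= 2 / lambda_max, with lambda the generalized eigenvalues of K x = lambda C x. The sketch below evaluates that bound for a small one-dimensional conduction model; the mesh, material values, and lumped capacity matrix are arbitrary choices, and no such closed-form bound is claimed for the nonlinear case discussed in the abstract.

        # Critical explicit Euler step for the linear semi-discrete system C*dT/dt + K*T = F:
        # dt_crit = 2 / lambda_max with K x = lambda C x. Small 1D finite-difference model
        # with arbitrary parameters, as a stand-in for the linear case referenced above.
        import numpy as np

        n, L = 20, 1.0                     # nodes, bar length
        dx = L / (n - 1)
        k, rho_c = 1.0, 1.0                # conductivity, volumetric heat capacity

        K = (k / dx) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
        C = rho_c * dx * np.eye(n)         # lumped capacity matrix

        lam = np.linalg.eigvals(np.linalg.solve(C, K)).real
        dt_crit = 2.0 / lam.max()
        print(f"lambda_max = {lam.max():.1f}, explicit Euler dt_crit = {dt_crit:.2e}")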

  4. Step-2 Thai Medical Licensing Examination result: a follow-up study.

    Science.gov (United States)

    Wanvarie, Samkaew; Prakunhungsit, Supavadee

    2008-12-01

    The Thai medical students sat for the Medical Licensing Examination of Thailand (MLET) Step 2 for the first time in 2008. This paper analysed the first batch of Ramathibodi students taking the MLET Steps 1 and 2 in 2006 and 2008 respectively. The scores from the MLET Steps 1 and 2, and fifth-year cumulative grade point averages (GPAX) of 108 students were analysed. Only 6 (5.6%) students failed the MLET Step 2 examination. Students who failed the MLET Step 1 were more likely to fail their MLET Step 2 (relative risk, 5.8; 95% confidence interval, 1.3-26.0). Students with low GPAX or scoring in the lowest quintile or tertile on the MLET Step 1 were also at increased risk of failing the MLET Step 2. The data suggest that performance on the MLET Step 1 and GPAX are important predictors of a student's chances of passing the MLET Step 2. Students with poor academic achievement or failing the MLET Step 1 should be given intensive tutorials to pass the medical licensing examination.

  5. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis; Ketcheson, David I.; Loczi, Lajos; Né meth, Adriá n

    2016-01-01

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order

  6. Step Complexity Measure for Emergency Operating Procedures - Determining Weighting Factors

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea; Kim, Jaewhan; Ha, Jaejoo

    2003-01-01

    In complex systems, such as nuclear power plants (NPPs) or airplane control systems, human error has been regarded as the primary cause of many events. Therefore, to ensure system safety, extensive effort has been made to identify the significant factors that can cause human error. According to related studies, written manuals or operating procedures are revealed as one of the important factors, and the understandability is pointed out as one of the major reasons for procedure-related human errors. Many qualitative checklists have been suggested to evaluate emergency operating procedures (EOPs) of NPPs so as to minimize procedure-related human errors. However, since qualitative evaluations using checklists have some drawbacks, a quantitative measure that can quantify the complexity of EOPs is indispensable. From this necessity, Park et al. suggested the step complexity (SC) measure to quantify the complexity of procedural steps included in EOPs. To verify the appropriateness of the SC measure, averaged step performance time data obtained from emergency training records of the loss-of-coolant accident (LOCA) and the excess steam demand event were compared with estimated SC scores. However, although averaged step performance time data and estimated SC scores show meaningful correlation, some important issues such as determining proper weighting factors have to be clarified to ensure the appropriateness of the SC measure. These were not properly dealt with due to a lack of backup data. In this paper, to resolve one of the important issues, emergency training records are additionally collected and analyzed in order to determine proper weighting factors. The total number of collected records is 66, and the training scenarios cover five emergency conditions including the LOCA, the steam generator tube rupture, the loss of all feedwater, the loss of off-site power, and the station blackout. From these records, average step performance time data are retrieved, and new
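
    One simple way to turn averaged step performance times into weighting factors is an ordinary least-squares fit of performance time against the candidate sub-measures. The sketch below does exactly that with invented sub-measure names and values; it is offered only as an illustration of the fitting idea, not as the procedure actually used in the paper.

        # Hypothetical illustration: estimate weighting factors for step-complexity
        # sub-measures by regressing averaged step performance time on them. The
        # sub-measure columns, names, and times are all invented.
        import numpy as np

        # columns (hypothetical): information complexity, logic complexity, size complexity
        X = np.array([[2.1, 1.0, 3.0],
                      [3.4, 2.2, 4.1],
                      [1.2, 0.8, 2.0],
                      [4.0, 3.1, 5.2],
                      [2.8, 1.9, 3.3]])
        t = np.array([35.0, 61.0, 22.0, 88.0, 47.0])      # averaged performance times [s]

        weights, residuals, rank, _ = np.linalg.lstsq(X, t, rcond=None)
        print("fitted weighting factors:", np.round(weights, 2))
        print("predicted times:", np.round(X @ weights, 1))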

  7. A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review.

    Science.gov (United States)

    Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha

    2017-04-01

    To systematically review the current literature of simulation in healthcare including the structured steps in the emergency healthcare sector by proposing a framework for simulation in the emergency department. For the purpose of collecting the data, PubMed and ACM databases were used between the years 2003 and 2013. The inclusion criteria were to select English-written articles available in full text with the closest objectives from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. The studies focused on the reduction of waiting time and patient stay, optimization of resources allocation, creation of crisis and maximum demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of the system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert's evaluation. The 10-steps approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems.
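
    The kind of model such a framework structures can be illustrated with a deliberately tiny discrete-event sketch: a single treatment bay served first-in-first-out, with exponential interarrival and service times, reporting only the mean waiting time. The rates and patient count are invented, and none of the ten framework steps themselves are implemented here.

        # Minimal discrete-event sketch of a single-server emergency department queue
        # (exponential interarrival and service times, FIFO). Waiting times follow the
        # Lindley recursion: start = max(arrival, previous departure). Toy values only.
        import random

        random.seed(42)

        def simulate(mean_interarrival=10.0, mean_service=8.0, n_patients=1000):
            t, arrivals = 0.0, []
            for _ in range(n_patients):
                t += random.expovariate(1.0 / mean_interarrival)
                arrivals.append(t)

            server_free_at, waits = 0.0, []
            for a in arrivals:
                start = max(a, server_free_at)            # wait until the bay is free
                waits.append(start - a)
                server_free_at = start + random.expovariate(1.0 / mean_service)
            return sum(waits) / len(waits)

        print(f"mean waiting time: {simulate():.1f} minutes")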

  8. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  9. Using Aspen plus in thermodynamics instruction a step-by-step guide

    CERN Document Server

    Sandler, Stanley I

    2015-01-01

    A step-by-step guide for students (and faculty) on the use of Aspen in teaching thermodynamics Used for a wide variety of important engineering tasks, Aspen Plus software is a modeling tool used for conceptual design, optimization, and performance monitoring of chemical processes. After more than twenty years, it remains one of the most popular and powerful chemical engineering simulation programs used both industrially and academically. Using Aspen Plus in Thermodynamics Instruction: A Step by Step Guide introduces the reader to the use of Aspen Plus in courses in thermodynamics. It prov

  10. Moment analysis of the time-dependent transmission of a step-function input of a radioactive gas through an adsorber bed

    International Nuclear Information System (INIS)

    Lee, T.V.; Rothstein, D.; Madey, R.

    1986-01-01

    The time-dependent concentration of a radioactive gas at the outlet of an adsorber bed for a step change in the input concentration is analyzed by the method of moments. This moment analysis yields analytical expressions for calculating the kinetic parameters of a gas adsorbed on a porous solid in terms of observables from a time-dependent transmission curve. Transmission is the ratio of the adsorbate outlet concentration to that at the inlet. The three nonequilibrium parameters are the longitudinal diffusion coefficient, the solid-phase diffusion coefficient, and the interfacial mass-transfer coefficient. Three quantities that can be extracted in principle from an experimental transmission curve are the equilibrium transmission, the average residence (or propagation) time, and the first-moment relative to the propagation time. The propagation time for a radioactive gas is given by the time integral of one minus the transmission (expressed as a fraction of the steady-state transmission). The steady-state transmission, the propagation time, and the first-order moment are functions of the three kinetic parameters and the equilibrium adsorption capacity. The equilibrium adsorption capacity is extracted from an experimental transmission curve for a stable gaseous isotope. The three kinetic parameters can be obtained by solving the three analytical expressions simultaneously. No empirical correlations are required
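
    Numerically, the observables described above follow from simple quadrature of the measured curve: the propagation time is the time integral of one minus the fractional transmission, and a spread about that time can be formed from the next moment. The curve below is a synthetic exponential approach used only to show the quadrature; whether the variance-type moment shown matches the paper's exact definition of the first moment relative to the propagation time is an assumption.

        # Quadrature sketch for the moment analysis described above. The transmission
        # curve is synthetic (exponential approach to steady state); the spread formula
        # sigma^2 = 2*int(t*(1 - T/T_ss))dt - t_p^2 is the standard step-response moment
        # relation and is assumed, not taken from the paper.
        import numpy as np

        t = np.linspace(0.0, 400.0, 4001)                 # time [s]
        dt = t[1] - t[0]
        T_ss = 0.9                                        # steady-state transmission
        T = T_ss * (1.0 - np.exp(-t / 40.0))              # synthetic breakthrough curve

        f = 1.0 - T / T_ss                                # fractional transmission deficit
        t_p = np.sum(f) * dt                              # propagation (residence) time
        var = 2.0 * np.sum(t * f) * dt - t_p**2           # spread about t_p
        print(f"t_p = {t_p:.1f} s, sigma = {np.sqrt(var):.1f} s")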

  11. Time-resolved measurements of laser-induced diffusion of CO molecules on stepped Pt(111)-surfaces; Zeitaufgeloeste Untersuchung der laser-induzierten Diffusion von CO-Molekuelen auf gestuften Pt(111)-Oberflaechen

    Energy Technology Data Exchange (ETDEWEB)

    Lawrenz, M.

    2007-10-30

    In the present work, the dynamics of CO molecules on a stepped Pt(111) surface induced by fs-laser pulses at low temperatures were studied using laser spectroscopy. In the first part of the work, laser-induced diffusion in the CO/Pt(111) system was demonstrated and successfully modelled for step diffusion. First, the diffusion of CO molecules from step sites to terrace sites on the surface was traced. The experimentally determined energy transfer time of 500 fs for this process confirms the assumption of an electronically induced process. It is then explained how the experimental results were modelled: a friction coefficient that depends on the electron temperature yields a consistent model, in which the fluence dependence and the time-resolved measurements are described in parallel with the same set of parameters. Furthermore, the analysis was extended to CO terrace diffusion. Small coverages of CO were adsorbed on the terraces and the diffusion was detected as the temporal evolution of the occupation of the step sites, which act as traps for the diffusing molecules. Two-pulse correlation measurements performed in addition also indicate an electronically induced process. At a substrate temperature of 40 K the cross-correlation - from which an energy transfer time of 1.8 ps was extracted - also suggests an electronically induced energy transfer mechanism. Diffusion experiments were performed for different substrate temperatures. (orig.)

  12. How many steps/day are enough? For children and adolescents

    LENUS (Irish Health Repository)

    Tudor-Locke, Catrine

    2011-07-28

    Abstract Worldwide, public health physical activity guidelines include special emphasis on populations of children (typically 6-11 years) and adolescents (typically 12-19 years). Existing guidelines are commonly expressed in terms of frequency, time, and intensity of behaviour. However, the simple step output from both accelerometers and pedometers is gaining increased credibility in research and practice as a reasonable approximation of daily ambulatory physical activity volume. Therefore, the purpose of this article is to review existing child and adolescent objectively monitored step-defined physical activity literature to provide researchers, practitioners, and lay people who use accelerometers and pedometers with evidence-based translations of these public health guidelines in terms of steps/day. In terms of normative data (i.e., expected values), the updated international literature indicates that we can expect 1) among children, boys to average 12,000 to 16,000 steps/day and girls to average 10,000 to 13,000 steps/day; and, 2) adolescents to steadily decrease steps/day until approximately 8,000-9,000 steps/day are observed in 18-year olds. Controlled studies of cadence show that continuous MVPA walking produces 3,300-3,500 steps in 30 minutes or 6,600-7,000 steps in 60 minutes in 10-15 year olds. Limited evidence suggests that a total daily physical activity volume of 10,000-14,000 steps/day is associated with 60-100 minutes of MVPA in preschool children (approximately 4-6 years of age). Across studies, 60 minutes of MVPA in primary/elementary school children appears to be achieved, on average, within a total volume of 13,000 to 15,000 steps/day in boys and 11,000 to 12,000 steps/day in girls. For adolescents (both boys and girls), 10,000 to 11,700 may be associated with 60 minutes of MVPA. Translations of time- and intensity-based guidelines may be higher than existing normative data (e.g., in adolescents) and therefore will be more

  13. The Influences of Time and Velocity of Inert Gas on the Quality of the Processing Product of Graphite Matrix on the Baking Step

    International Nuclear Information System (INIS)

    Imam-Dahroni; Dwi-Herwidhi; NS, Kasilani

    2000-01-01

    Research on the synthesis of the graphite matrix in the baking process step was conducted, focusing on the influence of the time and velocity variables of the inert gas. Baking times ranging from 5 minutes to 55 minutes and inert gas velocities varying from 0.30 l/minute to 3.60 l/minute resulted in matrices of different quality. Optimizing the operation time and the flow rate of argon gas indicated that a baking time of 30 minutes with an argon flow rate of 2.60 l/minute resulted in the best graphite matrix, with a hardness value of 11 kg/mm2 and a ductility of 1800 Newton. (author)

  14. A Pragmatic Randomized Controlled Trial of 6-Step vs 3-Step Hand Hygiene Technique in Acute Hospital Care in the United Kingdom.

    Science.gov (United States)

    Reilly, Jacqui S; Price, Lesley; Lang, Sue; Robertson, Chris; Cheater, Francine; Skinner, Kirsty; Chow, Angela

    2016-06-01

    OBJECTIVE To evaluate the microbiologic effectiveness of the World Health Organization's 6-step and the Centers for Disease Control and Prevention's 3-step hand hygiene techniques using alcohol-based handrub. DESIGN A parallel group randomized controlled trial. SETTING An acute care inner-city teaching hospital (Glasgow). PARTICIPANTS Doctors (n=42) and nurses (n=78) undertaking direct patient care. INTERVENTION Random 1:1 allocation of the 6-step (n=60) or the 3-step (n=60) technique. RESULTS The 6-step technique was microbiologically more effective at reducing the median log10 bacterial count. The 6-step technique reduced the count from 3.28 CFU/mL (95% CI, 3.11-3.38 CFU/mL) to 2.58 CFU/mL (2.08-2.93 CFU/mL), whereas the 3-step reduced it from 3.08 CFU/mL (2.977-3.27 CFU/mL) to 2.88 CFU/mL (2.58-3.15 CFU/mL) (P=.02). However, the 6-step technique did not increase the total hand coverage area (98.8% vs 99.0%, P=.15) and required 15% (95% CI, 6%-24%) more time (42.50 seconds vs 35.0 seconds, P=.002). Total hand coverage was not related to the reduction in bacterial count. CONCLUSIONS Two techniques for hand hygiene using alcohol-based handrub are promoted in international guidance, the 6-step by the World Health Organization and 3-step by the Centers for Disease Control and Prevention. The study provides the first evidence in a randomized controlled trial that the 6-step technique is superior, thus these international guidance documents should consider this evidence, as should healthcare organizations using the 3-step technique in practice. Infect Control Hosp Epidemiol 2016;37:661-666.

  15. An experimentalists view on the analogy between step edges and quantum mechanical particles

    NARCIS (Netherlands)

    Zandvliet, Henricus J.W.

    1995-01-01

    Guided by scanning tunnelling microscopy images of regularly stepped surfaces it will be illustrated that there is a striking similarity between the behaviour of monoatomic step edges and quantum mechanical particles (spinless fermions). The direction along the step edge is equivalent to the time,

  16. Rapid detection of Enterovirus and Coxsackievirus A10 by a TaqMan based duplex one-step real time RT-PCR assay.

    Science.gov (United States)

    Chen, Jingfang; Zhang, Rusheng; Ou, Xinhua; Yao, Dong; Huang, Zheng; Li, Linzhi; Sun, Biancheng

    2017-06-01

    A TaqMan based duplex one-step real time RT-PCR (rRT-PCR) assay was developed for the rapid detection of Coxsackievirus A10 (CV-A10) and other enterovirus (EVs) in clinical samples. The assay was fully evaluated and found to be specific and sensitive. When applied in 115 clinical samples, a 100% diagnostic sensitivity in CV-A10 detection and 97.4% diagnostic sensitivity in other EVs were found. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Characterization of olive oil volatiles by multi-step direct thermal desorption-comprehensive gas chromatography-time-of-flight mass spectrometry using a programmed temperature vaporizing injector

    NARCIS (Netherlands)

    de Koning, S.; Kaal, E.; Janssen, H.-G.; van Platerink, C.; Brinkman, U.A.Th.

    2008-01-01

    The feasibility of a versatile system for multi-step direct thermal desorption (DTD) coupled to comprehensive gas chromatography (GC × GC) with time-of-flight mass spectrometric (TOF-MS) detection is studied. As an application the system is used for the characterization of fresh versus aged olive

  18. Rotor Cascade Shape Optimization with Unsteady Passing Wakes Using Implicit Dual-Time Stepping and a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Eun Seok Lee

    2003-01-01

    Full Text Available An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using an unsteady flow, Reynolds-averaged Navier-Stokes equations solver that was based on explicit, finite difference; Runge-Kutta multistage time marching; and the diagonalized alternating direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full approximation storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing airfoil leading edge, camber, stagger angle, and inter-row spacing. The genetic algorithm was used as an optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously by using a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.
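
    The dual-time-stepping idea referenced above (an inner pseudo-time iteration converging each implicit physical step) can be shown on a scalar equation. The sketch below uses a second-order backward difference in physical time and a plain explicit pseudo-time relaxation for the inner loop; it is only a bare-bones illustration with an invented model equation, not the paper's Navier-Stokes solver or its diagonalized ADI scheme.

        # Bare-bones implicit dual time stepping for du/dt = R(u) with R(u) = -u^2
        # (a stand-in for a spatial residual). Each physical step uses BDF2 and is
        # converged by explicit pseudo-time iterations. Exact solution: u = 1/(1+t).
        def residual(u):
            return -u * u

        def dual_time_step(u_n, u_nm1, dt, dtau=0.05, tol=1e-12, max_inner=1000):
            u = u_n                                   # initial guess for u^{n+1}
            for _ in range(max_inner):
                # unsteady residual: spatial residual plus BDF2 physical-time derivative
                r = residual(u) - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)
                u = u + dtau * r                      # pseudo-time relaxation
                if abs(r) < tol:
                    break
            return u

        dt = 0.1
        u_nm1 = 1.0                                   # u(0)
        u_n = 1.0 / (1.0 + dt)                        # u(dt), taken from the exact solution
        t = dt
        for _ in range(20):
            u_nm1, u_n = u_n, dual_time_step(u_n, u_nm1, dt)
            t += dt
        print("numerical:", round(u_n, 5), " exact:", round(1.0 / (1.0 + t), 5))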

  19. The NIST Step Class Library (Step Into the Future)

    Science.gov (United States)

    1990-09-01

    Figure 6. Excerpt from a STEP exchange file based on the Geometry model.

  20. Valve cam design using numerical step-by-step method

    OpenAIRE

    Vasilyev, Aleksandr; Bakhracheva, Yuliya; Kabore, Ousman; Zelenskiy, Yuriy

    2014-01-01

    This article studies a numerical step-by-step method of cam profile design. The results of the study are used for designing internal combustion engine valve gear. The method allows cam profiles to be designed for peak efficiency while respecting the many restrictions connected with valve gear serviceability and reliability.

  1. Phonics Pathways Clear Steps to Easy Reading and Perfect Spelling

    CERN Document Server

    Hiskes, Dolores G

    2011-01-01

    Teaches students of all ages the basics of phonics with a time-tested, foolproof method This tenth edition of the best-selling book teaches reading using sounds and spelling patterns. These sounds and patterns are introduced one at a time, and slowly built into words, syllables, phrases, and sentences. Simple step-by-step directions begin every lesson. Although originally designed for K-2 emergent readers, this award-winning book is also successfully being used with adolescent and adult learners, as well as second language learners and students with learning disabilities. Wise and humorous pro

  2. Composition of single-step media used for human embryo culture.

    Science.gov (United States)

    Morbeck, Dean E; Baumann, Nikola A; Oglesbee, Devin

    2017-04-01

    To determine compositions of commercial single-step culture media and test with a murine model whether differences in composition are biologically relevant. Experimental laboratory study. University-based laboratory. Inbred female mice were superovulated and mated with outbred male mice. Amino acid, organic acid, and ions content were determined for single-step culture media: CSC, Global, G-TL, and 1-Step. To determine whether differences in composition of these media are biologically relevant, mouse one-cell embryos were cultured for 96 hours in each culture media at 5% and 20% oxygen in a time-lapse incubator. Compositions of four culture media were analyzed for concentrations of 30 amino acids, organic acids, and ions. Blastocysts at 96 hours of culture and cell cycle timings were calculated, and experiments were repeated in triplicate. Of the more than 30 analytes, concentrations of glucose, lactate, pyruvate, amino acids, phosphate, calcium, and magnesium varied in concentrations. Mouse embryos were differentially affected by oxygen in G-TL and 1-Step. Four single-step culture media have compositions that vary notably in pyruvate, lactate, and amino acids. Blastocyst development was affected by culture media and its interaction with oxygen concentration. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  3. Large area spark counters with fine time and position resolution

    International Nuclear Information System (INIS)

    Ogawa, A.; Atwood, W.B.; Fujiwara, N.; Pestov, Yu.N.; Sugahara, R.

    1983-10-01

    Spark counters trace their history back over three decades but have been used in only a limited number of experiments. The key properties of these devices include their capability of precision timing (at the sub 100 ps level) and of measuring the position of the charged particle to high accuracy. At SLAC we have undertaken a program to develop these devices for use in high energy physics experiments involving large detectors. A spark counter of size 1.2 m x 0.1 m has been constructed and has been operating continuously in our test setup for several months. In this talk I will discuss some details of its construction and its properties as a particle detector. 14 references

  4. Large, real time detectors for solar neutrinos and magnetic monopoles

    International Nuclear Information System (INIS)

    Gonzalez-Mestres, L.

    1990-01-01

    We discuss the present status of superheated superconducting granules (SSG) development for the real time detection of magnetic monopoles of any speed and of low energy solar neutrinos down to the pp region (indium project). Basic properties of SSG and progress made in the recent years are briefly reviewed. Possible ways for further improvement are discussed. The performances reached in ultrasonic grain production at ∼ 100 μm size, as well as in conventional read-out electronics, look particularly promising for a large scale monopole experiment. Alternative approaches are briefly dealt with: induction loops for magnetic monopoles; scintillators, semiconductors or superconducting tunnel junctions for a solar neutrino detector based on an indium target

  5. Development of Tandem Amorphous/Microcrystalline Silicon Thin-Film Large-Area See-Through Color Solar Panels with Reflective Layer and 4-Step Laser Scribing for Building-Integrated Photovoltaic Applications

    Directory of Open Access Journals (Sweden)

    Chin-Yi Tsai

    2014-01-01

    Full Text Available In this work, tandem amorphous/microcrystalline silicon thin-film large-area see-through color solar modules were successfully designed and developed for building-integrated photovoltaic applications. Novel and key technologies of reflective layers and 4-step laser scribing were researched, developed, and introduced into the production line to produce solar panels with various colors, such as purple, dark blue, light blue, silver, golden, orange, red wine, and coffee. The highest module power is 105 W and the highest visible light transmittance is near 20%.

  6. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on pyrolysis of iron(II) phthalocyanine (FePc) by two-step processes is reported. The controllable generation of different lengths and selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the bases for generating such 3D aligned CNT architectures. By controlling experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures as well as multi-layered architectures can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied for developing novel nanotube-based devices.

  7. Stepping to the Beat: Feasibility and Potential Efficacy of a Home-Based Auditory-Cued Step Training Program in Chronic Stroke

    Directory of Open Access Journals (Sweden)

    Rachel L. Wright

    2017-08-01

    Full Text Available Background: Hemiparesis after stroke typically results in a reduced walking speed, an asymmetrical gait pattern and a reduced ability to make gait adjustments. The purpose of this pilot study was to investigate the feasibility and preliminary efficacy of home-based training involving auditory cueing of stepping in place. Methods: Twelve community-dwelling participants with chronic hemiparesis completed two 3-week blocks of home-based stepping to music overlaid with an auditory metronome. Tempo of the metronome was increased 5% each week. One 3-week block used a regular metronome, whereas the other 3-week block had phase shift perturbations randomly inserted to cue stepping adjustments. Results: All participants reported that they enjoyed training, with 75% completing all training blocks. No adverse events were reported. Walking speed, Timed Up and Go (TUG) time and Dynamic Gait Index (DGI) scores (median [inter-quartile range]) significantly improved between baseline (speed = 0.61 [0.32, 0.85] m⋅s−1; TUG = 20.0 [16.0, 39.9] s; DGI = 14.5 [11.3, 15.8]) and post stepping training (speed = 0.76 [0.39, 1.03] m⋅s−1; TUG = 16.3 [13.3, 35.1] s; DGI = 16.0 [14.0, 19.0]) and was maintained at follow-up (speed = 0.75 [0.41, 1.03] m⋅s−1; TUG = 16.5 [12.9, 34.1] s; DGI = 16.5 [13.5, 19.8]). Conclusion: This pilot study suggests that auditory-cued stepping conducted at home was feasible and well-tolerated by participants post-stroke, with improvements in walking and functional mobility. No differences were detected between regular and phase-shift training with the metronome at each assessment point.

  8. Leading Change Step-by-Step: Tactics, Tools, and Tales

    Science.gov (United States)

    Spiro, Jody

    2010-01-01

    "Leading Change Step-by-Step" offers a comprehensive and tactical guide for change leaders. Spiro's approach has been field-tested for more than a decade and proven effective in a wide variety of public sector organizations including K-12 schools, universities, international agencies and non-profits. The book is filled with proven tactics for…

  9. A journey of a thousand miles begins with one small step - human agency, hydrological processes and time in socio-hydrology

    Science.gov (United States)

    Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.

    2014-04-01

    When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.

  10. "Silicon millefeuille": From a silicon wafer to multiple thin crystalline films in a single step

    Science.gov (United States)

    Hernández, David; Trifonov, Trifon; Garín, Moisés; Alcubilla, Ramon

    2013-04-01

    During the last years, many techniques have been developed to obtain thin crystalline films from commercial silicon ingots. Large market applications are foreseen in the photovoltaic field, where important cost reductions are predicted, and also in advanced microelectronics technologies as three-dimensional integration, system on foil, or silicon interposers [Dross et al., Prog. Photovoltaics 20, 770-784 (2012); R. Brendel, Thin Film Crystalline Silicon Solar Cells (Wiley-VCH, Weinheim, Germany 2003); J. N. Burghartz, Ultra-Thin Chip Technology and Applications (Springer Science + Business Media, NY, USA, 2010)]. Existing methods produce "one at a time" silicon layers, once one thin film is obtained, the complete process is repeated to obtain the next layer. Here, we describe a technology that, from a single crystalline silicon wafer, produces a large number of crystalline films with controlled thickness in a single technological step.

  11. Microsoft Office Word 2007 step by step

    CERN Document Server

    Cox, Joyce

    2007-01-01

    Experience learning made easy-and quickly teach yourself how to create impressive documents with Word 2007. With Step By Step, you set the pace-building and practicing the skills you need, just when you need them! Apply styles and themes to your document for a polished look; add graphics and text effects-and see a live preview; organize information with new SmartArt diagrams and charts; insert references, footnotes, indexes, a table of contents; send documents for review and manage revisions; turn your ideas into blogs, Web pages, and more. Your all-in-one learning experience includes: Files for building sk

  12. Step by Step Microsoft Office Visio 2003

    CERN Document Server

    Lemke, Judy

    2004-01-01

    Experience learning made easy-and quickly teach yourself how to use Visio 2003, the Microsoft Office business and technical diagramming program. With STEP BY STEP, you can take just the lessons you need, or work from cover to cover. Either way, you drive the instruction-building and practicing the skills you need, just when you need them! Produce computer network diagrams, organization charts, floor plans, and more; use templates to create new diagrams and drawings quickly; add text, color, and 1-D and 2-D shapes; insert graphics and pictures, such as company logos; connect shapes to create a basic f

  13. Three-step management of pneumothorax: time for a re-think on initial management†

    Science.gov (United States)

    Kaneda, Hiroyuki; Nakano, Takahito; Taniguchi, Yohei; Saito, Tomohito; Konobu, Toshifumi; Saito, Yukihito

    2013-01-01

    Pneumothorax is a common disease worldwide, but surprisingly, its initial management remains controversial. There are some published guidelines for the management of spontaneous pneumothorax. However, they differ in some respects, particularly in initial management. In published trials, the objective of treatment has not been clarified and it is not possible to compare the treatment strategies between different trials because of inappropriate evaluations of the air leak. Therefore, there is a need to outline the optimal management strategy for pneumothorax. In this report, we systematically review published randomized controlled trials of the different treatments of primary spontaneous pneumothorax, point out controversial issues and finally propose a three-step strategy for the management of pneumothorax. There are three important characteristics of pneumothorax: potentially lethal respiratory dysfunction; air leak, which is the obvious cause of the disease; frequent recurrence. These three characteristics correspond to the three steps. The central idea of the strategy is that the lung should not be expanded rapidly, unless absolutely necessary. The primary objective of both simple aspiration and chest drainage should be the recovery of acute respiratory dysfunction or the avoidance of respiratory dysfunction and subsequent complications. We believe that this management strategy is simple and clinically relevant and not dependent on the classification of pneumothorax. PMID:23117233

  14. Crowdsourcing step-by-step information extraction to enhance existing how-to videos

    OpenAIRE

    Nguyen, Phu Tran; Weir, Sarah; Guo, Philip J.; Miller, Robert C.; Gajos, Krzysztof Z.; Kim, Ju Ho

    2014-01-01

    Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step structure of such videos. This research aims to improve the learning experience of existing how-to videos with step-by-step annotations. We first performed a formative study to verify that annotations are actually useful to learners. We created ToolScape, an interac...

  15. Turbulence, dynamic similarity and scale effects in high-velocity free-surface flows above a stepped chute

    Science.gov (United States)

    Felder, Stefan; Chanson, Hubert

    2009-07-01

    In high-velocity free-surface flows, air entrainment is common through the interface, and intense interactions take place between turbulent structures and entrained bubbles. Two-phase flow properties were measured herein in high-velocity open channel flows above a stepped chute. Detailed turbulence measurements were conducted in a large-size facility, and a comparative analysis was applied to test the validity of the Froude and Reynolds similarities. The results showed consistently that the Froude similitude was not satisfied using a 2:1 geometric scaling ratio. Fewer entrained bubbles and comparatively larger bubble sizes were observed at the smaller Reynolds numbers, as well as lower turbulence levels and larger turbulent length and time scales. The results implied that small-size models underestimate the rate of energy dissipation and the aeration efficiency of prototype stepped spillways for similar flow conditions. Similarly, a Reynolds similitude was tested. The results also showed some significant scale effects. However, a number of self-similar relationships remained invariant under changes of scale and confirmed the analysis of Chanson and Carosi (Exp Fluids 42:385-401, 2007). The finding is significant because self-similarity may provide a picture general enough to be used to characterise the air-water flow field in large prototype channels.
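
    A quick way to see why a 2:1 Froude-scaled model cannot also satisfy Reynolds similitude is the standard similitude relation computed below; these are textbook scaling laws assumed for illustration, not values taken from the study itself.

```python
# Back-of-the-envelope check (assumed standard similitude relations): with length
# ratio L_r and the same fluid in model and prototype, Froude similitude fixes the
# velocity ratio V_r = sqrt(L_r), so the Reynolds-number ratio is Re_r = V_r * L_r.
L_r = 2.0                      # prototype-to-model length ratio (2:1)
V_r = L_r ** 0.5               # velocity ratio imposed by Froude similitude
Re_r = V_r * L_r               # resulting Reynolds-number ratio (= L_r ** 1.5)
print(f"velocity ratio  V_r  = {V_r:.3f}")
print(f"Reynolds ratio  Re_r = {Re_r:.3f}  (model Re is ~{1/Re_r:.0%} of prototype)")
```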

  16. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    Science.gov (United States)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Wavefield propagation calculations can become unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method is called the symplectic Fourier finite-difference (symplectic FFD) method, and offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV-wave problem in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.
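
    The sketch below illustrates the general "symplectic in time, Fourier in space" idea with a leapfrog (Störmer-Verlet) step and an FFT-based second derivative for the 1-D acoustic wave equation; it is a simplified stand-in, not the paper's symplectic FFD scheme or its anisotropic extension.

```python
# Illustrative sketch only: a symplectic (leapfrog / Stormer-Verlet) time step combined
# with a Fourier-based spatial derivative for the 1-D acoustic wave equation
# u_tt = c^2 u_xx.
import numpy as np

nx, L, c = 256, 1000.0, 2000.0                       # grid points, domain length (m), velocity (m/s)
x = np.linspace(0.0, L, nx, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)       # angular wavenumbers

u = np.exp(-((x - L / 2) ** 2) / (2 * 20.0 ** 2))    # initial pressure pulse
v = np.zeros_like(u)                                 # initial time derivative du/dt

def laplacian(f):
    """Spectral second derivative via FFT."""
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(f)))

dt = 0.4 * (L / nx) / c                              # conservative time step
for _ in range(500):                                 # kick-drift-kick leapfrog (symplectic)
    v += 0.5 * dt * c ** 2 * laplacian(u)
    u += dt * v
    v += 0.5 * dt * c ** 2 * laplacian(u)

print("max |u| after propagation:", float(np.abs(u).max()))
```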

  17. Real-time vibration compensation for large telescopes

    Science.gov (United States)

    Böhm, M.; Pott, J.-U.; Sawodny, O.; Herbst, T.; Kürster, M.

    2014-08-01

    We compare different strategies for minimizing the effects of telescope vibrations to the differential piston (optical pathway difference) for the Near-InfraRed/Visible Adaptive Camera and INterferometer for Astronomy (LINC-NIRVANA) at the Large Binocular Telescope (LBT) using an accelerometer feedforward compensation approach. We summarize, why this technology is important for LINC-NIRVANA, and also for future telescopes and already existing instruments. The main objective is outlining a solution for the estimation problem in general and its specifics at the LBT. Emphasis is put on realistic evaluation of the used algorithms in the laboratory, such that predictions for the expected performance at the LBT can be made. Model-based estimation and broad-band filtering techniques can be used to solve the estimation task, and the differences are discussed. Simulation results and measurements are shown to motivate our choice of the estimation algorithm for LINC-NIRVANA. The laboratory setup is aimed at imitating the vibration behaviour at the LBT in general, and the M2 as main contributor in particular. For our measurements, we introduce a disturbance time series which has a frequency spectrum comparable to what can be measured at the LBT on a typical night. The controllers' ability to suppress vibrations in the critical frequency range of 8-60 Hz is demonstrated. The experimental results are promising, indicating the ability to suppress differential piston induced by telescope vibrations by a factor of about 5 (rms), which is significantly better than any currently commissioned system.

  18. ABOUT MODELING COMPLEX ASSEMBLIES IN SOLIDWORKS – LARGE AXIAL BEARING

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2017-12-01

    Full Text Available This paper presents the modeling strategy used in SOLIDWORKS for modeling special items such as a large axial bearing, and the steps to be taken in order to obtain a better design. The paper presents the features that are used for modeling the parts, and then the steps that must be taken in order to obtain the 3D model of a large axial bearing used in bucket-wheel equipment for charcoal moving.

  19. Xylose Isomerization with Zeolites in a Two-Step Alcohol–Water Process

    DEFF Research Database (Denmark)

    Paniagua, Marta; Shunmugavel, Saravanamurugan; Melián Rodriguez, Mayra

    2015-01-01

    Isomerization of xylose to xylulose was efficiently catalyzed by large-pore zeolites in a two-step methanol–water process that enhanced the product yield significantly. The reaction pathway involves xylose isomerization to xylulose, which, in part, subsequently reacts with methanol to form methyl...

  20. Nanopatterning of magnetic disks by single-step Ar+ Ion projection

    NARCIS (Netherlands)

    Dietzel, A.H.; Berger, R.; Loeschner, H.; Platzgummer, E.; Stengl, G.; Bruenger, W.H.; Letzkus, F.

    2003-01-01

    Large-area Ar+ projection has been used to generate planar magnetic nanostructures on a 1″-format hard disk in a single step (see Figure). The recording pattern was transferred to a Co/Pt multilayer without resist processes or any other contact to the delicate media surface. It is conceivable that

  1. Developing Large Web Applications

    CERN Document Server

    Loudon, Kyle

    2010-01-01

    How do you create a mission-critical site that provides exceptional performance while remaining flexible, adaptable, and reliable 24/7? Written by the manager of a UI group at Yahoo!, Developing Large Web Applications offers practical steps for building rock-solid applications that remain effective even as you add features, functions, and users. You'll learn how to develop large web applications with the extreme precision required for other types of software. Avoid common coding and maintenance headaches as small websites add more pages, more code, and more programmers; get comprehensive soluti

  2. Three-year randomized controlled clinical study of a one step universal adhesive and a two-step self-etch adhesive in Class II resin composite restorations

    DEFF Research Database (Denmark)

    van Dijken, Jan WV; Pallesen, Ulla

    2017-01-01

    Purpose: To evaluate in a randomized clinical evaluation the 3-year clinical durability of a one-step universal adhesive bonding system and compare it intraindividually with a 2-step self-etch adhesive in Class II restorations. Materials and Methods: Each of 57 participants (mean age 58.3 yr......) received at least two, as similar as possible, extended Class II restorations. The cavities in each of the 60 individual pairs of cavities were randomly distributed to the 1-step universal adhesive (All Bond Universal: AU) and the control 2-step self-etch adhesive (Optibond XTR: OX). A low shrinkage resin......) success rates (p>0.05). Annual failure rates were 1.8% and 2.6%, respectively. The main reason for failure was resin composite fracture. Conclusion: Class II resin composite restorations placed with a one-step universal adhesive showed good short-term effectiveness....

  3. Event processing time prediction at the CMS experiment of the Large Hadron Collider

    International Nuclear Information System (INIS)

    Cury, Samir; Gutsche, Oliver; Kcira, Dorian

    2014-01-01

    The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that computing systems of the CMS experiment performs, the reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies according to event complexity. Measurements were done in order to determine this correlation quantitatively, creating means to predict it based on the data-taking conditions of the input samples. Currently the data processing system splits tasks in groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates on processing time to more efficiently split the workflow into jobs. By considering the CPU time needed for each job the spread of the job-length distribution in a workflow is reduced.
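
    The splitting idea described above can be sketched as follows: group events so that each job carries roughly equal predicted CPU time instead of an equal number of collisions. The event list, the pileup-based time model, and the 600-second target below are invented for illustration and are not CMS's actual prediction model.

```python
# Hedged sketch: split a workflow into jobs of roughly equal *predicted* CPU time
# rather than an equal number of collisions.
import random

random.seed(1)
events = [{"id": i, "pileup": random.randint(10, 60)} for i in range(1000)]

def predicted_cpu_seconds(event):
    # Assumed toy model: reconstruction time grows with event complexity (pileup).
    return 2.0 + 0.15 * event["pileup"] ** 1.3

def split_by_time(events, target_seconds=600.0):
    jobs, current, acc = [], [], 0.0
    for ev in events:
        t = predicted_cpu_seconds(ev)
        if current and acc + t > target_seconds:
            jobs.append(current)
            current, acc = [], 0.0
        current.append(ev)
        acc += t
    if current:
        jobs.append(current)
    return jobs

jobs = split_by_time(events)
lengths = [sum(predicted_cpu_seconds(e) for e in j) for j in jobs]
print(f"{len(jobs)} jobs, predicted length spread: {min(lengths):.0f}-{max(lengths):.0f} s")
```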

  4. A two-step real-time PCR assay for quantitation and genotyping of human parvovirus 4.

    Science.gov (United States)

    Väisänen, E; Lahtinen, A; Eis-Hübinger, A M; Lappalainen, M; Hedman, K; Söderlund-Venermo, M

    2014-01-01

    Human parvovirus 4 (PARV4) of the family Parvoviridae was discovered in a plasma sample of a patient with an undiagnosed acute infection in 2005. Currently, three PARV4 genotypes have been identified, however, with an unknown clinical significance. Interestingly, these genotypes seem to differ in epidemiology. In Northern Europe, USA and Asia, genotypes 1 and 2 have been found to occur mainly in persons with a history of injecting drug use or other parenteral exposure. In contrast, genotype 3 appears to be endemic in sub-Saharan Africa, where it infects children and adults without such risk behaviour. In this study, a novel straightforward and cost-efficient molecular assay for both quantitation and genotyping of PARV4 DNA was developed. The two-step method first applies a single-probe pan-PARV4 qPCR for screening and quantitation of this relatively rare virus, and subsequently, only the positive samples undergo a real-time PCR-based multi-probe genotyping. The new qPCR-GT method is highly sensitive and specific regardless of the genotype, and thus being suitable for studying the clinical impact and occurrence of the different PARV4 genotypes. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is constructed by using an automatic coupling algorithm. It can handle arbitrary water depths and different underwater terrain. The coastline, a characteristic feature of coastal terrain, is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.
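
    A minimal sketch of the depth-driven grid simplification mentioned above: cells are classified from a depth map, land cells are discarded, and the remaining cells would be assigned to the deep- or shallow-water model. The synthetic depth field and the threshold are assumptions, not the paper's coupling criterion.

```python
# Classify grid cells from a (synthetic) depth map: drop land cells and split the
# rest between the deep-water and shallow-water models. Thresholds are illustrative.
import numpy as np

ny, nx = 64, 64
x = np.linspace(0, 1, nx)
depth = np.tile(4.0 * x - 0.5, (ny, 1))      # shoreline on the left, deep water on the right

DEEP_THRESHOLD = 2.0                         # metres; assumed split between the two models
land = depth <= 0.0                          # cells removed from the simulation
shallow = (depth > 0.0) & (depth < DEEP_THRESHOLD)
deep = depth >= DEEP_THRESHOLD

print(f"total cells   : {depth.size}")
print(f"removed (land): {land.sum()}")
print(f"shallow model : {shallow.sum()}")
print(f"deep model    : {deep.sum()}")
```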

  6. Direct observation of the myosin Va recovery stroke that contributes to unidirectional stepping along actin.

    Directory of Open Access Journals (Sweden)

    Katsuyuki Shiroguchi

    2011-04-01

    Full Text Available Myosins are ATP-driven linear molecular motors that work as cellular force generators, transporters, and force sensors. These functions are driven by large-scale nucleotide-dependent conformational changes, termed "strokes"; the "power stroke" is the force-generating swinging of the myosin light chain-binding "neck" domain relative to the motor domain "head" while bound to actin; the "recovery stroke" is the necessary initial motion that primes, or "cocks," myosin while detached from actin. Myosin Va is a processive dimer that steps unidirectionally along actin following a "hand over hand" mechanism in which the trailing head detaches and steps forward ∼72 nm. Despite large rotational Brownian motion of the detached head about a free joint adjoining the two necks, unidirectional stepping is achieved, in part by the power stroke of the attached head that moves the joint forward. However, the power stroke alone cannot fully account for preferential forward site binding since the orientation and angle stability of the detached head, which is determined by the properties of the recovery stroke, dictate actin binding site accessibility. Here, we directly observe the recovery stroke dynamics and fluctuations of myosin Va using a novel, transient caged ATP-controlling system that maintains constant ATP levels through stepwise UV-pulse sequences of varying intensity. We immobilized the neck of monomeric myosin Va on a surface and observed real time motions of bead(s attached site-specifically to the head. ATP induces a transient swing of the neck to the post-recovery stroke conformation, where it remains for ∼40 s, until ATP hydrolysis products are released. Angle distributions indicate that the post-recovery stroke conformation is stabilized by ≥ 5 k(BT of energy. The high kinetic and energetic stability of the post-recovery stroke conformation favors preferential binding of the detached head to a forward site 72 nm away. Thus, the recovery

  7. Stability analysis of the Backward Euler time discretization for the pin-resolved transport transient reactor calculation

    International Nuclear Information System (INIS)

    Zhu, Ang; Xu, Yunlin; Downar, Thomas

    2016-01-01

    Three-dimensional, full core transport modeling with pin-resolved detail for reactor dynamic simulation is important for some multi-physics reactor applications. However, it can be computationally intensive due to the difficulty in maintaining accuracy while minimizing the number of time steps. A recently proposed Transient Multi-Level (TML) methodology overcomes this difficulty by using multi-level transient solvers to capture the physical phenomena in different time domains and thus maximize the numerical accuracy and computational efficiency. One major problem with the TML method is the negative flux/precursor number density generated using large time steps for the MOC solver, which is due to the Backward Euler discretization scheme. In this paper, the stability issue of the Backward Euler discretization is first investigated using the Point Kinetics Equations (PKEs), and the predicted maximum allowed time step for the SPERT test 60 case is shown to be less than 10 ms. To overcome this difficulty, linear and exponential transformations are investigated using the PKEs. The linear transformation is shown to increase the maximum time step by a factor of 2, and the exponential transformation is shown to increase the maximum time step by a factor of 5, as well as provide unconditional stability above a specified threshold. The two sets of transformations are then applied to the TML scheme in the MPACT code, and the numerical results presented show good agreement between the PKEs model and the MPACT whole core transport solution for the standard, linear-transformed, and exponential-transformed maximum time steps in three different cases, including a pin cell case, a 3D SPERT assembly case and a row of assemblies (“striped assembly case”) from the SPERT model. Finally, the successful whole-transient execution of the striped assembly case shows the ability of the exponential transformation method to use 10 ms and 20 ms time steps, both of which failed using the standard method.
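
    The negative-flux pathology and the exponential-transformation remedy can be seen on a single growing ODE, used here as a stand-in for the point kinetics behaviour during a rapidly growing transient; the numbers below are illustrative and this is not the MPACT/TML implementation.

```python
# Minimal sketch of why Backward Euler can produce negative "flux" for a rapidly
# growing transient, and how an exponential transformation fixes it.
# Model: dn/dt = a*n with a > 0, so n(t) = n0 * exp(a*t).
import math

a, n0, dt, nsteps = 50.0, 1.0, 0.05, 3     # a*dt = 2.5 > 1 -> standard BE fails

# Standard Backward Euler: n_{k+1} = n_k / (1 - a*dt); the denominator is negative
# here, so the solution flips sign every step (the "negative flux" pathology).
n_std = n0
for _ in range(nsteps):
    n_std = n_std / (1.0 - a * dt)

# Exponential transformation: write n = exp(omega*t) * m and advance m with Backward
# Euler, dm/dt = (a - omega)*m. With omega estimated from the growth of the previous
# step (here simply omega = a), m stays bounded and positivity is preserved.
omega = a
m = n0
for _ in range(nsteps):
    m = m / (1.0 - (a - omega) * dt)
n_exp = m * math.exp(omega * dt * nsteps)

print(f"exact          : {n0 * math.exp(a * dt * nsteps):.3e}")
print(f"standard BE    : {n_std:.3e}   (sign-flipping / negative values)")
print(f"exponential BE : {n_exp:.3e}")
```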

  8. Wealth Transfers Among Large Customers from Implementing Real-Time Retail Electricity Pricing

    OpenAIRE

    Borenstein, Severin

    2007-01-01

    Adoption of real-time electricity pricing — retail prices that vary hourly to reflect changing wholesale prices — removes existing cross-subsidies to those customers that consume disproportionately more when wholesale prices are highest. If their losses are substantial, these customers are likely to oppose RTP initiatives unless there is a supplemental program to offset their loss. Using data on a sample of 1142 large industrial and commercial customers in northern California, I show that RTP...

  9. Photon Production through Multi-step Processes Important in Nuclear Fluorescence Experiments

    International Nuclear Information System (INIS)

    Hagmann, C; Pruet, J

    2006-01-01

    The authors present calculations describing the production of photons through multi-step processes occurring when a beam of gamma rays interacts with a macroscopic material. These processes involve the creation of energetic electrons through Compton scattering, photo-absorption and pair production, the subsequent scattering of these electrons, and the creation of energetic photons occurring as these electrons are slowed through Bremsstrahlung emission. Unlike single Compton collisions, during which an energetic photon that is scattered through a large angle loses most of its energy, these multi-step processes result in a sizable flux of energetic photons traveling at large angles relative to an incident photon beam. These multi-step processes are also a key background in experiments that measure nuclear resonance fluorescence by shining photons on a thin foil and observing the spectrum of back-scattered photons. Effective cross sections describing the production of backscattered photons are presented in a tabular form that allows simple estimates of backgrounds expected in a variety of experiments. Incident photons with energies between 0.5 MeV and 8 MeV are considered. These calculations of effective cross sections may be useful for those designing NRF experiments or systems that detect specific isotopes in well-shielded environments through observation of resonance fluorescence

  10. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.
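
    A hedged sketch of the general partition-stratify-sample workflow the manual describes; the grid size, the stratum rule, and the proportional allocation of 30 plots are invented for illustration and are not the SLL manual's actual parameters.

```python
# Toy version of the three steps: partition the landscape into plots, assign plots
# to strata, and randomly select plots to survey within each stratum.
import random

random.seed(0)

# 1. Partition the landscape into plots (a toy 20 x 20 grid of cells).
plots = [(r, c) for r in range(20) for c in range(20)]

# 2. Assign each plot to a stratum (here: a made-up "habitat" rule by latitude band).
def stratum(plot):
    row, _ = plot
    return "high" if row < 7 else "medium" if row < 14 else "low"

strata = {}
for p in plots:
    strata.setdefault(stratum(p), []).append(p)

# 3. Select plots to survey in each stratum (proportional allocation of 30 plots).
total = sum(len(v) for v in strata.values())
sample = {
    name: random.sample(members, max(1, round(30 * len(members) / total)))
    for name, members in strata.items()
}
for name, chosen in sample.items():
    print(name, len(chosen), "plots, e.g.", chosen[:3])
```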

  11. On line surveillance of large systems: applications to nuclear and chemical plant

    International Nuclear Information System (INIS)

    Zwingelstein, G.

    1978-01-01

    An on line surveillance method for large scale and distributed parameter systems is achieved by comparing in real time the internal physical parameter values to the reference values. It is shown that the following steps are necessary: modeling, model validation using dynamic testing and on line estimation of parameters. For large scale systems where only a few outputs are measurable, an estimation algorithm was developed, selecting the measurable output giving the minimum variance of the physical parameters. This estimation scheme uses a quasilinearization technique associated with the sensitivity equations and recursive least squares techniques. For large scale systems of order greater than 100, two versions of the estimation scheme are proposed to decrease the computation time. An application to a nuclear reactor core (state variable model of order 29) is presented and uses real data. For distributed systems the estimation scheme was developed with measurements either at fixed times or at fixed locations. The estimation algorithm selects the set of measurements that gives the minimum variance of the estimates. An application to a liquid-liquid extraction column, modeled by a set of four coupled partial differential equations, demonstrates the efficiency of the method.
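
    The recursive least squares idea used in the estimation scheme can be sketched on a simple first-order process; the model structure, noise levels, and data below are assumptions, not the reactor-core or extraction-column models of the paper.

```python
# Illustrative recursive least squares (RLS) estimation of physical parameters from
# measurable input/output data.
import numpy as np

rng = np.random.default_rng(42)
true_theta = np.array([0.85, 0.3])           # e.g. [pole, gain] of y_k = a*y_{k-1} + b*u_{k-1}

# Simulate measurable input/output data.
N = 200
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = true_theta @ np.array([y[k - 1], u[k - 1]]) + 0.02 * rng.normal()

# Recursive least squares update.
theta = np.zeros(2)
P = 1e3 * np.eye(2)                          # large initial covariance
for k in range(1, N):
    phi = np.array([y[k - 1], u[k - 1]])     # regressor
    K = P @ phi / (1.0 + phi @ P @ phi)      # gain
    theta = theta + K * (y[k] - phi @ theta) # parameter update
    P = P - np.outer(K, phi @ P)             # covariance update

print("true parameters     :", true_theta)
print("estimated parameters:", np.round(theta, 3))
```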

  12. Development of the step complexity measure for emergency operating procedures using entropy concepts

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea; Ha, Jaejoo

    2001-01-01

    For a nuclear power plant (NPP), symptom-based emergency operating procedures (EOPs) have been adopted to enhance the safety of NPPs through reduction of operators' workload under emergency conditions. Symptom-based EOPs, however, could place a workload on operators because they have to not only identify related symptoms, but also understand the context of steps that should be carried out. Therefore, many qualitative checklists are suggested to ensure the appropriateness of steps included in EOPs. However, since these qualitative evaluations have some drawbacks, a quantitative measure that can roughly estimate the complexity of EOP steps is imperative to compensate for them. In this paper, a method to evaluate the complexity of an EOP step is developed based on entropy measures that have been used in software engineering. Based on these, a step complexity (SC) measure that can evaluate SC from various viewpoints (such as the amount of information and operators' actions included in each EOP step, and the logic structure of each EOP step) was developed. To verify the suitability of the SC measure, estimated SC values are compared with subjective task load scores obtained from the NASA-TLX (task load index) method and step performance time obtained from a full scope simulator. From these comparisons, it was observed that estimated SC values generally agree with the NASA-TLX scores and step performance time data. Thus, it could be concluded that the developed SC measure could be considered for evaluating the SC of an EOP step.
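
    A toy illustration of the entropy idea behind the SC measure: the spread of distinct elements within a step (actions, information items, logic operators) is summarized with Shannon entropy. The element lists and the equal weighting are invented; this is not the published SC formula or its calibration.

```python
# Shannon entropy of the distinct elements of a (made-up) EOP step, combined into a
# single illustrative complexity score.
import math
from collections import Counter

def shannon_entropy(items):
    counts = Counter(items)
    n = len(items)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

step = {
    "actions":     ["check", "check", "open", "verify", "open"],
    "information": ["SG level", "RCS pressure", "SG level", "containment pressure"],
    "logic":       ["IF", "AND", "IF"],
}

# Combine the per-aspect entropies (equal weights assumed for illustration only).
sc_value = sum(shannon_entropy(v) for v in step.values()) / len(step)
for aspect, items in step.items():
    print(f"{aspect:12s} entropy = {shannon_entropy(items):.3f} bits")
print(f"illustrative SC value = {sc_value:.3f}")
```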

  13. Two-Step Incision for Periarterial Sympathectomy of the Hand

    Directory of Open Access Journals (Sweden)

    Seung Bae Jeon

    2015-11-01

    Full Text Available Background: Surgical scars on the palmar surface of the hand may lead to functional and also aesthetic and psychological consequences. The objective of this study was to introduce a new incision technique for periarterial sympathectomy of the hand and to compare the results of the new two-step incision technique with those of a Koman incision by using an objective questionnaire. Methods: A total of 40 patients (17 men and 23 women) with intractable Raynaud's disease or syndrome underwent surgery in our hospital, conducted by a single surgeon, between January 2008 and January 2013. Patients who had undergone extended sympathectomy or vessel graft were excluded. Clinical evaluation of postoperative scars was performed in both groups one year after surgery using the patient and observer scar assessment scale (POSAS) and the Wake Forest University rating scale. Results: The total patient score was 8.59 (range, 6-15) in the two-step incision group and 9.62 (range, 7-18) in the Koman incision group. A significant difference was found between the groups in the total PS score (P-value=0.034) but not in the total observer score. Our analysis found no significant difference in preoperative and postoperative Wake Forest University rating scale scores between the two-step and Koman incision groups. The time required for recovery prior to returning to work after surgery was shorter in the two-step incision group, with a mean of 29.48 days in the two-step incision group and 34.15 days in the Koman incision group (P=0.03). Conclusions: Compared to the Koman incision, the new two-step incision technique provides better aesthetic results, similar symptom improvement, and a reduction in the recovery time required before returning to work. Furthermore, this incision allows the surgeon to access a wide surgical field and a sufficient exposure of anatomical structures.

  14. Ultrasonic transesterification of Jatropha curcas L. oil to biodiesel by a two-step process

    International Nuclear Information System (INIS)

    Deng Xin; Fang Zhen; Liu Yunhu

    2010-01-01

    Transesterification of high free fatty acid content Jatropha oil with methanol to biodiesel, catalyzed directly by NaOH or by highly concentrated H2SO4, or by a two-step process, was studied in an ultrasonic reactor at 60 °C. If NaOH was used as catalyst, the biodiesel yield was only 47.2%, with a saponification problem. With H2SO4 as catalyst, the biodiesel yield was increased to 92.8%. However, a longer reaction time (4 h) was needed and the biodiesel was not stable. A two-step, acid-esterification and base-transesterification process was further used for biodiesel production. It was found that after the first-step pretreatment with H2SO4 for 1 h, the acid value of the Jatropha oil was reduced from 10.45 to 1.2 mg KOH/g, and subsequently, NaOH was used for the second-step transesterification. Stable and clear yellowish biodiesel was obtained with 96.4% yield after reaction for 0.5 h. The total production time was only 1.5 h, which is just half of that previously reported. The two-step process with ultrasonic radiation is effective and time-saving for biodiesel production from Jatropha oil.

  15. Comparison study on mechanical properties single step and three step artificial aging on duralium

    Science.gov (United States)

    Tsamroh, Dewi Izzatus; Puspitasari, Poppy; Andoko, Sasongko, M. Ilman N.; Yazirin, Cepi

    2017-09-01

    Duralium is a kind of non-ferrous alloy that is widely used in industry because of its properties, such as low weight, high ductility, and corrosion resistance. This study aimed to determine the mechanical properties of duralium subjected to single-step and three-step artificial aging processes. The mechanical properties discussed in this study are toughness, tensile strength, and the microstructure of duralium. The toughness value after single-step artificial aging was 0.082 joule/mm², and the toughness value after three-step artificial aging was 0.0721 joule/mm². The duralium tensile strength after single-step artificial aging was 32.36 kgf/mm², and after three-step artificial aging it was 32.70 kgf/mm². The microstructure photo of duralium after single-step artificial aging showed that the precipitate (θ) was not spread evenly, indicated by black spots, which increases the toughness of the material. The microstructure photo of duralium treated by three-step artificial aging showed more precipitate (θ) spread evenly compared with duralium treated by single-step artificial aging.

  16. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic type state estimators for energy management in electric power systems. Various dynamic type estimators have been developed, but have never been implemented. This is primarily because of dimensionality problems posed by the conjunction of an extended Kalman filter with a large scale power system. This paper precisely focuses on how to circumvent the high dimensionality, especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, bound to the specifics of the high voltage electric transmission systems
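
    A bare-bones linear Kalman filter showing the prediction and filtering steps referred to above; the two-state "system", the noise levels, and the measurement model are invented, and the hierarchical decomposition-aggregation scheme itself is not reproduced here.

```python
# Minimal prediction/filtering loop of a linear Kalman filter, as a stand-in for the
# dynamic state estimator discussed above.
import numpy as np

rng = np.random.default_rng(0)
F = np.eye(2)                               # state transition (quasi-static states)
H = np.eye(2)                               # measurement model
Q = 1e-4 * np.eye(2)                        # process noise (slow load drift)
R = 1e-2 * np.eye(2)                        # measurement noise

x_true = np.array([1.0, 0.98])              # "true" bus voltage magnitudes
x_est = np.array([1.0, 1.0])
P = np.eye(2)

for _ in range(50):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)

    # Prediction step (this is where a load forecast would enter the paper's scheme).
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + Q

    # Filtering step (the computationally expensive part for large systems).
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_est = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred

print("true state     :", np.round(x_true, 4))
print("estimated state:", np.round(x_est, 4))
```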

  17. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2015-10-01

    Full Text Available Vehicle routing optimization (VRO designs the best routes to reduce travel cost, energy consumption, and carbon emission. Due to non-deterministic polynomial-time hard (NP-hard complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW. Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTWs in a short time. This novel approach will contribute to spatial decision support community by developing an effective vehicle routing optimization method for large transportation applications in both public and private sectors.
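
    The space-time neighbor idea can be sketched with a combined distance that penalizes both spatial separation and time-window incompatibility when shortlisting candidates for local-search moves; the weighting and the penalty form are assumptions, not the paper's spatial-temporal Voronoi distance.

```python
# Toy "space-time" distance: spatial distance plus a penalty for non-overlapping
# time windows, used to rank candidate neighbors of a customer.
import math

customers = {
    "A": {"xy": (2.0, 3.0), "window": (8.0, 10.0)},   # (open, close) in hours
    "B": {"xy": (2.5, 3.2), "window": (15.0, 17.0)},
    "C": {"xy": (6.0, 1.0), "window": (8.5, 10.5)},
    "D": {"xy": (2.2, 2.8), "window": (9.0, 11.0)},
}

def space_time_distance(a, b, time_weight=1.5):
    (x1, y1), (x2, y2) = a["xy"], b["xy"]
    spatial = math.hypot(x1 - x2, y1 - y2)
    # Gap between the two time windows (zero if they overlap).
    gap = max(0.0, max(a["window"][0], b["window"][0]) - min(a["window"][1], b["window"][1]))
    return spatial + time_weight * gap

ref = customers["A"]
neighbors = sorted(
    (name for name in customers if name != "A"),
    key=lambda n: space_time_distance(ref, customers[n]),
)
print("neighbors of A by space-time distance:", neighbors)
```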

  18. One-step deterministic multipartite entanglement purification with linear optics

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Yu-Bo [Department of Physics, Tsinghua University, Beijing 100084 (China); Long, Gui Lu, E-mail: gllong@tsinghua.edu.cn [Department of Physics, Tsinghua University, Beijing 100084 (China); Center for Atomic and Molecular NanoSciences, Tsinghua University, Beijing 100084 (China); Key Laboratory for Quantum Information and Measurements, Beijing 100084 (China); Deng, Fu-Guo [Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875 (China)

    2012-01-09

    We present a one-step deterministic multipartite entanglement purification scheme for an N-photon system in a Greenberger–Horne–Zeilinger state with linear optical elements. The parties in quantum communication can in principle obtain a maximally entangled state from each N-photon system with a success probability of 100%. That is, it does not consume the less-entangled photon systems largely, which is far different from other multipartite entanglement purification schemes. This feature maybe make this scheme more feasible in practical applications. -- Highlights: ► We proposed a deterministic entanglement purification scheme for GHZ states. ► The scheme uses only linear optical elements and has a success probability of 100%. ► The scheme gives a purified GHZ state in just one-step.

  19. The role of particle jamming on the formation and stability of step-pool morphology: insight from a reduced-complexity model

    Science.gov (United States)

    Saletti, M.; Molnar, P.; Hassan, M. A.

    2017-12-01

    Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and entrainment, transport and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with two grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, which are mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned before are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems having a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and
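
    A highly simplified 1-D sketch of the jamming ingredient (not the CAST2 model): grains hop downstream with an entrainment probability that depends on flow stage and grain size, and coarse grains flanked by other coarse grains are made harder to entrain, so they persist as step-forming particles. All rules and rates are invented for illustration.

```python
# Toy 1-D lattice of grains: flow-dependent entrainment, short downstream hops, and
# a jamming rule that stabilizes coarse grains next to other coarse grains.
import random

random.seed(7)
N = 100
# 0 = empty, 1 = fine grain, 2 = coarse grain (sparse, as under low sediment supply)
bed = [random.choices([0, 1, 2], weights=[0.5, 0.4, 0.1])[0] for _ in range(N)]

def entrainment_probability(i, flow):
    g = bed[i]
    if g == 0:
        return 0.0
    base = 0.6 * flow if g == 1 else 0.15 * flow      # coarse grains need larger flows
    jammed = g == 2 and 2 in (bed[(i - 1) % N], bed[(i + 1) % N])
    return base * (0.2 if jammed else 1.0)            # jamming stabilizes the grain

def timestep(flow):
    for i in random.sample(range(N), N):
        if bed[i] and random.random() < entrainment_probability(i, flow):
            j = (i + random.randint(1, 3)) % N        # short downstream hop
            if bed[j] == 0:
                bed[j], bed[i] = bed[i], 0

for t in range(200):
    flood = 1.0 if t % 50 == 0 else 0.3               # occasional large floods
    timestep(flood)

steps = sum(1 for i in range(N) if bed[i] == 2 and bed[(i + 1) % N] == 2)
print("coarse-grain pairs acting as step nuclei:", steps)
```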

  20. Typical Periods for Two-Stage Synthesis by Time-Series Aggregation with Bounded Error in Objective Function

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Björn; Söhler, Theo; Hennen, Maike; Bardow, André, E-mail: andre.bardow@ltt.rwth-aachen.de [Institute of Technical Thermodynamics, RWTH Aachen University, Aachen (Germany)

    2018-01-08

    Two-stage synthesis problems simultaneously consider here-and-now decisions (e.g., optimal investment) and wait-and-see decisions (e.g., optimal operation). The optimal synthesis of energy systems reveals such a two-stage character. The synthesis of energy systems involves multiple large time series such as energy demands and energy prices. Since problem size increases with the size of the time series, synthesis of energy systems leads to complex optimization problems. To reduce the problem size without losing solution quality, we propose a method for time-series aggregation to identify typical periods. Typical periods retain the chronology of time steps, which enables modeling of energy systems, e.g., with storage units or start-up cost. The aim of the proposed method is to obtain few typical periods with few time steps per period, while accurately representing the objective function of the full time series, e.g., cost. Thus, we determine the error of time-series aggregation as the cost difference between operating the optimal design for the aggregated time series and for the full time series. Thereby, we rigorously bound the maximum performance loss of the optimal energy system design. In an initial step, the proposed method identifies the best length of typical periods by autocorrelation analysis. Subsequently, an adaptive procedure determines aggregated typical periods employing the clustering algorithm k-medoids, which groups similar periods into clusters and selects one representative period per cluster. Moreover, the number of time steps per period is aggregated by a novel clustering algorithm maintaining chronology of the time steps in the periods. The method is iteratively repeated until the error falls below a threshold value. A case study based on a real-world synthesis problem of an energy system shows that time-series aggregation from 8,760 time steps to 2 typical periods with 2 time steps each results in an error smaller than the optimality gap of
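
    The k-medoids step of the aggregation can be sketched as below on synthetic daily demand profiles: similar periods are grouped and one representative (medoid) day is kept per cluster. The data, the number of clusters, and the stopping rule are illustrative; the adaptive error-bounding procedure and the intra-period time-step aggregation are not reproduced.

```python
# Small k-medoids clustering of daily profiles to select "typical periods".
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24)
# 60 synthetic days: a "winter-like" and a "summer-like" shape plus noise.
winter = 50 + 20 * np.sin((hours - 7) / 24 * 2 * np.pi)
summer = 35 + 10 * np.sin((hours - 9) / 24 * 2 * np.pi)
days = np.vstack([winter + rng.normal(0, 2, 24) for _ in range(30)] +
                 [summer + rng.normal(0, 2, 24) for _ in range(30)])

def k_medoids(X, k=2, iters=20):
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    medoids = list(rng.choice(len(X), size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = []
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size == 0:            # keep the old medoid if a cluster empties
                new_medoids.append(medoids[j])
                continue
            # new medoid = member minimizing total distance within its cluster
            new_medoids.append(members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))])
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids, labels

medoids, labels = k_medoids(days, k=2)
print("representative (typical) days:", medoids)
print("cluster sizes:", np.bincount(labels))
```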