A parallel nearly implicit time-stepping scheme
Botchev, Mike A.; van der Vorst, Henk A.
2001-01-01
Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large-scale problems. One of the major difficulties here is that implicit time stepping is often hard to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they also parallelize very well. The purpose of this article is to give an overall assessment of the pa...
Positivity-preserving dual time stepping schemes for gas dynamics
Parent, Bernard
2018-05-01
A new approach to discretizing the temporal derivative of the Euler equations, usable with dual time stepping, is presented. The temporal discretization stencil is derived along the lines of the Cauchy-Kowalevski procedure, resulting in cross differences in spacetime but with some novel modifications which ensure the positivity of the discretization coefficients. It is then shown that the so-obtained spacetime cross differences result in changes to the wave speeds and can thus be incorporated within Roe or Steger-Warming schemes (with and without reconstruction-evolution) simply by altering the eigenvalues. The proposed approach has the advantage over alternatives of being positivity-preserving for the Euler equations. Further, it yields monotone solutions near discontinuities while exhibiting a truncation error in smooth regions smaller than that of the second- or third-order accurate backward difference formula (BDF) for either small or large time steps. The high resolution and positivity preservation of the proposed discretization stencils are independent of the convergence acceleration technique, which can be set to multigrid, preconditioning, Jacobian-free Newton-Krylov, block-implicit, etc. Thus, the current paper also offers the first implicit integration of the time-accurate Euler equations that is positivity-preserving in the strict sense (that is, the density and temperature are guaranteed to remain positive). This is in contrast to all previous positivity-preserving implicit methods, which only guaranteed the positivity of the density, not of the temperature or pressure. Several stringent reacting and inert test cases confirm the positivity-preserving property of the proposed method as well as its higher resolution and higher computational efficiency over other second-order and third-order implicit temporal discretization strategies.
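Dual time stepping, as used in the abstract above, converges an inner pseudo-time iteration at each physical time step so that the implicit update is satisfied at convergence. The following is a minimal sketch of that idea on a scalar model problem (the decay rate, step sizes, and iteration count are illustrative assumptions; the paper's positivity-preserving stencil is not reproduced here):

```python
# Physical problem: du/dt = -lam * u, advanced with backward Euler.
# Each physical step solves  R(u) = u - u_old + dt*lam*u = 0
# by marching the pseudo-time iteration  u <- u - dtau * R(u)  to steady state.
lam, dt, dtau = 2.0, 0.5, 0.4
u_old = 1.0

u = u_old                      # initial guess for the new physical level
for _ in range(60):            # pseudo-time (inner) iterations
    residual = u - u_old + dt * lam * u
    u -= dtau * residual

exact_step = u_old / (1.0 + lam * dt)   # backward-Euler fixed point
```

At convergence of the pseudo-time loop, the result is exactly the implicit (backward Euler) step; the convergence acceleration techniques listed in the abstract (multigrid, Newton-Krylov, etc.) are ways of speeding up this inner loop.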
Chu, Chunlei; Stoffa, Paul L.; Seif, Roustam
2009-01-01
We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method, and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
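The convolution quadrature generated by backward Euler, mentioned in the abstract above, replaces the fractional derivative by a discrete convolution whose weights are the Taylor coefficients of (1 - z)^alpha. A small sketch for the fractional relaxation problem D^alpha(u - u0) = -lam*u follows; the parameters are illustrative assumptions, and for alpha = 1 the scheme must collapse to the classical backward Euler method, which gives a built-in sanity check:

```python
def cq_backward_euler(alpha, lam, u0, dt, n_steps):
    """Solve D^alpha (u - u0) = -lam * u with backward-Euler convolution
    quadrature. Weights w_j are the Taylor coefficients of (1 - z)**alpha."""
    w = [1.0]
    for j in range(1, n_steps + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)   # w_j = (-1)^j * binom(alpha, j)
    u = [u0]
    for n in range(1, n_steps + 1):
        # dt^-alpha * sum_{j>=0} w_j (u_{n-j} - u0) = -lam * u_n, solved for u_n
        hist = sum(w[j] * (u[n - j] - u0) for j in range(1, n + 1))
        u.append((w[0] * u0 - hist) / (w[0] + lam * dt**alpha))
    return u

u_frac = cq_backward_euler(alpha=0.5, lam=1.0, u0=1.0, dt=0.05, n_steps=40)
u_classic = cq_backward_euler(alpha=1.0, lam=1.0, u0=1.0, dt=0.05, n_steps=40)
```

For alpha = 1 the weights reduce to (1, -1, 0, 0, ...), so `u_classic` reproduces the backward Euler recursion u_n = u_{n-1}/(1 + lam*dt) exactly.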
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-04-01
An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
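The constraint described above, under which neighboring patches may not step with too-disparate time steps, can be sketched as a relaxation over per-patch time-step levels (dt proportional to 2**level). The 1D patch chain and the at-most-one-level-difference rule below are illustrative assumptions, not the actual constraint or code used in the paper:

```python
def enforce_level_constraint(levels):
    """Lower time-step levels (dt proportional to 2**level) until adjacent
    patches differ by at most one level, a proxy for enforcing the non-local
    CFL condition across patch boundaries."""
    levels = list(levels)
    changed = True
    while changed:
        changed = False
        for i in range(len(levels)):
            for j in (i - 1, i + 1):            # 1D neighbors
                if 0 <= j < len(levels) and levels[i] > levels[j] + 1:
                    levels[i] = levels[j] + 1   # force smaller time steps
                    changed = True
    return levels

raw = [5, 5, 0, 4, 4]                    # locally constrained levels only
capped = enforce_level_constraint(raw)   # -> [2, 1, 0, 1, 2]
```

Note that the constraint propagates: a single finely-stepped patch lowers the levels of patches several neighbors away, which is exactly the non-local character of the CFL condition discussed in the abstract.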
Radtke, H.; Burchard, H.
2015-01-01
In this paper, an unconditionally positive and multi-element-conserving time stepping scheme for systems of non-linearly coupled ODEs is presented. These systems of ODEs are used to describe biogeochemical transformation processes in marine ecosystem models. The numerical scheme is a positive-definite modification of the Runge-Kutta method; it can have arbitrarily high order of accuracy and does not require time step adaptation. If the scheme is combined with a modified Patankar-Runge-Kutta method from Burchard et al. (2003), it also gains the ability to solve a certain class of stiff problems, though the accuracy is then restricted to second order. The performance of the new scheme is demonstrated on two test problems.
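The first-order member of the modified Patankar family referenced above (Burchard et al., 2003) illustrates how the positivity and conservation properties arise: destruction terms are weighted by c_new/c_old, turning each step into a linear solve with a nonnegative inverse. The sketch below uses a hypothetical nonlinear two-component exchange, not a rate model from the paper:

```python
import numpy as np

def mpe_step(c, dt, prod):
    """One modified Patankar-Euler step for dc_i/dt = sum_j (p_ij - p_ji),
    where p_ij >= 0 is the production of component i from component j.
    Implicit weighting by c_new/c_old keeps every component positive and
    conserves the total mass exactly."""
    p = prod(c)
    n = len(c)
    A = np.eye(n)
    for i in range(n):
        A[i, i] += dt * p[:, i].sum() / c[i]     # destruction d_ij = p_ji
        for j in range(n):
            if i != j:
                A[i, j] -= dt * p[i, j] / c[j]
    return np.linalg.solve(A, c)

# Hypothetical nonlinear two-component exchange (rates chosen for illustration)
prod = lambda c: np.array([[0.0,              0.3 * c[1]],
                           [5.0 * c[0] * c[1], 0.0       ]])

c = np.array([0.9, 0.1])
for _ in range(10):
    c = mpe_step(c, dt=10.0, prod=prod)   # far beyond any explicit stability limit
```

The column sums of the system matrix equal one, which is why the total is conserved to round-off even at very large time steps.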
Some Comments on the Behavior of the RELAP5 Numerical Scheme at Very Small Time Steps
International Nuclear Information System (INIS)
Tiselj, Iztok; Cerne, Gregor
2000-01-01
The behavior of the RELAP5 code at very short time steps is described, i.e., δt ≈ 0.01 δx/c. First, the property of the RELAP5 code to trace acoustic waves with 'almost' second-order accuracy is demonstrated. Quasi-second-order accuracy is usually achieved for acoustic waves at very short time steps but can never be achieved for the propagation of nonacoustic temperature and void fraction waves. While this feature may be beneficial for the simulations of fast transients describing pressure waves, it also has an adverse effect: the lack of numerical diffusion at very short time steps can cause typical second-order numerical oscillations near steep pressure jumps. This behavior explains why an automatic halving of the time step, which is used in RELAP5 when numerical difficulties are encountered, in some cases leads to the failure of the simulation. Second, the integration of the stiff interphase exchange terms in RELAP5 is studied. For transients with flashing and/or rapid condensation as the main phenomena, results strongly depend on the time step used. Poor accuracy is achieved with 'normal' time steps (δt ≈ δx/v) because of the very short characteristic timescale of the interphase mass and heat transfer sources. In such cases significantly different results are predicted with very short time steps because of the more accurate integration of the stiff interphase exchange terms.
Stability control for approximate implicit time-stepping schemes with minimal residual iterations
Botchev, M.A.; Sleijpen, G.L.G.; Vorst, H.A. van der
1997-01-01
Implicit schemes for the integration of ODEs are popular when stability is more of a concern than accuracy, for instance for the computation of a steady-state solution. However, in particular for very large systems, the solution of the involved linear systems may be very expensive. In this ...
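A minimal illustration of the idea in the title, under the assumption (the abstract is truncated) that the implicit Euler system (I - Δt·A)u_new = u_old is solved only approximately by a few minimal residual iterations. The tridiagonal diffusion-like matrix and iteration count are illustrative choices:

```python
import numpy as np

def minres_iterations(M, b, x0, k):
    """k steps of the minimal residual (Orthomin(1)) iteration for M x = b:
    x <- x + alpha*r, with alpha chosen to minimize ||b - M x||."""
    x, r = x0.copy(), b - M @ x0
    for _ in range(k):
        Mr = M @ r
        alpha = (r @ Mr) / (Mr @ Mr)
        x += alpha * r
        r -= alpha * Mr
    return x

# Stiff linear test problem u' = A u with a diffusion-like tridiagonal A
n = 20
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
dt = 1.0
M = np.eye(n) - dt * A          # implicit Euler matrix, symmetric positive definite

u = np.sin(np.linspace(0.1, 3.0, n))
u_exact = np.linalg.solve(M, u)                       # fully implicit step
u_approx = minres_iterations(M, u, u.copy(), k=50)    # approximate implicit step
```

Truncating the iteration to a small k trades accuracy of the linear solve for cost; the stability-control question studied in the paper is how few iterations can be afforded without losing the stability of the implicit scheme.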
Diffeomorphic image registration with automatic time-step adjustment
DEFF Research Database (Denmark)
Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst
2015-01-01
In this paper, we propose an automated Euler time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler steps required to r...... accuracy as a fixed time-step scheme, however at a much lower computational cost....
Directory of Open Access Journals (Sweden)
Emily Lyle
2012-03-01
Indo-European mythology is known only through written records but it needs to be understood in terms of the preliterate oral-cultural context in which it was rooted. It is proposed that this world was conceptually organized through a memory-capsule consisting of the current generation and the three before it, and that there was a system of alternate generations with each generation taking a step into the future under the leadership of a white or red king.
Analysis of reaction schemes using maximum rates of constituent steps
Motagamwala, Ali Hussain; Dumesic, James A.
2016-01-01
We show that the steady-state kinetics of a chemical reaction can be analyzed analytically in terms of proposed reaction schemes composed of series of steps with stoichiometric numbers equal to unity by calculating the maximum rates of the constituent steps, rmax,i, assuming that all of the remaining steps are quasi-equilibrated. Analytical expressions can be derived in terms of rmax,i to calculate degrees of rate control for each step to determine the extent to which each step controls the rate of the overall stoichiometric reaction. The values of rmax,i can be used to predict the rate of the overall stoichiometric reaction, making it possible to estimate the observed reaction kinetics. This approach can be used for catalytic reactions to identify transition states and adsorbed species that are important in controlling catalyst performance, such that detailed calculations using electronic structure calculations (e.g., density functional theory) can be carried out for these species, whereas more approximate methods (e.g., scaling relations) are used for the remaining species. This approach to assess the feasibility of proposed reaction schemes is exact for reaction schemes where the stoichiometric coefficients of the constituent steps are equal to unity and the most abundant adsorbed species are in quasi-equilibrium with the gas phase and can be used in an approximate manner to probe the performance of more general reaction schemes, followed by more detailed analyses using full microkinetic models to determine the surface coverages by adsorbed species and the degrees of rate control of the elementary steps. PMID:27162366
High-resolution seismic wave propagation using local time stepping
Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul
2017-03-13
High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
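The speedup available from near-optimal local time steps can be estimated directly from the element sizes and the CFL condition. The mesh spacings, wave speed, and power-of-two level binning below are invented for illustration; they are not figures from the paper:

```python
import math

# Per-element stable time steps from the CFL condition dt_e = C * h_e / c,
# then an LTS work estimate with power-of-two level binning.
c, courant = 3000.0, 0.5                    # wave speed (m/s), CFL number
h = [50.0] * 900 + [5.0] * 100              # 900 coarse elements, 100 refined ones

dt_elem = [courant * he / c for he in h]
dt_global = min(dt_elem)                    # a global explicit scheme takes the worst case

# LTS: each element steps with dt_global * 2**level, the largest level its own CFL allows
levels = [int(math.log2(dt / dt_global)) for dt in dt_elem]
work_global = len(h) * (1.0 / dt_global)    # element-updates per unit simulated time
work_lts = sum(1.0 / (dt_global * 2 ** lvl) for lvl in levels)
speedup = work_global / work_lts
```

Here a 10% refined region with 10x smaller elements already yields a factor of several in total work, which is the effect the abstract refers to as significantly faster runtimes.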
Symplectic integrators with adaptive time steps
Richardson, A. S.; Finn, J. M.
2012-01-01
In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
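The baseline behavior that a naive time-dependent step Δ(t) destroys can be seen in a few lines: a fixed-step leapfrog (kick-drift-kick) integrator for the harmonic oscillator keeps the energy error bounded for arbitrarily long runs. The parameters are illustrative; the paper's extended phase space and generating-function constructions are not reproduced here:

```python
# Kick-drift-kick leapfrog for q'' = -q (harmonic oscillator), fixed time step.
# Symplecticity keeps the energy error bounded for all time; replacing dt by
# a time-dependent dt(t) generally breaks this (parametric resonance, drift).
dt, steps = 0.1, 10000
q, p = 1.0, 0.0
E0 = 0.5 * (p * p + q * q)

max_err = 0.0
for _ in range(steps):
    p -= 0.5 * dt * q        # half kick
    q += dt * p              # drift
    p -= 0.5 * dt * q        # half kick
    E = 0.5 * (p * p + q * q)
    max_err = max(max_err, abs(E - E0) / E0)
```

Over 10,000 steps the relative energy error stays at the O(dt^2) level rather than accumulating, which is the property the adaptive constructions in the paper are designed to retain.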
Time step MOTA thermostat simulation
International Nuclear Information System (INIS)
Guthrie, G.L.
1978-09-01
The report details the logic, program layout, and operating procedures for the time-step MOTA (Materials Open Test Assembly) thermostat simulation program known as GYRD. It will enable prospective users to understand the operation of the program, run it, and interpret the results. The time-step simulation analysis was the approach chosen to determine the maximum gain value that could be used to minimize steady temperature offset without risking undamped thermal oscillations. The advantage of the GYRD program is that it directly shows hunting, ringing phenomena, and similar events. Programs BITT and CYLB are faster but do not directly show ringing time.
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler (SIE) based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations become more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a given accuracy.
A New time Integration Scheme for Cahn-hilliard Equations
Schaefer, R.; Smołka, M.; Dalcin, L.; Paszyński, M.
2015-06-01
In this paper we present a new integration scheme that can be applied to solving difficult non-stationary non-linear problems. It is obtained by a successive linearization of the Crank-Nicolson scheme, which is unconditionally stable but requires solving a non-linear equation at each time step. We applied our linearized scheme to the time integration of the challenging Cahn-Hilliard equation, modeling phase separation in fluids. At each time step the resulting variational equation is solved using a higher-order isogeometric finite element method with B-spline basis functions. The method was implemented in the PETIGA framework interfaced via the PETSc toolkit. The GMRES iterative solver was utilized for the solution of the resulting linear system at every time step. We also apply a simple adaptivity rule, which increases the time step size when the number of GMRES iterations is lower than 30. We compared our method with a non-linear, two-stage predictor-multicorrector scheme utilizing a sophisticated step length adaptivity. We controlled the stability of our simulations by monitoring the Ginzburg-Landau free energy functional. The proposed integration scheme outperforms the two-stage competitor in terms of execution time, while having a similar evolution of the free energy functional.
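The successive-linearization idea above (one Newton linearization of the Crank-Nicolson equation per step, so each step costs a single linear solve) can be shown on a scalar ODE. The logistic equation here stands in for the Cahn-Hilliard dynamics purely as an illustrative choice:

```python
import math

# Crank-Nicolson for u' = f(u) requires solving
#   u_new - (dt/2) f(u_new) = u_old + (dt/2) f(u_old).
# Linearizing f(u_new) ~ f(u_old) + f'(u_old)(u_new - u_old) turns each step
# into one linear solve (here scalar, so a single division).
f = lambda u: u * (1.0 - u)          # logistic right-hand side
df = lambda u: 1.0 - 2.0 * u

dt, T, u = 0.1, 5.0, 0.1
t = 0.0
while t < T - 1e-12:
    rhs = u + dt * f(u) - 0.5 * dt * df(u) * u
    u = rhs / (1.0 - 0.5 * dt * df(u))
    t += dt

# Exact logistic solution for comparison
u_exact = 0.1 * math.exp(T) / (1.0 - 0.1 + 0.1 * math.exp(T))
```

The one-shot linearization retains the second-order accuracy of Crank-Nicolson for smooth problems while avoiding a nonlinear solve per step, which is the trade-off the paper exploits at much larger scale.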
Amezcua, Javier
in the mean value of the function. Using statistical significance tests both at the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of the medium-term forecasts is increased by using the RAW filter.
One-step trinary signed-digit arithmetic using an efficient encoding scheme
Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.
2000-11-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.
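For concreteness, the digit set {-1, 0, 1} can encode integers as in balanced ternary. The sketch below is a generic signed-digit encoding used only for illustration; it is not the 5-combination coding table proposed in the paper:

```python
def to_tsd(n):
    """Encode an integer with trinary signed digits {-1, 0, 1}, radix 3
    (balanced ternary), least significant digit first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            digits.append(-1)    # 2 = 3 - 1: emit -1 and carry 1
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_tsd(digits):
    """Decode a least-significant-first signed-digit string back to an integer."""
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

Signed digit sets like this give every integer a representation in which sign information is carried per digit, the property that makes carry-free parallel arithmetic possible in the schemes the abstract discusses.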
Time-division-multiplex control scheme for voltage multiplier rectifiers
Directory of Open Access Journals (Sweden)
Bin-Han Liu
2017-03-01
A voltage multiplier rectifier with a novel time-division-multiplexing (TDM) control scheme for high step-up converters is proposed in this study. In the proposed TDM control scheme, two full-wave voltage doubler rectifiers can be combined to realise a voltage quadrupler rectifier. The proposed voltage quadrupler rectifier can reduce transformer turn ratio and transformer size for high step-up converters and also reduce voltage stress for the output capacitors and rectifier diodes. An N-times voltage rectifier can be straightforwardly produced by extending the concepts from the proposed TDM control scheme. A phase-shift full-bridge (PSFB) converter is adopted in the primary side of the proposed voltage quadrupler rectifier to construct a PSFB quadrupler converter. Experimental results for the PSFB quadrupler converter demonstrate the performance of the proposed TDM control scheme for voltage quadrupler rectifiers. An 8-times voltage rectifier is simulated to determine the validity of extending the proposed TDM control scheme to realise an N-times voltage rectifier. Experimental and simulation results show that the proposed TDM control scheme has great potential to be used in high step-up converters.
Grief: Difficult Times, Simple Steps.
Waszak, Emily Lane
This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…
New schemes for high-voltage pulsed generators based on stepped transmission lines
International Nuclear Information System (INIS)
Bossamykin, V.S.; Gordeev, V.S.; Pavlovskii, A.I.
1993-01-01
Wave processes were analyzed from the standpoint of effective energy delivery in pulsed power systems based on transmission lines. A series of new schemes for pulsed generators based on multistage stepped transmission lines, with both capacitive and inductive energy storage, was found. These devices can provide voltage or current transformation up to 5-10 times due to wave processes if the stages' characteristic impedances are in a certain correlation. The suggested schemes can be widely applied in new powerful pulsed power accelerators. The theoretical conclusions are justified experimentally.
Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping
Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun
2014-10-31
© Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
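The split advance above, in which fast terms take their own optimal (smaller) time step while slow terms use a larger one, can be sketched on a scalar model problem. The coefficients are illustrative assumptions; the actual EMTSS splits vertical gravity-wave modes rather than scalar terms:

```python
# Model problem: u' = a*u + b*u with a "fast" stiff term (a) and a "slow" term (b).
# Per outer step, the slow tendency is frozen and the fast term is subcycled.
a, b = -30.0, -0.5
dt, substeps, n_outer = 0.1, 20, 20

u = 1.0
for _ in range(n_outer):
    slow = b * u                 # slow tendency, evaluated once per outer step
    h = dt / substeps
    for _ in range(substeps):
        u += h * (a * u + slow)  # fast term advanced with a small, stable step

# For reference: a single explicit Euler step of size dt on the fast term is
# unstable here, since |1 + a*dt| = 2 > 1.
naive = 1.0
for _ in range(n_outer):
    naive += dt * (a * naive + b * naive)
```

The subcycled solution decays toward zero as the exact solution does, while the single-step explicit update blows up; the cost saving comes from evaluating the slow tendency only once per outer step.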
Newmark local time stepping on high-performance computing architectures
Rietmann, Max
2016-11-25
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
Liu, Meilin
2011-07-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
Space-Time Transformation in Flux-form Semi-Lagrangian Schemes
Directory of Open Access Journals (Sweden)
Peter C. Chu; Chenwu Fan
2010-01-01
With a finite volume approach, a flux-form semi-Lagrangian (TFSL) scheme with space-time transformation was developed to provide a stable and accurate algorithm for solving the advection-diffusion equation. Different from the existing flux-form semi-Lagrangian schemes, the temporal integration of the flux from the present to the next time step is transformed into a spatial integration of the flux at the side of a grid cell (space) for the present time step using the characteristic-line concept. The TFSL scheme not only keeps the good features of the semi-Lagrangian schemes (no Courant number limitation), but also has higher accuracy (second order in both time and space). The capability of the TFSL scheme is demonstrated by the simulation of equatorial Rossby-soliton propagation. Computational stability and high accuracy make this scheme useful in ocean modeling, computational fluid dynamics, and numerical weather prediction.
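A minimal flux-form semi-Lagrangian step for 1D constant-speed advection shows the key feature, namely no Courant number restriction: the face flux is the integral of the field over the departure interval, however many cells that interval spans. Piecewise-constant cell values are used here for brevity (first order in space), whereas the TFSL scheme itself is second order:

```python
def fsl_step(u, nu):
    """One flux-form semi-Lagrangian step for u_t + c u_x = 0 on a periodic
    unit-spacing grid. u holds cell averages; nu = c*dt/dx >= 0 may exceed 1.
    The face flux is the mass in the departure interval behind each face."""
    n = len(u)
    K, frac = int(nu), nu - int(nu)
    # mass crossing face i+1/2 during dt: whole cells i, i-1, ..., i-K+1,
    # plus a fraction of cell i-K
    flux = [sum(u[(i - k) % n] for k in range(K)) + frac * u[(i - K) % n]
            for i in range(n)]
    return [u[i] - (flux[i] - flux[(i - 1) % n]) for i in range(n)]

u0 = [0.0] * 16
u0[3] = 1.0
u3 = fsl_step(u0, nu=3.0)       # integer Courant number: exact shift by 3 cells
u25 = fsl_step(u0, nu=2.5)      # fractional Courant number > 1, still stable
```

Mass is conserved to round-off by construction (the fluxes telescope), and an integer Courant number reproduces a pure shift exactly, both easy properties to check.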
A two-step chemical scheme for kerosene-air premixed flames
Energy Technology Data Exchange (ETDEWEB)
Franzelli, B.; Riber, E.; Sanjose, M. [CERFACS, CFD Team, 42 Avenue G. Coriolis, 31057 Toulouse Cedex 01 (France); Poinsot, T. [IMFT-UMR 5502, allee du Professeur Camille Soula, 31400 Toulouse (France)
2010-07-15
A reduced two-step scheme (called 2S-KERO-BFER) for kerosene-air premixed flames is presented in the context of Large Eddy Simulation of reacting turbulent flows in industrial applications. The chemical mechanism is composed of two reactions corresponding to the fuel oxidation into CO and H₂O, and the CO-CO₂ equilibrium. To ensure the validity of the scheme for rich combustion, the pre-exponential constants of the two reactions are tabulated versus the local equivalence ratio. The fuel and oxidizer exponents are chosen to guarantee the correct dependence of laminar flame speed with pressure. Due to a lack of experimental results, the detailed mechanism of Dagaut composed of 209 species and 1673 reactions, and the skeletal mechanism of Luche composed of 91 species and 991 reactions have been used to validate the reduced scheme. Computations of one-dimensional laminar flames have been performed with the 2S-KERO-BFER scheme using the CANTERA and COSILAB software packages for a wide range of pressure ([1; 12] atm), fresh gas temperature ([300; 700] K), and equivalence ratio ([0.6; 2.0]). Results show that the flame speed is correctly predicted for the whole range of parameters, showing a maximum for stoichiometric flames, a decrease for rich combustion and a satisfactory pressure dependence. The burnt gas temperature and the dilution by Exhaust Gas Recirculation are also well reproduced. Moreover, the results for ignition delay time are in good agreement with the experiments. (author)
A stable higher order space time Galerkin marching-on-in-time scheme
Pray, Andrew J.
2013-07-01
We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order basis functions in time to improve the accuracy of the solver. The method is validated by showing convergence in temporal basis function order, time step size, and geometric discretization order. © 2013 IEEE.
Acceleration of step and linear discontinuous schemes for the method of characteristics in DRAGON5
Directory of Open Access Journals (Sweden)
Alain Hébert
2017-09-01
Full Text Available The applicability of the algebraic collapsing acceleration (ACA) technique to the method of characteristics (MOC) in cases with scattering anisotropy and/or linear sources was investigated. Previously, the ACA was proven successful in cases with isotropic scattering and uniform (step) sources. A presentation is first made of the MOC implementation available in the DRAGON5 code. Two categories of schemes are available for integrating the propagation equations: (1) the first category is based on exact integration and leads to the classical step characteristics (SC) and linear discontinuous characteristics (LDC) schemes, and (2) the second category leads to diamond differencing schemes of various orders in space. The acceleration of these MOC schemes using a combination of the generalized minimal residual [GMRES(m)] method preconditioned with the ACA technique was focused on. Numerical results are provided for a two-dimensional (2D) eight-symmetry pressurized water reactor (PWR) assembly mockup in the context of the DRAGON5 code.
Al Jarro, Ahmed; Salem, Mohamed; Bagci, Hakan; Benson, Trevor; Sewell, Phillip D.; Vuković, Ana
2012-01-01
An explicit marching-on-in-time (MOT) scheme for solving the time domain volume integral equation is presented. The proposed method achieves its stability by employing, at each time step, a corrector scheme, which updates/corrects fields computed by the explicit predictor scheme. The proposed method is computationally more efficient when compared to the existing filtering techniques used for the stabilization of explicit MOT schemes. Numerical results presented in this paper demonstrate that the proposed method maintains its stability even when applied to the analysis of electromagnetic wave interactions with electrically large structures meshed using approximately half a million discretization elements.
Al Jarro, Ahmed
2012-11-01
An explicit marching-on-in-time (MOT) scheme for solving the time domain volume integral equation is presented. The proposed method achieves its stability by employing, at each time step, a corrector scheme, which updates/corrects fields computed by the explicit predictor scheme. The proposed method is computationally more efficient when compared to the existing filtering techniques used for the stabilization of explicit MOT schemes. Numerical results presented in this paper demonstrate that the proposed method maintains its stability even when applied to the analysis of electromagnetic wave interactions with electrically large structures meshed using approximately half a million discretization elements.
Multiple time step integrators in ab initio molecular dynamics
International Nuclear Information System (INIS)
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-01-01
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy
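The fast/slow splitting described above can be sketched with a generic reversible multiple-time-stepping (r-RESPA) integrator: the slow force kicks on the outer step, the fast force is integrated on an inner substep. The harmonic force split, function names, and parameter values below are illustrative assumptions, not the fragment-based or range-separated ab initio splittings of the paper.

```python
def respa_step(q, p, m, fast_force, slow_force, dt, n_inner):
    """One reversible r-RESPA step: the slow force kicks on the outer
    step dt; the fast force is integrated with velocity Verlet on the
    inner step dt/n_inner."""
    p = p + 0.5 * dt * slow_force(q)      # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):              # inner velocity-Verlet loop
        p = p + 0.5 * h * fast_force(q)
        q = q + h * p / m
        p = p + 0.5 * h * fast_force(q)
    p = p + 0.5 * dt * slow_force(q)      # outer half-kick (slow force)
    return q, p
```

On a toy oscillator with a stiff "fast" spring and a soft "slow" spring, the outer step can be several times the inner step with little energy drift, which is the mechanism behind the large outer steps quoted above.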
Supervising PTCA treatment with a scheme of steps combination SPECT 99Tcm-MIBI myocardial imaging
International Nuclear Information System (INIS)
Luan Zhaosheng; Tian Jiahe; Peng Yong; Zhou Wen; Gai Luyue; Sun Zhijun; Su Yuwen; Liu Xiaohu
2001-01-01
Objective: To set up a useful supervising method around PTCA treatment. Methods: A scheme of steps combination SPECT 99Tcm-MIBI myocardial imaging was devised. 87 patients with coronary artery disease were enrolled in the study. 3-step imaging (exercise, rest, and intravenous infusion of nitroglycerine (NTG) imaging) was performed at 1 week before PTCA. 2-step imaging (exercise and rest imaging) was performed 1-2 weeks after PTCA. All of the indexes obtained from steps combination imaging before and after PTCA were compared with each other in pairs. Results: 1) Compared with the outcome of clinical assessment, the imaging after PTCA could correctly assess PTCA outcome. 2) The myocardial defects shown in 3-step imaging before PTCA appeared to be ameliorated after PTCA, and the amelioration shown in exercise imaging was more evident than that in rest imaging. The myocardial perfusion state revealed by NTG imaging was similar to that revealed by rest imaging after PTCA. 3) The 3-step imaging findings correlated with PTCA outcome; the best correlation was found between indexes of myocardial defect changes shown by NTG and exercise imaging and indexes of PTCA outcome (r = 0.9470, P < …). Conclusions: The steps combination 99Tcm-MIBI myocardial SPECT imaging scheme is a very useful supervising method around PTCA. 2-step imaging after PTCA could assess outcome correctly, and 3-step imaging before it could predict outcome correctly, too.
Time step size limitation introduced by the BSSN Gamma Driver
Energy Technology Data Exchange (ETDEWEB)
Schnetter, Erik, E-mail: schnetter@cct.lsu.ed [Department of Physics and Astronomy, Louisiana State University, LA (United States)
2010-08-21
Many mesh refinement simulations currently performed in numerical relativity counteract instabilities near the outer boundary of the simulation domain either by changes to the mesh refinement scheme or by changes to the gauge condition. We point out that the BSSN Gamma Driver gauge condition introduces a time step size limitation in a similar manner as a Courant-Friedrichs-Lewy condition, but which is independent of the spatial resolution. We give a didactic explanation of this issue, show why, especially, mesh refinement simulations suffer from it, and point to a simple remedy. (note)
Optimal order and time-step criterion for Aarseth-type N-body integrators
International Nuclear Information System (INIS)
Makino, Junichiro
1991-01-01
How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
Comparison of time stepping schemes on the cable equation
Directory of Open Access Journals (Sweden)
Chuan Li
2010-09-01
Full Text Available Electrical propagation in excitable tissue, such as nerve fibers and heart muscle, is described by a parabolic PDE for the transmembrane voltage $V(x,t)$, known as the cable equation, $$ \frac{1}{r_a}\frac{\partial^2 V}{\partial x^2} = C_m\frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V,t) + I_{\mathrm{stim}}(t), $$ where $r_a$ and $C_m$ are the axial resistance and membrane capacitance. The source term $I_{\mathrm{ion}}$ represents the total ionic current across the membrane, governed by the Hodgkin-Huxley or other more complicated ionic models. $I_{\mathrm{stim}}(t)$ is an applied stimulus current.
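A minimal explicit discretization of the cable equation can be sketched as below, with a simple linear leak current standing in for the full Hodgkin-Huxley term; the function name, boundary treatment, and all parameter values are illustrative assumptions, not the schemes compared in the paper.

```python
import numpy as np

def cable_step(V, dt, dx, r_a=1.0, C_m=1.0, g_L=0.1, E_L=0.0, I_stim=0.0):
    """Forward-Euler / central-difference update of the cable equation
    (1/r_a) V_xx = C_m V_t + I_ion + I_stim, with the simple linear leak
    I_ion = g_L * (V - E_L) in place of a full ionic model."""
    Vxx = (np.roll(V, -1) - 2.0 * V + np.roll(V, 1)) / dx**2
    Vxx[0] = Vxx[-1] = 0.0               # crude sealed-end boundaries
    I_ion = g_L * (V - E_L)
    return V + dt * (Vxx / r_a - I_ion - I_stim) / C_m
```

The explicit step is subject to the usual diffusive restriction dt ≲ r_a C_m dx²/2, which is what motivates comparing implicit and semi-implicit time-stepping alternatives on this equation.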
Time to pause before the next step
International Nuclear Information System (INIS)
Siemon, R.E.
1998-01-01
Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than ''demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...'' which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward and the counter arguments are discussed here
Time-and-ID-Based Proxy Reencryption Scheme
Directory of Open Access Journals (Sweden)
Kambombo Mtonga
2014-01-01
Full Text Available A time- and ID-based proxy reencryption scheme is proposed in this paper, in which a type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust on the proxy. However, in some applications, the time within which the data was sampled or collected is very critical. In such applications, for example, healthcare and criminal investigations, the delegatee may be interested in only some of the messages with some types sampled within some time bound instead of the entire subset. Hence, in order to cater for such situations, in this paper, we propose a time-and-identity-based proxy reencryption scheme that takes into account the time within which the data was collected as a factor to consider when categorizing data, in addition to its type. Our scheme is based on the Boneh and Boyen identity-based scheme (BB-IBE) and Matsuo's proxy reencryption scheme for identity-based encryption (IBE to IBE). We prove that our scheme is semantically secure in the standard model.
Multiple-time-stepping generalized hybrid Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improved stability of MTS and allow larger step sizes in the simulation of complex systems.
The large discretization step method for time-dependent partial differential equations
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
A discrete-time adaptive control scheme for robot manipulators
Tarokh, M.
1990-01-01
A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.
Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing
Directory of Open Access Journals (Sweden)
Qianghui Zhang
2016-07-01
Full Text Available Free of the constraints of orbit mechanisms, weather conditions, and minimum antenna area, synthetic aperture radar (SAR) equipped on a near-space platform is more suitable for sustained large-scene imaging compared with its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of the SAR to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.
Recent advances in marching-on-in-time schemes for solving time domain volume integral equations
Sayed, Sadeed Bin; Ulku, Huseyin Arda; Bagci, Hakan
2015-01-01
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are constructed by setting the summation of the incident and scattered field intensities to the total field intensity on the volumetric support of the scatterer. The unknown can be the field intensity or the flux/current density. Representing the total field intensity in terms of the unknown using the relevant constitutive relation, and the scattered field intensity in terms of the spatiotemporal convolution of the unknown with the Green function, yields the final form of the TDVIE. The unknown is expanded in terms of local spatial and temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation at discrete times yields a system of equations that is solved by the marching-on-in-time (MOT) scheme. At each time step, a smaller system of equations, termed the MOT system, is solved for the coefficients of the expansion. The right-hand side of this system consists of the tested incident field and the discretized spatio-temporal convolution of the unknown samples computed at the previous time steps with the Green function.
Recent advances in marching-on-in-time schemes for solving time domain volume integral equations
Sayed, Sadeed Bin
2015-05-16
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are constructed by setting the summation of the incident and scattered field intensities to the total field intensity on the volumetric support of the scatterer. The unknown can be the field intensity or the flux/current density. Representing the total field intensity in terms of the unknown using the relevant constitutive relation, and the scattered field intensity in terms of the spatiotemporal convolution of the unknown with the Green function, yields the final form of the TDVIE. The unknown is expanded in terms of local spatial and temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation at discrete times yields a system of equations that is solved by the marching-on-in-time (MOT) scheme. At each time step, a smaller system of equations, termed the MOT system, is solved for the coefficients of the expansion. The right-hand side of this system consists of the tested incident field and the discretized spatio-temporal convolution of the unknown samples computed at the previous time steps with the Green function.
The importance of time-stepping errors in ocean models
Williams, P. D.
2011-12-01
Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
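The filter described above is simple to sketch in a stand-alone form: setting alpha = 1 below recovers the classical Robert-Asselin filter, while alpha ≈ 0.53 gives the RAW variant. This is an illustrative implementation on a generic ODE, not the Met Office model code, and the parameter values are assumptions.

```python
import numpy as np

def leapfrog_raw(f, y0, dt, n, nu=0.2, alpha=0.53):
    """Leapfrog integration of y' = f(y) with the Robert-Asselin-Williams
    filter: the filter displacement d is split between the middle and
    newest time levels so the three-time-level mean is conserved
    (alpha = 1 reduces to the classical RA filter)."""
    y_prev = np.asarray(y0, dtype=float)
    y = y_prev + dt * f(y_prev)               # first step: forward Euler
    for _ in range(n - 1):
        y_next = y_prev + 2.0 * dt * f(y)     # leapfrog step
        d = 0.5 * nu * (y_prev - 2.0 * y + y_next)
        y_prev = y + alpha * d                # filter the middle level
        y = y_next - (1.0 - alpha) * d        # RAW: adjust newest level too
    return y
```

On a harmonic oscillator, the RA-filtered amplitude decays visibly over a few hundred steps while the RAW-filtered amplitude stays close to the true value, consistent with the amplitude-accuracy claim above.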
An adaptive time-stepping strategy for solving the phase field crystal model
International Nuclear Information System (INIS)
Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua
2013-01-01
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
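The energy-based adaptivity can be sketched on a toy gradient flow. The selector formula dt = max(dt_min, dt_max / sqrt(1 + alpha |E'|²)) follows the general form used by such schemes, while the scalar energy, the forward-Euler update, and all constants below are illustrative assumptions rather than the paper's unconditionally stable PFC discretization.

```python
import math

def adaptive_dt(dEdt, dt_min=1e-3, dt_max=0.4, alpha=100.0):
    """Energy-based step selector: small steps while the energy changes
    rapidly, approaching dt_max as the dynamics settle to steady state."""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dEdt * dEdt))

def run(u0=2.0, T=20.0):
    """Toy gradient flow u' = -E'(u) with E(u) = (u^2 - 1)^2 / 4,
    standing in for the PFC free energy (illustrative only)."""
    E = lambda v: 0.25 * (v * v - 1.0) ** 2
    dE = lambda v: v * (v * v - 1.0)
    u, t, dt, steps = u0, 0.0, 1e-3, 0
    while t < T:
        E_old = E(u)
        u -= dt * dE(u)                  # forward Euler on the flow
        t += dt
        dt = adaptive_dt((E(u) - E_old) / dt)
        steps += 1
    return u, steps
```

The step count stays small because most of the interval is covered at dt near dt_max once the energy has nearly stopped changing, which is the saving reported for long-time PFC runs.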
Time-and-ID-Based Proxy Reencryption Scheme
Mtonga, Kambombo; Paul, Anand; Rho, Seungmin
2014-01-01
Time- and ID-based proxy reencryption scheme is proposed in this paper in which a type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust on the proxy. However, in some applications, the time within which the data was sampled or collected is very critical. In such applications, for example, healthcare and criminal investigations, the delegatee may be interested in only some of the messages with some types sampled wi...
Geevers, Sjoerd; van der Vegt, J.J.W.
2017-01-01
We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured
An explicit multi-time-stepping algorithm for aerodynamic flows
Niemann-Tuitman, B.E.; Veldman, A.E.P.
1997-01-01
An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows speedups in the order of five with respect to single time stepping are obtained.
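The zonal idea above can be sketched on a toy decay problem: one zone takes a single coarse step while the other takes several substeps, and the two zones meet at the synchronization level. The half/half zone split, the per-cell rates, and the refinement factor are illustrative assumptions.

```python
import numpy as np

def multirate_step(u, dt, rate, refine):
    """Advance the first half of the cells with one forward-Euler step of
    size dt and the second half with `refine` substeps of size dt/refine;
    both zones are synchronized at time t + dt (per-cell decay rates)."""
    half = u.size // 2
    u = u.copy()
    u[:half] *= 1.0 - dt * rate[:half]                 # coarse zone
    for _ in range(refine):                            # refined zone
        u[half:] *= 1.0 - (dt / refine) * rate[half:]
    return u
```

In practice the refinement factor is chosen per zone from the local stability limit, so the small steps are taken only where the local time-step restriction demands them.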
International Nuclear Information System (INIS)
Alberola, Emilie; Solier, Boris
2012-01-01
Among the publications of CDC Climat Research, 'Climate Reports' offer in-depth analyses on a given subject. This issue addresses the following points: CO2 emissions from international aviation, which accounted for 2% of global emissions in 2009, are not currently capped by any international agreement. The inclusion of the aviation sector in the European Union Emissions Trading Scheme (EU ETS) from January 1, 2012 onwards represents a first step towards the implementation of emission reduction regulations based on an emissions trading scheme. After the gradual extension of the scope of the EU ETS to new countries since 2005, the European Commission is now assimilating around 5,400 airlines that operate in Europe, two-thirds of which are non-European, into the EU ETS, joining the energy generation and manufacturing industries. This European Union decision assigns quantified CO2 emission reduction targets to airlines: a 3% reduction in 2012 compared with average CO2 emissions for the sector between 2004 and 2006, then a 5% reduction between 2013 and 2020. In the short term, the inclusion of the aviation sector in the EU ETS should have an impact on the scheme. Indeed, the aviation sector is expected to represent a new source of demand for allowances. Based on the assumption of an average 2.5% increase in annual emissions between 2012 and 2014, and then of a 2% increase over the period between 2015 and 2020, airlines would create a shortfall of 382 MtCO2 between 2012 and 2020. The limited use of Kyoto credits to help them comply offers a maximum import potential of almost 65 MtCO2 between 2012 and 2020. This inclusion is a test of the EU's proactive policy, which involves encouraging other countries to define their own climate policy without breaching international law. The potential exemption of airline operators from emitter countries that introduce equivalent regulations would be a success for the European policy. For the time being, the reaction of some
Combating cancer one step at a time
Directory of Open Access Journals (Sweden)
R.N Sugitha Nadarajah
2016-10-01
widespread consequences, not only in a medical sense but also socially and economically,” says Dr. Abdel-Rahman. “We need to put in every effort to combat this fatal disease,” he adds.Tackling the spread of cancer and the increase in the number of cases reported every year is not without its challenges, he asserts. “I see the key challenges as the unequal availability of cancer treatments worldwide, the increasing cost of cancer treatment, and the increased median age of the population in many parts of the world, which carries with it a consequent increase in the risk of certain cancers,” he says. “We need to reassess the current pace and orientation of cancer research because, with time, cancer research is becoming industry-oriented rather than academia-oriented — which, in my view, could be very dangerous to the future of cancer research,” adds Dr. Abdel-Rahman. “Governments need to provide more research funding to improve the outcome of cancer patients,” he explains.His efforts and hard work have led to him receiving a number of distinguished awards, namely the UICC International Cancer Technology Transfer (ICRETT) fellowship in 2014 at the Investigational New Drugs Unit in the European Institute of Oncology, Milan, Italy; EACR travel fellowship in 2015 at The Christie NHS Foundation Trust, Manchester, UK; and also several travel grants to Ireland, Switzerland, Belgium, Spain, and many other countries where he attended medical conferences. Dr. Abdel-Rahman is currently engaged in a project to establish a clinical/translational cancer research center at his institute, which seeks to incorporate various cancer-related disciplines in order to produce a real bench-to-bedside practice, hoping that it would “change research that may help shape the future of cancer therapy”.Dr. Abdel-Rahman is also an active founding member of the clinical research unit at his institute and is a representative to the prestigious European Organization for Research and
An explicit multi-time-stepping algorithm for aerodynamic flows
Niemann-Tuitman, B.E.; Veldman, A.E.P.
1997-01-01
An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for
A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method
Zhan, Lei; Xiong, Juntao; Liu, Feng
2016-05-01
The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.
Displacement in the parameter space versus spurious solution of discretization with large time step
International Nuclear Information System (INIS)
Mendes, Eduardo; Letellier, Christophe
2004-01-01
In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot, will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics
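The parameter-displacement effect is easy to reproduce with forward Euler on the logistic equation, which is exactly conjugate to the logistic map x → r x (1 − x) with r = 1 + λΔt. For large Δt the computed orbit is spurious for the ODE yet is the genuine orbit of that nearby map (illustrative sketch; the Monaco-Normand-Cyrot scheme itself is not reproduced here).

```python
def euler_orbit(lam, dt, u0=0.5, n=500):
    """Iterate forward Euler u_{k+1} = u_k + dt*lam*u_k*(1 - u_k) on the
    logistic ODE u' = lam*u*(1 - u) and return the final iterate.
    Substituting x = dt*lam*u / (1 + dt*lam) turns this recursion into
    the logistic map x -> r*x*(1 - x) with r = 1 + lam*dt."""
    u = u0
    for _ in range(n):
        u += dt * lam * u * (1.0 - u)
    return u
```

With λΔt = 0.5 (r = 1.5) the iteration settles on the ODE steady state u = 1; with λΔt = 2.2 (r = 3.2) it settles on a period-2 orbit, a solution that is spurious for the ODE but is a legitimate logistic-map orbit at a displaced parameter.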
Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric
2013-01-01
An explicit marching on-in-time (MOT) scheme for solving the time-domain magnetic field integral equation (TD-MFIE) is presented. The proposed MOT-TD-MFIE solver uses Rao-Wilton-Glisson basis functions for spatial discretization and a PE(CE)^m-type linear multistep method for time marching. Unlike previous explicit MOT-TD-MFIE solvers, the time step size can be chosen as large as that of the implicit MOT-TD-MFIE solvers without adversely affecting accuracy or stability. An algebraic stability analysis demonstrates the stability of the proposed explicit solver; its accuracy and efficiency are established via numerical examples. © 1963-2012 IEEE.
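The PE(CE)^m idea, predict with an explicit multistep formula, then apply m correct-evaluate passes of an implicit formula evaluated explicitly, can be sketched on a scalar test equation. This is only an illustration of the multistep mechanism; the paper applies such a scheme to the space-discretized TD-MFIE, not to a scalar ODE:

```python
# Illustrative PE(CE)^m time marching on a scalar test ODE y' = f(t, y),
# using a 2-step Adams-Bashforth predictor and a trapezoidal corrector.
import math

def pece_march(f, y0, t0, dt, nsteps, m=2):
    """March y' = f(t, y) with Predict-Evaluate-(Correct-Evaluate)^m."""
    t, y = t0, y0
    f_prev = f(t, y)
    # bootstrap one step with forward Euler to fill the multistep history
    y_curr = y + dt * f_prev
    t += dt
    ys = [y0, y_curr]
    for _ in range(nsteps - 1):
        f_curr = f(t, y_curr)
        # P: 2-step Adams-Bashforth predictor
        y_pred = y_curr + dt * (1.5 * f_curr - 0.5 * f_prev)
        # (CE)^m: trapezoidal corrector, evaluated explicitly m times
        for _ in range(m):
            y_pred = y_curr + 0.5 * dt * (f_curr + f(t + dt, y_pred))
        f_prev = f_curr
        y_curr = y_pred
        t += dt
        ys.append(y_curr)
    return ys

# decay test: y' = -y with exact solution e^{-t}
ys = pece_march(lambda t, y: -y, 1.0, 0.0, 0.01, 100)
print(ys[-1], math.exp(-1.0))
```

With a stable spatial operator, each corrector pass tightens the agreement with the underlying implicit (trapezoidal) formula while keeping the marching fully explicit.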
International Nuclear Information System (INIS)
Wei, L.F.; Nori, Franco
2003-01-01
Based on the exact conditional quantum dynamics for a two-ion system, we propose an efficient single-step scheme for coherently manipulating quantum information of two trapped cold ions by using a pair of synchronous laser pulses. Neither the auxiliary atomic level nor the Lamb-Dicke approximation is needed.
Investigation of optimal photoionization schemes for Sm by multi-step resonance ionization
International Nuclear Information System (INIS)
Cha, H.; Song, K.; Lee, J.
1997-01-01
Excited states of Sm atoms are investigated by using multi-color resonance-enhanced multiphoton ionization spectroscopy. Among the ionization signals, the one observed at 577.86 nm is regarded as the most efficient excited state if a 1-color 3-photon scheme is applied, while an observed level located at 587.42 nm is regarded as the most efficient state if one uses a 2-color scheme. For the 2-color scheme, a level located at 573.50 nm from this first excited state is one of the best second excited states for the optimal photoionization scheme. Based on this ionization scheme, various concentrations of standard solutions of samarium are determined. The minimum amount of sample which can be detected by the 2-color scheme is 200 fg. The detection sensitivity is limited mainly by contamination of the graphite atomizer. copyright 1997 American Institute of Physics
Time step size selection for radiation diffusion calculations
International Nuclear Information System (INIS)
Rider, W.J.; Knoll, D.A.
1999-01-01
The purpose of this note is to describe a time step control technique as applied to radiation diffusion. Standard practice only provides a heuristic criterion related to the relative change in the dependent variables. The authors propose an alternative based on relatively simple physical principles. This time step control applies to methods of solution that are unconditionally stable and converge nonlinearities within a time step in the governing equations. Commonly, nonlinearities in the governing equations are evaluated using existing (old-time) data; the authors refer to this as the semi-implicit (SI) method. When a method converges nonlinearities within a time step, the entire governing equation, including all nonlinearities, is self-consistently evaluated using advanced-time data (with appropriate time centering for accuracy).
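The "standard practice" the note critiques, limiting the relative change of the dependent variables per step, is easy to sketch. The following generic controller is a minimal illustration of that heuristic; the authors' physics-based criterion for radiation diffusion is not reproduced here:

```python
# Generic relative-change time step controller: grow or shrink dt so the
# maximum relative change of the dependent variable per step stays near a
# target value. (A sketch of the heuristic discussed as standard practice,
# not the note's proposed physics-based alternative.)

def next_time_step(dt, u_old, u_new, target=0.1, floor=1e-12,
                   grow_max=2.0, shrink_min=0.5):
    """Return an adjusted dt based on the max relative change in u."""
    rel = max(abs(n - o) / (abs(o) + floor) for n, o in zip(u_old, u_new))
    if rel == 0.0:
        return dt * grow_max
    factor = target / rel
    factor = min(grow_max, max(shrink_min, factor))  # limit dt jumps
    return dt * factor

# small change -> step grows (clipped at 2x); large change -> step shrinks
print(next_time_step(1.0, [1.0, 2.0], [1.01, 2.02]))
print(next_time_step(1.0, [1.0, 2.0], [1.5, 2.0]))
```

Clipping the growth and shrink factors prevents the controller itself from destabilizing the nonlinear iteration between steps.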
Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation
Directory of Open Access Journals (Sweden)
Xiaokun Wang
2017-01-01
Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, that is, surface sampling and sampling relaxation, which ensures a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to previous methods. The experimental results demonstrate the effectiveness of our approach.
Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca
2017-12-01
An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms will considerably reduce the simulation time step due to its dependence on the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
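The super-time-stepping idea can be sketched with the first-order RKL1 scheme for the 1D heat equation (the paper promotes the second-order RKL2 variant; RKL1 is shown here for brevity, with coefficients as given by Meyer, Balsara & Aslam). An s-stage RKL1 step is stable for a time step up to (s^2+s)/2 times the explicit limit:

```python
# RKL1 super time stepping for u_t = u_xx on a periodic 1D grid.
# Stage recursion (Legendre-polynomial based), with w = 2/(s^2+s):
#   Y_1 = Y_0 + w*dt*L(Y_0)
#   Y_j = mu_j*Y_{j-1} + nu_j*Y_{j-2} + mu_j*w*dt*L(Y_{j-1}),
#   mu_j = (2j-1)/j,  nu_j = (1-j)/j.
import math

def laplacian(u, dx):
    n = len(u)
    return [(u[(i-1) % n] - 2*u[i] + u[(i+1) % n]) / dx**2 for i in range(n)]

def rkl1_step(u, dt, dx, s):
    """One s-stage RKL1 super step."""
    w = 2.0 / (s*s + s)
    y_prev = u[:]                                   # Y_0
    Lu = laplacian(y_prev, dx)
    y = [a + w*dt*b for a, b in zip(y_prev, Lu)]    # Y_1
    for j in range(2, s + 1):
        mu, nu = (2*j - 1)/j, (1 - j)/j
        Ly = laplacian(y, dx)
        y_next = [mu*a + nu*b + mu*w*dt*c
                  for a, b, c in zip(y, y_prev, Ly)]
        y_prev, y = y, y_next
    return y

# decay of a sine mode on a periodic grid; exact amplitude is ~exp(-t)
n, L = 64, 2*math.pi
dx = L / n
u = [math.sin(i*dx) for i in range(n)]
dt_expl = 0.5 * dx**2                 # explicit stability limit
s = 8
dt = dt_expl * (s*s + s) / 2          # 36x the explicit step
for _ in range(10):
    u = rkl1_step(u, dt, dx, s)
print(max(abs(v) for v in u))
```

With s = 8 each super step covers 36 explicit steps at the cost of only 8 Laplacian evaluations, which is the source of the speedup; the first-order accuracy visible in the slightly over-damped amplitude is what RKL2 corrects.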
Sivak, David A; Chodera, John D; Crooks, Gavin E
2014-06-19
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
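One member of the Langevin splitting family examined in such studies is the B-A-O-A-B ordering (half kick, half drift, exact Ornstein-Uhlenbeck update, half drift, half kick), closely related to the velocity Verlet discretization singled out in the abstract. The sketch below is a minimal 1D harmonic-oscillator illustration, not the authors' exact time-step-rescaled integrator:

```python
# BAOAB Langevin integrator sketch for a 1D particle. The O step applies
# the exact Ornstein-Uhlenbeck solution, so friction and noise are
# treated without additional discretization error.
import math, random

def baoab_step(x, v, dt, force, m=1.0, gamma=1.0, kT=1.0, rng=random):
    c1 = math.exp(-gamma * dt)
    c2 = math.sqrt((1.0 - c1*c1) * kT / m)
    v += 0.5 * dt * force(x) / m           # B: half kick
    x += 0.5 * dt * v                      # A: half drift
    v = c1 * v + c2 * rng.gauss(0.0, 1.0)  # O: exact OU update
    x += 0.5 * dt * v                      # A: half drift
    v += 0.5 * dt * force(x) / m           # B: half kick
    return x, v

random.seed(1)
force = lambda x: -x                       # harmonic potential U = x^2/2
x, v, acc, n = 0.0, 0.0, 0.0, 50000
for _ in range(n):
    x, v = baoab_step(x, v, 0.2, force)
    acc += x * x
print(acc / n)   # configurational <x^2>, should be near kT/k = 1
```

The long-time average of x^2 approximating kT/k even at a fairly large time step illustrates the kind of sampling-accuracy criterion used to compare members of the splitting family.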
A stable higher order space time Galerkin marching-on-in-time scheme
Pray, Andrew J.; Shanker, Balasubramaniam; Bagci, Hakan
2013-01-01
We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order
A Novel Real-Time Feature Matching Scheme
Directory of Open Access Journals (Sweden)
Ying Liu
2014-02-01
Full Text Available Affine Scale Invariant Feature Transform (ASIFT can obtain fully affine invariance; however, its time cost reaches about twice that of the Scale Invariant Feature Transform (SIFT. We propose an improved ASIFT algorithm based on feature points in scale space for real-time application. In order to detect the affine invariant feature points, we establish a second-order difference of Gaussian (DOG pyramid and replace the extreme detection in the DOG pyramid by zero detection in the proposed second-order DOG pyramid, which decreases the complexity of the scheme. Experimental results show that the proposed method achieves a substantial improvement in real-time performance compared to the traditional algorithm, while preserving the full affine invariance and precision.
Molecular dynamics based enhanced sampling of collective variables with very large time steps
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods to collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and, by extension, metadynamics, thus allowing simulations employing these methods to use similarly large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
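The baseline against which such resonance-free methods are measured is standard reversible multiple time stepping (r-RESPA): slow forces are applied as outer half-kicks around several velocity Verlet substeps of the fast forces. The sketch below shows that baseline only; the isokinetic Nosé-Hoover machinery that removes the resonance barrier is not reproduced here:

```python
# Standard r-RESPA multiple time step integrator sketch: the slow force
# is applied every outer step dt, while the fast force is integrated
# with n_inner velocity Verlet substeps of size dt/n_inner.
import math

def respa_step(x, v, dt, f_fast, f_slow, n_inner, m=1.0):
    v += 0.5 * dt * f_slow(x) / m            # outer half kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):                 # velocity Verlet on fast force
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m            # outer half kick
    return x, v

# stiff fast spring plus a weak slow spring (toy stand-in for
# bonded vs long-range forces)
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -1.0 * x
energy = lambda x, v: 0.5*v*v + 0.5*100.0*x*x + 0.5*1.0*x*x

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = respa_step(x, v, 0.05, f_fast, f_slow, n_inner=10)
print(abs(energy(x, v) - e0) / e0)   # relative energy drift stays small
```

In this regime the scheme is stable and energy-conserving, but pushing the outer step toward half the fast period triggers the resonance instability that limits standard r-RESPA and motivates the isokinetic methods of the paper.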
International Nuclear Information System (INIS)
Géré, Antoine; Hack, Thomas-Paul; Pinamonti, Nicola
2016-01-01
We develop a renormalisation scheme for time-ordered products in interacting field theories on curved space–times that consists of an analytic regularisation of Feynman amplitudes and a minimal subtraction of the resulting pole parts. This scheme is directly applicable to space–times with Lorentzian signature, manifestly generally covariant, invariant under any space–time isometries present, and constructed to all orders in perturbation theory. Moreover, the scheme correctly captures the nongeometric state-dependent contribution of Feynman amplitudes, and it is well suited for practical computations. To illustrate this last point, we compute explicit examples on a generic curved space–time and demonstrate how momentum space computations in cosmological space–times can be performed in our scheme. In this work, we discuss only scalar fields in four space–time dimensions, but we argue that the renormalisation scheme can be directly generalised to other space–time dimensions and field theories with higher spin as well as to theories with local gauge invariance. (paper)
Kudryavtsev, Yuri; Ferrer, Rafael; Huyse, Mark; Van den Bergh, Paul; Van Duppen, Piet; Vermeeren, L.
2014-01-01
The in-gas laser ionization and spectroscopy technique has been developed at the Leuven isotope separator on-line facility for the production and in-source laser spectroscopy studies of short-lived radioactive isotopes. In this article, results from a study to identify efficient optical schemes for the two-step resonance laser ionization of 18 elements are presented. © 2013 AIP Publishing LLC.
International Nuclear Information System (INIS)
Delzeit, R.; Holm-Mueller, K.
2009-01-01
Taking Brazilian bioethanol as an example, this paper presents possible sustainability criteria for a certification scheme aimed to minimize negative socio-ecological impacts and to increase the sustainable production of biomass. We describe the methods that have led us to the identification of a first set of feasible sustainability criteria for Brazilian bioethanol and discuss issues to be considered when developing certification schemes for sustainability. General problems of a certification scheme lie in the inherent danger of introducing new non-tariff trade barriers and in the problems of including important higher scale issues like land conversion and food security. A certification system cannot replace a thorough analysis of policy impacts on sustainability issues. (author)
Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)
Pestana, Reynam C.
2009-01-01
We show that the wave equation solution using a conventional finite-difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second-order time finite-difference scheme that is frequently used in more conventional finite-difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite-difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with pre- and post-stack migration results.
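The relation between REM and the conventional scheme can be seen in scalar form. Exact time stepping of a single Fourier mode of the wave equation obeys u^{n+1} = -u^{n-1} + 2*cos(k*dt)*u^n, and truncating the cosine (which REM expands in Chebyshev polynomials with Bessel-function coefficients) to 1 - (k*dt)^2/2 gives exactly the second-order scheme u^{n+1} = 2*u^n - u^{n-1} + dt^2*L*u^n with L*u = -k^2*u. The sketch below contrasts the two recursions; it is illustrative only, not the full REM/RTM implementation:

```python
# Exact-in-time vs truncated (conventional second-order FD) marching of
# one Fourier mode, sharing the two-level recursion
#   u_next = -u_prev + mult * u_curr.
import math

def march(mult, u_prev, u_curr, nsteps):
    for _ in range(nsteps):
        u_prev, u_curr = u_curr, -u_prev + mult * u_curr
    return u_curr

k, dt, nsteps = 2.0, 0.2, 100
u0, u1 = 1.0, math.cos(k * dt)          # mode cos(k t) at t = 0 and t = dt
exact = march(2*math.cos(k*dt), u0, u1, nsteps - 1)   # REM limit
fd    = march(2 - (k*dt)**2,    u0, u1, nsteps - 1)   # two-term truncation
truth = math.cos(k * nsteps * dt)       # cos(k t) at t = nsteps*dt
print(exact - truth, fd - truth)
```

The full-cosine recursion tracks the mode to round-off over arbitrarily many steps, while the truncated (conventional) recursion accumulates the time-dispersion phase error the abstract refers to.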
International Nuclear Information System (INIS)
Abd Nassir Ibrahim
2001-01-01
Development of skilled manpower in the field of NDT is one of the most important components that must be given priority in order to ensure the sustainability of the technology in any country. In this respect, ISO 9712 provides a guideline on the implementation of an HRD program in the field of NDT involving training, qualification and certification processes. ISO 9712 was developed with the hope that it would provide a guideline for the establishment of qualification and certification schemes acceptable to the whole NDT community throughout the world. With this guideline, the process of qualification and certification of NDT personnel of different countries throughout the world will be harmonized. In Malaysia, such a scheme was established in 1985, with the National Vocational Training Council appointed as the Certification Body. Although the scheme was developed based on ISO 9712, some local requirements were included which made the scheme deviate somewhat from ISO practice. Twenty years after it was first implemented, the scheme was revised and amended to ensure that the requirements of ISO 9712 are complied with. The new scheme was revised and approved in April 2000 and was implemented for the first time in the November radiography level 1 examination. (Author)
Multi-time-step domain coupling method with energy control
DEFF Research Database (Denmark)
Mahjoubi, N.; Krenk, Steen
2010-01-01
the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...... by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd....
Sharing Steps in the Workplace: Changing Privacy Concerns Over Time
DEFF Research Database (Denmark)
Jensen, Nanna Gorm; Shklovski, Irina
2016-01-01
study of a Danish workplace participating in a step counting campaign. We find that concerns of employees who choose to participate and those who choose not to differ. Moreover, privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers...
Studies on steps affecting tritium residence time in solid blanket
International Nuclear Information System (INIS)
Tanaka, Satoru
1987-01-01
For a self-sustaining CTR fuel cycle, effective tritium recovery from blankets is essential. This means that not only must the tritium breeding ratio be larger than 1.0, but a high recovery speed is also required so that the residence time of tritium in blankets is short. A short residence time means that the tritium inventory in blankets is small. In this paper, the tritium residence time and tritium inventory in a solid blanket are modeled by considering the steps constituting tritium release. Some of these tritium migration processes were experimentally evaluated. The tritium migration steps in a solid blanket using sintered breeding materials consist of diffusion in grains, desorption at grain edges, diffusion and permeation through grain boundaries, desorption at particle edges, diffusion and percolation through interconnected pores to the purging stream, and convective mass transfer to the stream. Corresponding to these steps, diffusive, soluble, adsorbed and trapped tritium inventories and the tritium in the gas phase are conceivable. The code named TTT was written to calculate these tritium inventories and the residence time of tritium. An example of the results of the calculation is shown. The blanket is REPUTER-1, the conceptual design of a commercial reversed field pinch fusion reactor studied at the University of Tokyo. The experimental studies on the migration steps of tritium are reported. (Kako, I.)
Liu, Meilin; Bagci, Hakan
2011-01-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results
A three–step discretization scheme for direct numerical solution of ...
African Journals Online (AJOL)
In this paper, a three-step discretization (numerical) formula is developed for direct integration of second-order initial value problems in ordinary differential equations. The development of the method and analysis of its basic properties adopt Taylor series expansion and Dahlquist stability test methods. The results show that ...
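The abstract does not reproduce the paper's specific three-step formula, so as a minimal illustration of *direct* integration of y'' = f(t, y) (i.e. without reduction to a first-order system), here is the classical two-step Störmer central-difference scheme y_{n+1} = 2*y_n - y_{n-1} + h^2*f(t_n, y_n):

```python
# Direct (Stormer central-difference) integration of a second-order IVP,
# with a Taylor start-up step to supply the second starting value.
import math

def stormer(f, y0, dy0, t0, h, nsteps):
    ys = [y0, y0 + h*dy0 + 0.5*h*h*f(t0, y0)]   # Taylor start-up step
    for n in range(1, nsteps):
        t = t0 + n*h
        ys.append(2*ys[n] - ys[n-1] + h*h*f(t, ys[n]))
    return ys

# y'' = -y, y(0) = 1, y'(0) = 0  =>  y = cos(t)
ys = stormer(lambda t, y: -y, 1.0, 0.0, 0.0, 0.01, 100)
print(ys[-1], math.cos(1.0))   # second-order accurate
```

Stability and order analysis for such formulas proceeds exactly as the abstract describes, via Taylor expansion of the local truncation error and the Dahlquist root condition on the characteristic polynomial.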
Decoupled Scheme for Time-Dependent Natural Convection Problem II: Time Semidiscreteness
Directory of Open Access Journals (Sweden)
Tong Zhang
2014-01-01
stability and the corresponding optimal error estimates are presented. Furthermore, a decoupled numerical scheme is proposed by decoupling the nonlinear terms via temporal extrapolation; optimal error estimates are established. Finally, some numerical results are provided to verify the performances of the developed algorithms. Compared with the coupled numerical scheme, the decoupled algorithm not only keeps good accuracy but also saves a lot of computational cost. Both theoretical analysis and numerical experiments show the efficiency and effectiveness of the decoupled method for time-dependent natural convection problem.
Plante, Ianik; Devroye, Luc
2017-10-01
Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique to calculate radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for the modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
Särkimäki, K.; Hirvijoki, E.; Terävä, J.
2018-01-01
We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
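The pitfall the abstract warns about, careless adaptive stepping of a stochastic differential equation, usually comes down to the Wiener increment: when dt changes, the noise must still be drawn with standard deviation sqrt(dt). The sketch below is an adaptive Euler-Maruyama scheme for a made-up drag law, illustrative only and not the paper's Beliaev-Budker operator:

```python
# Adaptive-step Euler-Maruyama for an Ornstein-Uhlenbeck-type SDE
#     dv = -nu(v) * v * dt + sqrt(2 D) dW.
# Key point: the Wiener increment dW is drawn fresh with std sqrt(dt)
# every time dt changes; reusing increments, or scaling them linearly
# with dt, gives erroneous statistics.
import math, random

def adaptive_em(v0, t_end, D=0.5, rng=random,
                dt_min=1e-4, dt_max=0.1, target=0.05):
    nu = lambda v: 1.0 + v*v          # velocity-dependent drag (made up)
    t, v = 0.0, v0
    while t < t_end:
        drift = -nu(v) * v
        # choose dt so the deterministic displacement stays below target
        dt = min(dt_max, max(dt_min, target / (abs(drift) + 1e-12)))
        dt = min(dt, t_end - t)
        dW = rng.gauss(0.0, math.sqrt(dt))   # correct sqrt(dt) scaling
        v += drift * dt + math.sqrt(2.0 * D) * dW
        t += dt
    return v

random.seed(7)
samples = [adaptive_em(5.0, 4.0) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean)**2 for s in samples) / len(samples)
print(mean, var)   # relaxes toward a zero-mean stationary distribution
```

Tying dt to the local drift magnitude also keeps the explicit update stable during the fast slowing-down phase, which is exactly the regime (large, rapidly varying collision frequency) where adaptivity pays off.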
Aubry, R.; Oñate, E.; Idelsohn, S. R.
2006-09-01
The method presented in Aubry et al. (Comput Struc 83:1459-1475, 2005) for the solution of an incompressible viscous fluid flow with heat transfer using a fully Lagrangian description of motion is extended to three dimensions (3D) with particular emphasis on mass conservation. A modified fractional step (FS) scheme based on the pressure Schur complement (Turek 1999), and related to the class of algebraic splittings of Quarteroni et al. (Comput Methods Appl Mech Eng 188:505-526, 2000), is used, and a new advantage of the splittings of the equations compared with the classical FS is highlighted for free surface problems. The temperature is semi-coupled with the displacement, which is the main variable in a Lagrangian description. Comparisons for various mesh Reynolds numbers are performed with the classical FS, an algebraic splitting and a monolithic solution, in order to illustrate the behaviour of the Uzawa operator and the mass conservation. As the classical fractional step is equivalent to one iteration of the Uzawa algorithm performed with a standard Laplacian as a preconditioner, it will behave well only in a mesh Reynolds number regime where the preconditioner is efficient. Numerical results are provided to assess the superiority of the modified algebraic splitting over the classical FS.
International Nuclear Information System (INIS)
Franzè, Giuseppe; Lucia, Walter; Tedesco, Francesco
2014-01-01
This paper proposes a Model Predictive Control (MPC) strategy to address regulation problems for constrained polytopic Linear Parameter Varying (LPV) systems subject to input and state constraints in which both plant measurements and command signals in the loop are sent through communication channels subject to time-varying delays (Networked Control System (NCS)). The results here proposed represent a significant extension to the LPV framework of a recent Receding Horizon Control (RHC) scheme developed for the so-called robust case. By exploiting the parameter availability, the pre-computed sequences of one-step controllable sets inner approximations are less conservative than the robust counterpart. The resulting framework guarantees asymptotic stability and constraints fulfilment regardless of plant uncertainties and time-delay occurrences. Finally, experimental results on a laboratory two-tank test-bed show the effectiveness of the proposed approach
International Nuclear Information System (INIS)
Finn, John M.
2015-01-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a "special divergence-free" (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)].
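The implicit midpoint (IM) scheme discussed above is easy to sketch for field-line tracing dx/ds = B(x), with the implicit equation solved by fixed-point iteration. For the linear rotation field B = (-y, x, 0), chosen here purely for illustration, IM conserves the quadratic invariant x^2 + y^2 to iteration tolerance, the kind of structural property behind its good KAM-tori preservation for reversible fields:

```python
# Implicit midpoint step x1 = x0 + h*B((x0 + x1)/2), solved by
# fixed-point iteration from an Euler predictor.

def implicit_midpoint_step(x, B, h, iters=50, tol=1e-14):
    x1 = [xi + h*bi for xi, bi in zip(x, B(x))]     # Euler predictor
    for _ in range(iters):
        mid = [(a + b) / 2.0 for a, b in zip(x, x1)]
        x_new = [xi + h*bi for xi, bi in zip(x, B(mid))]
        if max(abs(a - b) for a, b in zip(x_new, x1)) < tol:
            x1 = x_new
            break
        x1 = x_new
    return x1

B = lambda p: (-p[1], p[0], 0.0)      # circular field line (illustrative)
p = [1.0, 0.0, 0.0]
r0 = p[0]**2 + p[1]**2
for _ in range(1000):
    p = implicit_midpoint_step(p, B, 0.1)
print(p[0]**2 + p[1]**2 - r0)   # radius drift near machine precision
```

A non-symmetric scheme such as forward Euler would spiral outward here; it is exactly this contrast that makes the choice of scheme, and of the non-uniform time-stepping strategy, delicate for volume-preserving flows.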
[Collaborative application of BEPS at different time steps].
Lu, Wei; Fan, Wen Yi; Tian, Tian
2016-09-01
BEPSHourly is designed to simulate the ecological and physiological processes of vegetation at hourly time steps, and is often applied to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP) at site scale, at the cost of a more complex model structure and a time-consuming solving process. However, the daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, not involving many iterative processes; it is suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposed a method for the collaborative application of BEPS at daily and hourly time steps. Firstly, BEPSHourly was used to optimize the main photosynthetic parameters, the maximum rate of carboxylation (Vcmax) and the maximum rate of photosynthetic electron transport (Jmax), at site scale; the two optimized parameters were then introduced into the BEPSDaily model to estimate NPP at regional scale. The results showed that optimization of the main photosynthesis parameters based on the flux data could improve the simulation ability of the model. In 2011, the primary productivity of the different forest types, in descending order, was: deciduous broad-leaved forest, mixed forest, coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study can effectively optimize the main photosynthesis parameters Vcmax and Jmax, simulate the monthly averaged diurnal GPP and NPP, calculate regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.
A higher order space-time Galerkin scheme for time domain integral equations
Pray, Andrew J.; Beghein, Yves; Nair, Naveen V.; Cools, Kristof; Bagci, Hakan; Shanker, Balasubramaniam
2014-01-01
Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.
Beghein, Yves; Cools, Kristof; Bagci, Hakan; De Zutter, Danië l
2013-01-01
electrically conducting bodies, is free from spurious resonances. The standard marching-on-in-time technique for discretizing the TD-CFIE uses Galerkin and collocation schemes in space and time, respectively. Unfortunately, the standard scheme is theoretically
Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems
Directory of Open Access Journals (Sweden)
H. Vincent Poor
2008-05-01
Full Text Available In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA and angle-of-arrival (AOA, in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by considering a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. The simulation results are presented to analyze the performance of the estimator.
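The two-step structure described above (a coarse energy-based search followed by a finer first-path test) can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: the block size, the threshold factor, and the plain threshold crossing standing in for the hypothesis-testing stage are all assumptions.

```python
import numpy as np

def two_step_toa(r, fs, block=32, thresh_factor=0.3):
    """Hypothetical sketch of a two-step TOA estimator.

    Step 1 (coarse): find the block with the largest received energy.
    Step 2 (fine): take the first sample in that block whose magnitude
    exceeds a threshold (a stand-in for the hypothesis-testing stage).
    """
    n_blocks = len(r) // block
    energies = np.array([np.sum(r[i * block:(i + 1) * block] ** 2)
                         for i in range(n_blocks)])
    coarse = int(np.argmax(energies))           # coarse TOA: strongest block
    seg = np.abs(r[coarse * block:(coarse + 1) * block])
    thresh = thresh_factor * seg.max()
    fine = int(np.argmax(seg > thresh))         # first threshold crossing
    return (coarse * block + fine) / fs         # TOA in seconds

# toy received signal: noise plus a pulse arriving at sample 200
rng = np.random.default_rng(0)
fs = 1e9
r = 0.05 * rng.standard_normal(1024)
r[200:204] += np.array([1.0, 0.8, 0.5, 0.3])
toa = two_step_toa(r, fs)
```

On this toy signal, the coarse stage narrows the search to one 32-sample block and the fine stage picks the first above-threshold sample, recovering the pulse arrival at sample 200.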
A symmetrical image encryption scheme in wavelet and time domain
Luo, Yuling; Du, Minghui; Liu, Junxiu
2015-02-01
There has been increasing concern about effective storage and secure transmission of multimedia information over the Internet. A great variety of encryption schemes have been proposed to ensure information security during transmission, but most current approaches diffuse the data only in the spatial domain, which reduces storage efficiency. A lightweight image encryption strategy based on chaos is proposed in this paper, with the encryption process designed in the transform domain. The original image is decomposed into approximation and detail components using the integer wavelet transform (IWT); then, as the more important component of the image, the approximation coefficients are diffused by secret keys generated from a spatiotemporal chaotic system, followed by the inverse IWT to construct the diffused image; finally, a plain permutation is performed on the diffused image using the Logistic map to further reduce the correlation between adjacent pixels. Experimental results and performance analysis demonstrate that the proposed scheme is an efficient, secure and robust encryption mechanism that achieves effective coding compression to satisfy storage requirements.
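A minimal sketch of the permutation stage: a key-seeded Logistic map generates a chaotic sequence whose sort order scrambles the pixel positions. The key values and the argsort-based construction are illustrative assumptions, not the paper's exact design (which also includes the IWT diffusion stage).

```python
import numpy as np

def logistic_sequence(x0, r, n, burn=100):
    """Iterate the Logistic map x <- r*x*(1-x); discard a burn-in."""
    x = x0
    seq = np.empty(n)
    for _ in range(burn):
        x = r * x * (1.0 - x)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def permute(img_flat, key=(0.3579, 3.99)):
    """Key-driven permutation: sorting the chaotic sequence yields
    a pseudo-random but reproducible ordering of pixel indices."""
    order = np.argsort(logistic_sequence(key[0], key[1], img_flat.size))
    return img_flat[order], order

def unpermute(scrambled, order):
    out = np.empty_like(scrambled)
    out[order] = scrambled
    return out

img = np.arange(16, dtype=np.uint8)      # toy 4x4 "image", flattened
scr, order = permute(img)
rec = unpermute(scr, order)              # exact recovery with the same key
```

Because the sequence is fully determined by the key, the receiver regenerates the same ordering and inverts the permutation exactly.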
On an adaptive time stepping strategy for solving nonlinear diffusion equations
International Nuclear Information System (INIS)
Chen, K.; Baines, M.J.; Sweby, P.K.
1993-01-01
A new time step selection procedure is proposed for solving nonlinear diffusion equations. It has been implemented in the ASWR finite element code of Lorenz and Svoboda [10] for 2D semiconductor process modelling diffusion equations. The strategy is based on equidistributing the local truncation errors of the numerical scheme. The use of B-splines for interpolation (as well as for the trial space) results in a banded and diagonally dominant matrix. The approximate inverse of such a matrix can be provided to a high degree of accuracy by another banded matrix, which in turn can be used to work out the approximate finite difference scheme corresponding to the ASWR finite element method, and further to calculate estimates of the local truncation errors of the numerical scheme. Numerical experiments on six full simulation problems arising in semiconductor process modelling have been carried out. Results show that our proposed strategy is more efficient and better conserves the total mass. 18 refs., 6 figs., 2 tabs.
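Equidistributing local truncation errors leads to the standard step-size controller: if a method of order p produced an error estimate err on the last step, the next step is scaled so the error lands near the tolerance. A generic sketch (the safety factor and growth limits are conventional choices, not taken from the paper):

```python
def next_dt(dt, err, tol, p=2, safety=0.9, fmax=3.0, fmin=0.3):
    """Choose the next time step so every step carries roughly the same
    local truncation error tol: dt_new = dt * (tol/err)^(1/p),
    damped by a safety factor and clamped to limit abrupt changes."""
    factor = safety * (tol / max(err, 1e-300)) ** (1.0 / p)
    return dt * min(fmax, max(fmin, factor))

dt1 = next_dt(0.1, err=1e-4, tol=1e-4)  # on target: shrink by the safety factor
dt2 = next_dt(0.1, err=0.0,  tol=1e-4)  # error far below tol: growth capped at fmax
dt3 = next_dt(0.1, err=1.0,  tol=1e-4)  # error far above tol: shrink capped at fmin
```

The clamps keep the step history smooth, which matters for multistep schemes whose coefficients depend on the step-size ratio.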
Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît
2016-04-12
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
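The Metropolis correction at the heart of the hybrid MD-MC idea can be sketched on a toy system: propagate with a cheap surrogate Hamiltonian, then accept or reject against the expensive reference Hamiltonian, treating the surrogate's error as external work. Everything below (the 1-D harmonic potentials, spring constants, and leapfrog parameters) is an illustrative assumption; it is not the DHMTS implementation, which nests the correction in a RESPA-like inner loop.

```python
import math
import random

random.seed(1)
beta, dt, nsteps = 1.0, 0.1, 10
k_ref, k_cheap = 1.0, 0.9            # expensive vs cheap spring constants

def H_ref(x, p):                     # reference (expensive) Hamiltonian
    return 0.5 * p * p + 0.5 * k_ref * x * x

def force_cheap(x):                  # force from the cheap surrogate
    return -k_cheap * x

def hybrid_step(x):
    """Leapfrog under the cheap Hamiltonian, Metropolis-corrected with
    the reference Hamiltonian so sampling stays exactly Boltzmann."""
    p = random.gauss(0.0, math.sqrt(1.0 / beta))   # fresh momentum
    x_old, h_old = x, H_ref(x, p)
    for _ in range(nsteps):                        # velocity Verlet on cheap H
        p += 0.5 * dt * force_cheap(x)
        x += dt * p
        p += 0.5 * dt * force_cheap(x)
    dH = H_ref(x, p) - h_old                       # surrogate error as "work"
    if dH <= 0.0 or random.random() < math.exp(-beta * dH):
        return x                                   # accept
    return x_old                                   # reject: keep old state

x, samples = 0.0, []
for _ in range(20000):
    x = hybrid_step(x)
    samples.append(x)
var = sum(s * s for s in samples) / len(samples)   # target: 1/(beta*k_ref) = 1
```

Because the acceptance test uses the reference Hamiltonian, the sampled variance approaches 1/(beta*k_ref) even though the dynamics ran on the mismatched cheap spring.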
Solving the Sea-Level Equation in an Explicit Time Differencing Scheme
Klemann, V.; Hagedoorn, J. M.; Thomas, M.
2016-12-01
In preparation for coupling the solid Earth to an ice-sheet compartment in an Earth-system model, the dependency of the initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain using an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. References: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure Appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int., 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x.
A higher order space-time Galerkin scheme for time domain integral equations
Pray, Andrew J.
2014-12-01
Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: (1) exact integration, (2) Lubich quadrature, (3) smooth temporal basis functions, and (4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.
Operation Modes and Control Schemes for Internet-Based Teleoperation System with Time Delay
Institute of Scientific and Technical Information of China (English)
曾庆军; 宋爱国
2003-01-01
Teleoperation systems play an important role in executing tasks in hazardous environments. As computer networks such as the Internet are used as the communication channel of a teleoperation system, varying time delay can render the overall system unstable and reduces transparency. This paper proposes twelve operation modes with different control schemes for teleoperation over the Internet with time delay, and specifies an optimal operation mode with its control scheme based on the trade-off between the passivity and transparency properties. The validity of the proposed optimal mode and control scheme is experimentally confirmed using a simple one-DOF master-slave manipulator system.
Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric
2012-01-01
An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed-form expressions and (ii) a PE(CE)^m-type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.
Ulku, Huseyin Arda
2012-09-01
An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed-form expressions and (ii) a PE(CE)^m-type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.
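A PE(CE)^m step predicts with an explicit formula, then alternates Evaluate and Correct passes m times. The sketch below uses forward Euler as the predictor and the trapezoidal rule as the corrector on a scalar ODE; the actual MOT solver applies the same pattern to the discretized TD-MFIE system, so the concrete formulas here are stand-in assumptions.

```python
import math

def pece_m(f, y0, t0, t1, n, m=2):
    """PE(CE)^m time marching sketch: Predict with forward Euler, then
    m rounds of Evaluate + Correct with the trapezoidal rule."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        fy = f(t, y)
        yp = y + h * fy                     # P: predictor (forward Euler)
        for _ in range(m):                  # (CE)^m: evaluate, then correct
            fp = f(t + h, yp)               # E: evaluate at predicted state
            yp = y + 0.5 * h * (fy + fp)    # C: trapezoidal corrector
        t, y = t + h, yp
    return y

# decay test: y' = -y, y(0) = 1, so y(1) = exp(-1)
approx = pece_m(lambda t, y: -y, 1.0, 0.0, 1.0, 100, m=2)
```

The scheme stays explicit (no linear solve per step), while the corrector passes pull the result toward the implicit trapezoidal solution, which is what buys the improved stability.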
A modification scheme for seismic acceleration - time histories
International Nuclear Information System (INIS)
Bethell, J.
1979-05-01
A technique is described for the modification of recorded earthquake acceleration-time histories which gives reduced peak accelerations whilst leaving other significant characteristics unchanged. Such modifications are of use in constructing design-basis acceleration-time histories such that all important parameters conform to a specified return period. The technique is applied to two recordings from the 1966 Parkfield earthquake, their peak accelerations being reduced in each case from about 0.40 g to 0.25 g. (author)
A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers
Bagci, Hakan
2015-01-01
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right-hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability
A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers
Bagci, Hakan
2015-01-07
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right-hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability
International Nuclear Information System (INIS)
Grashilin, V.A.; Karyshev, Yu.Ya.
1982-01-01
A six-cycle control scheme for a step motor is described. The block diagram and the basic circuit of the step-motor control are presented. The controller comprises a pulse shaper, an electronic commutator and power amplifiers. Supplying the step motor from a six-cycle electronic commutator provides higher reliability and accuracy than a three-cycle commutator. The step motor is controlled by a program supplied by an external source of control signals. Timing diagrams for the step-motor control are presented, and the specifications of the step motor are given.
Modern EMC analysis I time-domain computational schemes
Kantartzis, Nikolaos V
2008-01-01
The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of contemporary real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, the analysis covers the theory of the finite-difference time-domain, the transmission-line matrix/modeling, and the finite i
On a Stable and Consistent Finite Difference Scheme for a Time ...
African Journals Online (AJOL)
NJABS
… established time-independent Schrödinger Wave Equation (SWE). To develop the stability criterion … the rate at which signals in the numerical scheme travel will be faster than their real-world counterparts, and this unrealistic expectation leads …
Aggressive time step selection for the time asymptotic velocity diffusion problem
International Nuclear Information System (INIS)
Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.
1984-12-01
An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the workload of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.
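The "discard steps made with Δt too large" idea can be sketched on a scalar toy problem: estimate the local error by comparing one Euler step against two half steps, reject and halve Δt when the error exceeds a tolerance, and grow Δt aggressively after every accepted step. This is a generic illustration, not the paper's ADI implementation.

```python
import math

def adaptive_euler(f, y, t, t_end, dt, tol=1e-4, grow=2.0):
    """March y' = f(t, y) with forward Euler, comparing a full step
    against two half steps; steps whose discrepancy exceeds tol are
    discarded and retried with a halved dt."""
    rejected = 0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        full = y + dt * f(t, y)                        # one full step
        half = y + 0.5 * dt * f(t, y)                  # two half steps
        two = half + 0.5 * dt * f(t + 0.5 * dt, half)
        if abs(two - full) > tol:
            dt *= 0.5                                  # too large: discard
            rejected += 1
            continue
        y, t = two, t + dt                             # accept finer result
        dt *= grow                                     # aggressive growth
    return y, rejected

yT, nrej = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0, dt=0.5)
```

The aggressive growth factor keeps probing for a larger Δt; the rejection branch is the self-supervision that catches overshoots.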
Arbitrary Dimension Convection-Diffusion Schemes for Space-Time Discretizations
Energy Technology Data Exchange (ETDEWEB)
Bank, Randolph E. [Univ. of California, San Diego, CA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zikatanov, Ludmil T. [Bulgarian Academy of Sciences, Sofia (Bulgaria)
2016-01-20
This note proposes embedding a time dependent PDE into a convection-diffusion type PDE (in one space dimension higher) with singularity, for which two discretization schemes, the classical streamline-diffusion and the EAFE (edge average finite element) one, are investigated in terms of stability and error analysis. The EAFE scheme, in particular, is extended to arbitrary order, which is of interest in its own right. Numerical results in the combined space-time domain demonstrate the feasibility of the proposed approach.
Application of the symplectic finite-difference time-domain scheme to electromagnetic simulation
International Nuclear Information System (INIS)
Sha, Wei; Huang, Zhixiang; Wu, Xianliang; Chen, Mingsheng
2007-01-01
An explicit fourth-order finite-difference time-domain (FDTD) scheme using a symplectic integrator is applied to electromagnetic simulation. A feasible numerical implementation of the symplectic FDTD (SFDTD) scheme is specified. In particular, new strategies for the air-dielectric interface treatment and the near-to-far-field (NFF) transformation are presented. Using the SFDTD scheme, both the radiation and the scattering of three-dimensional objects are computed. Furthermore, the energy-conserving property of the SFDTD scheme is verified under long-term simulation. Numerical results suggest that the SFDTD scheme is more efficient than the traditional FDTD method and other high-order methods, and can save computational resources.
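The long-term energy conservation that distinguishes symplectic integrators can be demonstrated on a toy oscillator using a standard fourth-order Yoshida composition (the same integrator family underlying the SFDTD scheme; the oscillator and step size below are illustrative assumptions, not the paper's Maxwell setup):

```python
import math

# Yoshida 4th-order composition coefficients
w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
w0 = -(2.0 ** (1.0 / 3.0)) * w1
c = [w1 / 2.0, (w0 + w1) / 2.0, (w0 + w1) / 2.0, w1 / 2.0]
d = [w1, w0, w1, 0.0]

def yoshida4_step(x, p, dt, force):
    """One 4th-order symplectic step: alternating drifts and kicks with
    the Yoshida coefficients."""
    for ci, di in zip(c, d):
        x += ci * dt * p           # drift
        p += di * dt * force(x)    # kick
    return x, p

# long-term test on a harmonic oscillator: H = p^2/2 + x^2/2
x, p, dt = 1.0, 0.0, 0.05
E0 = 0.5 * (p * p + x * x)
for _ in range(100000):
    x, p = yoshida4_step(x, p, dt, lambda q: -q)
drift = abs(0.5 * (p * p + x * x) - E0)
```

Unlike a non-symplectic method of the same order, the energy error here oscillates within a bounded band instead of growing secularly, even after 100,000 steps.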
Time Reversal UWB Communication System: A Novel Modulation Scheme with Experimental Validation
Directory of Open Access Journals (Sweden)
Khaleghi A
2010-01-01
A new modulation scheme is proposed for a time reversal (TR) ultra-wideband (UWB) communication system. The new modulation scheme uses binary pulse amplitude modulation (BPAM) and adds a new level of modulation to increase the data rate of a TR UWB communication system. Multiple data bits can be transmitted simultaneously at the cost of a small amount of added interference. The bit error rate (BER) performance and the maximum achievable data rate of the new modulation scheme are theoretically analyzed. Two separate measurement campaigns were carried out to analyze the proposed modulation scheme. In the first campaign, the frequency responses of a typical indoor channel were measured and the performance was studied by simulations using the measured frequency responses. Theoretical and simulated performances are in strong agreement with each other. Furthermore, the BER performance of the proposed modulation scheme is compared with that of existing modulation schemes: it is shown that the proposed scheme outperforms QAM and PAM in an AWGN channel. In the second campaign, an experimental validation of the proposed modulation scheme was carried out, showing that the performances from the two measurement campaigns are in good agreement.
Positivity-preserving space-time CE/SE scheme for high speed flows
Shen, Hua
2017-03-02
We develop a space-time conservation element and solution element (CE/SE) scheme using a simple slope limiter to preserve the positivity of the density and pressure in computations of inviscid and viscous high-speed flows. In general, the limiter works with all existing CE/SE schemes. Here, we test the limiter on a central Courant number insensitive (CNI) CE/SE scheme implemented on hybrid unstructured meshes. Numerical examples show that the proposed limiter preserves the positivity of the density and pressure without disrupting the conservation law; it also improves robustness without losing accuracy in solving high-speed flows.
Positivity-preserving space-time CE/SE scheme for high speed flows
Shen, Hua; Parsani, Matteo
2017-01-01
We develop a space-time conservation element and solution element (CE/SE) scheme using a simple slope limiter to preserve the positivity of the density and pressure in computations of inviscid and viscous high-speed flows. In general, the limiter works with all existing CE/SE schemes. Here, we test the limiter on a central Courant number insensitive (CNI) CE/SE scheme implemented on hybrid unstructured meshes. Numerical examples show that the proposed limiter preserves the positivity of the density and pressure without disrupting the conservation law; it also improves robustness without losing accuracy in solving high-speed flows.
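The role of a positivity-preserving slope limiter can be illustrated in one dimension: scale each cell's reconstruction slope by a factor theta in [0, 1] so the reconstructed face values stay positive while the cell average is untouched. This is a generic sketch of the idea, not the CE/SE limiter itself (which operates on the space-time solution elements):

```python
import numpy as np

def positivity_limited_slopes(rho, eps=1e-12):
    """Central slopes, each scaled by theta in [0, 1] so the face values
    rho_i -/+ s_i/2 stay >= eps; scaling a slope never changes the cell
    average, so conservation is not disrupted."""
    s = np.zeros_like(rho)
    s[1:-1] = 0.5 * (rho[2:] - rho[:-2])      # central slope estimate
    lowest = rho - 0.5 * np.abs(s)            # most negative face value
    theta = np.ones_like(rho)
    bad = lowest < eps
    denom = np.maximum(0.5 * np.abs(s), 1e-300)   # guard zero slopes
    theta[bad] = (rho[bad] - eps) / denom[bad]
    theta = np.clip(theta, 0.0, 1.0)
    return theta * s

rho = np.array([1.0, 0.4, 1e-8, 0.8, 1.0])    # near-vacuum cell in the middle
s = positivity_limited_slopes(rho)
left = rho - 0.5 * s                          # reconstructed face values
right = rho + 0.5 * s
```

Only the near-vacuum cell's slope is squeezed; cells whose reconstructions were already positive keep their full second-order slopes, so accuracy is retained in smooth regions.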
Performance evaluation of time-delay control schemes for uninterruptible power supplies
DEFF Research Database (Denmark)
Loh, P.C; Tang, Y.; Blaabjerg, Frede
2008-01-01
… a more powerful processor. Avoiding these added complexities, this paper presents and compares a number of time-delay control schemes for UPS control, where the main building blocks needed are readily available memory storage and simple transfer functions formulated with either no or at least one … -matching characteristic, the presented control schemes are expected to be more robust and less sensitive to implementation noise. In addition, the presented control schemes are deduced to have fast dynamic response, implying that the supply's output voltage is virtually not influenced by any transient load …
Development of real time diagnostics and feedback algorithms for JET in view of the next step
International Nuclear Information System (INIS)
Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.
2004-01-01
Real time control of many plasma parameters will be an essential aspect in the development of reliable high-performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top-quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)
Development of real time diagnostics and feedback algorithms for JET in view of the next step
International Nuclear Information System (INIS)
Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.
2004-01-01
Real time control of many plasma parameters will be an essential aspect in the development of reliable high-performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top-quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)
Development of real time diagnostics and feedback algorithms for JET in view of the next step
Energy Technology Data Exchange (ETDEWEB)
Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)
2004-07-01
Real time control of many plasma parameters will be an essential aspect in the development of reliable high-performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top-quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)
Extension of the time-average model to Candu refueling schemes involving reshuffling
International Nuclear Information System (INIS)
Rouben, Benjamin; Nichita, Eleodor
2008-01-01
Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970s to calculate the time average of the flux and power distributions and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes, whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model that allows for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)
Experimental Investigation of Cooperative Schemes on a Real-Time DSP-Based Testbed
Directory of Open Access Journals (Sweden)
Mavrokefalidis Christos
2009-01-01
Experimental results on the well-known cooperative relaying schemes amplify-and-forward (AF), detect-and-forward (DF), cooperative maximum ratio combining (CMRC), and distributed space-time coding (DSTC) are presented in this paper. A novel relaying scheme named "selection relaying" (SR), in which one of two relays is selected based on path loss, is also tested. For all schemes except AF, receive antenna diversity is an option that can be switched on or off. For DF and DSTC, a "selective" feature is introduced, in which the relay only forwards frames with a receive SNR above 6 dB. In our measurements, all of the above cooperative relaying schemes increase the coverage area compared with direct transmission. The "antenna diversity" and "selective" features improve the performance. Good performance is obtained with CMRC, DSTC, and SR.
Resdiansyah; O. K Rahmat, R. A.; Ismail, A.
2018-03-01
Green transportation refers to sustainable transport that has the least social and environmental impact while remaining sustainable in its use of energy sources; it includes the deployment of non-motorized transport strategies to promote healthy lifestyles, also known as Mobility Management Schemes (MMS). As construction of road infrastructure alone cannot solve the problem of congestion, past research has shown that MMS is an effective measure for mitigating congestion and achieving green transportation. MMS consists of different strategies and policies, subdivided into categories according to how they are able to influence travel behaviour. Appropriate selection of mobility strategies will ensure their effectiveness in mitigating congestion problems. Nevertheless, determining appropriate strategies requires a human expert and depends on a number of success factors. This research developed a computer system that clones the human expert, called E-MMS. The process of knowledge acquisition for MMS strategies and the subsequent strategy-selection process have been encoded in a knowledge-based system using an expert-system shell. The newly developed system was verified, validated and evaluated (VV&E) by comparing its output with the recommendations of a real transportation expert.
Directory of Open Access Journals (Sweden)
Babak MEHRAN
2009-01-01
Full Text Available Evaluation of the efficiency of congestion relief schemes on expressways has generally been based on average travel time analysis. However, road authorities are much more interested in knowing the possible impacts of improvement schemes on safety and travel time reliability prior to implementing them in real conditions. A methodology is presented to estimate travel time reliability based on modeling travel time variations as a function of demand, capacity and weather conditions. For a subject expressway segment, patterns of demand and capacity were generated for each 5-minute interval over a year by using the Monte-Carlo simulation technique, and accidents were generated randomly according to traffic conditions. A whole-year analysis was performed by comparing demand and available capacity for each scenario, and shockwave analysis was used to estimate the queue length at each time interval. Travel times were estimated from refined speed-flow relationships, and the buffer time index was estimated as a measure of travel time reliability. It was shown that the estimated reliability measures and predicted number of accidents are very close to values observed in empirical data. After validation, the methodology was applied to assess the impact of two alternative congestion relief schemes on a subject expressway segment. One alternative was to open the hard shoulder to traffic during the peak period, while the other was to reduce the peak period demand by 15%. The extent of improvements in travel conditions and safety, as well as the reduction in road users' costs after implementing each improvement scheme, were estimated. It was shown that both strategies can result in up to a 23% reduction in the number of accidents and significant improvements in travel time reliability. Finally, the advantages and challenging issues of selecting each improvement scheme were discussed.
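The buffer time index used above has a standard definition: the extra time, relative to the mean, that a traveller must budget so that 95% of trips arrive on time. A minimal sketch with synthetic travel times (the lognormal inputs are illustrative stand-ins, not the paper's simulated data):

```python
import numpy as np

def buffer_time_index(travel_times):
    """Buffer time index: the extra time (relative to the mean) a traveller
    must budget so that 95% of trips arrive on time."""
    t = np.asarray(travel_times, dtype=float)
    mean = t.mean()
    return (np.percentile(t, 95) - mean) / mean

# Synthetic travel times (minutes) standing in for one simulated year
rng = np.random.default_rng(0)
times = rng.lognormal(mean=np.log(12.0), sigma=0.25, size=10000)
bti = buffer_time_index(times)
```

A segment with no variability has a buffer time index of zero; larger values indicate less reliable travel times.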
Directory of Open Access Journals (Sweden)
Chuan Zhu
2014-01-01
Full Text Available This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location-predictive and time-adaptive data gathering scheme is proposed. We introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. Using their local clocks and these time-location formulas, nodes in the network can accurately calculate the current location of the mobile sink and route data packets toward it in a timely manner by multihop relay. Considering that the amounts of data generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less data transmission delay and balances energy consumption among nodes.
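The time-location idea can be sketched as follows: with loosely synchronized clocks and a known sink trajectory, each node maps its local time to a sink position. The square patrol path and speed below are assumptions for illustration, not the trajectory used in the paper:

```python
def sink_location(t, side=100.0, speed=5.0):
    """Predict the mobile sink's location at (loosely synchronized) time t
    from a known time-location formula: constant speed along the perimeter
    of a square of side `side` (an assumed patrol path for illustration)."""
    perimeter = 4.0 * side
    d = (speed * t) % perimeter            # distance travelled along the path
    edge, offset = divmod(d, side)
    if edge == 0:
        return (offset, 0.0)               # bottom edge, heading right
    if edge == 1:
        return (side, offset)              # right edge, heading up
    if edge == 2:
        return (side - offset, side)       # top edge, heading left
    return (0.0, side - offset)            # left edge, heading down
```

Any node holding a packet can evaluate this formula locally and forward toward the predicted position, with no per-packet sink announcements.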
Zhu, Chuan; Wang, Yao; Han, Guangjie; Rodrigues, Joel J P C; Lloret, Jaime
2014-01-01
International Nuclear Information System (INIS)
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-01
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed-type partial differential equations occupy an intermediate position. In such methods each superstep takes "s" explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems: a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful.
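A minimal sketch of an RKL1-style super-step for the 1-D heat equation, using the Legendre-recursion stage coefficients reported for RKL1; the grid, s = 5, and the test problem are illustrative choices, not the paper's benchmarks:

```python
import numpy as np

def rkl1_superstep(u, rhs, dt_expl, s):
    """One RKL1 super-step: s Runge-Kutta-like stages built on the Legendre
    recursion advance u by tau = dt_expl*(s^2+s)/2 in a single super-step."""
    tau = dt_expl * (s * s + s) / 2.0
    w1 = 2.0 / (s * s + s)                       # mu~_1
    y_prev, y = u.copy(), u + w1 * tau * rhs(u)  # stages Y_0 and Y_1
    for j in range(2, s + 1):
        mu_j = (2.0 * j - 1.0) / j
        nu_j = (1.0 - j) / j
        y_prev, y = y, mu_j * y + nu_j * y_prev + mu_j * w1 * tau * rhs(y)
    return y

# 1-D heat equation u_t = D*u_xx with zero Dirichlet boundaries
D, n = 1.0, 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def rhs(u):
    d2 = np.zeros_like(u)
    d2[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return D * d2

u = np.sin(np.pi * x)
dt_expl = 0.4 * dx**2 / D        # below the explicit stability limit
for _ in range(10):              # each super-step covers 15 explicit steps (s=5)
    u = rkl1_superstep(u, rhs, dt_expl, s=5)
```

With s = 5 each super-step covers (s² + s)/2 = 15 explicit steps while using only 5 right-hand-side evaluations, which is the cost advantage super-time-stepping trades on.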
Coherent states for the time dependent harmonic oscillator: the step function
International Nuclear Information System (INIS)
Moya-Cessa, Hector; Fernandez Guasti, Manuel
2003-01-01
We study the time evolution for the quantum harmonic oscillator subjected to a sudden change of frequency. It is based on an approximate analytic solution to the time dependent Ermakov equation for a step function. This approach allows for a continuous treatment that differs from former studies that involve the matching of two time independent solutions at the time when the step occurs
Woźniak, M.; Smołka, M.; Cortes, Adriano Mauricio; Paszyński, M.; Schaefer, R.
2016-01-01
We study the features of a new mixed integration scheme dedicated to solving non-stationary variational problems. The scheme is composed of an FEM approximation with respect to the space variable coupled with a three-level time integration scheme.
Part-time work scheme as a pre-retirement measure
2004-01-01
Upon the proposal of the Standing Concertation Committee at its meeting on 3 December 2003, the Director-General has approved the extension for one year of the Part-time work scheme as a pre-retirement measure, until 1 January 2005 inclusive. Human Resources Department Tel. 72808/74128
10 Gb/s Real-Time All-VCSEL Low Complexity Coherent scheme for PONs
DEFF Research Database (Denmark)
Rodes Lopez, Roberto; Cheng, Ning; Jensen, Jesper Bevensee
2012-01-01
Real-time demodulation of a 10 Gb/s all-VCSEL based coherent PON link with a simplified coherent receiver scheme is demonstrated. A receiver sensitivity of −33 dBm is achieved, providing high splitting ratio and link reach.
On the security of the Winternitz one-time signature scheme
Buchmann, Johannes; Dahmen, Erik; Ereth, Sarah; Hülsing, Andreas; Rückert, Markus; Nitaj, A.; Pointcheval, D.
2011-01-01
We show that the Winternitz one-time signature scheme is existentially unforgeable under adaptive chosen message attacks when instantiated with a family of pseudo random functions. Compared to previous results, which require a collision resistant hash function, our result provides significantly
Institute of Scientific and Technical Information of China (English)
ZHANG Ai-ling; ZHANG Yue; SONG Hong-yun; YAO Yuan; PAN Hong-gang
2018-01-01
An optical modulation format generation scheme based on spectral filtering and frequency-to-time mapping is experimentally demonstrated. Many modulation formats with continuously adjustable duty ratio and bit rate can be formed by changing the dispersion of the dispersive element and the bandwidth of the shaped spectrum in this scheme. In the experiment, a non-return-to-zero (NRZ) signal with a bit rate of 29.41 Gbit/s and a 1/2 duty ratio return-to-zero (RZ) signal with a bit rate of 13.51 Gbit/s are obtained. The maximum bit rate of the modulation format signal is also analyzed.
Extended neural network-based scheme for real-time force tracking with magnetorheological dampers
DEFF Research Database (Denmark)
Weber, Felix; Bhowmik, Subrata; Høgsberg, Jan Becker
2014-01-01
This paper validates numerically and experimentally a new neural network-based real-time force tracking scheme for magnetorheological (MR) dampers on a five-storey shear frame with MR damper. The inverse model is trained with absolute values of measured velocity and force because the targeted...... the pre-yield to the post-yield region. A control-oriented approach is presented to compensate for these drawbacks. The resulting control force tracking scheme is validated for the emulation of viscous damping, clipped viscous damping with negative stiffness, and friction damping with negative stiffness...
A New Quantum Key Distribution Scheme Based on Frequency and Time Coding
International Nuclear Information System (INIS)
Chang-Hua, Zhu; Chang-Xing, Pei; Dong-Xiao, Quan; Jing-Liang, Gao; Nan, Chen; Yun-Hui, Yi
2010-01-01
A new scheme of quantum key distribution (QKD) using frequency and time coding is proposed, in which the security is based on the frequency-time uncertainty relation. In this scheme, the binary information sequence is encoded randomly on either the central frequency or the time delay of the optical pulse at the sender. When frequency coding is selected, the central frequency of the single-photon pulse is set to ω1 for bit 0 and to ω2 for bit 1. When time coding is selected, the single-photon pulse is not delayed for bit 0 and is delayed by τ for bit 1. At the receiver, either the frequency or the time delay of the pulse is measured randomly, and the final key is obtained after basis comparison, data reconciliation and privacy amplification. With the proposed method, the effect of noise in the fiber channel and environment on the QKD system can be reduced effectively.
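The basis-comparison (sifting) step can be illustrated with a toy simulation: the sender encodes each bit in a randomly chosen frequency or time basis, the receiver measures in a random basis, and only matching-basis bits are kept. This sketch assumes an ideal, noise-free channel and omits reconciliation and privacy amplification entirely:

```python
import random

def sift_key(n_pulses, seed=1):
    """Toy model of basis sifting: keep a bit only when the sender's
    randomly chosen encoding basis ('freq' or 'time') matches the
    receiver's randomly chosen measurement basis."""
    rng = random.Random(seed)
    key = []
    for _ in range(n_pulses):
        bit = rng.randint(0, 1)
        send_basis = rng.choice(("freq", "time"))
        recv_basis = rng.choice(("freq", "time"))
        if send_basis == recv_basis:      # bases announced and compared
            key.append(bit)               # ideal channel: bit recovered
    return key

key = sift_key(1000)
```

On average half the pulses survive sifting, which is why the raw pulse rate must be roughly twice the desired sifted key rate.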
Zanotti, Olindo; Dumbser, Michael
2016-01-01
We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have noticed that the new ADER
A stable computational scheme for stiff time-dependent constitutive equations
International Nuclear Information System (INIS)
Shih, C.F.; Delorenzi, H.G.; Miller, A.K.
1977-01-01
Viscoplasticity and creep type constitutive equations are increasingly being employed in finite element codes for evaluating the deformation of high temperature structural members. These constitutive equations frequently exhibit stiff regimes, which makes an analytical assessment of the structure very costly. A computational scheme for handling deformation in stiff regimes is proposed in this paper. Through finite element discretization, the governing partial differential equations in the spatial (x) and time (t) variables are reduced to a system of nonlinear ordinary differential equations in the independent variable t. The constitutive equations are expanded in a Taylor's series about selected values of t. The resulting system of differential equations is then integrated by an implicit scheme which employs a predictor technique to initiate the Newton-Raphson procedure. To examine the stability and accuracy of the computational scheme, a series of calculations were carried out for uniaxial specimens and thick wall tubes subjected to mechanical and thermal loading. (Auth.)
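The core of such an implicit scheme, stripped of the finite element machinery, is a backward Euler step solved by Newton-Raphson at each time level, with the previous value acting as the predictor. A sketch on a scalar stiff model problem (the relaxation equation below is an illustrative stand-in for a constitutive law, not the authors' equations):

```python
import math

def backward_euler_newton(f, dfdy, t0, y0, h, n_steps, tol=1e-12):
    """Implicit (backward) Euler: at each step solve
       g(y_new) = y_new - y_old - h*f(t_new, y_new) = 0
    by Newton-Raphson, starting from the previous value as predictor."""
    t, y = t0, y0
    for _ in range(n_steps):
        t_new, y_new = t + h, y              # predictor
        for _ in range(50):                  # Newton-Raphson iterations
            g = y_new - y - h * f(t_new, y_new)
            dg = 1.0 - h * dfdy(t_new, y_new)
            delta = g / dg
            y_new -= delta
            if abs(delta) < tol:
                break
        t, y = t_new, y_new
    return t, y

# Stiff model problem: y' = -k*(y - cos(t)) with k >> 1; the solution
# relaxes almost instantly onto the slow manifold y ~ cos(t).
k = 1000.0
t_end, y_end = backward_euler_newton(
    f=lambda t, y: -k * (y - math.cos(t)),
    dfdy=lambda t, y: -k,
    t0=0.0, y0=0.0, h=0.1, n_steps=20)
```

An explicit method would need h below about 2/k = 0.002 here for stability; the implicit step tracks the slow manifold accurately with h = 0.1.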
Efficient coding schemes with power allocation using space-time-frequency spreading
Institute of Scientific and Technical Information of China (English)
Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao
2006-01-01
An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high bit rate data transmission over frequency selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding and adaptive power allocation to provide higher quality of transmission in terms of the bit error rate performance and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme enjoys high coding advantages and low-complexity decoding. The significant performance improvement of our design is confirmed by corroborating numerical simulations.
New Encryption Scheme of One-Time Pad Based on KDC
Xie, Xin; Chen, Honglei; Wu, Ying; Zhang, Heng; Wu, Peng
As more and more leakage incidents occur, traditional encryption systems have not adapted to the complex and volatile network environment, so a new encryption system is needed that can protect information security well; this is the starting point of this paper. Based on the DES and RSA encryption systems, this paper proposes a new one-time pad scheme, which truly achieves "one-time pad" and provides a new, more reliable encryption method for information security.
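The one-time pad itself is a bitwise XOR of the message with an equally long random key; a minimal sketch (the key distribution via DES/RSA and the KDC, which is the subject of the paper, is omitted here):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte. The key
    must be truly random, as long as the message, and used only once."""
    if len(key) != len(plaintext):
        raise ValueError("one-time pad key must match the message length")
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt        # XOR is its own inverse

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # fresh random pad per message
cipher = otp_encrypt(message, key)
```

Applying the same function twice with the same key recovers the plaintext, and security rests entirely on never reusing the pad.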
Solving point reactor kinetic equations by time step-size adaptable numerical methods
International Nuclear Information System (INIS)
Liao Chaqing
2007-01-01
Based on an analysis of the effects of time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The point reactor kinetic equations (PRKEs) were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
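The two-step (step-doubling) adaptation idea can be sketched on a linear test problem: one implicit Euler step of size h is compared with two steps of size h/2, their difference estimates the local error, and the step size is adjusted against a control error. This is a generic illustration, not the paper's PRKE code:

```python
import math

def adaptive_implicit_euler(lam, y0, t_end, h0, tol):
    """Implicit Euler for y' = lam*y with two-step (step-doubling) error
    control: one step of size h is compared with two steps of size h/2;
    their difference estimates the local error and drives the step size."""
    def step(y, h):                      # linear problem: direct solve
        return y / (1.0 - lam * h)
    t, y, h = 0.0, y0, h0
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        y_full = step(y, h)
        y_half = step(step(y, h / 2.0), h / 2.0)
        err = abs(y_half - y_full)       # local error estimate
        if err <= tol:
            t, y = t + h, y_half         # accept the more accurate value
            h *= min(2.0, max(0.5, 0.9 * math.sqrt(tol / max(err, 1e-16))))
        else:
            h *= 0.5                     # reject and retry with smaller step
    return y

y_end = adaptive_implicit_euler(lam=-1.0, y0=1.0, t_end=1.0, h0=0.5, tol=1e-4)
```

The control error `tol` directly sets the accepted step sizes, mirroring the paper's observation that the control error governs both step-size and solution accuracy.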
Minotti, Luca; Savaré, Giuseppe
2018-02-01
We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
Lee, Eun Seok
2000-10-01
Improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow-field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence models. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow-fields. For the unsteady code validation, Stokes's second problem and Poiseuille flow were chosen, and the computed results were compared with analytic solutions. To test the code's ability to capture natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with
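Dual time stepping, mentioned above, wraps an inner pseudo-time iteration around each physical time step: the implicit physical-time equation is recast as a steady-state problem in pseudo-time and marched until its unsteady residual vanishes. A scalar sketch with BDF2 in physical time and explicit pseudo-time marching (the test equation and all step sizes are illustrative):

```python
import math

def dual_time_step(f, u_n, u_nm1, dt, dtau, n_inner=200):
    """One physical step of u' = f(u), BDF2 in physical time. The implicit
    equation is solved by marching the pseudo-time system
        du/dtau = f(u) - (3u - 4u_n + u_nm1) / (2 dt)
    until the unsteady residual vanishes."""
    u = u_n                              # start inner iteration from u^n
    for _ in range(n_inner):
        res = f(u) - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)
        if abs(res) < 1e-12:
            break
        u += dtau * res                  # explicit pseudo-time marching
    return u

# Test equation u' = -u with exact starting levels; march to t = 1
dt = 0.1
u_nm1, u_n = math.exp(dt), 1.0           # u(-dt) and u(0)
for _ in range(10):
    u_np1 = dual_time_step(lambda u: -u, u_n, u_nm1, dt, dtau=0.05)
    u_nm1, u_n = u_n, u_np1
```

In a flow solver the inner iteration is exactly where convergence accelerators such as multigrid or preconditioning are applied, since the pseudo-time march only needs to reach a steady state.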
DEFF Research Database (Denmark)
Laitinen, Tommi; Pivnenko, Sergey; Nielsen, Jeppe Majlund
2011-01-01
In this paper, the relatively recently introduced double phi-step theta-scanning scheme and the probe correction technique associated with it are examined against the traditional phi-scanning scheme and first-order probe correction. The important result of this paper is that the double phi-step theta-scanning scheme is shown to be clearly less sensitive to probe misalignment errors compared to the phi-scanning scheme. The two methods show similar sensitivity to noise and channel balance error.
Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris
2018-01-01
Many hydrological applications, such as flood studies, require long rainfall data at fine time scales, varying from daily down to a 1-min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures to modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall across a wide range of time scales, while the introduction of the model with dependent intensity-duration results in a better performance in terms of skewness, rainfall extremes and dry proportions.
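The adjusting procedures mentioned above reconcile synthetic fine-scale depths with the observed coarse-scale total; the simplest variant is proportional adjustment (a generic sketch of the idea, not the HyetosMinute implementation):

```python
def proportional_adjust(fine_depths, coarse_total):
    """Rescale synthetic fine-scale rainfall depths so that they aggregate
    exactly to the observed coarser-scale total."""
    s = sum(fine_depths)
    if s == 0.0:
        # an all-dry synthetic sequence cannot be rescaled onto a wet total;
        # real schemes resample instead -- here we just return it unchanged
        return list(fine_depths)
    return [d * coarse_total / s for d in fine_depths]

hourly = [0.0, 1.2, 3.4, 0.6, 0.0, 1.3]      # synthetic depths (mm)
adjusted = proportional_adjust(hourly, coarse_total=7.8)  # observed total (mm)
```

Rescaling preserves dry intervals (zeros stay zero) and the relative shape of the synthetic event while enforcing exact consistency with the coarse observation.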
On an efficient multiple time step Monte Carlo simulation of the SABR model
Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.
2017-01-01
In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
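For context, the baseline the paper improves upon is a many-step Euler-Maruyama Monte Carlo simulation of the SABR dynamics dF = σF^β dW1, dσ = ασ dW2 with correlated Brownian motions. A sketch of that baseline (all parameter values are illustrative, and the zero-absorption treatment is a common simplification, not the paper's method):

```python
import numpy as np

def sabr_mc_call(F0, K, T, alpha, beta, rho, sigma0,
                 n_steps=200, n_paths=20000, seed=3):
    """Euler-Maruyama Monte Carlo for the SABR dynamics
       dF = sigma * F^beta dW1,   dsigma = alpha * sigma dW2,
    with corr(dW1, dW2) = rho. Returns the undiscounted call price
    E[max(F_T - K, 0)]; the forward is absorbed at zero."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    F = np.full(n_paths, float(F0))
    sig = np.full(n_paths, float(sigma0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho * rho) * rng.standard_normal(n_paths)
        F = np.maximum(F + sig * F**beta * np.sqrt(dt) * z1, 0.0)
        sig *= np.exp(alpha * np.sqrt(dt) * z2 - 0.5 * alpha**2 * dt)  # exact step
    return float(np.mean(np.maximum(F - K, 0.0)))

# Illustrative at-the-money call; sigma0 * F0^(beta-1) = 0.2
price = sabr_mc_call(F0=1.0, K=1.0, T=1.0,
                     alpha=0.3, beta=0.7, rho=-0.5, sigma0=0.2)
```

The volatility factor is sampled exactly (it is lognormal), while the forward requires the fine time grid; replacing that grid with one or a few large steps is precisely the efficiency gain the paper targets.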
International Nuclear Information System (INIS)
Ando, S; Nara, T; Kurihara, T
2014-01-01
Spatial filtering velocimetry was proposed in 1963 by Ator as a velocity-sensing technique for aerial camera-control systems. The total intensity of a moving surface is observed through a set of parallel-slit reticles, resulting in a narrow-band temporal signal whose frequency is directly proportional to the image velocity. However, despite its historical importance and inherent technical advantages, the mathematical formulation of this technique is only valid when infinite-length observation in both space and time is possible, which causes significant errors in most applications, where a small receptive window and high resolution in both axes are desired. In this study, we apply a novel mathematical technique, the weighted integral method, to solve this problem, and obtain exact sensing schemes and algorithms for finite (arbitrarily small but non-zero) size reticles and short-time estimation. Practical considerations for utilizing these schemes are also explored both theoretically and experimentally. (paper)
Labour Supply Effects of a Subsidised Old-Age Part-Time Scheme in Austria
Nikolaus Graf; Helmut Hofer; Rudolf Winter-Ebmer
2009-01-01
In this paper we evaluate the impact of the old-age part-time scheme (OAPT) on the Austrian labour market, a policy allowing flexible retirement options for the elderly with the aim of increasing labour supply. According to our matching estimates, employment probability increases slightly, especially in the first two years after entrance into the programme. Furthermore, the programme seems to reduce the measured unemployment risk. However, the total number of hours worked is significantl...
RCNF: Real-time Collaborative Network Forensic Scheme for Evidence Analysis
Moustafa, Nour; Slay, Jill
2017-01-01
Network forensic techniques help in tracking different types of cyber attack by monitoring and inspecting network traffic. However, with the high speed and large sizes of current networks, and the sophisticated philosophy of attackers, in particular mimicking normal behaviour and/or erasing traces to avoid detection, investigating such crimes demands intelligent network forensic techniques. This paper suggests a real-time collaborative network Forensic scheme (RCNF) that can monitor and inves...
International Nuclear Information System (INIS)
Kramme, Johanna; Diehl, Volker; Madai, Vince I.; Sobesky, Jan; Guenther, Matthias
2015-01-01
The improvement in Arterial Spin Labeling (ASL) perfusion quantification, especially for delayed bolus arrival times (BAT), with an acquisition redistribution scheme mitigating the T1 decay of the label in multi-TI ASL measurements is investigated. A multi-inflow-time (TI) 3D-GRASE sequence is presented which adapts the distribution of acquisitions accordingly while keeping the scan time constant. The MR sequence increases the number of averages at long TIs and decreases their number at short TIs, thereby compensating for the T1 decay of the label. The improvement in perfusion quantification is evaluated in simulations as well as in vivo in healthy volunteers and patients with prolonged BATs due to age or steno-occlusive disease. The improvement in perfusion quantification depends on BAT. At healthy BATs the differences are small, but they become larger for the longer BATs typically found in certain diseases. The relative error of perfusion is improved by up to 30% at BATs > 1500 ms in comparison to the standard acquisition scheme. This adapted acquisition scheme improves the perfusion measurement in comparison to standard multi-TI ASL implementations. It provides relevant benefit in clinical conditions that cause prolonged BATs and is therefore of high clinical relevance for neuroimaging of steno-occlusive diseases.
An energy-efficient transmission scheme for real-time data in wireless sensor networks.
Kim, Jin-Woo; Barrado, José Ramón Ramos; Jeon, Dong-Keun
2015-05-20
The Internet of things (IoT) is a novel paradigm where all things or objects in daily life can communicate with other devices and provide services over the Internet. Things or objects need identifying, sensing, networking and processing capabilities to make the IoT paradigm a reality. The IEEE 802.15.4 standard is one of the main communication protocols proposed for the IoT. It provides the guaranteed time slot (GTS) mechanism, which supports quality of service (QoS) for real-time data transmission. In spite of some QoS features in the IEEE 802.15.4 standard, the problem of end-to-end delay still remains. In order to solve this problem, we propose a cooperative medium access control (MAC) protocol for real-time data transmission. We also evaluate the performance of the proposed scheme through simulation. The simulation results demonstrate that the proposed scheme can improve the network performance.
Directory of Open Access Journals (Sweden)
Asad Rehman
Full Text Available An upwind space-time conservation element and solution element (CE/SE) scheme is extended to numerically approximate the dusty gas flow model. Unlike central CE/SE schemes, the current method uses an upwind procedure to derive the numerical fluxes through the inner boundary of conservation elements. These upwind fluxes are utilized to calculate the gradients of flow variables. For comparison and validation, the central upwind scheme is also applied to solve the same dusty gas flow model. The suggested upwind CE/SE scheme resolves contact discontinuities more effectively and preserves the positivity of flow variables in low-density flows. Several case studies are considered, and the results of the upwind CE/SE scheme are compared with the solutions of the central upwind scheme. The numerical results show better performance of the upwind CE/SE method compared to the central upwind scheme. Keywords: Dusty gas flow, Solid particles, Upwind schemes, Rarefaction wave, Shock wave, Contact discontinuity
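The upwind idea, taking each interface flux from the side the information comes from, can be illustrated on scalar advection (a generic first-order upwind sketch, not the CE/SE discretization itself):

```python
import numpy as np

def upwind_advection_step(u, a, dx, dt):
    """One first-order upwind step for u_t + a*u_x = 0 on a periodic grid:
    the flux at each interface is taken from the upwind side."""
    if a >= 0.0:
        flux_in = a * np.roll(u, 1)          # left (upwind) neighbour
        return u - dt / dx * (a * u - flux_in)
    flux_in = a * np.roll(u, -1)             # right (upwind) neighbour
    return u - dt / dx * (flux_in - a * u)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx, a = x[1] - x[0], 1.0
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse
mass0 = u.sum() * dx
dt = 0.8 * dx / a                                # CFL number 0.8
for _ in range(50):
    u = upwind_advection_step(u, a, dx, dt)
```

Because each update is a convex combination of upwind values (for CFL ≤ 1), the scheme is conservative and positivity-preserving, the same two properties highlighted for the upwind CE/SE method above.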
A scheme for the evaluation of dominant time-eigenvalues of a nuclear reactor
International Nuclear Information System (INIS)
Modak, R.S.; Gupta, Anurag
2007-01-01
This paper presents a scheme to obtain the fundamental and a few dominant solutions of the prompt time eigenvalue problem (referred to as the α-eigenvalue problem) for a nuclear reactor using multi-group neutron diffusion theory. The scheme is based on the use of an algorithm called Orthomin(1). This algorithm was originally proposed by Suetomi and Sekimoto [Suetomi, E., Sekimoto, H., 1991. Conjugate gradient like methods and their application to eigenvalue problems for neutron diffusion equations. Ann. Nucl. Energy 18 (4), 205-227] to obtain the fundamental K-eigenvalue (K-effective) of nuclear reactors. Recently, it has been shown that the algorithm can also be used to obtain the further dominant K-modes. Since the α-eigenvalue problem is usually more difficult to solve than the K-eigenvalue problem, an attempt has been made here to use Orthomin(1) for its solution. Numerical results are given for a realistic 3-D test case.
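As a generic illustration of extracting the fundamental and further dominant modes of an operator, the sketch below uses plain power iteration with deflation on a small symmetric matrix; this is a stand-in for exposition only, not the Orthomin(1) algorithm the paper employs:

```python
import numpy as np

def dominant_modes(A, n_modes=2, iters=500):
    """Compute the first few dominant eigenpairs of a symmetric matrix by
    power iteration with deflation -- a generic stand-in for the
    Orthomin(1)-based dominant-mode solver discussed in the abstract."""
    A = np.asarray(A, dtype=float)
    rng = np.random.default_rng(0)
    B = A.copy()
    pairs = []
    for _ in range(n_modes):
        v = rng.standard_normal(A.shape[0])
        for _ in range(iters):
            w = B @ v
            v = w / np.linalg.norm(w)
        lam = v @ B @ v
        pairs.append((lam, v))
        B = B - lam * np.outer(v, v)    # deflate the converged mode
    return pairs

# Small symmetric test operator with known spectrum {4, 2, 1}
modes = dominant_modes(np.diag([4.0, 2.0, 1.0]))
```

After the fundamental mode converges, deflation removes it from the operator so the iteration converges to the next dominant mode, the same fundamental-then-higher-modes progression described for the K- and α-eigenvalue solvers.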
Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems
Majumdar, Alok K.; Ravindran, S. S.
2017-01-01
Fluid and thermal transients found in rocket propulsion systems, such as in propellant feedline systems, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
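The feedback idea can be sketched as follows: monitor the relative change of a key variable per step and steer the step size toward a target change, so the integrator takes short steps in fast phases and long steps in slow phases. Explicit Euler and all constants below are illustrative choices, not the paper's network flow solver:

```python
def feedback_adaptive_march(f, y0, t_end, dt0, target=0.05,
                            dt_min=1e-8, dt_max=1.0):
    """Explicit Euler with feedback-controlled time stepping: the step size
    is steered so the relative change of the monitored variable per step
    stays near `target` -- short steps in fast transients, long steps in
    slow phases."""
    t, y, dt = 0.0, y0, dt0
    history = []                         # (t, y, dt) after each accepted step
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        y_new = y + dt * f(t, y)
        change = abs(y_new - y) / max(abs(y), 1e-12)
        if change > 2.0 * target and dt > dt_min:
            dt = max(0.5 * dt, dt_min)   # reject: change too fast, retry
            continue
        t, y = t + dt, y_new
        history.append((t, y, dt))
        # feedback: steer the step so the next change is close to target
        dt = min(max(dt * target / max(change, 1e-12), dt_min), dt_max)
    return y, history

# Fast exponential decay: accepted steps settle near target/|rate| = 0.01
y_end, hist = feedback_adaptive_march(lambda t, y: -5.0 * y,
                                      y0=1.0, t_end=2.0, dt0=0.1)
```

The same controller handles fast-slow-fast transients naturally, since the step size shrinks again whenever the monitored change re-accelerates.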
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics, well suited to GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2²⁴ = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
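The benefit of hierarchical (block) time steps can be illustrated by counting particle updates: each particle is assigned a power-of-two subdivision of the maximum step instead of forcing every particle onto the shortest step. This is a schematic count, not GOTHIC's actual scheduler:

```python
def block_level(dt_particle, dt_max):
    """Smallest k with dt_max / 2**k <= dt_particle (power-of-two blocks)."""
    k = 0
    while dt_max / 2**k > dt_particle:
        k += 1
    return k

def count_updates(dt_particles, dt_max, n_blocks):
    """Particle updates needed to advance everything by n_blocks*dt_max:
    hierarchical block steps vs. a shared step set by the most
    demanding particle."""
    levels = [block_level(dt, dt_max) for dt in dt_particles]
    hierarchical = sum(n_blocks * 2**k for k in levels)
    shared = n_blocks * 2**max(levels) * len(dt_particles)
    return hierarchical, shared

# Toy population: a few tightly bound particles need very short steps
dts = [0.001] * 4 + [0.1] * 96
hier, shared = count_updates(dts, dt_max=0.1, n_blocks=1)
```

In this toy population only 4 of 100 particles need the finest steps, so the hierarchical count is a small fraction of the shared-step count, the source of the 3-5x speedups reported above.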
Schaefli, Bettina; Breuer, Elke
2010-05-01
TANDEMplusIDEA was a European mentoring programme conducted by the technical universities RWTH Aachen, Imperial College London, ETH Zurich and TU Delft between 2007 and 2010 to achieve more gender equality in science. Given the continuing underrepresentation of women in science and technology and the well-known structural and systematic disadvantages in male-dominated scientific cultures, the main goal of this programme was to promote excellent female scientists through a high-level professional and personal development programme. Based on the mentoring concept of RWTH Aachen, TANDEMplusIDEA was the first mentoring programme for female scientists realized in international cooperation. As a pilot scheme funded by the 6th Framework Programme of the European Commission, the scientific evaluation was an essential part of the programme, in particular in view of the development of a best-practice model for international mentoring. The participants of this programme were female scientists at an early stage of their academic career (postdoc or assistant professor) covering a wide range of science disciplines, including geosciences. This transdisciplinarity, as well as the international dimension of the programme, was identified by the participants as one of the keys to the success of the programme. In particular, peer mentoring across discipline borders proved to be an invaluable component of the development programme. This presentation will highlight some of the main findings of the scientific evaluation of the programme and focus on some additional personal insights from the participants.
Nutt, John G.; Horak, Fay B.
2011-01-01
Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431
Analysis and Design of Timing Recovery Schemes for DMT Systems over Indoor Power-Line Channels
Directory of Open Access Journals (Sweden)
Cortés José Antonio
2007-01-01
Discrete multitone (DMT) modulation is a suitable technique to cope with the main impairments of broadband indoor power-line channels: spectral selectivity and cyclic time variations. Due to the high-density constellations employed to achieve the required bit-rates, synchronization issues become an important concern in these scenarios. This paper analyzes the performance of a conventional DMT timing recovery scheme designed for linear time-invariant (LTI) channels when employed over indoor power lines. The influence of the channel's cyclic short-term variations and of the sampling jitter on the system performance is assessed. Bit-rate degradation due to timing errors is evaluated in a set of measured channels. It is shown that this synchronization mechanism limits the system performance in many residential channels. Two improvements are proposed to overcome this limitation: a new phase error estimator that takes into account the short-term changes in the channel response, and the introduction of notch filters in the timing recovery loop. Simulations confirm that the new scheme eliminates the bit-rate loss in most situations.
An effective implementation scheme of just-in-time protocol for optical burst switching networks
Wu, Guiling; Li, Xinwan; Chen, Jian-Ping; Wang, Hui
2005-02-01
Optical burst switching (OBS) has been emerging as a promising technology that can effectively support next-generation IP-oriented transport networks. The JIT signaling protocol for OBS is relatively simple and easy to implement in hardware. This paper presents an effective scheme to implement the JIT protocol, which not only effectively implements reservation and release of optical channels based on JIT, but also handles failures of channel reservation and release caused by the loss of burst control packets. The scheme includes: (1) a BHP (burst head packet) path table is designed and built at each OBS node; it guarantees that the corresponding burst control packets, i.e. BHP, BEP (burst end packet) and BEP_ACK (BEP acknowledgement), are transmitted along the same path. (2) Timed retransmission of BEP and reversed deletion of the corresponding item in the BHP path tables, triggered by the BEP_ACK, are combined to solve the problems caused by the loss of signaling messages in the channel reservation and release process. (3) Burst head packets and BEP_ACK are transmitted using a "best-effort" method. Related signaling messages and their formats for the proposed scheme are also given.
Rehman, Asad; Ali, Ishtiaq; Qamar, Shamsul
An upwind space-time conservation element and solution element (CE/SE) scheme is extended to numerically approximate the dusty gas flow model. Unlike central CE/SE schemes, the current method uses the upwind procedure to derive the numerical fluxes through the inner boundary of conservation elements. These upwind fluxes are utilized to calculate the gradients of flow variables. For comparison and validation, the central upwind scheme is also applied to solve the same dusty gas flow model. The suggested upwind CE/SE scheme resolves the contact discontinuities more effectively and preserves the positivity of flow variables in low density flows. Several case studies are considered and the results of upwind CE/SE are compared with the solutions of central upwind scheme. The numerical results show better performance of the upwind CE/SE method as compared to the central upwind scheme.
One-step electrodeposition process of CuInSe2: Deposition time effect
Indian Academy of Sciences (India)
Administrator
CuInSe2 thin films were prepared by a one-step electrodeposition process using a simplified two-electrode system. ... homojunctions or heterojunctions (Rincon et al 1983). Efficiency of ... deposition times onto indium tin oxide (ITO)-covered.
Stability analysis and time-step limits for a Monte Carlo Compton-scattering method
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.
2010-01-01
A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.
Woźniak, M.
2016-06-02
We study the features of a new mixed integration scheme dedicated to solving non-stationary variational problems. The scheme is composed of an FEM approximation with respect to the space variable coupled with a three-level time integration scheme with a linearized right-hand-side operator. It was applied to solving the Cahn-Hilliard parabolic equation with a nonlinear, fourth-order elliptic part. Second-order accuracy of the approximation along the time variable was proven. Moreover, the good scalability of software based on this scheme was confirmed during simulations. We verify the proposed time integration scheme by monitoring the Ginzburg-Landau free energy. The numerical simulations are performed using a parallel multi-frontal direct solver executed on the STAMPEDE Linux cluster. Its scalability was compared with that of three direct solvers: MUMPS, SuperLU and PaSTiX.
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by the authors: fourth-order Runge-Kutta time stepping combined with a fourth-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
Guermond, J.-L.; Salgado, Abner J.
2011-01-01
In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.
Energy-Efficient Optimization for HARQ Schemes over Time-Correlated Fading Channels
Shi, Zheng
2018-03-19
Energy efficiency of three common hybrid automatic repeat request (HARQ) schemes, including Type I HARQ, HARQ with chase combining (HARQ-CC) and HARQ with incremental redundancy (HARQ-IR), is analyzed, and joint power allocation and rate selection to maximize the energy efficiency is investigated in this paper. Unlike prior literature, time-correlated fading channels are considered, and two widely used quality of service (QoS) constraints, i.e., outage and goodput constraints, are incorporated in the optimization, which further differentiates this work from prior ones. Using a unified expression of asymptotic outage probabilities, optimal transmission powers and the optimal rate are derived in closed form to maximize the energy efficiency while satisfying the QoS constraints. These closed-form solutions then enable a thorough analysis of the maximal energy efficiencies of various HARQ schemes. It is revealed that with a low outage constraint, the maximal energy efficiency achieved by Type I HARQ is
International Nuclear Information System (INIS)
Messud, Jeremie
2009-01-01
The stationary internal density functional theory (DFT) formalism and Kohn-Sham scheme are generalized to the time-dependent case. It is proven that, in the time-dependent case, the internal properties of a self-bound system (such as an atomic nucleus or a helium droplet) are all defined by the internal one-body density and the initial state. A time-dependent internal Kohn-Sham scheme is set up as a practical way to compute the internal density. The main difference from the traditional DFT formalism and Kohn-Sham scheme is the inclusion of center-of-mass correlations in the functional.
Meng, Tianhui; Li, Xiaofan; Zhang, Sha; Zhao, Yubin
2016-09-28
Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications, and monitoring tasks can be performed in various environments. This is beneficial in many scenarios, but it poses new challenges in terms of security due to increased data transmission over the wireless channel with potentially unknown threats. Among possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing security against the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs that maintains the required security-performance tradeoff. To allow a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model is put forward, and a tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security attributes failure is evaluated. Through the tradeoff analysis, we show that our scheme can enhance the security of WSNs and that the optimal rekeying rate for the performance-security tradeoff can be obtained.
International Nuclear Information System (INIS)
Nagy, D.
2007-01-01
Previous conceptual studies made clear that the ITER blanket concept and segmentation are not suitable for the environment of a potential fusion power plant (DEMO). One promising alternative is the so-called Multi-Module-Segment (MMS) concept. Each MMS consists of a number of blankets arranged on a strong back plate, thus forming "banana"-shaped in-board (IB) and out-board (OB) segments. With respect to port size, weight, and other limiting aspects, the IB and OB MMS are segmented in the toroidal direction. The number of segments to be replaced would be below 100. For this segmentation concept a new maintenance scenario had to be worked out. The aim of this paper is to present a promising MMS maintenance scenario, a flexible scheme for time estimation under varying boundary conditions, and preliminary time estimates. According to the proposed scenario, two upper, vertically arranged maintenance ports have to be opened for blanket maintenance on opposite sides of the tokamak. Both ports are central to a 180 degree sector, and the MMS are removed and inserted through both ports. In-vessel machines transport the elements in the toroidal direction and also insert and attach the MMS to the shield. Outside the vessel, the elements have to be transported between the tokamak and the hot cell to be refurbished. Calculating the maintenance time for such a scenario is rather challenging due to the numerous parallel processes involved. For this reason a flexible, multi-level calculation scheme has been developed in which the operations are organized into three levels: at the lowest level, the basic maintenance steps are determined. These are organized into maintenance sequences that take into account parallelisms in the system. Several maintenance sequences constitute the maintenance phases, which correspond to a certain logistics scenario. By adding the required times of the maintenance phases, the total maintenance time is obtained. The paper presents
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
Purity of Gaussian states: Measurement schemes and time evolution in noisy channels
International Nuclear Information System (INIS)
Paris, Matteo G.A.; Illuminati, Fabrizio; Serafini, Alessio; De Siena, Silvio
2003-01-01
We present a systematic study of the purity for Gaussian states of single-mode continuous variable systems. We prove the connection of purity to observable quantities for these states, and show that the joint measurement of two conjugate quadratures is necessary and sufficient to determine the purity at any time. The statistical reliability and the range of applicability of the proposed measurement scheme are tested by means of Monte Carlo simulated experiments. We then consider the dynamics of purity in noisy channels. We derive an evolution equation for the purity of general Gaussian states both in thermal and in squeezed thermal baths. We show that purity is maximized at any given time for an initial coherent state evolving in a thermal bath, or for an initial squeezed state evolving in a squeezed thermal bath whose asymptotic squeezing is orthogonal to that of the input state.
Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang
2017-01-01
By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP and the corresponding discrete-time recurrent neural network.
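The reformulation as a bound-constrained QP solved by a discrete-time recurrent network can be illustrated with the simplest such iteration, projected-gradient dynamics. This is a generic sketch of that class of solvers under assumed names, not the paper's different-level scheme (which also carries equality constraints):

```python
import numpy as np

def qp_bound_solver(Q, c, lo, hi, eta=0.1, iters=2000):
    """Discrete-time projected-gradient dynamics for the QP
        min 0.5 x'Qx + c'x   s.t.  lo <= x <= hi.
    Each iteration is one step of a recurrent-network-style update:
    a gradient descent step followed by projection onto the bounds."""
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(iters):
        x = np.clip(x - eta * (Q @ x + c), lo, hi)
    return x
```

In a redundancy-resolution setting, `x` would collect joint accelerations, the bounds would encode the physical joint limits, and the iteration would run once per control cycle.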
Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul
2013-07-21
Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.
Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes
Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.
2015-01-01
Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems of increasing complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks coexist with unsteady waves displaying a broad range of scales, with a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries. Computational results for the
Beghein, Yves
2013-03-01
The time domain combined field integral equation (TD-CFIE), which is constructed from a weighted sum of the time domain electric and magnetic field integral equations (TD-EFIE and TD-MFIE) for analyzing transient scattering from closed perfect electrically conducting bodies, is free from spurious resonances. The standard marching-on-in-time technique for discretizing the TD-CFIE uses Galerkin and collocation schemes in space and time, respectively. Unfortunately, the standard scheme is theoretically not well understood: stability and convergence have been proven for only one class of space-time Galerkin discretizations. Moreover, existing discretization schemes are nonconforming, i.e., the TD-MFIE contribution is tested with divergence conforming functions instead of curl conforming functions. We therefore introduce a novel space-time mixed Galerkin discretization for the TD-CFIE. A family of temporal basis and testing functions with arbitrary order is introduced. It is explained how the corresponding interactions can be computed efficiently by existing collocation-in-time codes. The spatial mixed discretization is made fully conforming and consistent by leveraging both Rao-Wilton-Glisson and Buffa-Christiansen basis functions and by applying the appropriate bi-orthogonalization procedures. The combination of both techniques is essential when high accuracy over a broad frequency band is required. © 2012 IEEE.
Time dependent theory of two-step absorption of two pulses
Energy Technology Data Exchange (ETDEWEB)
Rebane, Inna, E-mail: inna.rebane@ut.ee
2015-09-25
The time-dependent theory of two-step absorption of two different light pulses with arbitrary duration in an electronic three-level model is proposed. The probability that the third level is excited at the moment t is found as a function of the time delay between the pulses, the spectral widths of the pulses, and the energy relaxation constants of the excited electronic levels. Time-dependent perturbation theory is applied without using the “doorway–window” approach. The temporal and spectral behavior of the spectrum is analyzed using a model that is as simple as possible. - Highlights: • A time-dependent theory of two-step absorption in the three-level model is proposed. • Two different light pulses with arbitrary duration are considered. • Time-dependent perturbation theory is applied without the “doorway–window” approach. • The temporal and spectral behavior of the spectra is analyzed for several cases.
Lu, Qiang; Han, Qing-Long; Zhang, Botao; Liu, Dongliang; Liu, Shirong
2017-12-01
This paper deals with the problem of environmental monitoring by developing an event-triggered finite-time control scheme for mobile sensor networks. The proposed control scheme can be executed by each sensor node independently and consists of two parts: one part is a finite-time consensus algorithm while the other part is an event-triggered rule. The consensus algorithm is employed to enable the positions and velocities of sensor nodes to quickly track the position and velocity of a virtual leader in finite time. The event-triggered rule is used to reduce the updating frequency of controllers in order to save the computational resources of sensor nodes. Some stability conditions are derived for mobile sensor networks with the proposed control scheme under both a fixed communication topology and a switching communication topology. Finally, simulation results illustrate the effectiveness of the proposed control scheme for the problem of environmental monitoring.
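The two-part structure described above (a consensus law plus an event-triggered rule that limits controller updates) can be sketched for the simplest single-integrator case. This uses a static threshold trigger for illustration only, not the paper's finite-time law, and all names are hypothetical:

```python
import numpy as np

def event_triggered_consensus(x0, A, dt=0.01, steps=2000, thresh=0.05):
    """Single-integrator consensus where each node rebroadcasts its
    state only when it drifts more than `thresh` from the last value
    it broadcast. The control always uses the broadcast states, so
    controller updates (and communication) are event-driven."""
    x = np.asarray(x0, dtype=float)
    x_hat = x.copy()                       # last broadcast states
    L = np.diag(A.sum(axis=1)) - A         # graph Laplacian
    updates = 0
    for _ in range(steps):
        trig = np.abs(x - x_hat) > thresh  # event-trigger rule
        x_hat[trig] = x[trig]
        updates += int(trig.sum())
        x = x - dt * (L @ x_hat)           # consensus law on broadcasts
    return x, updates
```

The trade-off the paper studies appears directly here: a larger `thresh` saves updates at the cost of a larger residual disagreement between nodes.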
The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays
Energy Technology Data Exchange (ETDEWEB)
Gadella, M. [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain); Kuru, Ş. [Department of Physics, Faculty of Science, Ankara University, 06100 Ankara (Turkey); Negro, J., E-mail: jnegro@fta.uva.es [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain)
2017-04-15
We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners, which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and these SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners constructed from anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.
Transition among synchronous schemes in coupled nonidentical multiple time delay systems
International Nuclear Information System (INIS)
Thang Manh Hoang
2009-01-01
We present the transition among possible synchronous schemes in coupled nonidentical multiple time delay systems, i.e., lag, projective-lag, complete, anticipating and projective-anticipating synchronization. The number of nonlinear transforms in the master's equation can differ from that in the slave's, and the nonlinear transforms can take various forms. The driving signal is the sum of nonlinearly transformed components of the delayed state variable. Moreover, the equation for the driving signal is constructed so that the difference between the master's and slave's structures is compensated. A sufficient condition for synchronization is derived using the Krasovskii-Lyapunov theory. Specific examples demonstrate and verify the effectiveness of the proposed models.
Nguyen, Tien Long; Sansour, Carlo; Hjiaj, Mohammed
2017-05-01
In this paper, an energy-momentum method for geometrically exact Timoshenko-type beams is proposed. Classical time integration schemes in dynamics are known to exhibit instability in the non-linear regime. The so-called Timoshenko-type beam, with its rotational degree of freedom, leads to simpler strain relations and simpler expressions for the inertial terms than the well-known Bernoulli-type model. The treatment of the Bernoulli model was recently addressed by the authors. In the present work, we extend our approach of using strain rates to define the strain fields to in-plane geometrically exact Timoshenko-type beams. The large rotational degrees of freedom are computed exactly. The well-known enhanced strain method is used to avoid locking phenomena. Conservation of energy, momentum and angular momentum is proved formally and numerically. The excellent performance of the formulation is demonstrated through a range of examples.
PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation
Energy Technology Data Exchange (ETDEWEB)
Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-05-01
A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)
Pestana, Reynam C.; Stoffa, Paul L.
2009-01-01
an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomials terms in the rapid expansion method we can obtain the second
International Nuclear Information System (INIS)
Omelyan, Igor; Kovalenko, Andriy
2013-01-01
We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
Directory of Open Access Journals (Sweden)
Romain Tisserand
2016-11-01
Full Text Available In the case of disequilibrium, the capacity to step quickly is critical for the elderly to avoid falling. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), where elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F lengthen their stepping time remain unclear. The purpose of this study is to assess the characteristics of anticipatory postural adjustments (APA) that elderly F develop in a stepping context and their consequences on dynamic stability. 44 community-dwelling elderly subjects (20 F and 22 NF) performed a CSRT where kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated measures ANOVAs. Results for F compared to NF showed that stepping time is lengthened, due to a longer APA phase. During APA, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off, in the respective direction. By lengthening their APA, elderly F use a safer balance strategy that prioritizes dynamic stability conditions over the objective of the task. Such a choice of balance strategy probably stems from muscular limitations and/or a higher fear of falling and paradoxically indicates an increased risk of falling.
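The margin of stability reported in this abstract is conventionally computed from the extrapolated centre of mass in an inverted-pendulum model (following Hof and colleagues). A minimal one-axis sketch, with illustrative positions, velocity and leg length rather than the study's data:

```python
import math

def margin_of_stability(com_pos, com_vel, bos_boundary,
                        leg_length=0.9, g=9.81):
    """Distance between the base-of-support boundary and the
    extrapolated centre of mass (XCoM); positive = dynamically stable.
    All positions in metres along one axis."""
    omega0 = math.sqrt(g / leg_length)   # pendulum eigenfrequency
    xcom = com_pos + com_vel / omega0    # extrapolated centre of mass
    return bos_boundary - xcom

# CoM 10 cm behind the boundary, moving toward it at 0.1 m/s
mos = margin_of_stability(0.60, 0.10, 0.70)
```

Forward CoM velocity reduces the margin even when the CoM position is unchanged, which is why the groups' CoP velocity differences matter for the MoS at foot-off.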
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat the multi-step task with an independent model per horizon. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented into various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in their component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
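The iterative baseline the abstract contrasts against can be sketched in a few lines: a one-step AR model applied recursively, feeding its own predictions back in (which is where error accumulates over the horizon). The AR coefficients below are illustrative.

```python
def ar_one_step(history, coeffs):
    """One-step AR prediction from the last len(coeffs) points."""
    p = len(coeffs)
    return sum(c * x for c, x in zip(coeffs, history[-p:]))

def iterative_forecast(history, coeffs, horizon):
    """Recurse the one-step model over the horizon; predictions are
    appended to the history, so errors can accumulate."""
    h = list(history)
    out = []
    for _ in range(horizon):
        y = ar_one_step(h, coeffs)
        out.append(y)
        h.append(y)
    return out

series = [1.0, 2.0, 3.0, 4.0]
# AR(2) with x_t = 2*x_{t-1} - x_{t-2} extrapolates a linear trend
preds = iterative_forecast(series, coeffs=[-1.0, 2.0], horizon=3)
```

The independent approach would instead fit one separate direct model per horizon; the VLM models aim to keep the within-horizon dependencies that both baselines lose.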
How to decide the optimal scheme and the optimal time for construction
International Nuclear Information System (INIS)
Gjermundsen, T.; Dalsnes, B.; Jensen, T.
1991-01-01
Since hydropower development in Norway began some 105 years ago, the mean annual generation has reached approximately 110 TWh. This means that there is a large potential for uprating and refurbishing (U/R). A project undertaken by the Norwegian Water Resources and Energy Administration (NVE) has identified energy resources amounting to about 10 TWh of annual generation through U/R. One problem in harnessing the potential owned by small and medium-sized electricity boards is the lack of simple tools to support the right decisions. The paper describes a simple model to find the best scheme and the optimal time to start. The principle of present value is used. The main inputs are: production, price, annual maintenance costs, the remaining lifetime and the social rate of return. The model calculates the present value of U/R/N for different starting times. In addition, the present value of the existing plant is calculated. Several alternatives can be considered; the best one is the one that gives the highest present value relative to the value of the existing plant. The internal rate of return is also calculated. To illustrate the sensitivity, a star diagram is shown. The model also allows environmental charges and the value of effect (peak power) to be included. (Author)
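The present-value comparison the model performs can be sketched as discounting a stream of net revenues and comparing start years. The revenue figures, discount rate and horizon below are illustrative, not the paper's data.

```python
# Present value of annual net revenues (production*price minus
# maintenance) for a project beginning at start_year; all figures
# are made-up illustrations.
def present_value(net_revenues, rate, start_year=0):
    """PV at year 0; revenue t arrives at the end of year start_year+t+1."""
    return sum(r / (1 + rate) ** (start_year + t + 1)
               for t, r in enumerate(net_revenues))

revenues = [120.0] * 30            # constant net revenue over the lifetime
pv_now = present_value(revenues, rate=0.07, start_year=0)
pv_later = present_value(revenues, rate=0.07, start_year=5)
best = "start now" if pv_now > pv_later else "postpone"
```

With identical revenue streams, postponing only adds discounting, so starting now wins; the interesting cases are those where waiting changes the revenues or costs, which is what the model's comparison of alternatives captures.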
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang
2016-01-01
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
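The role of a slope limiter in keeping reconstructed values within the range of neighbouring cell averages (and hence respecting a maximum principle) can be illustrated with a minmod-type limiter in 1-D. This is a generic sketch, not the authors' specific CE/SE limiter.

```python
# Minmod-type slope limiting: the limited slope is zero at extrema and
# jumps, so piecewise-linear reconstruction cannot overshoot.
def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Cell slopes limited by one-sided differences; boundary cells
    are kept flat for simplicity."""
    s = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        s[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return s

u = [0.0, 0.0, 1.0, 1.0]        # a discontinuity in the cell averages
slopes = limited_slopes(u)       # zero slope at the jump: no overshoot
```

In smooth monotone regions the limiter returns the smaller one-sided difference, preserving accuracy; at the discontinuity it returns zero, which is the mechanism behind the monotone, bound-preserving behaviour the abstract reports.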
2012-06-01
The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes the charge transport in semiconductor devices. Mathematically, the model can be written as a convection-diffusion type system with a right-hand side describing the relaxation effects and interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with those of a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltages for the one-space-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two-dimensional simulation is also performed by the CE/SE method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network
International Nuclear Information System (INIS)
Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei
2008-01-01
This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series. It estimates the proper parameters of phase-space reconstruction and optimizes the structure of recurrent neural networks by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method dynamically combines the embedding method with the capability of recurrent neural networks to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.
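The phase-space reconstruction step mentioned above is typically a time-delay embedding; its "proper parameters" are the embedding dimension and delay. A minimal sketch (the dimension and delay values are illustrative, not CERNN's estimates):

```python
# Time-delay embedding: turn a scalar series into delay vectors
# [x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau}].
def delay_embed(series, dim, tau):
    vectors = []
    for t in range((dim - 1) * tau, len(series)):
        vectors.append([series[t - k * tau] for k in range(dim)])
    return vectors

x = list(range(10))
emb = delay_embed(x, dim=3, tau=2)
```

Each delay vector is one input to the downstream predictor; choosing `dim` and `tau` well is what makes the reconstructed space representative of the underlying attractor.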
A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis
Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann
2017-04-01
The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and subsequently the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for
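The k-Nearest-Neighbor disaggregation step can be sketched as: pick the observed day whose daily total best matches the synthetic daily value and rescale its hourly pattern. This uses k=1 and made-up rainfall records for brevity; the actual procedure conditions on more than the daily total.

```python
# 1-NN disaggregation of a synthetic daily total to 24 hourly values,
# borrowing the hourly pattern of the closest observed day (illustrative).
def knn_disaggregate(daily_total, observed_days):
    """observed_days: list of 24-value hourly rainfall records (mm)."""
    best = min(observed_days,
               key=lambda hours: abs(sum(hours) - daily_total))
    total = sum(best)
    scale = daily_total / total if total > 0 else 0.0
    return [h * scale for h in best]

obs = [[0.0] * 23 + [12.0],      # one evening burst, 12 mm
       [1.0] * 24]               # drizzle all day, 24 mm
hourly = knn_disaggregate(10.0, obs)   # nearest pattern: the burst
```

The rescaling preserves the daily mass balance while the borrowed pattern supplies the sub-daily structure that drives the short-concentration-time catchment response.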
Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids
International Nuclear Information System (INIS)
Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.
2017-01-01
Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.
Development of a real time activity monitoring Android application utilizing SmartStep.
Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward
2016-08-01
Footwear based activity monitoring systems are becoming popular in academic research as well as consumer industry segments. In our previous work, we had presented developmental aspects of an insole based activity and gait monitoring system-SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real time feedback. The development of activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform activities of sitting, standing, walking, and cycling while they wore the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized to develop the machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.
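The prediction side of multinomial logistic discrimination is a set of linear scores passed through a softmax. A minimal sketch with made-up classes, a single feature and illustrative weights (not the trained SmartStep model):

```python
import math

def softmax(scores):
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def predict(features, weights, classes):
    """weights: one linear score vector per class, bias first."""
    scores = [w[0] + sum(wi * f for wi, f in zip(w[1:], features))
              for w in weights]
    probs = softmax(scores)
    return classes[probs.index(max(probs))]

classes = ["sit", "stand", "walk", "cycle"]
weights = [[1.0, -2.0], [0.5, 0.0], [0.0, 2.0], [-0.5, 3.0]]
label = predict([1.2], weights, classes)   # a high "motion" feature
```

Training fits the weight vectors from labeled feature windows; at run time only this cheap score-and-argmax step executes on the phone, which is what keeps the CPU load low.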
Beyer, A. D.; Kenyon, M. E.; Bumble, B.; Runyan, M. C.; Echternach, P. E.; Holmes, W. A.; Bock, J. J.; Bradford, C. M.
2013-01-01
We present measurements of the thermal conductance, G, and effective time constants, tau, of three transition-edge sensors (TESs) populated in arrays operated from 80-87mK with T(sub C) approximately 120mK. Our TES arrays include several variations of thermal architecture enabling determination of the architecture that demonstrates the minimum noise equivalent power (NEP), the lowest tau and the trade-offs among designs. The three TESs we report here have identical Mo/Cu bilayer thermistors and wiring structures, while the thermal architectures are: 1) a TES with straight support beams of 1mm length, 2) a TES with meander support beams of total length 2mm and with 2 phonon-filter blocks per beam, and 3) a TES with meander support beams of total length 2mm and with 6 phonon-filter blocks per beam. Our wiring scheme aims to lower the thermistor normal state resistance R(sub N) and increase the sharpness of the transition alpha=dlogR/dlogT at the transition temperature T(sub C). We find an upper limit on alpha given by (25+/-10), and G values of 200fW/K for 1), 15fW/K for 2), and 10fW/K for 3). The value of alpha can be improved by slightly increasing the length of our thermistors.
Traffic safety and step-by-step driving licence for young people
DEFF Research Database (Denmark)
Tønning, Charlotte; Agerholm, Niels
2017-01-01
presents a review of safety effects from step-by-step driving licence schemes. Most of the investigated schemes consist of a step-by-step driving licence with Step 1) various tests and education, Step 2) a period where driving is only allowed together with an experienced driver and Step 3) driving without...... companion is allowed but with various restrictions and, in some cases, additional driving education and tests. In general, a step-by-step driving licence improves traffic safety even though the young people are permitted to drive a car earlier on. The effects from driving with an experienced driver vary......Young novice car drivers are much more accident-prone than other drivers - up to 10 times that of their parents' generation. A central solution to improve the traffic safety for this group is implementation of a step-by-step driving licence. A number of countries have introduced a step...
A Novel, Automatic Quality Control Scheme for Real Time Image Transmission
Directory of Open Access Journals (Sweden)
S. Ramachandran
2002-01-01
Full Text Available A novel scheme to compute energy on-the-fly and thereby control the quality of the image frames dynamically is presented along with its FPGA implementation. This scheme is suitable for incorporation in image compression systems such as video encoders. In this new scheme, processing is automatically stopped when the desired quality is achieved for the image being processed by using a concept called pruning. Pruning also increases the processing speed by a factor of more than two when compared to the conventional method of processing without pruning. An MPEG-2 encoder implemented using this scheme is capable of processing good quality monochrome and color images of sizes up to 1024 × 768 pixels at the rate of 42 and 28 frames per second, respectively, with a compression ratio of over 17:1. The encoder is also capable of working in the fixed pruning level mode with user programmable features.
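The pruning idea of computing energy on-the-fly and stopping once the desired quality is reached can be sketched over a block of transform coefficients. The energy criterion, scan order and threshold below are assumptions for illustration, not the paper's FPGA design.

```python
# Sketch of energy-based pruning: accumulate coefficient energy in scan
# order and stop processing once a target fraction of the block's total
# energy is captured (threshold illustrative).
def prune_coefficients(coeffs, quality=0.95):
    total = sum(c * c for c in coeffs)
    kept, acc = [], 0.0
    for c in coeffs:
        kept.append(c)
        acc += c * c
        if acc >= quality * total:
            break                   # processing stops early: pruning
    return kept

block = [50.0, 10.0, 3.0, 1.0, 0.5, 0.2]   # energy concentrated up front
kept = prune_coefficients(block)
```

Because transform energy is usually concentrated in the first few coefficients, the early stop both bounds the quality loss and roughly halves the work, which mirrors the speedup the abstract reports.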
Truncation scheme of time-dependent density-matrix approach II
Energy Technology Data Exchange (ETDEWEB)
Tohyama, Mitsuru [Kyorin University School of Medicine, Mitaka, Tokyo (Japan); Schuck, Peter [Institut de Physique Nucleaire, IN2P3-CNRS, Universite Paris-Sud, Orsay (France); Laboratoire de Physique et de Modelisation des Milieux Condenses, CNRS et Universite Joseph Fourier, Grenoble (France)
2017-09-15
A truncation scheme of the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy for reduced density matrices, where a three-body density matrix is approximated by two-body density matrices, is improved to take into account a normalization effect. The truncation scheme is tested for the Lipkin model. It is shown that the obtained results are in good agreement with the exact solutions. (orig.)
Chek, Mohd Zaki Awang; Ahmad, Abu Bakar; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md.; Jamal, Nur Faezah; Ismail, Isma Liana; Zulkifli, Faiz; Noor, Syamsul Ikram Mohd
2012-09-01
The main objective of this study is to forecast the future claims amount of the Invalidity Pension Scheme (IPS). All data were derived from SOCSO annual reports from the years 1972-2010. These claims comprise all claim amounts from the 7 benefits offered by SOCSO: Invalidity Pension, Invalidity Grant, Survivors Pension, Constant Attendance Allowance, Rehabilitation, Funeral and Education. Prediction of the future claims of the Invalidity Pension Scheme will be made using univariate forecasting models to predict the future claims among the workforce in Malaysia.
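A typical univariate forecasting model of the kind applied here is simple exponential smoothing; the smoothing constant and the claims figures below are illustrative, not SOCSO data or the study's chosen model.

```python
# Simple exponential smoothing: one-step-ahead forecast from an annual
# series (alpha and the figures are made up for illustration).
def ses_forecast(series, alpha=0.3):
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

claims = [100.0, 110.0, 120.0, 130.0]   # annual claim amounts, toy units
next_year = ses_forecast(claims)
```

With a trending series the smoothed level lags behind the latest value, which is why trend-aware variants (e.g. Holt's method) are often preferred for claims that grow year on year.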
Real-time-service-based Distributed Scheduling Scheme for IEEE 802.16j Networks
Kuo-Feng Huang; Shih-Jung Wu
2013-01-01
Supporting Quality of Service (QoS) guarantees for diverse multimedia services is the primary concern for IEEE802.16j networks. A scheduling scheme that satisfies the QoS requirements has become more important for wireless communications. We proposed an adaptive nontransparent-based distributed scheduling scheme (ANDS) for IEEE 802.16j networks. ANDS comprises three major components: Priority Assignment, Resource Allocation, Preserved Bandwidth Adjustment. Different service-type connections p...
Directory of Open Access Journals (Sweden)
Eun Seok Lee
2003-01-01
Full Text Available An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using an unsteady, Reynolds-averaged Navier-Stokes equations solver based on explicit finite differences, Runge-Kutta multistage time marching, and the diagonalized alternating direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full approximation storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing airfoil leading edge, camber, stagger angle, and inter-row spacing. The genetic algorithm was used as an optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously using a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.
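The penalty method used to fold the fixed mass-flow constraint into the GA objective can be sketched as a weighted sum; the weights and the toy numbers below are illustrative, not the paper's CFD-based evaluation.

```python
# Penalized GA objective: minimize pressure loss, maximize lift, and
# penalize deviation of mass flow from its fixed target (weights assumed).
def penalized_objective(loss, lift, mass_flow, target_flow,
                        w_lift=1.0, w_penalty=100.0):
    return loss - w_lift * lift + w_penalty * abs(mass_flow - target_flow)

feasible = penalized_objective(loss=0.10, lift=0.80,
                               mass_flow=1.00, target_flow=1.00)
violating = penalized_objective(loss=0.08, lift=0.85,
                                mass_flow=1.05, target_flow=1.00)
# the constraint violation dominates, so the feasible design wins
```

Choosing the penalty weight large enough that no constraint-violating individual can out-score a feasible one is the standard tuning concern with this approach.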
Takahiro Sayama; Jeffrey J. McDonnell
2009-01-01
Hydrograph source components and stream water residence time are fundamental behavioral descriptors of watersheds but, as yet, are poorly represented in most rainfall-runoff models. We present a new time-space accounting scheme (T-SAS) to simulate the pre-event and event water fractions, mean residence time, and spatial source of streamflow at the watershed scale. We...
Steps of Supercritical Fluid Extraction of Natural Products and Their Characteristic Times
Sovová, H. (Helena)
2012-01-01
Kinetics of supercritical fluid extraction (SFE) from plants is variable due to different micro-structure of plants and their parts, different properties of extracted substances and solvents, and different flow patterns in the extractor. Variety of published mathematical models for SFE of natural products corresponds to this diversification. This study presents simplified equations of extraction curves in terms of characteristic times of four single extraction steps: internal diffusion, exter...
A Hybrid DGTD-MNA Scheme for Analyzing Complex Electromagnetic Systems
Li, Peng; Jiang, Li-Jun; Bagci, Hakan
2015-01-01
lumped circuit elements, the standard Newton-Raphson method is applied at every time step. Additionally, a local time-stepping scheme is developed to improve the efficiency of the hybrid solver. Numerical examples consisting of EM systems loaded
Navon, I. M.; Yu, Jian
A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the EXSHALL code, particularly those related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code with emphasis on the algorithms implemented in the code and present the flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.
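Of the filters listed, the Robert (Asselin) time filter has the simplest form: it damps the computational mode of leapfrog time stepping by slightly mixing the three time levels. A sketch with a typical small filter coefficient (the value is illustrative):

```python
# Robert (Asselin) time filter for leapfrog schemes: replace the current
# time level by a weighted blend of the three levels (nu illustrative).
def robert_filter(u_prev, u_curr, u_next, nu=0.05):
    return u_curr + nu * (u_prev - 2.0 * u_curr + u_next)

# a 2*dt oscillation (+1, -1, +1), the leapfrog computational mode,
# is pulled back toward zero, while a constant field passes unchanged
filtered = robert_filter(1.0, -1.0, 1.0)
```

Applied every step, the filter removes the spurious 2*dt mode at the cost of slightly damping physical modes, which is why `nu` is kept small.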
Energy Technology Data Exchange (ETDEWEB)
Izosimov, I. [Joint Institute for Nuclear Research, Joliot Curie 6, Dubna 141980 (Russian Federation)
2016-07-01
The use of laser radiation with tunable wavelength allows the selective excitation of actinide/lanthanide species with subsequent registration of luminescence/chemiluminescence for their detection. This work is devoted to applications of the time-resolved laser-induced luminescence spectroscopy and time-resolved laser-induced chemiluminescence spectroscopy for the detection of lanthanides and actinides. Results of the experiments on U, Eu, and Sm detection by TRLIF (Time Resolved Laser Induced Fluorescence) method in blood plasma and urine are presented. Data on luminol chemiluminescence in solutions containing Sm(III), U(IV), and Pu(IV) are analyzed. It is shown that appropriate selectivity of lanthanide/actinide detection can be reached when chemiluminescence is initiated by transitions within 4f- or 5f-electron shell of lanthanide/actinide ions corresponding to the visible spectral range. In this case chemiluminescence of chemiluminogen (luminol) arises when the ion of f element is excited by multi-quantum absorption of visible light. The multi-photon scheme of chemiluminescence excitation makes chemiluminescence not only a highly sensitive but also a highly selective tool for the detection of lanthanide/actinide species in solutions. (author)
Sayed, Sadeed Bin
2015-05-05
A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert- Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations cannot be solved by a straightforward application of the marching-on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.
Sayed, Sadeed Bin; Ulku, Huseyin; Bagci, Hakan
2015-01-01
A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert- Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations cannot be solved by a straightforward application of the marching-on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.
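The training step described in this abstract, fitting extrapolation coefficients so that a "future" sample is predicted from known past ones for a family of complex exponentials, can be sketched with a small least-squares solve. The exponents, sample times and predictor length below are illustrative, not the solver's actual training set.

```python
import cmath

# Fit coefficients c_k so that f(future_time) ~ sum_k c_k*f(past_times[k])
# for every training exponential f(t) = exp(s*t), s in the left half of
# the complex frequency plane (decaying/oscillating modes).
def train_coeffs(exponents, past_times, future_time):
    A = [[cmath.exp(s * t) for t in past_times] for s in exponents]
    b = [cmath.exp(s * future_time) for s in exponents]
    n = len(past_times)
    # normal equations A^H A c = A^H b, augmented and solved by
    # Gauss-Jordan elimination (no pivoting; fine for this small sketch)
    M = [[sum(A[r][i].conjugate() * A[r][j] for r in range(len(A)))
          for j in range(n)]
         + [sum(A[r][i].conjugate() * b[r] for r in range(len(A)))]
         for i in range(n)]
    for col in range(n):
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]
        for row in range(n):
            if row != col:
                f = M[row][col]
                M[row] = [v - f * w for v, w in zip(M[row], M[col])]
    return [M[i][n] for i in range(n)]

exponents = [-0.5 + 0.0j, -0.3 + 1.0j, -0.1 + 2.0j]   # illustrative modes
past_times = [0.0, 1.0, 2.0]
coeffs = train_coeffs(exponents, past_times, future_time=3.0)
```

Because the coefficients are exact for the training exponentials, any field whose temporal behaviour is a mix of such decaying and oscillating modes is extrapolated accurately, which is the stability mechanism the abstract describes.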
Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng
2014-04-01
A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.
DEFF Research Database (Denmark)
Pedersen, Mikkel Melters; Hansen, Michael Rygaard; Ballebye, Morten
2010-01-01
This paper describes the implementation of an interactive real-time dynamic simulation model of a hydraulic crane. The user input to the model is given continuously via joystick and output is presented continuously in a 3D animation. Using this simulation model, a tool point control scheme...... is developed for the specific crane, considering the saturation phenomena of the system and practical implementation....
Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET
Directory of Open Access Journals (Sweden)
B. Ghahraman
2016-02-01
Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management issues. Actual ET depends on an estimate of a water stress index and the average soil water in the crop root zone, and so depends on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by using different numerical methods. Overall, many factors influence actual evapotranspiration: crop potential evapotranspiration, available root zone water content, time step, crop sensitivity, and soil. In this paper different numerical methods are compared for different soil textures and different crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content equals the evapotranspiration, ET. In this approach, deep percolation is ignored due to the deep water table and the negligible unsaturated hydraulic conductivity below the rooting depth. The resulting differential equation may be solved analytically or numerically with different algorithms. We adopted four numerical methods to approximate the differential equation: the explicit Euler, implicit Euler, modified Euler (midpoint), and third-order Heun methods. Three general soil types (sand, silt, and clay) and three crop sensitivity classes (sensitive, moderate, and resistant) under the Nishaboor plain were used. The standard soil fraction depletion (corresponding to ETc=5 mm.d-1), pstd, below which the crop faces water stress, is adopted for crop sensitivity. Three values of pstd were considered in this study to cover the common crops in the area, including winter wheat and barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistant crops with pstd=0
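The trade-off among these integrators can be sketched on a minimal linear-stress model, dW/dt = -kW, whose exact solution W0·exp(-kt) makes each scheme's truncation error directly visible. The model, parameter values, and function names below are illustrative assumptions, not the paper's actual soil-water formulation:

```python
import math

def soil_water_schemes(w0=100.0, k=0.05, dt=1.0, days=30):
    """Integrate dW/dt = -k*W with four time-stepping schemes.

    Returns the final root-zone water for each scheme plus the exact
    solution. The linear decay model is an illustrative stand-in for a
    water-stressed ET rate proportional to available soil water.
    """
    n = int(days / dt)
    f = lambda w: -k * w                      # depletion rate
    we = wi = wm = wh = w0
    for _ in range(n):
        we = we + dt * f(we)                  # explicit Euler
        wi = wi / (1.0 + k * dt)              # implicit Euler (exact solve for linear f)
        k1 = f(wm)                            # modified Euler / midpoint (RK2)
        wm = wm + dt * f(wm + 0.5 * dt * k1)
        k1 = f(wh)                            # third-order Heun
        k2 = f(wh + dt / 3.0 * k1)
        k3 = f(wh + 2.0 * dt / 3.0 * k2)
        wh = wh + dt * (k1 + 3.0 * k3) / 4.0
    exact = w0 * math.exp(-k * days)
    return {"explicit": we, "implicit": wi, "midpoint": wm,
            "heun3": wh, "exact": exact}
```

Larger time steps amplify the differences among the schemes, which is exactly the method-versus-time-step interaction the paper studies.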
Sleep, John; Irving, Malcolm; Burton, Kevin
2005-03-15
The time course of isometric force development following photolytic release of ATP in the presence of Ca(2+) was characterized in single skinned fibres from rabbit psoas muscle. Pre-photolysis force was minimized using apyrase to remove contaminating ATP and ADP. After the initial force rise induced by ATP release, a rapid shortening ramp terminated by a step stretch to the original length was imposed, and the time course of the subsequent force redevelopment was again characterized. Force development after ATP release was accurately described by a lag phase followed by one or two exponential components. At 20 degrees C, the lag was 5.6 +/- 0.4 ms (s.e.m., n = 11), and the force rise was well fitted by a single exponential with rate constant 71 +/- 4 s(-1). Force redevelopment after shortening-restretch began from about half the plateau force level, and its single-exponential rate constant was 68 +/- 3 s(-1), very similar to that following ATP release. When fibres were activated by the addition of Ca(2+) in ATP-containing solution, force developed more slowly, and the rate constant for force redevelopment following shortening-restretch reached a maximum value of 38 +/- 4 s(-1) (n = 6) after about 6 s of activation. This lower value may be associated with progressive sarcomere disorder at elevated temperature. Force development following ATP release was much slower at 5 degrees C than at 20 degrees C. The rate constant of a single-exponential fit to the force rise was 4.3 +/- 0.4 s(-1) (n = 22), and this was again similar to that after shortening-restretch in the same activation at this temperature, 3.8 +/- 0.2 s(-1). We conclude that force development after ATP release and shortening-restretch are controlled by the same steps in the actin-myosin ATPase cycle. The present results and much previous work on mechanical-chemical coupling in muscle can be explained by a kinetic scheme in which force is generated by a rapid conformational change bracketed by two
A predictive control scheme for real-time demand response applications
Lampropoulos, I.; Baghina, N.G.; Kling, W.L.; Ribeiro, P.F.
2013-01-01
In this work, the focus is placed on the proof of concept of a novel control scheme for demand response. The control architecture considers a uniform representation of non-homogeneous distributed energy resources and allows the participation of virtually all system users in electricity markets. The
Intake flow and time step analysis in the modeling of a direct injection Diesel engine
Energy Technology Data Exchange (ETDEWEB)
Zancanaro Junior, Flavio V.; Vielmo, Horacio A. [Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Mechanical Engineering Dept.], E-mails: zancanaro@mecanica.ufrgs.br, vielmoh@mecanica.ufrgs.br
2010-07-01
This paper discusses the effects of the time step on the turbulent flow structure in the intake and in-cylinder systems of a Diesel engine during the intake process, under motored conditions. The three-dimensional model of a reciprocating engine geometry comprises a bowl-in-piston combustion chamber, an intake port of shallow-ramp helical type, and an exhaust port of conventional type. The equations are solved numerically, including a transient analysis with valve and piston movements, for an engine speed of 1500 rpm, using a commercial finite-volume CFD code with parallel computation. For the purpose of examining the in-cylinder turbulence characteristics, two parameters are observed: the discharge coefficient and the swirl ratio. These two parameters quantify the fluid flow characteristics inside the cylinder during the intake stroke, so their study and understanding are very important. Additionally, the evolutions of the discharge coefficient and swirl ratio along the crank angle are correlated and compared, with the objective of clarifying the physical mechanisms. Regarding turbulence, computations are performed with the k-ω SST eddy viscosity model in its low-Reynolds-number approach, with standard near-wall treatment. The system of partial differential equations to be solved consists of the Reynolds-averaged compressible Navier-Stokes equations with the constitutive relations for an ideal gas, using a segregated solution algorithm. The enthalpy equation is also solved. A moving hexahedral trimmed-mesh independence study is presented; likewise, many convergence tests are performed and a secure criterion established. The results of the pressure fields are shown on a vertical plane that passes through the valves. Areas of low pressure can be seen in the valve curtain region, due to strong jet flows. It is also possible to note divergences between the time steps, mainly for the smaller time step. (author)
Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs
Hadjimichael, Yiannis
2017-09-30
A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
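For orientation, the classical SSP mechanism the thesis generalizes can be sketched with the three-stage, third-order Shu-Osher method: each stage is a convex combination of forward-Euler steps, so the method inherits the total-variation-diminishing (TVD) property of upwind forward Euler under the same CFL restriction. This is the standard explicit SSP(3,3) scheme, not one of the new perturbed or downwind-biased methods developed in the thesis:

```python
import numpy as np

def ssprk3_upwind_step(u, c):
    """One SSP(3,3) Shu-Osher step for u_t + a*u_x = 0 (a > 0, periodic)
    with first-order upwind differencing; c = a*dt/dx is the CFL number.
    Every stage is a convex combination of forward-Euler steps, so the
    TVD property of Euler + upwind carries over for c <= 1."""
    def euler(v):
        return v - c * (v - np.roll(v, 1))   # forward Euler with upwind flux
    u1 = euler(u)
    u2 = 0.75 * u + 0.25 * euler(u1)
    return u / 3.0 + 2.0 / 3.0 * euler(u2)

def total_variation(u):
    """Total variation of a periodic grid function."""
    return np.abs(np.diff(np.append(u, u[0]))).sum()
```

On a periodic square wave with c ≤ 1, the total variation is provably non-increasing from step to step, and the solution stays within its initial bounds.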
Smart Wireless Power Transfer Operated by Time-Modulated Arrays via a Two-Step Procedure
Directory of Open Access Journals (Sweden)
Diego Masotti
2015-01-01
The paper introduces a novel method for agile and precise wireless power transmission operated by a time-modulated array. The unique, almost real-time reconfiguration capability of these arrays is fully exploited by a two-step procedure: first, a two-element time-modulated subarray is used for localization of tagged sensors to be energized; the entire 16-element TMA then provides the power to the detected tags, by exploiting the fundamental and first-sideband harmonic radiation. An investigation on the best array architecture is carried out, showing the importance of the adopted nonlinear/full-wave computer-aided-design platform. Very promising simulated energy transfer performance of the entire nonlinear radiating system is demonstrated.
Coenen, Pieter; Healy, Genevieve N; Winkler, Elisabeth A H; Dunstan, David W; Owen, Neville; Moodie, Marj; LaMontagne, Anthony D; Eakin, Elizabeth A; O'Sullivan, Peter B; Straker, Leon M
2018-04-22
We examined the association of musculoskeletal symptoms (MSS) with workplace sitting, standing and stepping time, as well as sitting and standing time accumulation (i.e. usual bout duration of these activities), measured objectively with the activPAL3 monitor. Using baseline data from the Stand Up Victoria trial (216 office workers, 14 workplaces), cross-sectional associations of occupational activities with self-reported MSS (low-back, upper and lower extremity symptoms in the last three months) were examined using probit regression, correcting for clustering and adjusting for confounders. Sitting bout duration was significantly (p < 0.05) associated, non-linearly, with MSS, such that those in the middle tertile displayed the highest prevalence of upper extremity symptoms. Other associations were non-significant but sometimes involved large differences in symptom prevalence (e.g. 38%) by activity. Though causation is unclear, these non-linear associations suggest that sitting and its alternatives (i.e. standing and stepping) interact with MSS and this should be considered when designing safe work systems. Practitioner summary: We studied associations of objectively assessed occupational activities with musculoskeletal symptoms in office workers. Workers who accumulated longer sitting bouts reported fewer upper extremity symptoms. Total activity duration was not significantly associated with musculoskeletal symptoms. We underline the importance of considering total volumes and patterns of activity time in musculoskeletal research.
Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain
2017-10-01
We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
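The core of the idea can be sketched as follows: for uncorrelated position noise of standard deviation σ, finite-difference velocities inherit a variance term 2σ²/Δt², so velocity variances measured at several time steps fall on a line in 1/Δt², and extrapolating the fit to 1/Δt² → 0 cancels the noise. The function below is a hypothetical minimal version of this step, not the authors' full PTV/PIV pipeline:

```python
import numpy as np

def noise_free_velocity_variance(x, lags, base_dt):
    """Fit var_meas(dt) = var_true + 2*sigma^2/dt^2 over several lags.

    x is a single track of noisy positions sampled every base_dt; 'lags'
    are the frame separations used to form finite-difference velocities.
    Returns (var_true_estimate, slope); slope/2 estimates the position
    noise variance sigma^2.
    """
    inv_dt2, var_meas = [], []
    for n in lags:
        dt = n * base_dt
        v = (x[n:] - x[:-n]) / dt          # finite-difference velocity at lag n
        inv_dt2.append(1.0 / dt**2)
        var_meas.append(v.var())
    slope, intercept = np.polyfit(inv_dt2, var_meas, 1)
    return intercept, slope
```

The intercept of the fit is the noise-free velocity variance; no filtering or windowing of the signal is involved, in line with the method's stated advantage.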
A one-step, real-time PCR assay for rapid detection of rhinovirus.
Do, Duc H; Laus, Stella; Leber, Amy; Marcon, Mario J; Jordan, Jeanne A; Martin, Judith M; Wadowsky, Robert M
2010-01-01
One-step, real-time PCR assays for rhinovirus have been developed for a limited number of PCR amplification platforms and chemistries, and some exhibit cross-reactivity with genetically similar enteroviruses. We developed a one-step, real-time PCR assay for rhinovirus by using a sequence detection system (Applied Biosystems; Foster City, CA). The primers were designed to amplify a 120-base target in the noncoding region of picornavirus RNA, and a TaqMan (Applied Biosystems) degenerate probe was designed for the specific detection of rhinovirus amplicons. The PCR assay had no cross-reactivity with a panel of 76 nontarget nucleic acids, which included RNAs from 43 enterovirus strains. Excellent lower limits of detection relative to viral culture were observed for the PCR assay with 38 of 40 rhinovirus reference strains representing different serotypes; the assay could reproducibly detect rhinovirus serotype 2 in viral transport medium containing 10 to 10,000 TCID(50) (50% tissue culture infectious dose endpoint) units/ml of the virus. However, for rhinovirus serotypes 59 and 69, the PCR assay was less sensitive than culture. Testing of 48 clinical specimens from children with cold-like illnesses for rhinovirus by the PCR and culture assays yielded detection rates of 16.7% and 6.3%, respectively. For a batch of 10 specimens, the entire assay was completed in 4.5 hours. This real-time PCR assay enables detection of many rhinovirus serotypes with the Applied Biosystems reagent-instrument platform.
Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential
International Nuclear Information System (INIS)
Zhang Ying; Liang Haozhao; Meng Jie
2009-01-01
The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus 12C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by ITS evolution for the corresponding Schroedinger-like equation without localization, which gives convergent results identical to those obtained iteratively by the shooting method with localized effective potentials.
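The ITS idea itself is simple to illustrate in a nonrelativistic setting, where no Dirac sea exists: evolving ψ → ψ − Δτ·Hψ and renormalizing damps each eigencomponent at a rate set by its eigenvalue, so the iteration relaxes to the lowest state. The 1D harmonic-oscillator sketch below (hbar = m = ω = 1; grid spacing and step size chosen to respect the explicit-Euler stability bound Δτ < 2/λmax) is only a toy stand-in for the paper's Dirac and Schroedinger-like setting:

```python
import numpy as np

def its_ground_state(n=200, box=10.0, dtau=1e-3, steps=5000):
    """Imaginary-time relaxation psi -> psi - dtau*H*psi for the 1D
    harmonic oscillator, H = -0.5 d^2/dx^2 + 0.5 x^2, on a finite grid.
    Repeated evolution plus renormalization filters out excited states,
    leaving the ground state with energy 0.5."""
    x = np.linspace(-box / 2, box / 2, n)
    dx = x[1] - x[0]
    V = 0.5 * x**2

    def apply_h(psi):
        lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
        lap[0] = lap[-1] = 0.0            # hard-wall box boundaries
        return -0.5 * lap + V * psi

    psi = np.exp(-((x - 1.0) ** 2))       # arbitrary start, not the ground state
    psi /= np.sqrt((psi**2).sum() * dx)
    for _ in range(steps):
        psi -= dtau * apply_h(psi)
        psi /= np.sqrt((psi**2).sum() * dx)
    return (psi * apply_h(psi)).sum() * dx
```

The returned energy approaches the exact ground-state value 1/2. Applied naively to a Dirac Hamiltonian, the same descent would instead dive into the unbounded-below Dirac sea, which is precisely the disaster the paper circumvents.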
The enhancement of time-stepping procedures in SYVAC A/C
International Nuclear Information System (INIS)
Broyd, T.W.
1986-01-01
This report summarises the work carried out on SYVAC A/C between February and May 1985, aimed at improving the way in which time-stepping procedures are handled. The majority of the work was concerned with three types of problem, viz: i) long vault release, short geosphere response; ii) short vault release, long geosphere response; iii) short vault release, short geosphere response. The report contains details of changes to the logic and structure of SYVAC A/C, as well as the results of code implementation tests. It has been written primarily for members of the UK SYVAC development team, and should not be used or referred to in isolation. (author)
Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism
International Nuclear Information System (INIS)
Stolterfoht, N.
1993-01-01
The semiclassical approximation is applied in second order to describe time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interferences between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. Combining these paths within the independent-particle frozen orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering and the capability for interference is regained, as one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments of single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)].
International Nuclear Information System (INIS)
Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung
2015-01-01
Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, inhomogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulation is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene
A Tree Based Broadcast Scheme for (m, k)-firm Real-Time Stream in Wireless Sensor Networks.
Park, HoSung; Kim, Beom-Su; Kim, Kyong Hoon; Shah, Babar; Kim, Ki-Il
2017-11-09
Recently, various unicast routing protocols have been proposed to deliver measured data from the sensor node to the sink node within a predetermined deadline in wireless sensor networks. In parallel with these approaches, some applications demand a specific service based on broadcast to all nodes within the deadline, a feasible real-time traffic model, and improvements in energy efficiency. However, current protocols based on either flooding or one-to-one unicast cannot meet the above requirements entirely. Moreover, as far as the authors know, there is no study yet of a real-time broadcast protocol supporting an application-specific traffic model in WSNs. Based on the above analysis, in this paper we propose a new (m, k)-firm-based Real-time Broadcast Protocol (FRBP), constructing a broadcast tree to satisfy the (m, k)-firm model, which is applicable to real-time operation in resource-constrained WSNs. The broadcast tree in FRBP is constructed by a distance-based priority scheme, whereas energy efficiency is improved by selecting as few nodes on the tree as possible. To overcome unstable network environments, the recovery scheme invokes rapid partial tree reconstruction in order to designate another node as the parent on the tree, according to the measured (m, k)-firm real-time condition and local state monitoring. Finally, simulation results are given to demonstrate the superiority of FRBP compared to existing schemes in terms of average deadline missing ratio, average throughput and energy consumption.
De Basabe, Joná s D.; Sen, Mrinal K.
2010-01-01
popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM
Bunce, David; Haynes, Becky I; Lord, Stephen R; Gschwind, Yves J; Kochan, Nicole A; Reppermund, Simone; Brodaty, Henry; Sachdev, Perminder S; Delbaere, Kim
2017-06-01
Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI) are at high risk of falling, the association between IIV and prospective falls is unknown. We conducted a longitudinal cohort study in cognitively intact (n = 271) and MCI (n = 154) community-dwelling adults aged 70-90 years. IIV was assessed through a variety of measures including simple and choice hand reaction time and choice stepping reaction time tasks (CSRT), the latter administered as a single task and also with a secondary working memory task. Logistic regression did not show an association between IIV on the hand-held tasks and falls. Greater IIV in both CSRT tasks, however, did significantly increase the risk of future falls. This effect was specific to the MCI group, with a stronger effect in persons exhibiting gait, posture, or physiological impairment. The findings suggest that increased stepping IIV may indicate compromised neural circuitry involved in executive function, gait, and posture in persons with MCI increasing their risk of falling. IIV measures have potential to assess neurobiological disturbance underlying physical and cognitive dysfunction in old age, and aid fall risk assessment and routine care in community and health care settings.
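As a concrete anchor, IIV is commonly operationalized as the intraindividual standard deviation of trial-level reaction times, often normalized by the mean (a coefficient of variation). The helper below is this generic operationalization, not necessarily the study's exact measure (which may use residual- or ex-Gaussian-based variants):

```python
import numpy as np

def intraindividual_variability(reaction_times_ms):
    """Return (iSD, CV) for one participant's trial-level reaction times.

    iSD is the within-person standard deviation across trials; CV = iSD/mean
    removes overall-speed differences between participants. Generic
    operationalization for illustration only.
    """
    rt = np.asarray(reaction_times_ms, dtype=float)
    isd = rt.std(ddof=1)
    return isd, isd / rt.mean()
```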
Energy Technology Data Exchange (ETDEWEB)
Sharma, M; Todor, D [Virginia Commonwealth University, Richmond, VA (United States); Fields, E [Virginia Commonwealth University, Richmond, Virginia (United States)
2014-06-01
Purpose: To present a novel method allowing fast, true volumetric optimization of T and O HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4–5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering high risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, approved and delivered. For the second step, each case was re-planned adding a new structure, created from the 100% prescription isodose line of the manually optimized plan to the existent physician delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc's for all three OARs while preserving good D90s for HR-CTV. On plans with “excellent” CTV coverage, average D90 of 96% (range 91–102), sigmoid D2cc was reduced on average by 37% (range 16–73), bladder by 28% (range 20–47) and rectum by 27% (range 15–45). Similar reductions were obtained on plans with “good” coverage, with an average D90 of 93% (range 90–99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal planning time increase, but with the potential of dramatic and systematic reductions of D2cc for OARs and in some cases with concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4–5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.
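The two DVH metrics driving the optimization have standard definitions that are easy to state in code: D90 is the minimum dose covering the hottest 90% of the target (the 10th percentile of voxel doses), and D2cc is the minimum dose to the hottest 2 cc of an organ at risk. The helpers below implement these textbook definitions only; they are not the authors' two-step optimizer:

```python
import numpy as np

def d90(target_dose):
    """Dose received by at least 90% of the target volume:
    the 10th percentile of the voxel-dose distribution."""
    return np.percentile(target_dose, 10)

def d2cc(oar_dose, voxel_volume_cc):
    """Minimum dose to the hottest 2 cc of an organ at risk: sort voxel
    doses in descending order and read the dose at 2 cc cumulative volume."""
    doses = np.sort(np.asarray(oar_dose, dtype=float))[::-1]
    n = max(1, int(round(2.0 / voxel_volume_cc)))   # voxels making up 2 cc
    return doses[n - 1]
```

In the paper's workflow these quantities are evaluated on the physician-delineated HR-CTV and OAR contours for both the manually and the two-step optimized plans.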
Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali
2015-11-01
We present a new code, Diablo 2.0, for the simulation of the incompressible Navier-Stokes equations in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
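The appeal of IMEX time marching is easiest to see on the simplest member of the family, first-order IMEX Euler: the stiff linear term (in a wall-resolved solver, diffusion on the stretched near-wall grid) is advanced implicitly while the rest is advanced explicitly, so the step size is not throttled by the stiff term. This is a toy sketch for a scalar model problem, not the low-storage IMEX Runge-Kutta schemes of the paper:

```python
def imex_euler(u0, lam, f_explicit, dt, n_steps):
    """March du/dt = lam*u + f(u) with backward Euler on the stiff linear
    term and forward Euler on f. Remains stable for lam << 0 even when
    dt*|lam| >> 1, where a fully explicit step would blow up."""
    u = u0
    for _ in range(n_steps):
        u = (u + dt * f_explicit(u)) / (1.0 - dt * lam)
    return u
```

Higher-order IMEX RK schemes stack such stages; the L-stability the authors require means the implicit component fully damps the stiffest modes rather than merely bounding them.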
Hoepfer, Matthias
co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
Synchronized Scheme of Continuous Space-Vector PWM with the Real-Time Control Algorithms
DEFF Research Database (Denmark)
Oleschuk, V.; Blaabjerg, Frede
2004-01-01
This paper describes in detail the basic peculiarities of a new method of feedforward synchronous pulsewidth modulation (PWM) of three-phase voltage source inverters for adjustable speed ac drives. It is applied to a continuous scheme of voltage space vector modulation. The method is based … their position inside clock-intervals. In order to provide smooth shock-less pulse-ratio changing and quarter-wave symmetry of the voltage waveforms, special synchronising signals are formed on the boundaries of the 60° clock-intervals. The process of gradual transition from continuous to discontinuous …
Directory of Open Access Journals (Sweden)
Sheraz Aslam
2017-12-01
The smart grid plays a vital role in decreasing electricity cost through Demand Side Management (DSM). Smart homes, a part of the smart grid, contribute greatly to minimizing electricity consumption cost via scheduling of home appliances. However, user waiting time increases due to the scheduling of home appliances. This scheduling problem is the motivation to find an optimal solution that minimizes the electricity cost and Peak to Average Ratio (PAR) with minimum user waiting time. There are many studies on Home Energy Management (HEM) for cost minimization and peak load reduction. However, none of the systems gave sufficient attention to tackling multiple parameters (i.e., electricity cost and peak load reduction) at the same time while keeping user waiting time minimal for residential consumers with multiple homes. Hence, in this work, we propose an efficient HEM scheme using the well-known meta-heuristic Genetic Algorithm (GA), the recently developed Cuckoo Search Optimization Algorithm (CSOA) and the Crow Search Algorithm (CSA), which can be used for electricity cost and peak load alleviation with minimum user waiting time. The integration of a smart Electricity Storage System (ESS) is also taken into account for more efficient operation of the Home Energy Management System (HEMS). Furthermore, we take a real-time electricity consumption pattern for every residence, i.e., every home has its own living pattern. The proposed scheme is implemented in a smart building comprised of thirty smart homes (apartments); Real-Time Pricing (RTP) and Critical Peak Pricing (CPP) signals are examined in terms of electricity cost estimation for both a single smart home and a smart building. In addition, feasible regions are presented for single and multiple smart homes, which show the relationship among electricity cost, electricity consumption and user waiting time. Experimental results demonstrate the effectiveness of our proposed scheme for single and multiple smart
Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A
2013-08-01
The weight, oil, fatty acid, tocopherol, polyphenol, and sterol properties of 5 olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) during the crude, lye treatment, washing, fermentation, and pasteurization steps were studied. Results showed: oil percent was highest in Ascolana (crude step) and lowest in Fishomi (pasteurization step); during the processing steps, in all cultivars, oleic, palmitic, linoleic, and stearic acids were highest; the largest changes in saturated and unsaturated fatty acids occurred in the fermentation step; the highest and the lowest ratios of ω3/ω6 were in Ascolana (washing step) and in Zard (pasteurization step), respectively; the highest and the lowest tocopherol contents were in Amigdalolia and Fishomi, respectively, with the major damage occurring in the lye step; the highest and the lowest polyphenol contents were in Ascolana (crude step) and in Zard and Ascolana (pasteurization step), respectively; the major damage among cultivars occurred during the lye step, in which the polyphenols were reduced to one-tenth of their initial content; sterols did not change during the steps. A review of olive patents shows that many fruit properties, such as oil quality, fatty acids, and oil quantity and its fractions, can be changed by altering the cultivar and the process.
Avoid the tsunami of the Dirac sea in the imaginary time step method
International Nuclear Information System (INIS)
Zhang, Ying; Liang, Haozhao; Meng, Jie
2010-01-01
The discrete single-particle spectra in both the Fermi and Dirac sea have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation after avoiding the "tsunami" of the Dirac sea, i.e. the diving behavior of the single-particle level into the Dirac sea in the direct application of the ITS method for the Dirac equation. It is found that by the transform from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from the positive to the negative infinity, can be separately obtained by the ITS evolution in either the Fermi sea or the Dirac sea. Identical results with those in the conventional shooting method have been obtained via the ITS evolution for the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels the doubts on the ITS method in the relativistic system. (author)
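The ITS idea, evolving a trial wave function in imaginary time so that higher-energy components decay and only the lowest state survives, can be illustrated on a plain 1D Schrödinger problem (no Dirac sea involved). Grid size, step and potential below are arbitrary choices:

```python
import numpy as np

# 1D harmonic oscillator: H = -0.5 d2/dx2 + 0.5 x^2 (hbar = m = omega = 1);
# the exact ground-state energy is 0.5.
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2

psi = np.exp(-((x - 1.0) ** 2))          # arbitrary normalized initial guess
psi /= np.sqrt(np.sum(psi**2) * dx)
dtau = 0.1 * dx**2                       # imaginary time step (stability-limited)

def apply_H(psi):
    """Three-point finite-difference Hamiltonian with hard-wall edges."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * lap + V * psi

for _ in range(20000):
    psi = psi - dtau * apply_H(psi)      # first-order Euler ITS step
    psi /= np.sqrt(np.sum(psi**2) * dx)  # renormalize after each step

E0 = np.sum(psi * apply_H(psi)) * dx     # Rayleigh quotient -> ground energy
```

Each step multiplies every eigencomponent by roughly (1 - dtau·E), so the lowest-energy state decays slowest and dominates after enough iterations; this is the non-relativistic analogue of the evolution the paper applies to the Schroedinger-like equation.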
Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation
International Nuclear Information System (INIS)
Boer, J. de; Dannhaueser, G.
1982-01-01
The way in which the multiple E2 steps in the Coulomb-excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays for the excitation of a given final state is measured by a quantity named ''importance function''. It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to similar deflection functions and cross sections as those obtained from a symmetrization procedure approximately accounting for the transfer of angular momentum and energy. (Auth.)
Detection and Correction of Step Discontinuities in Kepler Flux Time Series
Kolodziejczak, J. J.; Morris, R. L.
2011-01-01
PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
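A heavily simplified version of such step detection and correction can be sketched as follows; the window size, threshold and median-based detector are illustrative stand-ins, not the actual PDC 8.0 algorithm:

```python
import numpy as np

def detect_steps(flux, window=20, nsigma=5.0):
    """Flag indices where the median flux level changes abruptly.

    Compares the medians of `window` points before and after each index
    against a robust noise estimate from first differences.
    """
    flux = np.asarray(flux, dtype=float)
    scatter = np.median(np.abs(np.diff(flux))) + 1e-12
    hits = []
    for i in range(window, len(flux) - window):
        jump = np.median(flux[i:i + window]) - np.median(flux[i - window:i])
        if abs(jump) > nsigma * scatter:
            hits.append(i)
    return hits

def correct_steps(flux, index, window=20):
    """Shift the post-step segment so medians match across the break."""
    flux = np.asarray(flux, dtype=float).copy()
    jump = (np.median(flux[index:index + window])
            - np.median(flux[index - window:index]))
    flux[index:] -= jump
    return flux

# Synthetic light curve with a -0.5% sensitivity dropout at sample 500,
# mimicking the typical SPSD amplitude quoted above.
rng = np.random.default_rng(1)
lc = 1.0 + 2e-4 * rng.standard_normal(1000)
lc[500:] -= 0.005
hits = detect_steps(lc)
fixed = correct_steps(lc, 500)
```

Because SPSDs are uncorrelated across targets, a per-target detector of this kind is needed before the correlation-based detrending described in the abstract can work well.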
Hsu, Ming-Chen
2010-02-01
The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395. © 2009 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Schneeberger, B.; Breuleux, R.
1977-01-01
Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step integration using earthquake time histories (TH). A given structure was analysed by both PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors and these were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)
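The PSD side of such a comparison rests on estimating a power spectral density from a stationary record. A minimal averaged-periodogram (Welch-style) estimator, generic and unrelated to any specific structural model, looks like:

```python
import numpy as np

def psd_welch(x, fs, nperseg=256):
    """One-sided Welch PSD estimate with a Hann window and 50% overlap."""
    x = np.asarray(x, dtype=float)
    win = np.hanning(nperseg)
    step = nperseg // 2
    norm = fs * np.sum(win**2)               # standard periodogram scaling
    segs = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg] * win
        segs.append(np.abs(np.fft.rfft(seg))**2 / norm)
    psd = np.mean(segs, axis=0)
    psd[1:-1] *= 2.0                         # fold in negative frequencies
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

# Stationary test signal: unit-variance white noise, whose true one-sided
# PSD is flat at 2/fs and integrates to the variance (1.0).
rng = np.random.default_rng(0)
fs = 100.0
freqs, psd = psd_welch(rng.standard_normal(200000), fs)
```

Averaging many windowed periodograms is what gives the PSD curves the smoothness noted in point (2) of the abstract, in contrast to single jagged time-history realizations.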
Detection of Tomato black ring virus by real-time one-step RT-PCR.
Harper, Scott J; Delmiglio, Catia; Ward, Lisa I; Clover, Gerard R G
2011-01-01
A TaqMan-based real-time one-step RT-PCR assay was developed for the rapid detection of Tomato black ring virus (TBRV), a significant plant pathogen which infects a wide range of economically important crops. Primers and a probe were designed against existing genomic sequences to amplify a 72 bp fragment from RNA-2. The assay amplified all isolates of TBRV tested, but no amplification was observed from the RNA of other nepovirus species or healthy host plants. The detection limit of the assay was estimated to be around nine copies of the TBRV target region in total RNA. A comparison with conventional RT-PCR and ELISA indicated that ELISA, the current standard test method, lacked specificity and reacted to all nepovirus species tested, while conventional RT-PCR was approximately ten-fold less sensitive than the real-time RT-PCR assay. Finally, the real-time RT-PCR assay was tested using five different RT-PCR reagent kits and was found to be robust and reliable, with no significant differences in sensitivity being found. The development of this rapid assay should aid in quarantine and post-border surveys for regulatory agencies. Copyright © 2010 Elsevier B.V. All rights reserved.
Ugon, B.; Nandong, J.; Zang, Z.
2017-06-01
The presence of unstable dead-time systems in process plants often poses a daunting challenge for the design of standard PID controllers, which are intended not only to provide closed-loop stability but also to give good overall performance-robustness. In this paper, we conduct stability analysis of a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this double-loop control scheme, which employs two P/PID controllers, to control first-order or second-order unstable dead-time processes typically found in the process industries. Based on the necessary and sufficient Routh-Hurwitz stability criteria, we establish several stability regions which enclose the P/PID parameter values that guarantee closed-loop stability of the double-loop control scheme. A systematic tuning rule is developed for the purpose of obtaining the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples, and the results are compared with some well-established tuning methods reported in the literature.
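The Routh-Hurwitz test underlying those stability regions can be sketched as a direct Routh-array computation. This generic checker takes the coefficients of a closed-loop characteristic polynomial; it is not the paper's tuning rule:

```python
def routh_hurwitz_stable(coeffs):
    """True iff all polynomial roots lie in the open left half-plane.

    `coeffs` are real coefficients, highest power first. Zero pivots
    (marginal cases) are conservatively reported as unstable.
    """
    c = [float(v) for v in coeffs]
    n = len(c)
    w = (n + 1) // 2 + 1                  # row width, padded with zeros
    rows = [c[0::2], c[1::2]]
    rows = [r + [0.0] * (w - len(r)) for r in rows]
    for _ in range(n - 2):
        prev, cur = rows[-2], rows[-1]
        if abs(cur[0]) < 1e-12:
            return False                  # zero pivot: not strictly stable
        new = [(cur[0] * prev[k + 1] - prev[0] * cur[k + 1]) / cur[0]
               for k in range(w - 1)] + [0.0]
        rows.append(new)
    first_col = [r[0] for r in rows]
    # Stable iff there are no sign changes in the first column.
    return all(v > 0 for v in first_col) or all(v < 0 for v in first_col)

# s^3 + 2s^2 + 3s + 1 is stable; s^3 - s^2 + 2s + 1 is not.
```

Sweeping P/PID gains, forming the closed-loop polynomial for each candidate, and recording where this test passes is one straightforward way to trace out stability regions of the kind the paper establishes analytically.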
Directory of Open Access Journals (Sweden)
Lingyang Song
2007-04-01
We report a simple differential modulation scheme for quasi-orthogonal space-time block codes. A new class of quasi-orthogonal coding structures that can provide partial transmit diversity is presented for various numbers of transmit antennas. Differential encoding and decoding can be simplified for differential Alamouti-like codes by grouping the signals in the transmitted matrix and decoupling the detection of data symbols, respectively. The new scheme can achieve constant amplitude of transmitted signals, and avoid signal constellation expansion; in addition it has a linear signal detector with very low complexity. Simulation results show that these partial-diversity codes can provide very useful results at low SNR for current communication systems. Extension to more than four transmit antennas is also considered.
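The differential encode/decode idea, multiplying the previously transmitted block by a unitary data matrix so no channel estimate is needed, can be sketched for the two-antenna Alamouti case. Everything below is a noiseless illustration; the paper's four-antenna quasi-orthogonal construction and partial-diversity grouping are not reproduced:

```python
import numpy as np

QPSK = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))  # unit-modulus symbols

def alamouti_unitary(s1, s2):
    """2x2 Alamouti matrix; unitary when |s1| = |s2| = 1."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]]) / np.sqrt(2)

def diff_encode(symbol_pairs):
    """Differential encoding X_k = X_{k-1} V_k, starting from the identity.

    Every V_k is unitary, so transmitted blocks keep constant amplitude
    and the signal constellation does not expand.
    """
    X = np.eye(2, dtype=complex)
    blocks = [X]
    for s1, s2 in symbol_pairs:
        X = X @ alamouti_unitary(s1, s2)
        blocks.append(X)
    return blocks

def diff_decode(blocks):
    """Noiseless decode: V_k = X_{k-1}^H X_k, then slice to nearest QPSK."""
    out = []
    for prev, cur in zip(blocks, blocks[1:]):
        V = prev.conj().T @ cur
        for s in (np.sqrt(2) * V[0, 0], np.sqrt(2) * V[0, 1]):
            out.append(QPSK[np.argmin(np.abs(QPSK - s))])
    return out

rng = np.random.default_rng(2)
tx = [(rng.choice(QPSK), rng.choice(QPSK)) for _ in range(50)]
rx = diff_decode(diff_encode(tx))
```

In a real receiver the decode step operates on noisy received blocks rather than the transmitted ones, but the same differential relation between consecutive blocks is what removes the need for channel state information.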
Huot, F; Bertrand, P; Sonnendrücker, E; Coulaud, O
2003-01-01
The Time Splitting Scheme (TSS) has been examined within the context of the one-dimensional (1D) relativistic Vlasov-Maxwell model. In the strongly relativistic regime of the laser-plasma interaction, the TSS cannot be applied to solve the Vlasov equation. We propose a new semi-Lagrangian scheme based on a full 2D advection and study its advantages over the classical Splitting procedure. Details of the underlying integration of the Vlasov equation appear to be important in achieving accurate plasma simulations. Examples are given which are related to the relativistic modulational instability and the self-induced transparency of an ultra-intense electromagnetic pulse in the relativistic regime.
A Tree Based Broadcast Scheme for (m, k-firm Real-Time Stream in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
HoSung Park
2017-11-01
Recently, various unicast routing protocols have been proposed to deliver measured data from the sensor node to the sink node within a predetermined deadline in wireless sensor networks (WSNs). In parallel with these approaches, some applications demand a specific service based on broadcast to all nodes within the deadline, a feasible real-time traffic model and improvements in energy efficiency. However, current protocols based on either flooding or one-to-one unicast cannot meet the above requirements entirely. Moreover, as far as the authors know, there is no study yet of a real-time broadcast protocol to support an application-specific traffic model in WSNs. Based on the above analysis, in this paper we propose a new (m, k)-firm-based Real-time Broadcast Protocol (FRBP), constructing a broadcast tree to satisfy the (m, k)-firm constraint, which is applicable to the real-time model in resource-constrained WSNs. The broadcast tree in FRBP is constructed by a distance-based priority scheme, whereas energy efficiency is improved by selecting as few nodes on the tree as possible. To overcome unstable network environments, the recovery scheme invokes rapid partial tree reconstruction in order to designate another node as the parent on the tree according to the measured (m, k)-firm real-time condition and local state monitoring. Finally, simulation results are given to demonstrate the superiority of FRBP compared to existing schemes in terms of average deadline missing ratio, average throughput and energy consumption.
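The (m, k)-firm condition itself is easy to state in code: of any k consecutive packet instances, at least m must meet their deadline. A generic sliding-window monitor (not FRBP's tree logic) might look like:

```python
from collections import deque

class MKFirmMonitor:
    """Sliding-window (m, k)-firm monitor.

    At least m of any k consecutive instances must meet their deadline;
    otherwise the stream is in dynamic failure. A generic illustration
    of the constraint FRBP targets, not the protocol itself.
    """
    def __init__(self, m, k):
        assert 0 < m <= k
        self.m, self.k = m, k
        self.window = deque(maxlen=k)   # True = deadline met

    def record(self, met):
        """Record one instance; return True while the constraint holds."""
        self.window.append(bool(met))
        return self.ok()

    def ok(self):
        # Misses in the current window must leave room for m hits out of k.
        return self.window.count(False) <= self.k - self.m

mon = MKFirmMonitor(m=2, k=3)   # tolerate at most 1 miss in any 3 instances
```

A node in a protocol like FRBP could feed per-packet deadline outcomes into such a monitor and trigger the recovery (re-parenting) path when `ok()` turns false.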
Grobe, Thomas G
2016-08-01
With the introduction of a new occupational classification at the end of 2011, employment characteristics are reported by employees to social insurance agencies in Germany in more detail than in previous years. In addition to other changes, the new classification allows a distinction between full- and part-time work to be made. This provided a reason to consider the health-related aspects of part-time work on the basis of data from a statutory health insurance scheme. Our analysis is based on the data of 3.8 million employees insured with the Techniker Krankenkasse (TK), a statutory health insurance scheme, in 2012. In addition to daily information on employment situations, details of periods and diagnoses of sick leave and the drugs prescribed were available. Although approximately 50 % of women of middle to higher working age worked part-time in 2012, the corresponding percentage of men employed in part-time work was less than 10 %. Overall, part-time employees were on sick leave for fewer days than full-time employees, but among men, sick leave due to mental disorders was longer for part-time employees than for full-time employees, whereas women working part time were affected to a lesser extent by corresponding periods of absence than those working full time. The results provide indications for the assertion that men in gender-specifically atypical employment situations are more frequently affected by mental disorders. Further evidence supports this assertion. With the long-term availability of these new employment characteristics, longitudinal analyses could help to clarify this cause-effect relationship.
An Adaptive Medium Access Parameter Prediction Scheme for IEEE 802.11 Real-Time Applications
Directory of Open Access Journals (Sweden)
Estefanía Coronado
2017-01-01
Multimedia communications have experienced unprecedented growth due mainly to the increase in content quality and the emergence of smart devices. The demand for these contents is tending towards wireless technologies. However, these transmissions are quite sensitive to network delays. Therefore, ensuring an optimum QoS level becomes of great importance. The IEEE 802.11e amendment was released to address the lack of QoS capabilities in the original IEEE 802.11 standard. Accordingly, the Enhanced Distributed Channel Access (EDCA) function was introduced, allowing traffic streams to be differentiated through a group of Medium Access Control (MAC) parameters. Although EDCA recommends a default configuration for these parameters, it has been proved not to be optimal in many scenarios. In this work a dynamic prediction scheme for these parameters is presented. This approach ensures appropriate traffic differentiation while maintaining compatibility with stations without QoS support. As the APs are the only devices that use this algorithm, no changes are required to current network cards. The results show the improvements in both voice and video transmissions, as well as in the QoS level of the network, that the proposal achieves with regard to EDCA.
Chidori, Kazuhiro; Yamamoto, Yuji
2017-01-01
The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination ($R^2$) of the return map, and the step time variability were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
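Return map analysis can be illustrated by fitting a line to consecutive step-time pairs (T_n, T_{n+1}) and reporting R^2; the synthetic walking data below are invented for illustration and are not from the study:

```python
import numpy as np

def return_map_r2(step_times):
    """R^2 of a linear fit to the return map (T_n, T_{n+1}).

    Values near 1 mean each step time strongly determines the next
    (structured variability); values near 0 mean independent jitter.
    """
    t = np.asarray(step_times, dtype=float)
    x, y = t[:-1], t[1:]
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def step_time_cv(step_times):
    """Coefficient of variation (%) of step time, a common variability index."""
    t = np.asarray(step_times, dtype=float)
    return 100.0 * t.std() / t.mean()

rng = np.random.default_rng(3)
# Structured gait: alternating short/long steps with small noise.
alternating = (0.55 + 0.05 * (-1.0) ** np.arange(200)
               + 0.002 * rng.standard_normal(200))
# Unstructured gait: independent jitter around a constant step time.
jittered = 0.55 + 0.02 * rng.standard_normal(200)
```

The point of the return map is exactly this distinction: two gaits can have similar overall variability while differing sharply in how predictable each step is from the previous one.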
Directory of Open Access Journals (Sweden)
Po Hu
2016-01-01
Implementing real-time machining process control on the shop floor has great significance for raising the efficiency and quality of product manufacturing. A framework and implementation methods for real-time machining process control based on STEP-NC are presented in this paper. A data model compatible with the ISO 14649 standard is built to transfer high-level real-time machining process control information between CAPP systems and CNC systems, in which the EXPRESS language is used to define new STEP-NC entities. Methods for implementing real-time machining process control on the shop floor are studied and realized on an open STEP-NC controller, which is developed using object-oriented, multithread, and shared-memory technologies in combination. The cutting force in a specific direction of the machining feature in side milling is chosen as the controlled object, and a fuzzy control algorithm with a self-adjusting factor is designed and embedded in the software CNC kernel of the STEP-NC controller. Experiments are carried out to verify the proposed framework, STEP-NC data model, and implementation methods for real-time machining process control. The experimental results prove that real-time machining process control tasks can be interpreted and executed correctly by the STEP-NC controller on the shop floor, keeping the actual cutting force around the ideal value whether the axial cutting depth changes suddenly or continuously.
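A fuzzy controller of this flavor, adjusting feed rate so that cutting force tracks a setpoint despite depth-of-cut changes, can be sketched as below. The membership functions, rule outputs and toy force model are assumptions for illustration, not the paper's design, and the self-adjusting factor is kept constant:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_feed_increment(error, gain=1.0):
    """Weighted-average rule base: force too low -> raise feed, too high ->
    lower feed. `gain` stands in for the paper's self-adjusting factor."""
    error = max(-150.0, min(150.0, error))   # saturate beyond the outer sets
    neg = tri(error, -200, -100, 0)          # force well above setpoint
    zero = tri(error, -100, 0, 100)
    pos = tri(error, 0, 100, 200)            # force well below setpoint
    num = neg * (-10.0) + zero * 0.0 + pos * 10.0
    den = neg + zero + pos
    return gain * (num / den if den > 0 else 0.0)

def cutting_force(feed, depth, k=2.0):
    """Toy plant: force (N) proportional to feed (mm/min) and depth (mm)."""
    return k * feed * depth

setpoint = 400.0                  # ideal cutting force in newtons
feed, depth = 80.0, 2.0
history = []
for step in range(200):
    if step == 100:
        depth = 3.0               # sudden change in axial cutting depth
    force = cutting_force(feed, depth)
    history.append(force)
    feed = max(1.0, feed + fuzzy_feed_increment(setpoint - force))
```

After the sudden depth change at step 100 the controller walks the feed rate back down until the force settles around the setpoint again, mirroring the behavior described in the experiments.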
Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C
2018-02-01
Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.
An on-time power-aware scheduling scheme for medical sensor SoC-based WBAN systems.
Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk
2012-12-27
The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks based on their serialization, by using their worst-case execution times and power consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup-radio and wakeup-timer for implantable medical devices. This scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices.
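The non-pre-emptible dual-priority idea can be sketched as a toy single-core scheduler: two priority bands, the high band always wins among ready tasks, and a started task runs to completion. Task names and timings below are invented:

```python
import heapq

def schedule(tasks):
    """Single-core, non-preemptive dual-priority scheduler.

    Each task is (release, wcet, band, name); band 0 (high) always beats
    band 1 among ready tasks, ties broken by release time, and a started
    task always runs to completion. Returns (name, finish_time) pairs.
    """
    time, order, ready, i = 0, [], [], 0
    pending = sorted(tasks)                    # by release time
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= time:
            r, w, b, n = pending[i]
            heapq.heappush(ready, (b, r, w, n))
            i += 1
        if not ready:
            time = pending[i][0]               # idle until next release
            continue
        b, r, w, n = heapq.heappop(ready)
        time += w                              # run to completion
        order.append((n, time))
    return order

# A low-band logger released first delays the high-band tasks until it
# finishes (non-preemption); thereafter band 0 tasks run first.
tasks = [(0, 3, 1, "log"), (1, 2, 0, "pace_check"), (2, 1, 0, "radio_wakeup")]
order = schedule(tasks)
```

Because tasks never preempt each other, the worst extra delay a high-band task can suffer is one lower-band worst-case execution time, which is what makes finish times predictable enough to serialize periodic tasks as the abstract describes.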
An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems
Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk
2013-01-01
The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks based on their serialization, by using their worst-case execution times and power consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup-radio and wakeup-timer for implantable medical devices. This scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices. PMID:23271602
An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems
Directory of Open Access Journals (Sweden)
Jung-Guk Kim
2012-12-01
The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks based on their serialization, by using their worst-case execution times and power consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup-radio and wakeup-timer for implantable medical devices. This scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices.
Directory of Open Access Journals (Sweden)
Trang Nguyen
2016-05-01
This paper presents a modulation scheme in the time domain based on On-Off-Keying and proposes various compatible supports for different types of image sensors. The content of this article is a sub-proposal to the IEEE 802.15.7r1 Task Group (TG7r1) aimed at Optical Wireless Communication (OWC) using an image sensor as the receiver. The compatibility support is indispensable for Image Sensor Communications (ISC) because the rolling shutter image sensors currently available have different frame rates, shutter speeds, sampling rates, and resolutions. However, focusing on unidirectional communications (i.e., data broadcasting, beacons), an asynchronous communication prototype is also discussed in the paper. Due to the physical limitations associated with typical image sensors (including low and varying frame rates, long exposures, and low shutter speeds), the link speed performance is considered critically. Based on practical measurement of the camera response to modulated light, an operating frequency range is suggested along with the corresponding system architecture, decoding procedure, and algorithms. A significant feature of our novel data frame structure is that it can support both typical frame rate cameras (in the oversampling mode) as well as very low frame rate cameras (in the error detection mode, for a camera whose frame rate is lower than the transmission packet rate). A high frame rate camera, i.e., no less than 20 fps, is supported in an oversampling mode in which a majority voting scheme for decoding data is applied. A low frame rate camera, i.e., when the frame rate drops to less than 20 fps at certain times, is supported by an error detection mode in which any missing data sub-packet is detected during decoding and later corrected by external code. Numerical results and valuable analysis are also included to indicate the capability of the proposed schemes.
DEFF Research Database (Denmark)
Rasmussen, Birgit
2016-01-01
Acoustic regulations or guidelines for schools exist in all five Nordic countries. The acoustic criteria depend on room uses and deal with airborne and impact sound insulation, reverberation time, sound absorption, traffic noise, service equipment noise and other acoustic performance...... have become more extensive and stricter during the last two decades. The paper focuses on comparison of sound insulation and reverberation time criteria for classrooms in regulations and classification schemes in the Nordic countries. Limit values and changes over time will be discussed as well as how...... not identical. The national criteria for quality level C correspond to the national regulations or recommendations for new-build. The quality levels A and B are intended to define better acoustic performance than C, and D lower performance. Typically, acoustic regulations and classification criteria for schools...
Parsani, Matteo
2013-04-10
Explicit Runge-Kutta schemes with large stable step sizes are developed for integration of high-order spectral difference spatial discretizations on quadrilateral grids. The new schemes permit an effective time step that is substantially larger than the maximum admissible time step of standard explicit Runge-Kutta schemes available in the literature. Furthermore, they have a small principal error norm and admit a low-storage implementation. The advantages of the new schemes are demonstrated through application to the Euler equations and the linearized Euler equations.
Parsani, Matteo; Ketcheson, David I.; Deconinck, W.
2013-01-01
Explicit Runge-Kutta schemes with large stable step sizes are developed for integration of high-order spectral difference spatial discretizations on quadrilateral grids. The new schemes permit an effective time step that is substantially larger than the maximum admissible time step of standard explicit Runge-Kutta schemes available in the literature. Furthermore, they have a small principal error norm and admit a low-storage implementation. The advantages of the new schemes are demonstrated through application to the Euler equations and the linearized Euler equations.
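The notion of a maximum admissible time step comes from the scheme's stability polynomial. For the classical RK4 method, used here only as a familiar stand-in for the paper's optimized schemes, the limit can be computed numerically:

```python
import numpy as np

def rk4_R(z):
    """Stability polynomial of classical RK4: R(z) = sum z^k/k!, k = 0..4."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def max_stable_step(lam, tol=1e-10):
    """Largest h with |R(h*lam)| <= 1 for one eigenvalue lam, by bisection
    (for these lam the admissible set is an interval starting at 0)."""
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if abs(rk4_R(mid * lam)) <= 1.0:
            lo = mid
        else:
            hi = mid
    return lo

h_real = max_stable_step(-1.0)   # real-axis limit, about 2.785
h_imag = max_stable_step(1.0j)   # imaginary-axis limit, exactly 2*sqrt(2)
```

Optimized schemes of the kind the paper develops reshape R(z) so that the stable region covers the spectrum of the spatial discretization with a larger h, at the cost of extra stages.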
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. A job's removal time is the duration required to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first is a constructive phase in which an initial feasible solution is provided, while the second is an improvement phase. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.
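The two-phase constructive-plus-improvement structure can be illustrated on the simpler identical-parallel-machines subproblem (ignoring the two centers, release dates, delivery and removal times): an LPT constructive phase followed by a swap-based improvement phase. The instance and both phases are illustrative, not the paper's procedures:

```python
def lpt_schedule(jobs, n_machines):
    """Constructive phase: Longest Processing Time first; each job goes to
    the currently least-loaded machine."""
    machines = [[] for _ in range(n_machines)]
    loads = [0] * n_machines
    for p in sorted(jobs, reverse=True):
        k = loads.index(min(loads))
        machines[k].append(p)
        loads[k] += p
    return machines, max(loads)

def improve(machines):
    """Improvement phase: swap a pair of jobs between the most loaded
    machine and another machine whenever that strictly lowers the makespan."""
    while True:
        loads = [sum(jobs_on) for jobs_on in machines]
        src = loads.index(max(loads))
        move = None
        for d in range(len(machines)):
            if d == src or move:
                continue
            for a in machines[src]:
                for b in machines[d]:
                    if b < a and max(loads[src] - a + b,
                                     loads[d] - b + a) < loads[src]:
                        move = (d, a, b)
                        break
                if move:
                    break
        if move is None:
            return machines, max(loads)
        d, a, b = move
        machines[src].remove(a)
        machines[d].remove(b)
        machines[src].append(b)
        machines[d].append(a)

jobs = [7, 7, 6, 6, 5, 4, 4, 2]
machines, cmax = lpt_schedule(jobs, n_machines=3)
machines, cmax2 = improve(machines)
```

On this instance LPT yields makespan 15 and the swap phase reaches 14, which matches the trivial lower bound ceil(41/3) = 14, mirroring how a constructive solution is refined in the second phase.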
A Time-Domain Filtering Scheme for the Modified Root-MUSIC Algorithm
Yamada, Hiroyoshi; Yamaguchi, Yoshio; Sengoku, Masakazu
1996-01-01
A new superresolution technique is proposed for high-resolution scattering analysis. For complicated multipath propagation environments, it is not enough to estimate only the delay-times of the signals; some other information is required to identify the signal path. The proposed method can estimate the frequency characteristic of each signal in addition to its delay-time. One method, called the modified (Root) MUSIC algorithm, is known as a technique that can treat both of t...
Timing of the steps in transformation of C3H 10T1/2 cells by X-irradiation
International Nuclear Information System (INIS)
Kennedy, A.R.; Cairns, J.; Little, J.B.
1984-01-01
Transformation of cells in culture by chemical carcinogens or X-rays seems to require at least two steps. The initial step is a frequent event; for example, after transient exposure to either methylcholanthrene or X-rays. It has been hypothesized that the second step behaves like a spontaneous mutation in having a constant but small probability of occurring each time an initiated cell divides. We show here that the clone size distribution of transformed cells in growing cultures initiated by X-rays, is, indeed, exactly what would be expected on that hypothesis. (author)
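The hypothesis, a constant small transformation probability at each division of an initiated cell, predicts a characteristic heavy-tailed clone-size distribution: early (rare) second-step events found large clones, late (common) ones found small clones. A toy synchronous-division simulation with invented parameters shows this:

```python
import random

random.seed(4)

def grow_clone_sizes(generations, p):
    """One initiated cell divides synchronously; at each division every
    daughter independently undergoes the second step with probability p.
    A transformed cell founds a clone that then doubles each remaining
    generation. Returns the final transformed clone sizes."""
    initiated, clones = 1, []
    for g in range(generations):
        daughters = 2 * initiated
        transformed = sum(1 for _ in range(daughters) if random.random() < p)
        initiated = daughters - transformed
        clones.extend([2 ** (generations - g - 1)] * transformed)
    return clones

# Pool clone sizes from many independent cultures.
sizes = []
for _ in range(1000):
    sizes.extend(grow_clone_sizes(generations=10, p=0.001))
```

Because the number of dividing initiated cells doubles each generation while the per-division probability stays fixed, clones of half the size are roughly twice as common, the kind of distribution the paper reports matching its X-ray-initiated cultures.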
De Basabe, Jonás D.
2010-04-01
We investigate the stability of some high-order finite element methods, namely the spectral element method and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation that have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.
Bartels, Robert E.
2002-01-01
A variable-order method for integrating initial value ordinary differential equations, based on the state transition matrix, has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. The accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
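For a linear time-invariant system x' = A x, the state-transition-matrix step is simply Phi = expm(A*dt), which is exact at any step size. A minimal sketch follows (the paper's method also covers time-variant and nonlinear systems, which this stand-in does not):

```python
import numpy as np

def expm_taylor(A, scaling=10, terms=20):
    """Matrix exponential via scaling-and-squaring with a Taylor series.

    exp(A) = (exp(A / 2^s))^(2^s); the scaled argument is small enough
    for a short Taylor series to be extremely accurate.
    """
    A = np.asarray(A, dtype=float) / 2**scaling
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        term = term @ A / k
        E = E + term
    for _ in range(scaling):
        E = E @ E
    return E

# Harmonic oscillator x'' = -x as a first-order system; the exact solution
# from x(0) = 1, x'(0) = 0 is x(t) = cos(t).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
dt = 0.5                       # a large step is fine: the stepping is exact
Phi = expm_taylor(A * dt)      # state transition matrix for one step
x = np.array([1.0, 0.0])
for _ in range(20):            # integrate to t = 10
    x = Phi @ x
```

Unlike a Runge-Kutta step, whose error grows with dt, this propagation carries no truncation error in dt for linear time-invariant systems, which is the property the abstract generalizes to polynomially time-varying systems.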
Research on time synchronization scheme of MES systems in manufacturing enterprise
Yuan, Yuan; Wu, Kun; Sui, Changhao; Gu, Jin
2018-04-01
With the popularity of information systems and automatic production in manufacturing enterprises, data interaction between business systems is more and more frequent, and the required accuracy of time synchronization is therefore getting higher. However, NTP network time synchronization methods lack the corresponding redundancy and monitoring mechanisms: when a failure occurs, remedial operations can only be performed after the event, which has a great effect on production data and system interaction. Based on this, the paper proposes an RHCS-based NTP server architecture that automatically detects NTP status and performs failover via scripts.
Time Pattern Locking Scheme for Secure Multimedia Contents in Human-Centric Device
Directory of Open Access Journals (Sweden)
Hyun-Woo Kim
2014-01-01
Among the various smart multimedia devices, multimedia smartphones have become the most widespread due to their convenient portability and real-time information sharing, as well as various other built-in features. Accordingly, since personal and business activities can be carried out using multimedia smartphones without restrictions based on time and location, people have more leisure time and convenience than ever. However, problems such as loss, theft, and information leakage because of convenient portability have also increased proportionally. As a result, most multimedia smartphones are equipped with various built-in locking features. Pattern lock, personal identification numbers, and passwords are the most used locking features on current smartphones, but these are vulnerable to shoulder surfing and smudge attacks, allowing malicious users to bypass the security feature easily. In particular, the smudge attack technique is a convenient way to unlock multimedia smartphones after they have been stolen. In this paper, we propose the secure locking screen using time pattern (SLSTP), focusing on improved security and convenience for users to fully support human-centric multimedia devices. The SLSTP can provide a simple interface to users and reduce the risk factors pertaining to security leakage to malicious third parties.
Time pattern locking scheme for secure multimedia contents in human-centric device.
Kim, Hyun-Woo; Kim, Jun-Ho; Park, Jong Hyuk; Jeong, Young-Sik
2014-01-01
A hybrid scheme for real-time prediction of bus trajectories
Fadaei, Masoud; Cats, O.; Bhaskar, Ashish
2016-01-01
The uncertainty associated with public transport services can be partially counteracted by developing real-time models to predict downstream service conditions. In this study, a hybrid approach for predicting bus trajectories by integrating multiple predictors is proposed. The prediction model
Simulation of Cavity Flow by the Lattice Boltzmann Method using Multiple-Relaxation-Time scheme
International Nuclear Information System (INIS)
Ryu, Seung Yeob; Kang, Ha Nok; Seo, Jae Kwang; Yun, Ju Hyeon; Zee, Sung Quun
2006-01-01
Recently, the lattice Boltzmann method (LBM) has gained much attention for its ability to simulate fluid flows, and for its potential advantages over conventional CFD methods. The key advantages of LBM are: (1) suitability for parallel computations, (2) absence of the need to solve the time-consuming Poisson equation for pressure, and (3) the ease with which multiphase flows, complex geometries and interfacial dynamics may be treated. The LBM using a relaxation technique was introduced by Higuera and Jimenez to overcome some drawbacks of lattice gas automata (LGA) such as large statistical noise, a limited range of physical parameters, non-Galilean invariance, and implementation difficulty in three-dimensional problems. The simplest LBM is the lattice Bhatnagar-Gross-Krook (LBGK) equation, which is based on a single-relaxation-time (SRT) approximation. Due to its extreme simplicity, the LBGK equation has become the most popular lattice Boltzmann model in spite of its well-known deficiencies, for example, in simulating high-Reynolds-number flows. The multiple-relaxation-time (MRT) LBM was originally developed by D'Humieres. Lallemand and Luo suggest that MRT models are much more stable than LBGK, because the different relaxation times can be individually tuned to achieve 'optimal' stability. A lid-driven cavity flow is selected as the test problem because it is geometrically simple yet has singular points in the flow. Results are compared with those using the SRT and MRT models in the LBGK method and with previous simulation data using the Navier-Stokes equations for the same flow conditions. In summary, the LBM using the MRT model introduces much less spatial oscillation near geometrical singular points, which is important for the successful simulation of higher-Reynolds-number flows.
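A minimal sketch of the single-relaxation-time LBGK collide-and-stream update on the D2Q9 lattice is shown below. The lattice velocities, weights, and equilibrium are the standard ones; the grid size and relaxation time tau are arbitrary illustrative values, and no cavity boundary conditions (moving lid, no-slip walls) are included.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (standard values)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Standard LBGK equilibrium distribution for each of the 9 directions."""
    cu = np.einsum('id,xyd->ixy', c, u)          # c_i . u at every node
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbgk_step(f, tau):
    """One collide-and-stream step of the single-relaxation-time LBGK model."""
    rho = f.sum(axis=0)
    u = np.einsum('id,ixy->xyd', c, f) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau      # BGK relaxation toward f_eq
    for i, (cx, cy) in enumerate(c):             # streaming (periodic domain)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# Sanity check: a uniform fluid at rest is a fixed point of the update.
nx = ny = 16
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
f = lbgk_step(f, tau=0.8)
```

An MRT variant replaces the scalar 1/tau with a relaxation matrix applied in moment space, which is what allows the individual tuning of relaxation rates mentioned above.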
The impact of time step definition on code convergence and robustness
Venkateswaran, S.; Weiss, J. M.; Merkle, C. L.
1992-01-01
We have implemented preconditioning for multi-species reacting flows in two independent codes, an implicit (ADI) code developed in-house and the RPLUS code (developed at LeRC). The RPLUS code was modified to work on a four-stage Runge-Kutta scheme. The performance of both the codes was tested, and it was shown that preconditioning can improve convergence by a factor of two to a hundred depending on the problem. Our efforts are currently focused on evaluating the effect of chemical sources and on assessing how preconditioning may be applied to improve convergence and robustness in the calculation of reacting flows.
Directory of Open Access Journals (Sweden)
Minh Y Nguyen
2012-12-01
Under a deregulated environment, wind power producers are subject to many regulation costs due to the intermittence of natural resources and the accuracy limits of existing prediction tools. This paper addresses the operation (charging/discharging) problem of battery energy storage installed in a wind generation system in order to improve the value of wind power in the real-time market. Depending on the prediction of market prices and the probabilistic information of wind generation, wind power producers can schedule the battery energy storage for the next day in order to maximize the profit. In addition, by taking into account the expenses of using batteries, the proposed charging/discharging scheme is able to avoid the detrimental operation of battery energy storage which can lead to a significant reduction of battery lifetime, i.e., uneconomical operation. The problem is formulated in a dynamic programming framework and solved by a dynamic programming backward algorithm. The proposed scheme is then applied to the study cases, and the results of simulation show its effectiveness.
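The backward dynamic-programming recursion described above can be sketched with a coarse state-of-charge grid. The price series, efficiency, and wear cost below are illustrative assumptions, not the paper's market or battery model.

```python
import numpy as np

def backward_dp_schedule(prices, soc_levels, p_max, eff, wear_cost):
    """Backward dynamic program over a discrete battery state of charge (SoC).

    prices     : electricity price per hour (revenue when discharging)
    soc_levels : discrete SoC grid (energy units)
    p_max      : max energy moved per hour
    eff        : efficiency applied to discharged energy
    wear_cost  : cost per unit of energy cycled (battery degradation)
    Returns action[t, s] in energy units (positive = discharge, negative = charge).
    """
    T, S = len(prices), len(soc_levels)
    V = np.zeros(S)                       # terminal value of each SoC state
    action = np.zeros((T, S))
    for t in range(T - 1, -1, -1):        # backward recursion over hours
        V_new = np.full(S, -np.inf)
        for s, soc in enumerate(soc_levels):
            for s2, soc2 in enumerate(soc_levels):
                e = soc - soc2            # energy released this hour
                if abs(e) > p_max:
                    continue
                revenue = prices[t] * (e * eff if e > 0 else e)
                reward = revenue - wear_cost * abs(e)
                if reward + V[s2] > V_new[s]:
                    V_new[s] = reward + V[s2]
                    action[t, s] = e
        V = V_new
    return action

# Toy case: cheap power at hour 1, expensive at hour 2 -> charge low, sell high.
prices = [20.0, 10.0, 50.0]
soc = [0.0, 1.0, 2.0]
act = backward_dp_schedule(prices, soc, p_max=1.0, eff=0.9, wear_cost=1.0)
```

Starting empty, the optimal policy idles in hour 0, charges during the cheap hour 1, and discharges during the expensive hour 2, because the wear cost makes extra cycling unprofitable.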
A Fully Polynomial-Time Approximation Scheme for Speed Scaling with Sleep State
Antoniadis, Antonios; Huang, Chien-Chung; Ott, Sebastian
2014-01-01
We study classical deadline-based preemptive scheduling of tasks in a computing environment equipped with both dynamic speed scaling and sleep state capabilities: Each task is specified by a release time, a deadline and a processing volume, and has to be scheduled on a single, speed-scalable processor that is supplied with a sleep state. In the sleep state, the processor consumes no energy, but a constant wake-up cost is required to transition back to the active state. In contrast to speed sc...
A hierarchical graph neuron scheme for real-time pattern recognition.
Nasution, B B; Khan, A I
2008-02-01
The hierarchical graph neuron (HGN) implements a single-cycle memorization and recall operation through a novel algorithmic design. The HGN is an improvement on the previously published original graph neuron (GN) algorithm. The improved approach recognizes incomplete/noisy patterns and also resolves the crosstalk problem, identified in previous publications, within closely matched patterns. To accomplish this, the HGN links multiple GN networks for filtering noise and crosstalk out of pattern data inputs. Intrinsically, the HGN is a lightweight in-network processing algorithm which does not require expensive floating-point computations; hence, it is very suitable for real-time applications and tiny devices such as wireless sensor networks. This paper shows that the HGN's pattern matching capability and its small response time remain insensitive to increases in the number of stored patterns. Moreover, the HGN does not require definition of rules or setting of thresholds by the operator to achieve the desired results, nor does it require heuristics entailing iterative operations for memorization and recall of patterns.
A Novel Time-Based Readout Scheme for a Combined PET-CT Detector Using APDs
Powolny, F; Hillemanns, H; Jarron, P; Lecoq, P; Meyer, T C; Moraes, D
2008-01-01
This paper summarizes CERN R&D work done in the framework of the European Commission's FP6 BioCare Project. The objective was to develop a novel "time-based" signal processing technique to read out LSO-APD photodetectors for medical imaging. An important aspect was to employ the technique in a combined scenario for both computer tomography (CT) and positron emission tomography (PET) with effectively no tradeoffs in efficiency and resolution compared to traditional single mode machines. This made the use of low noise and yet very high-speed monolithic front-end electronics essential so as to assure the required timing characteristics together with a high signal-to-noise ratio. Using APDs for photon detection, two chips, traditionally employed for particle physics, could be identified to meet the above criteria. Although both were not optimized for their intended new medical application, excellent performance in conjunction with LSO-APD sensors could be derived. Whereas a measured energy resolution of 16% (...
An overview on real-time control schemes for wheeled mobile robot
Radzak, M. S. A.; Ali, M. A. H.; Sha’amri, S.; Azwan, A. R.
2018-04-01
The purpose of this paper is to review real-time motion control algorithms for wheeled mobile robots (WMR) navigating in environments such as roads. A good controller is needed to avoid collisions with any disturbance and to maintain the tracking error at zero. The controllers are used with aiding sensors that measure the WMR's velocities, posture, and interference in order to estimate the required torque to be applied to the wheels of the mobile robot. Four main categories of wheeled mobile robot control systems have been found in the literature, namely: kinematic-based controllers, dynamic-based controllers, artificial-intelligence-based control systems, and active force control. MATLAB/Simulink is the main software used to simulate and implement the control systems. The real-time toolbox in MATLAB/Simulink is used to receive data from sensors and send data to actuators in the presence of disturbances; other software such as C, C++ and Visual Basic is rarely used.
Space-Time Trellis Coded 8PSK Schemes for Rapid Rayleigh Fading Channels
Directory of Open Access Journals (Sweden)
Salam A. Zummo
2002-05-01
This paper presents the design of 8PSK space-time (ST) trellis codes suitable for rapid fading channels. The proposed codes utilize the design criteria of ST codes over rapid fading channels. Two different approaches have been used. The first approach maximizes the symbol-wise Hamming distance (HD) between signals leaving or entering the same encoder state. In the second approach, set partitioning based on maximizing the sum of squared Euclidean distances (SSED) between the ST signals is performed; then, the branch-wise HD is maximized. The proposed codes were simulated over independent and correlated Rayleigh fading channels. Coding gains of up to 4 dB have been observed over other ST trellis codes of the same complexity.
BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics
Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.
2010-12-01
of both climate and ecosystems must be done at coarse grid resolutions; smaller domains require higher resolution for the simulation of natural resource processes at the landscape scale and that of on-the-ground management practices. Via a combined multi-agency and private conservation effort we have implemented a Nested Scale Experiment (NeScE) that ranges from 1/2 degree resolution (global, ca. 50 km) to ca. 8km (North America) and 800 m (conterminous U.S.). Our first DGVM, MC1, has been implemented at all 3 scales. We are just beginning to implement BIOMAP into NeScE, with its unique features, and daily time step, as a counterpoint to MC1. We believe it will be more accurate at all resolutions providing better simulations of vegetation distribution, carbon balance, runoff, fire regimes and drought impacts.
Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit
2017-07-01
In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time delay. An ideal relay and the unknown system are connected through a negative feedback loop to produce a sustained oscillatory output around a non-zero setpoint. Thereafter, the obtained limit cycle information is substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped and critically damped second-order plus dead time, and stable first-order plus dead time transfer function models. Typical examples from the literature are included for validation of the proposed identification scheme through computer simulations. Subsequently, comparisons between the estimated model and the true system are drawn through the integral absolute error criterion and frequency response plots. Finally, the output responses obtained through simulations are verified experimentally on a real-time liquid level control system using a Yokogawa Distributed Control System CENTUM CS3000 setup.
An HMM-Like Dynamic Time Warping Scheme for Automatic Speech Recognition
Directory of Open Access Journals (Sweden)
Ing-Jr Ding
2014-01-01
In the past, the kernel of automatic speech recognition (ASR) was dynamic time warping (DTW), which is feature-based template matching and belongs to the category of dynamic programming (DP) techniques. Although DTW is an early ASR technique, it has remained popular in many applications, and it now plays an important role in the well-known Kinect-based gesture recognition application. This paper proposes an intelligent speech recognition system using an improved DTW approach for multimedia and home automation services. The improved DTW presented in this work, called HMM-like DTW, is essentially a hidden Markov model (HMM)-like method where the concept of the typical HMM statistical model is brought into the design of DTW. The developed HMM-like DTW method, transforming feature-based DTW recognition into model-based DTW recognition, is able to behave like the HMM recognition technique and therefore, with its HMM-like recognition model, has the capability to further perform model adaptation (also known as speaker adaptation). A series of experimental results in home-automation-based multimedia access service environments demonstrated the superiority and effectiveness of the developed smart speech recognition system using HMM-like DTW.
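The feature-based template matching at the core of DTW can be sketched in a few lines. This is the classical dynamic-programming recurrence on scalar sequences, not the HMM-like extension proposed in the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Classical DTW: minimal cumulative distance between two sequences under
    a monotone, continuity-preserving alignment (the template-matching core
    of early ASR systems)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three allowed predecessor moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Identical sequences align at zero cost, and so do time-stretched copies,
# which is exactly why DTW tolerates variations in speaking rate.
d_same = dtw_distance([1, 2, 3], [1, 2, 3])
d_stretch = dtw_distance([1, 2, 3], [1, 1, 2, 2, 3, 3])
```

In a real recognizer the scalars would be feature vectors (e.g. MFCC frames) and `abs` a vector distance; the recurrence is unchanged.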
Directory of Open Access Journals (Sweden)
Guo-Jheng Yang
2013-08-01
The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of the creators, it can be implanted in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with the maximum entropy 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold's cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on controlling Arnold's cat map and the initial value of the chaos system to undergo small changes and generate different chaos sequences. Thus, the current time is used to not only make encryption more stringent but also to enhance the security of the digital media.
Zhang, Bin; Deng, Congying; Zhang, Yi
2018-03-01
Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis that enables forewarning of failure and estimation of residual life has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, the degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extraction of bearing-health-relevant information from condition monitoring sensor data. Effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.
Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun
2017-09-19
In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.
Directory of Open Access Journals (Sweden)
Tandale Babasaheb V
2008-12-01
Background: Chandipura virus (CHPV), a member of family Rhabdoviridae, was attributed to an explosive outbreak of acute encephalitis in children in Andhra Pradesh, India in 2003 and a small outbreak among tribal children from Gujarat, Western India in 2004. The case-fatality rate ranged from 55-75%. Considering the rapid progression of the disease and high mortality, a highly sensitive method for quantifying CHPV RNA by real-time one step reverse transcriptase PCR (real-time one step RT-PCR) using TaqMan technology was developed for rapid diagnosis. Methods: Primers and probe for the P gene were designed and used to standardize the real-time one step RT-PCR assay for CHPV RNA quantitation. Standard RNA was prepared by PCR amplification, TA cloning and run-off transcription. The optimized real-time one step RT-PCR assay was compared with the diagnostic nested RT-PCR and different virus isolation systems [in vivo (mice), in ovo (eggs), in vitro (Vero E6, PS, RD and sand fly cell lines)] for the detection of CHPV. Sensitivity and specificity of the real-time one step RT-PCR assay were evaluated against the diagnostic nested RT-PCR, which is considered the gold standard. Results: Real-time one step RT-PCR was optimized using in vitro transcribed (IVT) RNA. The standard curve showed a linear relationship over the wide range of 10^2-10^10 copies (r^2 = 0.99) with a maximum coefficient of variation (CV = 5.91%) for IVT RNA. The newly developed real-time RT-PCR was on par with nested RT-PCR in sensitivity and superior to cell lines and other living systems (embryonated eggs and infant mice) used for the isolation of the virus. The detection limit of both real-time one step RT-PCR and nested RT-PCR was found to be 1.2 × 10^0 PFU/ml. RD cells, sand fly cells, infant mice, and embryonated eggs showed almost equal sensitivity (1.2 × 10^2 PFU/ml). Vero and PS cell lines (1.2 × 10^3 PFU/ml) were least sensitive to CHPV infection. Specificity of the assay was found to be 100% when RNA from other viruses or healthy
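The linearity of a real-time PCR standard curve over a serial dilution series can be checked with an ordinary least-squares fit of Ct against log10 input copies. The Ct values below are idealized, hypothetical numbers for a perfectly efficient reaction, not the measured CHPV data; the intercept of 40 is likewise an arbitrary illustrative choice.

```python
import numpy as np

# Hypothetical dilution series: 10^2 .. 10^10 copies with idealized Ct values.
# A 100%-efficient reaction loses ~3.32 cycles per 10-fold increase in template.
log_copies = np.arange(2, 11, dtype=float)   # log10 of input RNA copies
ct = 40.0 - 3.32 * log_copies                # idealized threshold cycles

# Fit the standard curve and quantify its linearity (r^2).
slope, intercept = np.polyfit(log_copies, ct, 1)
pred = slope * log_copies + intercept
ss_res = np.sum((ct - pred) ** 2)
ss_tot = np.sum((ct - np.mean(ct)) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Amplification efficiency from the slope; 1.0 corresponds to 100%.
efficiency = 10.0 ** (-1.0 / slope) - 1.0
```

With real data the r^2 would fall below 1 (the paper reports 0.99), and the fitted line is then inverted to quantify unknown samples from their Ct values.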
Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.
Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna
2017-11-01
A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation in predicted P loss between the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Pijnappels, M.A.G.M.; Delbaere, K.; Sturnieks, D.L.; Lord, S.R.
2010-01-01
Background: choice stepping reaction time (CSRT) is a functional measure that has been shown to significantly discriminate older fallers from non-fallers. Objective: to investigate how physiological and cognitive factors mediate the association between CSRT performance and multiple falls by use of
Bi, Jianjun; Song, Rengang; Yang, Huilan; Li, Bingling; Fan, Jianyong; Liu, Zhongrong; Long, Chaoqin
2011-01-01
Identification of immunodominant epitopes is the first step in the rational design of peptide vaccines aimed at T-cell immunity. To date, however, it remains a great challenge to accurately predict potent epitope peptides from a pool of large-scale candidates in an efficient manner. In this study, a method that we named StepRank has been developed for the reliable and rapid prediction of binding capabilities/affinities between proteins and genome-wide peptides. In this procedure, instead of the single strategy used in most traditional epitope identification algorithms, four steps with different purposes and thus different computational demands are employed in turn to screen the large-scale peptide candidates that are normally generated from, for example, a pathogenic genome. Steps 1 and 2 aim at qualitative exclusion of typical nonbinders by using an empirical rule and a linear statistical approach, while steps 3 and 4 focus on quantitative examination and prediction of the interaction energy profile and binding affinity of a peptide to the target protein via quantitative structure-activity relationship (QSAR) and structure-based free energy analysis. We exemplify this method through its application to binding predictions of the peptide segments derived from the 76 known open-reading frames (ORFs) of the herpes simplex virus type 1 (HSV-1) genome with or without affinity to the human major histocompatibility complex class I (MHC I) molecule HLA-A*0201, and find that the predictive results are well compatible with the classical anchor residue theory and perfectly match the extended motif pattern of MHC I-binding peptides. The putative epitopes are further confirmed by comparisons with 11 experimentally measured HLA-A*0201-restricted peptides from the HSV-1 glycoproteins D and K. We expect that this well-designed scheme can be applied in the computational screening of other viral genomes as well.
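The four-step funnel described above, cheap qualitative filters first, expensive quantitative scoring last, can be sketched generically. The filters and scorer below are illustrative placeholders (a length rule and a crude hydrophobic-anchor heuristic), not the published StepRank rules, QSAR model, or free-energy analysis.

```python
def staged_screen(peptides, filters, scorer, top_k):
    """Multi-step screening sketch: apply cheap exclusion filters in turn to
    shrink the candidate pool, then rank the survivors with a more expensive
    scorer and keep only the top_k."""
    survivors = peptides
    for f in filters:                      # steps 1..n-1: qualitative exclusion
        survivors = [p for p in survivors if f(p)]
    # final step: quantitative scoring of the (much smaller) surviving pool
    return sorted(survivors, key=scorer, reverse=True)[:top_k]

# Toy 9-mer candidates; the rules below are hypothetical stand-ins.
peps = ["GILGFVFTL", "AAAAAAAAA", "SLYNTVATL", "GGGGGGGGG"]
hydrophobic = set("LIVMF")
filters = [
    lambda p: len(p) == 9,                 # step 1: length rule
    lambda p: p[-1] in hydrophobic,        # step 2: C-terminal anchor heuristic
]
scorer = lambda p: sum(r in hydrophobic for r in p)  # stand-in for QSAR scoring
top = staged_screen(peps, filters, scorer, top_k=2)
```

The design point is purely computational: each stage only pays its cost on candidates that survived the cheaper stages before it, which is what makes genome-wide screening tractable.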
Iteratively improving Hi-C experiments one step at a time.
Golloshi, Rosela; Sanders, Jacob T; McCord, Rachel Patton
2018-04-30
The 3D organization of eukaryotic chromosomes affects key processes such as gene expression, DNA replication, cell division, and response to DNA damage. The genome-wide chromosome conformation capture (Hi-C) approach can characterize the landscape of 3D genome organization by measuring interaction frequencies between all genomic regions. Hi-C protocol improvements and rapid advances in DNA sequencing power have made Hi-C useful to study diverse biological systems, not only to elucidate the role of 3D genome structure in proper cellular function, but also to characterize genomic rearrangements, assemble new genomes, and consider chromatin interactions as potential biomarkers for diseases. Yet, the Hi-C protocol is still complex and subject to variations at numerous steps that can affect the resulting data. Thus, there is still a need for better understanding and control of factors that contribute to Hi-C experiment success and data quality. Here, we evaluate recently proposed Hi-C protocol modifications as well as often overlooked variables in sample preparation and examine their effects on Hi-C data quality. We examine artifacts that can occur during Hi-C library preparation, including microhomology-based artificial template copying and chimera formation that can add noise to the downstream data. Exploring the mechanisms underlying Hi-C artifacts pinpoints steps that should be further optimized in the future. To improve the utility of Hi-C in characterizing the 3D genome of specialized populations of cells or small samples of primary tissue, we identify steps prone to DNA loss which should be considered to adapt Hi-C to lower cell numbers. Copyright © 2018 Elsevier Inc. All rights reserved.
A split-beam probe-pump-probe scheme for femtosecond time resolved protein X-ray crystallography
Directory of Open Access Journals (Sweden)
Jasper J. van Thor
2015-01-01
In order to exploit the femtosecond pulse duration of X-ray Free-Electron Lasers (XFELs) operating in the hard X-ray regime for ultrafast time-resolved protein crystallography experiments, critical parameters that determine the crystallographic signal-to-noise (I/σI) must be addressed. For single-crystal studies under low absorbed dose conditions, it has been shown that the intrinsic pulse intensity stability, as well as mode structure and jitter of this structure, significantly affect the crystallographic signal-to-noise. Here, geometrical parameters are theoretically explored for a three-beam scheme: X-ray probe, optical pump, X-ray probe (or "probe-pump-probe"), which will allow experimental determination of the photo-induced structure factor amplitude differences, ΔF, in a ratiometric manner, thereby internally referencing the intensity noise of the XFEL source. In addition to a non-collinear split-beam geometry which separates un-pumped and pumped diffraction patterns on an area detector, applying an additional convergence angle to both beams by focusing leads to integration over mosaic blocks in the case of well-ordered stationary protein crystals. Ray-tracing X-ray diffraction simulations are performed for an example using photoactive yellow protein crystals in order to explore the geometrical design parameters that would be needed. The specifications for an X-ray split-and-delay instrument that implements both an offset angle and focused beams are discussed, for implementation of a probe-pump-probe scheme at the European XFEL. We discuss possible extension of single-crystal studies to serial femtosecond crystallography, particularly in view of the expected X-ray damage and ablation due to the first probe pulse.
Seven steps to raise world security. Op-Ed, published in the Financial Times
International Nuclear Information System (INIS)
ElBaradei, M.
2005-01-01
In recent years, three phenomena have radically altered the security landscape: the emergence of a nuclear black market, the determined efforts by more countries to acquire technology to produce the fissile material usable in nuclear weapons, and the clear desire of terrorists to acquire weapons of mass destruction. The IAEA has been trying to solve these new problems with existing tools. But for every step forward, we have exposed vulnerabilities in the system. The system itself, the regime that implements the Non-Proliferation Treaty (NPT), needs reinforcement. Some of the necessary remedies can be taken in New York at the meeting to be held in May, but only if governments are ready to act. With seven straightforward steps, and without amending the treaty, this conference could reach a milestone in strengthening world security. The first step: put a five-year hold on additional facilities for uranium enrichment and plutonium separation. Second, speed up existing efforts, led by the US global threat reduction initiative and others, to modify the research reactors worldwide operating with highly enriched uranium, particularly those with metal fuel that could be readily employed as bomb material. Third, raise the bar for inspection standards by establishing the 'additional protocol' as the norm for verifying compliance with the NPT. Fourth, call on the United Nations Security Council to act swiftly and decisively in the case of any country that withdraws from the NPT, in terms of the threat the withdrawal poses to international peace and security. Fifth, urge states to act on the Security Council's recent resolution 1540, to pursue and prosecute any illicit trading in nuclear material and technology. Sixth, call on the five nuclear weapon states party to the NPT to accelerate implementation of their 'unequivocal commitment' to nuclear disarmament, building on efforts such as the 2002 Moscow treaty between Russia and the US. Last, acknowledge the volatility of
Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media
Park, Ju-Won; Kim, JongWon
2004-10-01
As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
Three-step management of pneumothorax: time for a re-think on initial management†
Kaneda, Hiroyuki; Nakano, Takahito; Taniguchi, Yohei; Saito, Tomohito; Konobu, Toshifumi; Saito, Yukihito
2013-01-01
Pneumothorax is a common disease worldwide, but surprisingly, its initial management remains controversial. There are some published guidelines for the management of spontaneous pneumothorax. However, they differ in some respects, particularly in initial management. In published trials, the objective of treatment has not been clarified and it is not possible to compare the treatment strategies between different trials because of inappropriate evaluations of the air leak. Therefore, there is a need to outline the optimal management strategy for pneumothorax. In this report, we systematically review published randomized controlled trials of the different treatments of primary spontaneous pneumothorax, point out controversial issues and finally propose a three-step strategy for the management of pneumothorax. There are three important characteristics of pneumothorax: potentially lethal respiratory dysfunction; air leak, which is the obvious cause of the disease; frequent recurrence. These three characteristics correspond to the three steps. The central idea of the strategy is that the lung should not be expanded rapidly, unless absolutely necessary. The primary objective of both simple aspiration and chest drainage should be the recovery of acute respiratory dysfunction or the avoidance of respiratory dysfunction and subsequent complications. We believe that this management strategy is simple and clinically relevant and not dependent on the classification of pneumothorax. PMID:23117233
Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.
2018-03-01
In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU. In contrast, compared with the DC procedure, the CPU time for the DIAMOND and QUICK schemes using the NWF method is shown to be between 3.8 and 23.1% and between 12.6 and 56.1% shorter, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% longer, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.
Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data
Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.
2012-12-01
In the summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering a nearly 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. Stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D locations of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3-D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time-dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.
International Nuclear Information System (INIS)
Kim, Tae Gyoum; Jang, Jin-Tak; Ryu, Hyukhyun; Lee, Won-Jae
2013-01-01
Highlights: •We grew vertical ZnO nanorods on an ITO substrate using a two-step continuous potential process. •The nucleation for ZnO nanorod growth was changed by the first-step potential and duration. •The vertical ZnO nanorods were well grown when the first-step potential was −1.2 V for 10 s. -- Abstract: In this study, we analyzed the growth of ZnO nanorods on an ITO (indium doped tin oxide) substrate by electrochemical deposition using a two-step, continuous potential process. We examined the effect of changing the first-step potential as well as the first-step duration on the morphological, structural and optical properties of ZnO nanorods, measured using field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and photoluminescence (PL), respectively. As a result, vertical ZnO nanorods were grown on the ITO substrate without the need for a template when the first-step potential was set to −1.2 V for a duration of 10 s and the second-step potential was set to −0.7 V for a duration of 1190 s. The ZnO nanorods on this sample showed the highest XRD (0 0 2)/(1 0 0) peak intensity ratio and the highest PL near-band-edge emission to deep-level emission peak intensity ratio (NBE/DLE). In this study, the nucleation for vertical ZnO nanorod growth on an ITO substrate was found to be affected by changes in the first-step potential and first-step duration.
Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang
2015-11-06
As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.
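The diagnostic decomposition the abstract describes can be sketched in a few lines. The identities used are the standard ones from this modelling framework: the actual residence time is the baseline residence time divided by the environmental scalar (τE = τ'E / ξ, with ξ = ξt · ξw), and steady-state carbon storage is input flux times residence time. The biome values and NPP below are hypothetical illustrations, not the paper's estimates.

```python
def residence_time(baseline_tau_years, xi_t, xi_w):
    """Actual residence time tau_E = tau'_E / (xi_t * xi_w).

    Environmental limitation (xi < 1) slows decomposition and therefore
    lengthens the residence time relative to the baseline."""
    return baseline_tau_years / (xi_t * xi_w)

def steady_state_storage(npp, tau_e):
    # At steady state, carbon storage = carbon input flux * residence time.
    return npp * tau_e

# Hypothetical forest-biome values: baseline 20 yr, xi_t = 0.8, xi_w = 0.5.
tau = residence_time(20.0, 0.8, 0.5)
# Hypothetical NPP of 0.6 kg C m^-2 yr^-1.
x = steady_state_storage(0.6, tau)
```

Note how a moisture scalar of 0.5 alone already more than doubles the residence time, consistent with the paper's point that ξw variation dominates the spatiotemporal pattern of τE.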
Banks, H T; Birch, Malcolm J; Brewin, Mark P; Greenwald, Stephen E; Hu, Shuhua; Kenz, Zackary R; Kruse, Carola; Maischak, Matthias; Shaw, Simon; Whiteman, John R
2014-04-13
We revisit a method originally introduced by Werder et al. (in Comput. Methods Appl. Mech. Engrg., 190:6685-6708, 2001) for temporally discontinuous Galerkin FEMs applied to a parabolic partial differential equation. In that approach, block systems arise because of the coupling of the spatial systems through inner products of the temporal basis functions. If the spatial finite element space is of dimension D and polynomials of degree r are used in time, the block system has dimension ( r + 1) D and is usually regarded as being too large when r > 1. Werder et al. found that the space-time coupling matrices are diagonalizable over [Formula: see text] for r ⩽ 100, and this means that the time-coupled computations within a time step can actually be decoupled. By using either continuous Galerkin or spectral element methods in space, we apply this DG-in-time methodology, for the first time, to second-order wave equations including elastodynamics with and without Kelvin-Voigt and Maxwell-Zener viscoelasticity. An example set of numerical results is given to demonstrate the favourable effect on error and computational work of the moderately high-order (up to degree 7) temporal and spatio-temporal approximations, and we also touch on an application of this method to an ambitious problem related to the diagnosis of coronary artery disease. Copyright © 2014 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons Ltd.
Using Variable Dwell Time to Accelerate Gaze-based Web Browsing with Two-step Selection
Chen, Zhaokang; Shi, Bertram E.
2017-01-01
In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose an algorithm for adjusting the dwell times of different objects based on the inferred probability that the user intends to select them. In particular, we introduce a probabilistic model of natural gaze behavior while sur...
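The core idea can be sketched with a simple linear rule: objects the user is inferred to be more likely to select get shorter dwell times, unlikely objects keep a long (safe) dwell time. This is only a stand-in for the paper's probabilistic model; the object names and time constants are hypothetical.

```python
def dwell_times(probs, t_min=0.15, t_max=1.0):
    """Map inferred selection probabilities to per-object dwell times.

    probs: dict of object id -> inferred selection probability in [0, 1].
    Interpolates linearly between t_max (unlikely targets, guards against
    the "Midas Touch") and t_min (near-certain targets, fast selection)."""
    return {obj: t_max - (t_max - t_min) * p for obj, p in probs.items()}

# Hypothetical gaze-probability estimates for three on-screen objects:
times = dwell_times({"link_home": 0.7, "link_about": 0.2, "background": 0.05})
```

The likely target (`link_home`) becomes selectable after roughly 0.4 s, while the background still requires nearly the full second of fixation.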
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. This memory involves the dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered as i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, containing some delay, characteristic of any double-auction market. Our model turns out to be exactly analytically solvable. It therefore enables a direct comparison of its predictions with their empirical counterparts, for instance with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
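The bid-ask bounce memory can be illustrated with the simplest special case of such a model: a CTRW whose jump signs have one-step memory (each jump tends to reverse the previous one), with i.i.d. exponential waiting times. The reversal-strength parameter `eps` is an assumption for the sketch, not a value from the paper.

```python
import random

def simulate_ctrw(n_jumps, eps=0.9, rate=1.0, seed=0):
    """One-step-memory CTRW mimicking bid-ask bounce: with probability
    (1 + eps)/2 the next unit jump reverses the previous one. Waiting
    times are i.i.d. exponential, independent of the jumps."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    jumps, path = [], [(0.0, 0.0)]
    prev = rng.choice([-1.0, 1.0])
    for _ in range(n_jumps):
        t += rng.expovariate(rate)
        step = -prev if rng.random() < (1 + eps) / 2 else prev
        x += step
        jumps.append(step)
        path.append((t, x))
        prev = step
    return jumps, path

jumps, path = simulate_ctrw(20000)
# For this one-step model, E[s_n * s_(n-1)] = -eps, so the lag-1 jump
# autocorrelation estimate should be close to -0.9.
acf1 = sum(a * b for a, b in zip(jumps, jumps[1:])) / (len(jumps) - 1)
```

The strongly negative lag-1 correlation is the discrete analogue of the negative short-lag velocity autocorrelation observed in high-frequency price data.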
Energy Technology Data Exchange (ETDEWEB)
Ohsuga, Ken; Takahashi, Hiroyuki R. [National Astronomical Observatory of Japan, Osawa, Mitaka, Tokyo 181-8588 (Japan)
2016-02-20
We develop a numerical scheme for solving the equations of fully special relativistic radiation magnetohydrodynamics (MHD), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by angular quadrature of the intensity. In the present method, conservation of total mass, momentum, and energy of the radiation magnetofluids is guaranteed. We treat not only isotropic scattering but also Thomson scattering. The numerical method for the MHD part is the same as that of our previous work. The advection terms are explicitly solved, and the source terms, which describe the gas-radiation interaction, are implicitly integrated. Our code is suitable for massively parallel computing. We show that our code produces reasonable results in numerical tests for propagating radiation and radiation hydrodynamics. In particular, the correct solution is obtained even in the optically very thin or moderately thin regimes, and the special relativistic effects are nicely reproduced.
Dill, Harald G.; Weber, Berthold
2013-12-01
The gemstones, covering the spectrum from jeweler's to showcase quality, are presented in a tripartite subdivision, by country, geology and geomorphology, realized in 99 digital maps with more than 2600 mineralized sites. The various maps were designed based on the "Chessboard classification scheme of mineral deposits" proposed by Dill (2010a, 2010b) to reveal the interrelations between gemstone deposits and mineral deposits of other commodities and to direct our thoughts to potential new target areas for exploration. A total of 33 categories were used for these digital maps: chromium, nickel, titanium, iron, manganese, copper, tin-tungsten, beryllium, lithium, zinc, calcium, boron, fluorine, strontium, phosphorus, zirconium, silica, feldspar, feldspathoids, zeolite, amphibole (tiger's eye), olivine, pyroxenoid, garnet, epidote, sillimanite-andalusite, corundum-spinel-diaspore, diamond, vermiculite-pagodite, prehnite, sepiolite, jet, and amber. Besides the political base map (gems by country), the mineral deposits are drawn on a geological map, illustrating the main lithologies, stratigraphic units and tectonic structure, to unravel the evolution of primary gemstone deposits in time and space. The geomorphological map shows the control of climate and subaerial and submarine hydrography on the deposition of secondary gemstone deposits. The digital maps are designed so as to be plotted as paper versions at different scales, and to be upgraded for interactive use and linked to gemological databases.
Bunce, D; Haynes, BI; Lord, SR; Gschwind, YJ; Kochan, NA; Reppermund, S; Brodaty, H; Sachdev, PS; Delbaere, K
2017-01-01
Background: Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI)...
Energy Technology Data Exchange (ETDEWEB)
Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)
2012-09-15
ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins according to various tube-target distances and exposure times. Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) performed at 70 kVp and 7 mA, along with a 12-step aluminum stepwedge (1 mm incremental steps), using a PSP digital sensor. Thereafter, the radiopacities measured with Digora for Windows software 2.5 were converted to absorbencies (i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray scale). Furthermore, the linear regression model of aluminum thickness and absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly with various setups (p<0.001) and between the materials (p<0.001). The best predicted model was obtained for the 30 cm, 0.2 seconds setup (R2=0.999). Data from the reduced modified stepwedge were remarkably comparable with those from the 12-step stepwedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.
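The gray-level-to-absorbency conversion and the stepwedge regression described above can be sketched as follows. The stepwedge gray levels are synthesised here from an assumed linear absorbency model (the study measured them radiographically), so the numbers are illustrative only.

```python
import math

def absorbency(gray):
    # A = -log10(1 - G/255): convert an 8-bit gray level to absorbency.
    return -math.log10(1.0 - gray / 255.0)

# Synthetic 12-step (1 mm increments) Al stepwedge: simulate gray levels
# from an assumed linear model A = 0.07 * thickness + 0.05.
thickness = [float(t) for t in range(1, 13)]
gray_wedge = [255.0 * (1.0 - 10.0 ** -(0.07 * t + 0.05)) for t in thickness]

# Least-squares fit A = a * thickness + b from the stepwedge readings.
A_meas = [absorbency(g) for g in gray_wedge]
n = len(thickness)
tbar = sum(thickness) / n
Abar = sum(A_meas) / n
a = (sum((t - tbar) * (A - Abar) for t, A in zip(thickness, A_meas))
     / sum((t - tbar) ** 2 for t in thickness))
b = Abar - a * tbar

def equivalent_al_mm(gray_material):
    """Radiopacity of a material expressed as equivalent Al thickness (mm)."""
    return (absorbency(gray_material) - b) / a

# A composite whose gray level corresponds to A = 0.40 should map to
# (0.40 - 0.05) / 0.07 = 5 mm of aluminum.
gray_composite = 255.0 * (1.0 - 10.0 ** -0.40)
```

The modified 3-step stepwedge in the study amounts to fitting the same regression from only three (thickness, absorbency) pairs.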
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and the supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena, and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally, process descriptions are not modified; rather, effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling, using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
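The benefit of a within-day intensity distribution over a lumped daily intensity can be illustrated with a toy infiltration-excess calculation. An exponential intensity distribution is assumed here purely for the sketch (the method characterises the actual sub-daily variation with an empirical cdf), and all parameter values are hypothetical.

```python
import math

def daily_runoff_exponential(daily_rain_mm, rain_hours, capacity_mm_h):
    """Daily infiltration-excess runoff, assuming within-day rainfall
    intensity is exponentially distributed with the observed mean m.

    For an exponential intensity, E[max(i - c, 0)] = m * exp(-c/m),
    so daily runoff = rain_hours * m * exp(-c/m)."""
    m = daily_rain_mm / rain_hours            # mean intensity (mm/h)
    return rain_hours * m * math.exp(-capacity_mm_h / m)

# 24 mm over 6 h (mean 4 mm/h) against a 6 mm/h infiltration capacity:
runoff = daily_runoff_exponential(24.0, 6.0, 6.0)
# A fully lumped (constant-intensity) daily model predicts no runoff at all,
# because the mean intensity never exceeds the capacity.
lumped = max(24.0 / 6.0 - 6.0, 0.0) * 6.0
```

The distribution-based estimate captures the runoff generated by short high-intensity bursts that the daily average completely hides, which is exactly the nonlinearity the temporal-scaling method is designed to preserve.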
Wang, Zhan-zhi; Xiong, Ying
2013-04-01
A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with a single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and the sliding mesh method, considering the effect of computational time step size and turbulence model. The validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Compared with the experimental data, it is shown that RANS with the sliding mesh method and the SST k-ω turbulence model has good precision in the open water performance prediction of contra-rotating propellers, and a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.
Pressure correction schemes for compressible flows
International Nuclear Information System (INIS)
Kheriji, W.
2011-01-01
This thesis is concerned with the development of semi-implicit fractional step schemes for the compressible Navier-Stokes equations; these schemes belong to the class of pressure correction methods. The chosen spatial discretization is staggered: non-conforming mixed finite elements (Crouzeix-Raviart or Rannacher-Turek) or the classic MAC scheme. An upwind finite volume discretization of the mass balance guarantees the positivity of the density. The positivity of the internal energy is obtained by discretizing the internal energy balance by an upwind finite volume scheme and by coupling the discrete internal energy balance with the pressure correction step. A special finite volume discretization on dual cells is performed for the convection term in the momentum balance equation, and a renormalisation step for the pressure is added to the algorithm; this ensures the control in time of the integral of the total energy over the domain. All these a priori estimates imply the existence of a discrete solution by a topological degree argument. The application of this scheme to the Euler equations raises an additional difficulty. Indeed, obtaining correct shocks requires the scheme to be consistent with the total energy balance, a property which we obtain as follows. First of all, a local discrete kinetic energy balance is established; it contains source terms which we compensate in the internal energy balance. The kinetic and internal energy equations are associated with the dual and primal meshes respectively, and thus cannot be added to obtain a total energy balance; its continuous counterpart is however recovered at the limit: if we suppose that a sequence of discrete solutions converges when the space and time steps tend to 0, we show, in 1D at least, that the limit satisfies a weak form of the equation. These theoretical results are supported by numerical tests. Similar results are obtained for the barotropic Navier-Stokes equations. (author)
Third Order Reconstruction of the KP Scheme for Model of River Tinnelva
Directory of Open Access Journals (Sweden)
Susantha Dissanayake
2017-01-01
The Saint-Venant equation/Shallow Water Equation is used to simulate flow of rivers, flow of liquid in an open channel, tsunamis etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic-type partial differential equations (PDEs), hence can be used to solve the Saint-Venant equation. The KP scheme is semi-discrete: PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd order KP scheme is extended to a 3rd order scheme following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd order and 3rd order schemes have been used in simulation in order to check the suitability of the KP schemes for solving hyperbolic-type PDEs. The simulation results indicated that the 3rd order KP scheme shows better stability than the 2nd order scheme. Computational time for the 3rd order KP scheme with variable step-length ODE solvers in MATLAB is less than that of the 2nd order KP scheme. In addition, it was confirmed that the order of the time integrators should essentially be lower than the order of the spatial discretization. However, for computation of abrupt step changes, the 2nd order KP scheme gives a more accurate solution.
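The semi-discrete central-upwind construction underlying the KP scheme can be sketched in its simplest setting: a 2nd order (minmod-limited) central-upwind flux applied to the scalar Burgers equation on a periodic grid, with plain forward Euler instead of MATLAB's variable-step ODE solvers. This is a stand-in for illustration, not the paper's well-balanced Saint-Venant implementation.

```python
import math

def minmod(a, b):
    # Slope limiter: 0 when the one-sided differences disagree in sign.
    if a > 0 and b > 0:
        return min(a, b)
    if a < 0 and b < 0:
        return max(a, b)
    return 0.0

def central_upwind_step(u, dx, dt, f=lambda v: 0.5 * v * v, df=lambda v: v):
    """One forward-Euler step of the 2nd order semi-discrete central-upwind
    scheme on a periodic grid: minmod-limited linear reconstruction, local
    one-sided speeds a+/a- from f', and the central-upwind numerical flux."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    flux = [0.0] * n                      # flux[i] sits at interface i+1/2
    for i in range(n):
        j = (i + 1) % n
        ul = u[i] + 0.5 * s[i]            # reconstruction from cell i
        ur = u[j] - 0.5 * s[j]            # reconstruction from cell i+1
        ap = max(df(ul), df(ur), 0.0)     # rightward local speed
        am = min(df(ul), df(ur), 0.0)     # leftward local speed
        if ap - am < 1e-14:
            flux[i] = 0.5 * (f(ul) + f(ur))
        else:
            flux[i] = ((ap * f(ul) - am * f(ur))
                       + ap * am * (ur - ul)) / (ap - am)
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

# Smooth periodic initial data for Burgers' equation, CFL = 0.5.
n, dx = 100, 1.0 / 100
u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
mass0 = sum(u) * dx
for _ in range(20):
    u = central_upwind_step(u, dx, dt=0.5 * dx)
mass = sum(u) * dx
```

Because the update is in conservation form, total mass is preserved to roundoff, and the minmod limiter keeps the solution within its initial bounds, which is the non-oscillatory behaviour the abstract refers to.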
Seven Steps to Heaven: Time and Tide in 21st Century Contemporary Music Higher Education
Mitchell, Annie K.
2018-01-01
Throughout the time of my teaching career, the tide has exposed changes in the nature of music, students and music education. This paper discusses teaching and learning in contemporary music at seven critical stages of 21st century music education: i) diverse types of undergraduate learners; ii) teaching traditional classical repertoire and skills…
Bochev, Mikhail A.; Oseledets, I.V.; Tyrtyshnikov, E.E.
2013-01-01
The aim of this paper is two-fold. First, we propose an efficient implementation of the continuous time waveform relaxation method based on block Krylov subspaces. Second, we compare this new implementation against Krylov subspace methods combined with the shift and invert technique.
Minisini, S.; Zhebel, E.; Kononov, A.; Mulder, W.A.
2013-01-01
Modeling and imaging techniques for geophysics are extremely demanding in terms of computational resources. Seismic data attempt to resolve smaller scales and deeper targets in increasingly more complex geologic settings. Finite elements enable accurate simulation of time-dependent wave propagation
Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras
2016-07-01
OBJECT Currently, bilateral procedures involve 2 sequential implants in each of the hemispheres. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with that for the traditional consecutive approach (220.3 ± 27.58 minutes) was observed. Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after the surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time. CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional
Time Alignment as a Necessary Step in the Analysis of Sleep Probabilistic Curves
Rošt'áková, Zuzana; Rosipal, Roman
2018-02-01
Sleep can be characterised as a dynamic process that moves through a finite set of sleep stages during the night. The standard Rechtschaffen and Kales sleep model produces a discrete representation of sleep and does not take into account its dynamic structure. In contrast, the continuous sleep representation provided by the probabilistic sleep model accounts for the dynamics of the sleep process. However, analysis of the sleep probabilistic curves is problematic when time misalignment is present. In this study, we highlight the necessity of curve synchronisation before further analysis. Original and time-aligned sleep probabilistic curves were transformed into a finite-dimensional vector space, and their ability to predict subjects' age or daily measures was evaluated. We conclude that curve alignment significantly improves the prediction of the daily measures, especially in the case of the S2-related sleep states or slow wave sleep.
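The need for alignment can be demonstrated with the crudest possible estimator: an integer shift chosen to minimise the mean squared error between two sampled curves. The study uses proper functional alignment of whole probabilistic curves; this toy example and its data are illustrative only.

```python
def best_lag(x, y, max_lag):
    """Estimated delay of curve y relative to curve x: the integer d
    minimising the mean squared error between x[i] and y[i + d]
    over the overlapping samples."""
    def mse(d):
        pairs = [(x[i], y[i + d]) for i in range(len(x))
                 if 0 <= i + d < len(y)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_lag, max_lag + 1), key=mse)

# Two copies of the same "probabilistic curve", one delayed by 2 samples:
x = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
y = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
lag = best_lag(x, y, 3)
```

Comparing x and y sample-by-sample without removing this lag would report a large discrepancy between two curves that are in fact identical, which is precisely why alignment must precede the vector-space transformation.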
The impact of weight classification on safety: timing steps to adapt to external constraints
Gill, S.V.
2015-01-01
Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults' ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between-group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Comparisons across metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658
Off-line real-time FTIR analysis of a process step in imipenem production
Boaz, Jhansi R.; Thomas, Scott M.; Meyerhoffer, Steven M.; Staskiewicz, Steven J.; Lynch, Joseph E.; Egan, Richard S.; Ellison, Dean K.
1992-08-01
We have developed an FT-IR method, using a Spectra-Tech Monit-IR 400 system, to monitor off-line the completion of a reaction in real time. The reaction is moisture-sensitive, and analysis by more conventional methods (normal-phase HPLC) is difficult to reproduce. The FT-IR method is based on the shift of a diazo band when a conjugated beta-diketone is transformed into a silyl enol ether during the reaction. The reaction mixture is examined directly by IR and does not require sample workup. Data acquisition time is less than one minute. The method has been validated for specificity, precision and accuracy. The results obtained by the FT-IR method for known mixtures and in-process samples compare favorably with those from a normal-phase HPLC method.
Effect of moisture and drying time on the bond strength of the one-step self-etching adhesive system
Directory of Open Access Journals (Sweden)
Yoon Lee
2012-08-01
Objectives: To investigate the effect of dentin moisture degree and air-drying time on the dentin bond strength of two different one-step self-etching adhesive systems. Materials and Methods: Twenty-four human third molars were used for microtensile bond strength testing of G-Bond and Clearfil S3 Bond. The dentin surface was either blot-dried or air-dried before applying these adhesive agents. After application of the adhesive agent, three different air-drying times were evaluated: 1, 5, and 10 sec. Composite resin was built up to 4 mm thickness and light-cured for 40 sec in 2 separate layers. Each tooth was then sectioned and trimmed to measure the microtensile bond strength using a universal testing machine. The measured bond strengths were analyzed with three-way ANOVA, and regression analysis was performed (p = 0.05). Results: All three factors (material, dentin wetness and air-drying time) showed a significant effect on the microtensile bond strength. Clearfil S3 Bond, a dry dentin surface and a 10 sec air-drying time showed higher bond strength. Conclusions: Within the limitations of this experiment, air-drying time after application of the one-step self-etching adhesive agent was the most significant factor affecting the bond strength, followed by the material difference and the dentin moisture before applying the adhesive agent.
First steps towards real-time radiography at the NECTAR facility
Bücherl, T.; Wagner, F. M.; v. Gostomski, Ch. Lierse
2009-06-01
The beam tube SR10 at Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) provides an intense beam of fission neutrons for medical application (MEDAPP) and for radiography and tomography of technical and other objects (NECTAR). The high neutron flux of up to 9.8×10^7 cm^-2 s^-1 (depending on filters and collimation) with a mean energy of about 1.9 MeV at the sample position at the NECTAR facility prompted an experimental feasibility study to investigate the potential for real-time (RT) radiography.
A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions
Directory of Open Access Journals (Sweden)
Abdul Rahman Hafiz
2011-01-01
Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system, which can enhance the robot's real-time interaction ability with humans, is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that of the human visual system. The experimental results verified the validity of the model. The robot could maintain clear vision in real time and build a mental map that helped it to be aware of frontal users and to develop a positive interaction with them.
Directory of Open Access Journals (Sweden)
Rodolfo Gordillo-Orquera
2018-06-01
Efficient energy management is strongly dependent on determining adequate power contracts among the ones offered by different electricity suppliers. This topic takes on special relevance in healthcare buildings, where noticeable amounts of energy are required to generate an adequate health environment for patients and staff. In this paper, a convex optimization method is scrutinized to give a straightforward analysis of the optimal power levels to be contracted while minimizing the electricity bill cost in a time-of-use pricing scheme. In addition, a sensitivity analysis is carried out on the constraints in the optimization problems, which are analyzed in terms of both their empirical distribution and their bootstrap-estimated statistical distributions to create a simple-to-use tool for this purpose, the so-called mosaic-distribution. The evaluation of the proposed method was carried out with five-year consumption data on two different kinds of healthcare buildings: a large one, Hospital Universitario de Fuenlabrada, and a primary care center, Centro de Especialidades el Arroyo, both located in Fuenlabrada (Madrid, Spain). The analysis of the resulting optimization shows that the annual savings achieved vary moderately, ranging from −0.22% to +27.39%, depending on the analyzed year profile and the healthcare building type. The mosaic-distribution representation of the sensitivity scores also provides operative information to evaluate the convenience of implementing energy saving measures. All this information is useful for managers to determine the appropriate power levels for the next year's contract renewal and to consider whether to implement demand response mechanisms in healthcare buildings.
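The contracted-power selection described above can be sketched compactly. The following is an illustration only: the per-kW rate, the penalty on demand exceeding the contracted level, and the function names are invented here, not the paper's tariff model; the point is that the resulting cost is convex in the contracted power P, which is what makes the problem amenable to convex optimization.

```python
# Stylized bill: a per-kW charge for the contracted level P plus a penalty
# proportional to the demand that exceeds P (rates are made up).
def bill(P, demand, rate=3.0, penalty=10.0):
    excess = sum(max(d - P, 0.0) for d in demand)
    return rate * P + penalty * excess

# Pick the contracted level that minimizes the stylized bill over a set of
# candidate power levels offered by the supplier.
def best_power(demand, levels):
    return min(levels, key=lambda P: bill(P, demand))
```

With `demand = [5.0, 7.0, 6.0]` and candidate levels 0 through 10 kW, the minimizer is the peak demand, 7, because the excess penalty here outweighs the contracted charge.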
How update schemes influence crowd simulations
International Nuclear Information System (INIS)
Seitz, Michael J; Köster, Gerta
2014-01-01
Time discretization is a key modeling aspect of dynamic computer simulations. In current pedestrian motion models based on discrete events, e.g. cellular automata and the Optimal Steps Model, fixed-order sequential updates and shuffle updates are prevalent. We propose to use event-driven updates that process events in the order they occur, and thus better match natural movement. In addition, we present a parallel update with collision detection and resolution for situations where computational speed is crucial. Two simulation studies serve to demonstrate the practical impact of the choice of update scheme. Not only do density-speed relations differ, but there is a statistically significant effect on evacuation times. Fixed-order sequential and random shuffle updates with a short update period come close to event-driven updates. The parallel update scheme overestimates evacuation times. All schemes can be employed for arbitrary simulation models with discrete events, such as car traffic or animal behavior. (paper)
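The event-driven update described in the abstract can be sketched with a priority queue of agent events. This is a minimal illustration under strong simplifications (1-D motion, unit step length, constant speeds); the model and names are invented for the sketch and are not the Optimal Steps Model itself.

```python
import heapq

# Event-driven update: each agent schedules its next step at a time set by
# its own speed, and events are processed strictly in the order they occur,
# with no fixed or shuffled update sequence imposed.
def simulate(positions, speeds, t_end):
    events = [(1.0 / v, i) for i, v in enumerate(speeds)]  # (event time, agent)
    heapq.heapify(events)
    while events:
        t, i = heapq.heappop(events)       # next event in causal order
        if t > t_end:
            break
        positions[i] += 1.0                # agent i takes one unit-length step
        heapq.heappush(events, (t + 1.0 / speeds[i], i))
    return positions

final = simulate([0.0, 0.0], [2.0, 1.0], t_end=3.0)
```

An agent moving twice as fast ends up taking twice as many steps, because its events simply occur more often; no update period needs to be tuned.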
Time-step selection considerations in the analysis of reactor transients with DIF3D-K
International Nuclear Information System (INIS)
Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.
1993-01-01
The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic, and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. In this paper, we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time-step selection algorithms and their influence on the accuracy and efficiency of the various solution options.
Altunsoy, Mustafa; Botsali, Murat Selim; Tosun, Gonca; Yasar, Ahmet
2015-10-16
The aim of this study was to evaluate the effect of increased exposure times on the amount of residual Bis-GMA, TEGDMA, HEMA and UDMA released from single-step self-etch adhesive systems. Two adhesive systems were used. The adhesives were applied to bovine dentin surface according to the manufacturer's instructions and were polymerized using an LED curing unit for 10, 20 and 40 seconds (n = 5). After polymerization, the specimens were stored in 75% ethanol-water solution (6 mL). Residual monomers (Bis-GMA, TEGDMA, UDMA and HEMA) that were eluted from the adhesives (after 10 minutes, 1 hour, 1 day, 7 days and 30 days) were analyzed by high-performance liquid chromatography (HPLC). The data were analyzed using 1-way analysis of variance and Tukey HSD tests. Among the time periods, the highest amount of released residual monomers from adhesives was observed in the 10th minute. There were statistically significant differences regarding released Bis-GMA, UDMA, HEMA and TEGDMA between the adhesive systems (p<0.05). There were no significant differences among the 10, 20 and 40 second polymerization times according to their effect on residual monomer release from adhesives (p>0.05). Increasing the polymerization time did not have an effect on residual monomer release from single-step self-etch adhesives.
In the time of significant generational diversity - surgical leadership must step up!
Money, Samuel R; O'Donnell, Mark E; Gray, Richard J
2014-02-01
The diverse attitudes and motivations of surgeons and surgical trainees within different age groups present an important challenge for surgical leaders and educators. These challenges to surgical leadership are not unique, and other industries have likewise needed to grapple with how best to manage these various age groups. The authors will herein explore management and leadership for surgeons in a time of age diversity, define generational variations within "Baby-Boomer", "Generation X" and "Generation Y" populations, and identify work ethos concepts amongst these three groups. The surgical community must understand and embrace these concepts in order to continue to attract a stellar pool of applicants from medical school. By not accepting the changing attitudes and motivations of young trainees and medical students, we may disenfranchise a high percentage of potential future surgeons. Surgical training programs will fill, but will they contain the highest quality trainees? Copyright © 2013 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
Li, Shaohong L; Truhlar, Donald G
2015-07-14
Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.
Kurz, Ilan; Gimmon, Yoav; Shapiro, Amir; Debi, Ronen; Snir, Yoram; Melzer, Itshak
2016-03-04
Falls are common among the elderly; most occur while slipping or tripping during walking. We aimed to explore whether a training program that incorporates unexpected loss of balance during walking is able to improve risk factors for falls. In a double-blind randomized controlled trial, 53 community-dwelling older adults (age 80.1 ± 5.6 years) were recruited and randomly allocated to an intervention group (n = 27) or a control group (n = 26). The intervention group received 24 training sessions over 3 months that included unexpected perturbation-of-balance exercises during treadmill walking. The control group performed treadmill walking with no perturbations. The primary outcome measures were the voluntary step execution times, traditional postural sway parameters and Stabilogram-Diffusion Analysis. The secondary outcome measures were the Falls Efficacy Scale (FES), self-reported late-life function (LLFDI), and Performance-Oriented Mobility Assessment (POMA). Compared to control, participation in the intervention program that includes unexpected loss of balance during walking led to faster voluntary step execution times under single- (p = 0.002; effect size [ES] = 0.75) and dual-task (p = 0.003; [ES] = 0.89) conditions; intervention group subjects showed improvement in short-term effective diffusion coefficients in the mediolateral direction of the Stabilogram-Diffusion Analysis under eyes-closed conditions (p = 0.012, [ES] = 0.92). Compared to control there were no significant changes in FES, LLFDI, or POMA. An intervention program that includes unexpected loss of balance during walking can improve voluntary stepping times and balance control, both previously reported as risk factors for falls. This, however, did not transfer to a change in self-reported function or FES. ClinicalTrials.gov NCT01439451.
Pochop, Jaroslav; Kačániová, Miroslava; Hleba, Lukáš; Lopasovský, L'ubomír; Bobková, Alica; Zeleňáková, Lucia; Stričík, Michal
2012-01-01
The aim of this study was to follow contamination of ready-to-eat food with Listeria monocytogenes by using the Step One real time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and MicroSEQ® Listeria monocytogenes Detection Kit for the real-time PCR performance. In 30 samples of ready-to-eat milk and meat products without incubation we detected strains of Listeria monocytogenes in five samples (swabs). Internal positive control (IPC) was positive in all samples. Our results indicated that the real-time PCR assay developed in this study could sensitively detect Listeria monocytogenes in ready-to-eat food without incubation.
DEFF Research Database (Denmark)
Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad
2013-01-01
In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in default assembly i...... library in the edcSimpleFoam solver which was introduced during the 6th OpenFOAM workshop is modified and coupled with the current solver. One of the main amendments made is the integration of soot radiation submodel since this is significant in rich flames where soot particles are formed. The new solver...
Energy Technology Data Exchange (ETDEWEB)
Mather, Barry
2017-08-24
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase, and tools and methods must be developed to support it. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses realized in the distribution circuit during a 1-year period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
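The idea behind a variable-time-step QSTS sweep can be sketched as follows. This is a hypothetical illustration, not the paper's solver: the coarse step, the tolerance, and the monitored quantity (a generic per-second load/voltage profile) are all invented.

```python
# Variable-time-step sweep: advance with a coarse step while the monitored
# quantity changes slowly, and drop to the fine 1-second resolution when the
# change between samples exceeds a tolerance.
def variable_step_scan(profile, coarse=60, tol=0.05):
    samples, t = [], 0
    while t < len(profile):
        samples.append((t, profile[t]))
        nxt = min(t + coarse, len(profile) - 1)
        if nxt == t:                      # reached the last sample
            break
        if abs(profile[nxt] - profile[t]) > tol:
            t += 1                        # rapid change: fine resolution
        else:
            t = nxt                       # slow change: take the coarse step
    return samples
```

A flat profile is covered by a handful of coarse samples, while a step change is automatically resolved at full resolution around the transition, which is how the large speedups with small metric errors arise.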
Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R
2017-03-01
Purpose: To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method: People with MS (n = 210, 21-74 y) performed the CSRT, sensorimotor, balance and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results: The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26). The CSRT test thus shows predictive validity for falls in people with MS and may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation: Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.
Directory of Open Access Journals (Sweden)
Hamid Reza Fooladmand
2017-06-01
Data measured from 2006 to 2008 were used for calibrating fourteen models for estimating solar radiation at seasonal and annual time steps, and the measured data of 2009 and 2010 were used for evaluating the results. The equations used in this study fall into three groups: (1) equations based only on sunshine hours; (2) equations based only on air temperature; (3) equations based on both sunshine hours and air temperature. A statistical comparison must then be carried out to select the best equation for estimating solar radiation at seasonal and annual time steps. For this purpose, in the validation stage the combination of statistical equations and linear correlation was used, and the mean square deviation (MSD) was calculated to evaluate the different models at the mentioned time steps. Results and Discussion: The mean MSD values of the fourteen models for estimating solar radiation were 24.16, 20.42, 4.08 and 16.19 for spring to winter, respectively, and 15.40 at the annual time step. The results therefore showed that the equations were highly accurate for autumn but of low accuracy for the other seasons, so using the equations at the annual time step was more appropriate than at the seasonal time steps. Also, the mean MSD values of the equations based only on sunshine hours, only on air temperature, and on the combination of sunshine hours and air temperature were 14.82, 17.40 and 14.88, respectively. The results thus indicated that the models based only on air temperature were the worst for estimating solar radiation in the Shiraz region, and that using sunshine hours for estimating solar radiation is necessary. Conclusions: In this study for estimating solar radiation in seasonal and annual time steps in the Shiraz region
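The model ranking above rests on the mean square deviation (MSD) between measured and estimated radiation. A minimal sketch of that statistic, with invented sample values:

```python
# Mean square deviation between a measured series and a model-estimated
# series: the average of the squared pointwise differences.
def msd(measured, estimated):
    return sum((m - e) ** 2 for m, e in zip(measured, estimated)) / len(measured)

demo = msd([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])   # one miss of 2 on the third sample
```

A lower MSD means the model's estimates track the measurements more closely, which is how the autumn equations and the sunshine-hour-based group come out best above.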
Effect of different air-drying time on the microleakage of single-step self-etch adhesives
Directory of Open Access Journals (Sweden)
Horieh Moosavi
2013-05-01
Objectives: This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods: Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB), and Bond Force (BF). Each main group was divided into three subgroups by air-drying time: no air stream, the manufacturer's instruction, and 10 sec more than the manufacturer's instruction. After completion of the restorations, specimens were thermocycled and then connected to a fluid filtration system to evaluate microleakage. The data were statistically analyzed using two-way ANOVA and the Tukey test (α = 0.05). Results: The microleakage of all adhesives decreased when the air-drying time increased from 0 sec to the manufacturer's instruction (p < 0.001). The microleakage of BF reached its lowest values when the drying time was increased to 10 sec more than the manufacturer's instruction (p < 0.001). Microleakage of OBAO and CSB was significantly lower than that of BF at all three drying times (p < 0.001). Conclusions: Increasing the air-drying time of the adhesive layer in one-step self-etch adhesives reduced microleakage, but the amount of this reduction may depend on the adhesive components of the self-etch adhesives.
An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems
Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk
2012-01-01
The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, the...
Lewis, L K; Rowlands, A V; Gardiner, P A; Standage, M; English, C; Olds, T
2016-03-01
This study aimed to evaluate the preliminary effectiveness and feasibility of a theory-informed program to reduce sitting time in older adults. Pre-experimental (pre-post) study. Thirty non-working adult (≥ 60 years) participants attended a one hour face-to-face intervention session and were guided through: a review of their sitting time; normative feedback on sitting time; and setting goals to reduce total sitting time and bouts of prolonged sitting. Participants chose six goals and integrated one per week incrementally for six weeks. Participants received weekly phone calls. Sitting time and bouts of prolonged sitting (≥ 30 min) were measured objectively for seven days (activPAL3c inclinometer) pre- and post-intervention. During these periods, a 24-h time recall instrument was administered by computer-assisted telephone interview. Participants completed a post-intervention project evaluation questionnaire. Paired t tests with sequential Bonferroni corrections and Cohen's d effect sizes were calculated for all outcomes. Twenty-seven participants completed the assessments (71.7 ± 6.5 years). Post-intervention, objectively-measured total sitting time was significantly reduced by 51.5 min per day (p=0.006; d=-0.58) and number of bouts of prolonged sitting by 0.8 per day (p=0.002; d=-0.70). Objectively-measured standing increased by 39 min per day (p=0.006; d=0.58). Participants self-reported spending 96 min less per day sitting (p<0.001; d=-0.77) and 32 min less per day watching television (p=0.005; d=-0.59). Participants were highly satisfied with the program. The 'Small Steps' program is a feasible and promising avenue for behavioral modification to reduce sitting time in older adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Investigation on the MOC with a linear source approximation scheme in three-dimensional assembly
International Nuclear Information System (INIS)
Zhu, Chenglin; Cao, Xinrong
2014-01-01
Method of characteristics (MOC) solvers for the neutron transport equation have become one of the fundamental methods for lattice calculations in nuclear design code systems. At present, MOC has three schemes for treating the neutron source of the transport equation: the flat-source approximation of the step characteristics (SC) scheme, the diamond difference (DD) scheme, and the linear source (LS) characteristics scheme. The SC and DD schemes need large storage space and long computing times when used to calculate large-scale three-dimensional neutron transport problems. In this paper, an LS scheme with a correction for negative source distributions was developed and added to the DRAGON code, and this new scheme was compared with the SC and DD schemes already available in that code. As an open-source code, DRAGON can solve three-dimensional assemblies with the MOC method. Detailed calculations were conducted on a two-dimensional VVER-1000 assembly under the three MOC schemes. The numerical results indicate that a coarse mesh can be used in the LS scheme with the same accuracy, and that the LS scheme as implemented in DRAGON is effective and achieves the expected results. A three-dimensional cell problem and a VVER-1000 assembly were then calculated with the LS and SC schemes. The results show that the LS scheme requires less memory and shorter computation times than the SC scheme. It is concluded that, by using the LS scheme, DRAGON is able to calculate large-scale three-dimensional problems with less storage space and shorter computing times.
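For context, the flat-source SC update along a single characteristic segment has a closed form: with a constant source q and total cross section sigma over a segment of length s, the outgoing angular flux attenuates exponentially toward the infinite-medium value q/sigma. A minimal sketch (the function name and sample numbers are illustrative):

```python
import math

# Step-characteristics (flat-source) update along one characteristic segment:
# psi_out = psi_in * exp(-tau) + (q / sigma) * (1 - exp(-tau)),
# where tau = sigma * s is the optical length of the segment.
def sc_segment(psi_in, q, sigma, s):
    att = math.exp(-sigma * s)
    return psi_in * att + (q / sigma) * (1.0 - att)
```

The LS scheme replaces the constant q with a linear variation along the segment, which is what allows coarser meshes at the same accuracy.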
DEFF Research Database (Denmark)
van Leeuwen, Theo
2013-01-01
This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.
Jia, Hao-Ran; Wang, Hong-Yin; Yu, Zhi-Wu; Chen, Zhan; Wu, Fu-Gen
2016-03-16
Long-time stable plasma membrane imaging is difficult due to the fast cellular internalization of fluorescent dyes and the quick detachment of the dyes from the membrane. In this study, we developed a two-step synergistic cell surface modification and labeling strategy to realize long-time plasma membrane imaging. Initially, a multisite plasma membrane anchoring reagent, glycol chitosan-10% PEG2000 cholesterol-10% biotin (abbreviated as "GC-Chol-Biotin"), was incubated with cells to modify the plasma membranes with biotin groups with the assistance of the membrane anchoring ability of cholesterol moieties. Fluorescein isothiocyanate (FITC)-conjugated avidin was then introduced to achieve the fluorescence-labeled plasma membranes based on the supramolecular recognition between biotin and avidin. This strategy achieved stable plasma membrane imaging for up to 8 h without substantial internalization of the dyes, and avoided the quick fluorescence loss caused by the detachment of dyes from plasma membranes. We have also demonstrated that the imaging performance of our staining strategy far surpassed that of current commercial plasma membrane imaging reagents such as DiD and CellMask. Furthermore, the photodynamic damage of plasma membranes caused by a photosensitizer, Chlorin e6 (Ce6), was tracked in real time for 5 h during continuous laser irradiation. Plasma membrane behaviors including cell shrinkage, membrane blebbing, and plasma membrane vesiculation could be dynamically recorded. Therefore, the imaging strategy developed in this work may provide a novel platform to investigate plasma membrane behaviors over a relatively long time period.
An Energy Decaying Scheme for Nonlinear Dynamics of Shells
Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A novel integration scheme for the nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system's total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.
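The stability mechanism described above can be stated compactly. A minimal sketch, assuming $E^{n}$ denotes the total mechanical energy at time step $n$ and $D^{n}$ the algorithmic dissipation (the symbols are ours, not the paper's notation):

```latex
E^{n+1} - E^{n} = -D^{n}, \qquad D^{n} \ge 0 ,
```

so the discrete energy sequence is non-increasing. This bounds the solution uniformly in time regardless of the step size, which is the sense in which the scheme is unconditionally stable in the nonlinear regime, with the tunable part of $D^{n}$ supplying the high-frequency numerical damping.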
Trinary signed-digit arithmetic using an efficient encoding scheme
Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.
2000-09-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
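Carry-free addition can be illustrated in a common radix-3 signed-digit formulation with digit set {-2, ..., 2}: each position independently emits an interim sum and a transfer, both in {-1, 0, 1}, and the final digit absorbs the incoming transfer with no further propagation. This sketch shows the arithmetic idea only; the paper's 5-combination encoding table is not reproduced here.

```python
# Carry-free signed-digit addition, radix 3, digits in {-2, ..., 2},
# little-endian digit lists. For each position, s = a[i] + b[i] is split as
# s = 3 * transfer + interim with both parts in {-1, 0, 1}; the final digit
# interim[i] + transfer[i] stays in {-2, ..., 2}, so no carry chain forms.
def tsd_add(a, b):
    n = max(len(a), len(b)) + 1
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    transfer = [0] * (n + 1)
    interim = [0] * n
    for i in range(n):                   # every position handled independently
        s = a[i] + b[i]                  # s lies in [-4, 4]
        transfer[i + 1] = (s + 1) // 3   # floor division picks the transfer
        interim[i] = s - 3 * transfer[i + 1]
    return [interim[i] + transfer[i] for i in range(n)]

def value(digits):                       # little-endian radix-3 value
    return sum(d * 3**i for i, d in enumerate(digits))
```

Because each output digit depends only on positions i and i-1, all positions can be computed in parallel in constant time, which is the property the optical encoding exploits.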
Unterweger, K.
2015-01-01
© Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D—Full Shallow Water Overland Flows—here is our software of choice, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex “real-world” scenarios.
Energy Technology Data Exchange (ETDEWEB)
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
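The five third-order Runge-Kutta examples of the report are not reproduced in the abstract. As a generic illustration of the kind of method discussed, the classic Kutta third-order scheme can be sketched as follows (the test problem y' = -y and the step counts are arbitrary choices for demonstration):

```python
def rk3_step(f, t, y, h):
    """One step of the classic third-order Runge-Kutta method (Kutta's scheme)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h, y - h*k1 + 2*h*k2)
    return y + h/6 * (k1 + 4*k2 + k3)

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y
```

Halving the step size should reduce the global error by roughly a factor of eight, which is the third-order behaviour the linear numerical analysis in the report quantifies.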
International Nuclear Information System (INIS)
Csom, Gyula; Feher, Sandor; Szieberthj, Mate
2002-01-01
Nowadays the molten salt reactor (MSR) concept seems to revive as one of the most promising systems for the realization of transmutation. In molten salt reactors and subcritical systems the fuel and the material to be transmuted circulate dissolved in some molten salt. The main advantage of this reactor type is the possibility of continuous feed and reprocessing of the fuel. In the present paper a novel molten salt reactor concept is introduced and its transmutation capabilities are studied. The goal is the development of a transmutation technique, along with a device implementing it, which yields higher transmutation efficiencies than those of the known procedures and thus results in radioactive waste whose load on the environment is reduced in both magnitude and duration. The procedure is multi-step time-scheduled transmutation, in which transformation is done in several consecutive steps of different neutron flux and spectrum. In the new MSR concept, named 'multi-region' MSR (MRMSR), the primary circuit is made up of a few separate loops, in which salt-fuel mixtures of different compositions are circulated. The loop sections constituting the core region are only neutronically and thermally coupled. This new concept makes possible the utilization of the spatial dependence of spectrum as well as the advantageous features of liquid fuel, such as the possibility of continuous chemical processing. In order to compare a 'conventional' MSR and the proposed MRMSR in terms of efficiency, preliminary calculational results are shown. Further calculations to find the optimal implementation of this new concept and to emphasize its other advantageous features are ongoing. (authors)
Directory of Open Access Journals (Sweden)
R. Sitharthan
2016-09-01
Full Text Available This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid, namely grid-connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates the relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto-reclosures, through which it recovers faster from faults and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through time domain simulations carried out in the PSCAD/EMTDC software environment.
International Nuclear Information System (INIS)
Park, Sook Hee
2001-02-01
boost the reliability and high computation speed of basic primitive matrix operations. The DLL (Dynamic Link Library) is a good candidate for a Matlab programmer to conveniently call the new library, since the original Matlab code does not need to be changed. The DLL library receives a Matlab array represented as mxArray and converts it into the appropriate C language structure after partitioning the array for the parallel operation. Then the DLL calls Matlab's efficient C language library, which is enabled by creating the definition files as well as including the Matlab library in the Visual C 6.0 project file. Finally, the partial results are merged at the shared memory, and the DLL integrates them to pass the final result to the caller residing in Matlab code. In this procedure, the speedup comes from the elimination of the complex Matlab interpreting step, in addition to the parallel processing. Following the implementation described above, matrix multiplication, inverse, pseudo-inverse, and Jacobian are implemented. The first two DLLs speed up the computation by the effect of pure parallel processing. Pseudo-inverse can enhance the performance based on the previous parallel procedures if and only if the given matrix is a full-rank one, as data dependency hinders the parallel computing otherwise. The enhancement of the Jacobian code owes to eliminating unnecessary code rather than to parallel processing, as the operation contains so much overhead. Also implemented are the network-version libraries. However, the speed is not as good as the original code because of the network speed limitation. With a better network interface, a speedup can be expected. The performance of the implemented parallel libraries has been assessed by directly measuring the execution time compared with the original Matlab code. The calculating times of matrix multiplication, inverse, and pseudo-inverse have been reduced to 59.4 %, 34.8 % and 52 %, respectively. The execution time of Jacobian is
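The Matlab/DLL machinery described above is platform-specific, but the partition-compute-merge pattern can be sketched in a language-independent way. A hypothetical Python version of row-partitioned matrix multiplication follows (threads are used here purely for brevity; the work itself uses shared-memory parallelism on the C side):

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_rows(a_rows, b):
    """Multiply a block of rows of A by the full matrix B."""
    cols = list(zip(*b))  # transpose B once so each dot product walks a tuple
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in a_rows]

def parallel_matmul(a, b, workers=4):
    """Row-partitioned matrix product: split A into blocks, multiply each block
    against B concurrently, then concatenate the partial results (the merge step)."""
    n = len(a)
    chunk = (n + workers - 1) // workers
    blocks = [a[i:i + chunk] for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(matmul_rows, blocks, [b] * len(blocks))
    return [row for part in parts for row in part]
```

The pseudo-inverse caveat in the abstract fits this picture: row blocks of a matrix product are independent, whereas the elimination steps of a rank-deficient factorization introduce the data dependencies that block this kind of partitioning.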
Energy Technology Data Exchange (ETDEWEB)
Kretzschmar, J.G.; Mertens, I.
1984-01-01
Over the period 1977-1979, hourly meteorological measurements at the Nuclear Energy Research Centre, Mol, Belgium and simultaneous synoptic observations at the nearby military airport of Kleine Brogel have been compiled as input data for a bi-Gaussian dispersion model. The available information was first of all used to determine hourly stability classes in ten widely used turbulent diffusion typing schemes. Systematic correlations between different systems were rare. Twelve different combinations of diffusion typing scheme and dispersion parameters were then used for calculating cumulative frequency distributions of 1 h, 8 h, 16 h, 3 d, and 26 d average ground-level concentrations at receptors at 500 m, 1 km, 2 km, 4 km and 8 km, respectively, from a continuous ground-level release and an elevated release at 100 m height. Major differences were noted in the extreme values and the higher percentiles, as well as in the annual mean concentrations. These differences are almost entirely due to the differences in the numerical values (as a function of distance) of the various sets of dispersion parameters actually in use for impact assessment studies. Dispersion parameter sets giving the lowest normalized ground-level concentration values for ground-level releases give the highest results for elevated releases and vice versa. While it was illustrated once again that the applicability of a given set of dispersion parameters is restricted due to the specific conditions under which the given set was derived, it was also concluded that systematic experimental work to validate certain assumptions is urgently needed.
Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.
Rogers, Mark W; Mille, Marie-Laure
2016-08-15
Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Directory of Open Access Journals (Sweden)
Andranik Tsakanian
2012-05-01
Full Text Available In particle accelerators a preferred direction, the direction of motion, is well defined. If in a numerical calculation the (numerical) dispersion in this direction is suppressed, a quite coarse mesh and moderate computational resources can be used to reach accurate results even for extremely short electron bunches. Several approaches have been proposed in the past decades to reduce the accumulated dispersion error in wakefield calculations for perfectly conducting structures. In this paper we extend the TE/TM splitting algorithm to a new hybrid scheme that allows for wakefield calculations in structures with walls of finite conductivity. The conductive boundary is modeled by one-dimensional wires connected to each boundary cell. A good agreement of the numerical simulations with analytical results and other numerical approaches is obtained.
Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.
Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger
2015-01-01
To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
This presentation, Environmental Exposures and Health Risks in California Child Care Facilities: First Steps to Improve Environmental Health where Children Spend Time, was given at the NIEHS/EPA Children's Centers 2016 Webinar Series: Exposome.
On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme
Directory of Open Access Journals (Sweden)
Wang Daoshun
2010-01-01
Full Text Available Abstract Traditional Secret Sharing (SS) schemes reconstruct the secret exactly the same as the original one but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) a -SS scheme to a -VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as for VSS schemes. Then a novel matrix-concatenation approach is used to extend the greyscale -SS scheme to the more general case of a greyscale -VSS scheme.
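The paper's exact share-generation matrices are not given in the abstract. As a simplified illustration of XOR-based share generation, here is a generic (n, n) XOR secret-sharing sketch; unlike the scheme above, reconstruction here also uses XOR rather than the OR stacking of visual schemes:

```python
import os
import functools

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(secret, n):
    """Split `secret` into n shares: n-1 uniformly random strings plus one
    share that XORs with them to the secret. Any n-1 shares reveal nothing."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    final = functools.reduce(xor_bytes, shares, secret)
    return shares + [final]

def reconstruct(shares):
    """XOR all shares together to recover the secret."""
    return functools.reduce(xor_bytes, shares)
```

Note there is no pixel expansion at all in this sketch; the interesting trade-offs in the paper arise precisely from forcing reconstruction to be the computation-free OR operation.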
J.K. Hoogland (Jiri); C.D.D. Neumann
2000-01-01
In this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing
Church, Timothy S
2016-11-01
The analysis plan and article in this issue of the Journal by Evenson et al. (Am J Epidemiol 2016;184(9):621-632) is well-conceived, thoughtfully conducted, and tightly written. The authors utilized the National Health and Nutrition Examination Survey data set to examine the association between accelerometer-measured physical activity level and mortality and found that meeting the 2013 federal Physical Activity Guidelines resulted in a 35% reduction in risk of mortality. The timing of these findings could not be better, given the ubiquitous nature of personal accelerometer devices. The masses are already equipped to routinely quantify their activity, and now we have the opportunity and responsibility to provide evidence-based, tailored physical activity goals. We have evidence-based physical activity guidelines, mass distribution of devices to track activity, and now scientific support indicating that meeting the physical activity goal, as assessed by these devices, has substantial health benefits. All of the pieces are in place to make physical inactivity a national priority, and we now have the opportunity to positively affect the health of millions of Americans. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Identification of Dobrava, Hantaan, Seoul, and Puumala viruses by one-step real-time RT-PCR.
Aitichou, Mohamed; Saleh, Sharron S; McElroy, Anita K; Schmaljohn, C; Ibrahim, M Sofi
2005-03-01
We developed four assays for specifically identifying Dobrava (DOB), Hantaan (HTN), Puumala (PUU), and Seoul (SEO) viruses. The assays are based on the real-time one-step reverse transcriptase polymerase chain reaction (RT-PCR) with the small segment used as the target sequence. The detection limits of DOB, HTN, PUU, and SEO assays were 25, 25, 25, and 12.5 plaque-forming units, respectively. The assays were evaluated in blinded experiments, each with 100 samples that contained Andes, Black Creek Canal, Crimean-Congo hemorrhagic fever, Rift Valley fever and Sin Nombre viruses in addition to DOB, HTN, PUU and SEO viruses. The sensitivity levels of the DOB, HTN, PUU, and SEO assays were 98%, 96%, 92% and 94%, respectively. The specificity of DOB, HTN and SEO assays was 100% and the specificity of the PUU assay was 98%. Because of the high levels of sensitivity, specificity, and reproducibility, we believe that these assays can be useful for diagnosing and differentiating these four Old-World hantaviruses.
A two-step real-time PCR assay for quantitation and genotyping of human parvovirus 4.
Väisänen, E; Lahtinen, A; Eis-Hübinger, A M; Lappalainen, M; Hedman, K; Söderlund-Venermo, M
2014-01-01
Human parvovirus 4 (PARV4) of the family Parvoviridae was discovered in a plasma sample of a patient with an undiagnosed acute infection in 2005. Currently, three PARV4 genotypes have been identified, however with unknown clinical significance. Interestingly, these genotypes seem to differ in epidemiology. In Northern Europe, the USA and Asia, genotypes 1 and 2 have been found to occur mainly in persons with a history of injecting drug use or other parenteral exposure. In contrast, genotype 3 appears to be endemic in sub-Saharan Africa, where it infects children and adults without such risk behaviour. In this study, a novel, straightforward and cost-efficient molecular assay for both quantitation and genotyping of PARV4 DNA was developed. The two-step method first applies a single-probe pan-PARV4 qPCR for screening and quantitation of this relatively rare virus, and subsequently only the positive samples undergo a real-time PCR-based multi-probe genotyping. The new qPCR-GT method is highly sensitive and specific regardless of the genotype, and is thus suitable for studying the clinical impact and occurrence of the different PARV4 genotypes. Copyright © 2013 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Daniel Junker
2012-01-01
Full Text Available Objectives. To evaluate prostate cancer (PCa) detection rates of real-time elastography (RTE) in dependence of tumor size, tumor volume, localization and histological type. Materials and Methods. Thirty-nine patients with biopsy-proven PCa underwent RTE before radical prostatectomy (RPE) to assess prostate tissue elasticity, and hard lesions were considered suspicious for PCa. After RPE, the prostates were prepared as whole-mount step sections and were compared with imaging findings for analyzing PCa detection rates. Results. RTE detected 6/62 cancer lesions with a maximum diameter of 0–5 mm (9.7%), 10/37 with a maximum diameter of 6–10 mm (27%), 24/34 with a maximum diameter of 11–20 mm (70.6%), 14/14 with a maximum diameter of >20 mm (100%) and 40/48 with a volume ≥ 0.2 cm³ (83.3%). Regarding cancer lesions with a volume ≥ 0.2 cm³ there was a significant difference in PCa detection rates between Gleason scores with predominant Gleason pattern 3 compared to those with predominant Gleason pattern 4 or 5 (75% versus 100%; P=0.028). Conclusions. RTE is able to detect PCa of significant tumor volume and of predominant Gleason pattern 4 or 5 with high confidence, but is of limited value in the detection of small cancer lesions.
Gillet, P; Rapaille, A; Benoît, A; Ceinos, M; Bertrand, O; de Bouyalsky, I; Govaerts, B; Lambermont, M
2015-01-01
Whole blood donation is generally safe, although vasovagal reactions (VVR) can occur (approximately 1%). Risk factors are well known and prevention measures have been shown to be efficient. This study evaluates the impact of donor retention in relation to the occurrence of vasovagal reactions for the first three blood donations. Our study of data collected over three years evaluated the impact of classical risk factors and provided a model including the best combination of covariates predicting VVR. The impact of a reaction at first donation on return rate and complications until the third donation was evaluated. Our data (523,471 donations) confirmed the classical risk factors (gender, age, donor status and relative blood volume). After stepwise variable selection, donor status, relative blood volume and their interaction were the only remaining covariates in the model. Of 33,279 first-time donors monitored over a period of at least 15 months, the first three donations were followed. The data emphasised the impact of a complication at first donation. The return rate for a second donation was reduced and the risk of vasovagal reaction was increased at least until the third donation. First-time donation is a crucial step in the donor's career. Donors who experienced a reaction at their first donation have a lower return rate for a second donation and a higher risk of vasovagal reaction at least until the third donation. Prevention measures have to be implemented to improve donor retention and provide blood banks with an adequate blood supply. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R
2017-04-01
To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles from inception to March 2015. Randomised (RCT) or clinical controlled trials (CCT) of volitional and reactive stepping interventions that included older (minimum age 60) people providing data on falls or fall risk factors. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68). A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance and timed up and go performance. Overall, stepping interventions reduced falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. CRD42015017357. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Explicit TE/TM scheme for particle beam simulations
International Nuclear Information System (INIS)
Dohlus, M.; Zagorodnov, I.
2008-10-01
In this paper we propose an explicit two-level conservative scheme based on a TE/TM-like splitting of the field components in time. Its dispersion properties are adjusted to accelerator problems. It is simpler and faster than the implicit version. It does not have dispersion in the longitudinal direction, and the dispersion properties in the transversal plane are improved. The explicit character of the new scheme allows a uniformly stable conformal method without iterations, and the scheme can be parallelized easily. It assures energy and charge conservation. A version of this explicit scheme for rotationally symmetric structures is free from the progressive time-step reduction for higher-order azimuthal modes that occurs in Yee's explicit method, which is used in the most popular electrodynamics codes. (orig.)
Optimized low-order explicit Runge-Kutta schemes for high-order spectral difference method
Parsani, Matteo
2012-01-01
Optimal explicit Runge-Kutta (ERK) schemes with large stable step sizes are developed for method-of-lines discretizations based on the spectral difference (SD) spatial discretization on quadrilateral grids. These methods involve many stages and provide the optimal linearly stable time step for a prescribed SD spectrum and the minimum leading truncation error coefficient, while admitting a low-storage implementation. Using a large number of stages, the new ERK schemes lead to efficiency improvements larger than 60% over standard ERK schemes for 4th- and 5th-order spatial discretization.
International Nuclear Information System (INIS)
Aboanber, A.E.; Hamada, Y.M.
2008-01-01
An extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, and that requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems as well as static and transient problems demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods.
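The paper's generalized Runge-Kutta coefficients are not reproduced in the abstract, but the embedded-solution idea for automatic time step control can be illustrated with the well-known Bogacki-Shampine 3(2) pair: the difference between the third-order solution and the embedded second-order solution serves as the local error estimate that drives the step size.

```python
def bs23_step(f, t, y, h):
    """One Bogacki-Shampine 3(2) step: returns the third-order solution and an
    error estimate taken from the embedded second-order solution."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + 3*h/4, y + 3*h/4 * k2)
    y3 = y + h * (2/9*k1 + 1/3*k2 + 4/9*k3)             # third-order solution
    k4 = f(t + h, y3)
    y2 = y + h * (7/24*k1 + 1/4*k2 + 1/3*k3 + 1/8*k4)   # embedded second order
    return y3, abs(y3 - y2)

def integrate_adaptive(f, t0, y0, t1, tol=1e-8):
    """Integrate y' = f(t, y) with automatic step control from the error estimate."""
    t, y, h = t0, y0, (t1 - t0) / 100
    while t < t1:
        h = min(h, t1 - t)
        y_new, err = bs23_step(f, t, y, h)
        if err <= tol:                      # accept the step
            t, y = t + h, y_new
        # grow or shrink the step from the estimate (second-order estimator)
        h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1/3)))
    return y
```

The paper uses a fourth-order method with an embedded third-order solution; this sketch uses a lower-order pair only because its coefficients are compact and standard.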
Zand, Pouria; Dilo, Arta; Havinga, Paul
2013-01-01
Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. According to our knowledge, this is the first distributed management scheme based on IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead. PMID:23807687
Four-level conservative finite-difference schemes for Boussinesq paradigm equation
Kolkovska, N.
2013-10-01
In this paper a two-parametric family of four-level conservative finite difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed for evaluation of the numerical solution. The preservation of the discrete energy with this method is proved. The schemes have been numerically tested on a one-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second order of convergence in space and time steps in the discrete maximal norm.
International Nuclear Information System (INIS)
Mortazavi, S.M.J.; Mozdarani, H.
2000-01-01
Human lymphocytes exposed to low doses of X-rays become less susceptible to the induction of chromosome aberrations by subsequent exposure to high doses of X-rays. This has been termed the radioadaptive response. One of the most important questions in adaptive response studies is that of the possible existence of an optimum adapting dose. Early experiments indicated that this response could be induced by low doses of X-rays from 1 cGy to 20 cGy. Recently, it has been shown that the time scheme of exposure to adapting and challenge doses plays an important role in determining the magnitude of the induced adaptive response. In this study, using the optimum irradiation time scheme (24-48), we monitored the cytogenetic endpoint of chromosome aberrations to assess the magnitude of adaptation to ionizing radiation in cultured human lymphocytes. Lymphocytes were pre-exposed to an adapting dose of 1-20 cGy at 24 hours before an acute challenge dose of 1 or 2 Gy at 48 hours. Cells were fixed at 54 hours. Lymphocytes that were pretreated with 5 as well as 10 cGy adapting doses had significantly fewer chromosome aberrations. Although the lymphocytes of some of our blood donors that were pre-treated with 1 or 20 cGy adapting doses showed an adaptive response, the pooled data (all donors) indicated that such an induction of the adaptive response cannot be observed in these lymphocytes. The overall pattern of the induced adaptive response indicated that in human lymphocytes (at least under the above-mentioned irradiation scheme), 5 cGy and 10 cGy adapting doses are the optimum doses. (author)
Bentum, Marinus Jan; Samsom, M.M.; Samsom, Martin M.; Slump, Cornelis H.
1995-01-01
Some image processing applications (e.g. computer graphics and robot vision) require the rotation, scaling and translation of digitized images in real-time (25–30 images per second). Today's standard image processors cannot meet this timing constraint, so other solutions have to be considered. This
No-neighbours recurrence schemes for space-time Green's functions on a 3D simple cubic lattice
De Hon, Bastiaan P.; Floris, Sander J.; Arnold, John M.
2018-01-01
Application of multivariate creative telescoping to a finite triple sum representation of the discrete space-time Green's function for an arbitrary numeric (non-symbolic) lattice point on a 3D simple cubic lattice produces a fast, no-neighbours, seventh-order, eighteenth-degree, discrete-time
One-step lowrank wave extrapolation
Sindi, Ghada Atif
2014-01-01
Wavefield extrapolation is at the heart of modeling, imaging, and full waveform inversion. Spectral methods have gained well-deserved attention due to their dispersion-free solutions and their natural handling of anisotropic media. We propose a modified one-step lowrank wave extrapolation scheme using the Shanks transform in isotropic and anisotropic media. Specifically, we utilize a velocity gradient term to improve the accuracy of the phase approximation function in the spectral implementation. With the higher accuracy, we can take larger time steps and make the extrapolation more efficient. Applications to models with strong inhomogeneity and considerable anisotropy demonstrate the utility of the approach.
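The Shanks transform used above accelerates the convergence of a sequence of approximations. A generic sketch (not the authors' spectral implementation) applied to a sequence of partial sums:

```python
import numpy as np

def shanks(seq):
    """Shanks transformation: accelerates convergence of a sequence A_n via
    S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} - 2 A_n + A_{n-1}).
    Assumes the denominator does not vanish for the given sequence."""
    A = np.asarray(seq, dtype=float)
    num = A[2:] * A[:-2] - A[1:-1] ** 2
    den = A[2:] - 2.0 * A[1:-1] + A[:-2]
    return num / den
```

Applied to the partial sums of the alternating harmonic series (converging to ln 2), a single Shanks pass reduces the error by roughly two orders of magnitude, which is the kind of gain that permits the larger time steps mentioned above.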
Melzer, I; Krasovsky, T; Oddsson, L I E; Liebermann, D G
2010-12-01
This study investigated the force-time relationship during the push-off stage of a rapid voluntary step in young and older healthy adults, to study the assumption that when balance is lost a quick step may preserve stability. The ability to achieve peak propulsive force within a short time is critical for the performance of such a quick powerful step. We hypothesized that older adults would achieve peak force and power in significantly longer times compared to young people, particularly during the push-off preparatory phase. Fifteen young and 15 older volunteers performed rapid forward steps while standing on a force platform. Absolute anteroposterior and body weight normalized vertical forces during the push-off in the preparation and swing phases were used to determine time to peak and peak force, and step power. Two-way analyses of variance ('Group' [young-older] by 'Phase' [preparation-swing]) were used to assess our hypothesis (P ≤ 0.05). Older people exerted lower peak forces (anteroposterior and vertical) than young adults, but not necessarily lower peak power. More significantly, they showed a longer time to peak force, particularly in the vertical direction during the preparation phase. Older adults generate propulsive forces slowly and reach lower magnitudes, mainly during step preparation. The time to achieve a peak force and power, rather than its actual magnitude, may account for failures in quickly performing a preventive action. Such delay may be associated with the inability to react and recruit muscles quickly. Thus, training elderly to step fast in response to relevant cues may be beneficial in the prevention of falls. Copyright © 2010 Elsevier Ltd. All rights reserved.
Manodham, Thavisak; Loyola, Luis; Miki, Tetsuya
IEEE 802.11 wireless LANs (WLANs) have been rapidly deployed in enterprises, public areas, and households. Voice-over-IP (VoIP) and similar applications are now commonly used in mobile devices over wireless networks. Recent works have improved the quality of service (QoS), offering higher data rates to support various kinds of real-time applications. However, besides the need for higher data rates, seamless handoff and load balancing among APs are key issues that must be addressed in order to continue supporting real-time services across wireless LANs and to provide fair service to all users. In this paper, we introduce a novel access point (AP) with two transceivers that improves network efficiency by supporting seamless handoff and traffic load balancing in a wireless network. In our proposed scheme, the novel AP uses the second transceiver to scan and find neighboring STAs in the transmission range and then sends the results to neighboring APs, which compare and analyze whether or not the STA should perform a handoff. The initial results from our simulations show that the novel AP module is more effective than the conventional scheme and a related work in terms of providing a handoff process with low latency and sharing traffic load with neighboring APs.
Directory of Open Access Journals (Sweden)
Craig Cora L
2011-06-01
Abstract. Background: This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods: Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent television watching were estimated using logistic regression for complex samples. Results: Girls had a lower median steps/day (10682 versus 11059 for boys) and also a narrower variation in steps/day (interquartile range, 4410 versus 5309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Both boys and girls watched, on average, … Discussion: Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and can therefore be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by promoting excess energy intake. Conclusions: In this large nationally representative sample, pedometer-determined steps/day was associated with reduced odds of being obese (but not overweight), whereas each parent-reported hour spent watching television between school and dinner increased the odds of both overweight and obesity.
DEFF Research Database (Denmark)
Shi, Liping; Brehm, Robert
2016-01-01
The overall energy conversion efficiency of solar cell arrays is highly affected by partial shading. Especially for solar panel arrays installed in environments exposed to inhomogeneous, dynamically changing illumination, such as on the rooftops of electric vehicles, the overall system… efficiency is drastically reduced. Dynamic real-time reconfiguration of the solar panel array can reduce the effects of partial shading on output efficiency. This results in a maximized power output of the panel array when exposed to dynamically changing illumination. The optimal array configuration… with respect to shading patterns can be stated as a combinatorial optimization problem, and this paper proposes a dynamic programming (DP) based algorithm which finds the optimal feasible solution to reconfigure the solar panel array for maximum efficiency in real time with linear time complexity. It is shown…
Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R
2006-12-15
We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
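The prealignment step described above, FFT-based correlation of two 1-D chromatographic profiles to find the temporal offset maximizing their dot product, can be sketched as follows; `best_offset` is a hypothetical name, not ChromAlign's API:

```python
import numpy as np

def best_offset(reference, sample):
    """Return the lag (in scans) that maximizes the dot product between two
    1-D chromatographic profiles, computed via FFT cross-correlation.
    A positive lag means the reference trails the sample by that many scans."""
    n = len(reference) + len(sample) - 1
    nfft = 1 << (n - 1).bit_length()            # next power of two >= n
    R = np.fft.rfft(reference, nfft)
    S = np.fft.rfft(sample, nfft)
    xc = np.fft.irfft(R * np.conj(S), nfft)     # circular cross-correlation
    # Reorder circular output into lags -(len(sample)-1) .. len(reference)-1.
    xc = np.concatenate((xc[-(len(sample) - 1):], xc[:len(reference)]))
    lags = np.arange(-(len(sample) - 1), len(reference))
    return int(lags[np.argmax(xc)])
```

As in the paper's first step, the offset returned here restricts which full mass scans need to enter the correlation matrix, which is what keeps the second step affordable.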
Nonoscillatory shock capturing scheme using flux limited dissipation
International Nuclear Information System (INIS)
Jameson, A.
1985-01-01
A method for modifying the third order dissipative terms by the introduction of flux limiters is proposed. The first order dissipative terms can then be eliminated entirely, and in the case of a scalar conservation law the scheme is converted into a total variation diminishing scheme provided that an appropriate value is chosen for the dissipative coefficient. Particular attention is given to: (1) the treatment of the scalar conservation law; (2) the treatment of the Euler equations for inviscid compressible flow; (3) the boundary conditions; and (4) multistage time stepping and multigrid schemes. Numerical results for transonic flows suggest that a central difference scheme augmented by flux limited dissipative terms can lead to an effective nonoscillatory shock capturing method. 20 references
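As a hedged illustration of the flux-limiting idea (for a scalar conservation law with linear flux, not Jameson's central-difference scheme itself), a limited Lax-Wendroff step is total variation diminishing for Courant numbers 0 ≤ c ≤ 1:

```python
import numpy as np

def tvd_step(u, c):
    """One step of flux-limited Lax-Wendroff for u_t + a u_x = 0 (a > 0),
    periodic boundaries; c = a*dt/dx is the Courant number (0 <= c <= 1).
    The minmod-type limiter phi blends first-order upwind (phi = 0) with
    second-order Lax-Wendroff (phi = 1), keeping the update TVD."""
    du = np.roll(u, -1) - u                               # u[i+1] - u[i]
    safe = np.where(du == 0.0, 1.0, du)
    r = np.where(du == 0.0, 0.0, np.roll(du, 1) / safe)   # smoothness ratio
    phi = np.maximum(0.0, np.minimum(1.0, r))             # limiter
    flux = c * u + 0.5 * c * (1.0 - c) * phi * du         # limited flux at i+1/2
    return u - (flux - np.roll(flux, 1))
```

The limiter switches off the anti-diffusive correction near extrema, which is the same role the flux limiters play for the third-order dissipative terms in the scheme above.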
Linear source approximation scheme for method of characteristics
International Nuclear Information System (INIS)
Tang Chuntao
2011-01-01
The method of characteristics (MOC) for solving the neutron transport equation on unstructured meshes has become one of the fundamental methods for lattice calculations in nuclear design code systems. However, most MOC codes are developed with a flat source approximation called the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme, together with a corresponding modification for negative source distributions, is proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)
Progress with multigrid schemes for hypersonic flow problems
International Nuclear Information System (INIS)
Radespiel, R.; Swanson, R.C.
1995-01-01
Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25. 32 refs., 31 figs., 1 tab
Efficient Scheme for Chemical Flooding Simulation
Directory of Open Access Journals (Sweden)
Braconnier Benjamin
2014-07-01
In this paper, we investigate an efficient implicit scheme for the numerical simulation of the chemical enhanced oil recovery technique for oil fields. For the sake of brevity, we focus only on flows with polymer to describe the physical and numerical models. In this framework, we consider a black oil model upgraded with polymer modeling. We assume the polymer is only transported in the water phase or adsorbed on the rock following a Langmuir isotherm. The polymer reduces the water phase mobility, which can drastically change the behavior of water-oil interfaces. We then propose a fractional-step technique to resolve the system implicitly. The first step is devoted to the resolution of the black oil subsystem and the second to polymer mass conservation. In this way, the Jacobian matrices coming from the implicit formulation have a moderate size and preserve solver efficiency. Nevertheless, the coupling between the black-oil subsystem and the polymer is not fully resolved. For comparison of efficiency and accuracy, we propose an explicit scheme for the polymer, for which large time steps are prohibited by its CFL (Courant-Friedrichs-Lewy) criterion and which consequently approximates the coupling accurately. Numerical experiments with polymer are simulated: a core flood, a 5-spot reservoir with surfactant and ions, and a 3D real case. Comparisons are performed between the explicit and implicit polymer schemes. They prove that our implicit polymer scheme is efficient, robust and resolves the coupled physics accurately. The development and the simulations have been performed with the software PumaFlow [PumaFlow (2013) Reference manual, release V600, Beicip Franlab].
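The two-stage fractional-step idea (resolve one subsystem, then the other, within each time step) can be sketched generically as first-order Lie splitting; the decay sub-steps below are toy stand-ins, not the black-oil or polymer solvers:

```python
import numpy as np

def fractional_step(u, dt, n_steps, step_a, step_b):
    """First-order (Lie) fractional-step integration: within each time step,
    advance the state with sub-solver A (e.g. a black-oil-like subsystem),
    then with sub-solver B (e.g. a transport equation)."""
    for _ in range(n_steps):
        u = step_a(u, dt)
        u = step_b(u, dt)
    return u

# Toy sub-steps: exact solves of du/dt = -u and du/dt = -2u.
decay_a = lambda u, dt: u * np.exp(-1.0 * dt)
decay_b = lambda u, dt: u * np.exp(-2.0 * dt)
```

For these commuting scalar flows the splitting is exact; for genuinely coupled subsystems it is only first-order accurate in dt, which is why the paper checks the implicit fractional-step result against an explicit scheme that resolves the coupling accurately.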
Kandouci, Chahinaz; Djebbari, Ali
2018-04-01
A new family of two-dimensional optical hybrid codes employing zero cross-correlation (ZCC) codes, constructed from the balanced incomplete block design (BIBD), as both time-spreading and wavelength-hopping patterns is used in this paper. The obtained codes have off-peak autocorrelation and cross-correlation values equal to zero and unity, respectively. The work in this paper is a computer experiment performed using the Optisystem 9.0 software program as a simulator to determine the performance limitations of the wavelength hopping/time spreading (WH/TS) OCDMA system. Five system parameters were considered in this work: the optical fiber length (transmission distance), the bitrate, the chip spacing and the transmitted power. This paper shows for which system parameters sufficient performance (BER ≤ 10^-9, Q ≥ 6) can be sustained.
Directory of Open Access Journals (Sweden)
Jin Wang
2017-03-01
This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both modeling and fault estimation; the validity and effectiveness of the method are verified by a series of comparisons of numerical simulation results.
International Nuclear Information System (INIS)
Khanzadeh, Alireza; Pourgholi, Mahdi
2016-01-01
In conventional chaos synchronization methods, the time at which two chaotic systems become synchronized is usually unknown and depends on the initial conditions. In this work, based on Lyapunov stability theory, a sliding mode controller with time-varying switching surfaces is proposed to achieve chaos synchronization at a pre-specified time for the first time. The proposed controller is able to synchronize chaotic systems precisely at any desired time. Moreover, by choosing the time-varying switching surfaces so that the reaching phase is eliminated, the synchronization becomes robust to uncertainties and exogenous disturbances. Simulation results are presented to show the effectiveness of the proposed method in stabilizing and synchronizing chaotic systems with complete robustness to uncertainty and disturbances exactly at a pre-specified time. (paper)
Directory of Open Access Journals (Sweden)
Kenneth R. Gundle
2016-04-01
Introduction: Orthopaedic surgery is one of the first seven specialties that began collecting Milestone data as part of the Accreditation Council for Graduate Medical Education's Next Accreditation System (NAS) rollout. This transition from process-based advancement to outcome-based education is an opportunity to assess resident and faculty understanding of changing paradigms, and opinions about technical skill evaluation. Methods: In a large academic orthopaedic surgery residency program, residents and faculty were anonymously surveyed. A total of 31/32 (97%) residents and 29/53 (55%) faculty responded to Likert scale assessments and provided open-ended responses. An internal end-of-rotation audit was conducted to assess timeliness of evaluations. A mixed-method analysis was utilized, with nonparametric statistical testing and a constant-comparative qualitative method. Results: There was greater familiarity with the six core competencies than with Milestones or the NAS (p < 0.05). A majority of faculty and residents felt that end-of-rotation evaluations were not adequate for surgical skills feedback. Fifty-eight per cent of residents reported that end-of-rotation evaluations were rarely or never filled out in a timely fashion. An internal audit demonstrated that more than 30% of evaluations were completed over a month after rotation end. Qualitative analysis included themes of resident desire for more face-to-face feedback on technical skills after operative cases, and several barriers to more frequent feedback. Discussion: The NAS and outcome-based education have arrived. Residents and faculty need to be educated on this changing paradigm. This transition period is also a window of opportunity to address methods of evaluation and feedback. In our orthopaedic residency, trainees were significantly less satisfied than faculty with the amount of technical and surgical skills feedback being provided to trainees. The quantitative and qualitative analyses
Directory of Open Access Journals (Sweden)
Muqaddas Naz
2018-02-01
With the emergence of automated environments, energy demand by consumers is increasing rapidly. More than 80% of total electricity is consumed in the residential sector. This brings the challenging task of maintaining the balance between demand and generation of electric power. To meet such challenges, the traditional grid is renovated by integrating two-way communication between the consumer and the generation unit. To reduce electricity cost and peak load demand, demand side management (DSM) is modeled as an optimization problem, and the solution is obtained by applying meta-heuristic techniques with different pricing schemes. In this paper, an optimization technique, the hybrid gray wolf differential evolution (HGWDE), is proposed by merging the enhanced differential evolution (EDE) and gray wolf optimization (GWO) schemes using real-time pricing (RTP) and critical peak pricing (CPP). Load shifting is performed from on-peak to off-peak hours depending on the electricity cost defined by the utility. However, there is a trade-off between user comfort and cost. To validate the performance of the proposed algorithm, simulations have been carried out in MATLAB. Results illustrate that using RTP, the peak-to-average ratio (PAR) is reduced to 53.02%, 29.02% and 26.55%, while the electricity bill is reduced to 12.81%, 12.012% and 12.95%, respectively, for the 15-, 30- and 60-min operational time interval (OTI). On the other hand, the PAR and electricity bill are reduced to 47.27%, 22.91%, 22% and 13.04%, 12%, 11.11% using the CPP tariff.
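For illustration, the grey wolf optimization component can be sketched in its standard textbook form (a minimal minimizer, not the paper's HGWDE hybrid; all names and parameters here are assumptions):

```python
import numpy as np

def gwo(objective, lo, hi, n_wolves=20, n_iter=200, seed=0):
    """Minimal grey wolf optimizer (minimization). Candidates move toward the
    three best wolves (alpha, beta, delta) under a coefficient 'a' that
    decays linearly from 2 (exploration) to 0 (exploitation)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, (n_wolves, lo.size))
    best_x, best_f = None, np.inf
    for t in range(n_iter):
        f = np.array([objective(x) for x in X])
        order = np.argsort(f)
        if f[order[0]] < best_f:
            best_f, best_x = f[order[0]], X[order[0]].copy()
        leaders = X[order[:3]]                        # alpha, beta, delta
        a = 2.0 * (1.0 - t / n_iter)
        candidate = np.zeros_like(X)
        for L in leaders:
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2.0 * a * r1 - a
            D = np.abs(2.0 * r2 * L - X)              # distance to the leader
            candidate += L - A * D
        X = np.clip(candidate / 3.0, lo, hi)          # average of the 3 pulls
    return best_x, best_f
```

HGWDE as described additionally merges EDE-style mutation into this update; that hybrid step is omitted here.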
de Koning, S.; Kaal, E.; Janssen, H.-G.; van Platerink, C.; Brinkman, U.A.Th.
2008-01-01
The feasibility of a versatile system for multi-step direct thermal desorption (DTD) coupled to comprehensive gas chromatography (GC × GC) with time-of-flight mass spectrometric (TOF-MS) detection is studied. As an application the system is used for the characterization of fresh versus aged olive
Kvelde, T.; Pijnappels, M.A.G.M.; Delbaere, K.; Close, J.C.; Lord, S.R.
2010-01-01
Background. The aim of the study was to use path analysis to test a theoretical model proposing that the relationship between self-reported depressed mood and choice stepping reaction time (CSRT) is mediated by psychoactive medication use, physiological performance, and cognitive ability.A total of
Boris push with spatial stepping
International Nuclear Information System (INIS)
Penn, G; Stoltz, P H; Cary, J R; Wurtele, J
2003-01-01
The Boris push is commonly used in plasma physics simulations because of its speed and stability. It is second-order accurate, requires only one field evaluation per time step, and has good conservation properties. However, for accelerator simulations it is convenient to propagate particles in z down a changing beamline. A 'spatial Boris push' algorithm has been developed which is similar to the Boris push but uses a spatial coordinate as the independent variable, instead of time. This scheme is compared to the fourth-order Runge-Kutta algorithm, for two simplified muon beam lattices: a uniform solenoid field, and a 'FOFO' lattice where the solenoid field varies sinusoidally along the axis. Examination of the canonical angular momentum, which should be conserved in axisymmetric systems, shows that the spatial Boris push improves accuracy over long distances
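For reference, the classical (temporal) Boris push that the spatial variant mirrors advances the velocity with a half electric kick, a magnetic rotation, and a second half kick:

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One step of the classical (temporal) Boris push: half electric kick,
    rotation about B, second half kick, then a position update. The rotation
    preserves |v| exactly when E = 0, one of the good conservation
    properties noted above."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t_vec = 0.5 * q_over_m * B * dt
    s_vec = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
    v_prime = v_minus + np.cross(v_minus, t_vec)
    v_plus = v_minus + np.cross(v_prime, s_vec)
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new
```

The spatial variant replaces the time step dt with a step in z; the structure of kick, rotation, kick is what carries over and gives the improved long-distance conservation reported for the canonical angular momentum.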
Energy Technology Data Exchange (ETDEWEB)
Oh, Suhk Kun [Chungbuk National University, Chungbuk (Korea, Republic of)
2006-01-15
As an extension of our previous work on the relationship between time in Monte Carlo simulation and time in the continuous master equation in the infinite-range Glauber kinetic Ising model in the absence of a magnetic field, we explored the same model in the presence of a static magnetic field. Monte Carlo steps per spin as time in the MC simulations again turns out to be proportional to time in the master equation for the model in relatively large static magnetic fields at any temperature. At and near the critical point in a relatively small magnetic field, the model exhibits a significant finite-size dependence, and the solution to the Suzuki-Kubo differential equation stemming from the master equation needs to be re-scaled to fit the Monte Carlo steps per spin for systems with different numbers of spins.
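A minimal sketch of Glauber (heat-bath) dynamics for the infinite-range Ising model in a static field, with one Monte Carlo step per spin (MCS) as the time unit (an illustration under an assumed J/N mean-field coupling, not the authors' code):

```python
import numpy as np

def glauber_mc(n_spins, beta, h, n_mcs, seed=0):
    """Glauber (heat-bath) dynamics for the infinite-range Ising model with
    coupling J/N (J = 1) and static field h; one Monte Carlo step per spin
    (MCS) = N attempted flips. Returns the magnetization after each MCS."""
    rng = np.random.default_rng(seed)
    s = np.ones(n_spins)                            # start fully magnetized
    m_traj = np.empty(n_mcs)
    for step in range(n_mcs):
        for _ in range(n_spins):
            i = rng.integers(n_spins)
            local = s.mean() - s[i] / n_spins + h   # field from all other spins
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local))
            s[i] = 1.0 if rng.random() < p_up else -1.0
        m_traj[step] = s.mean()
    return m_traj
```

The corresponding mean-field (Suzuki-Kubo) rate equation is dm/dt = -m + tanh(β(m + h)); the abstract's point is that MCS time is proportional to master-equation time for sufficiently large h, while near criticality at small h the comparison acquires a finite-size rescaling.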
Qin, Youxiang; Zhang, Junjie
2017-07-10
A novel low-complexity and energy-efficient scheme that controls the toggle-rate of the ONU via time-domain amplitude identification is proposed for a heavily loaded downlink in an intensity-modulation and direct-detection orthogonal frequency division multiplexing passive optical network (IM-DD OFDM-PON). In a conventional OFDM-PON downlink, all ONUs have to demodulate all broadcast OFDM frames whether or not the frames are intended for them, which causes a huge energy waste. In our scheme, however, the optical network unit (ONU) logical link identifications (LLIDs) are inserted into each downlink OFDM frame in the time domain at the optical line terminal (OLT) side. At the ONU side, the LLID is obtained with a low-complexity, high-precision amplitude identification method. The ONU sets the toggle-rate of the demodulation module to zero when the frames are not intended for it, which avoids unnecessary digital signal processing (DSP) energy consumption. Compared with sleep-mode methods involving clock recovery and synchronization, toggle-rate control has the advantage of fast switching, which makes it more suitable for heavy-load scenarios. Moreover, for the first time to our knowledge, the characteristics of the proposed scheme are investigated in a real-time IM-DD OFDM system, which performs well at received optical powers as low as -21 dBm. The experimental results show that 25.1% of the receiver's energy consumption can be saved compared to the conventional configuration.
Tudor-Locke, Catrine; Craig, Cora L; Cameron, Christine; Griffiths, Joseph M
2011-01-01
Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents...
Additive operator-difference schemes: splitting schemes
Vabishchevich, Petr N
2013-01-01
Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also, regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy
A Novel Iris Segmentation Scheme
Directory of Open Access Journals (Sweden)
Chen-Chung Liu
2014-01-01
One of the key steps in an iris recognition system is the accurate segmentation of the iris from its surrounding noise, including the pupil, sclera, eyelashes, and eyebrows of a captured eye image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from the eye image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.
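The Delogne-Kåsa step reduces circle fitting to a linear least-squares problem: writing x² + y² = 2ax + 2by + c, the center (a, b) and radius follow from one `lstsq` solve. A generic sketch (not the paper's full pipeline):

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Delogne-Kåsa algebraic circle fit: rewrite (x-a)^2 + (y-b)^2 = r^2 as
    the linear model x^2 + y^2 = 2ax + 2by + c, solve by least squares, and
    recover r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)
```

Unlike a Hough transform, which votes over a 3-D accumulator of circle parameters, this closed-form fit is cheap and direct, which is why it suits the outlier-cleaned boundary points described above.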
DEFF Research Database (Denmark)
Nielsen, A. C. Y.; Bottiger, B.; Midgley, S. E.
2013-01-01
As the number of new enteroviruses and human parechoviruses seems ever growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. … The multiplex assay was validated by analysing the sensitivity and specificity of the assay compared to the respective monoplex assays, and good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel… testing and routine usage. During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool samples, cerebrospinal fluids, blood samples, respiratory samples and autopsy samples) from 1546 patients and detected enteroviruses and parechoviruses…
Wen, Baole; Chini, Gregory P.; Kerswell, Rich R.; Doering, Charles R.
2015-10-01
An alternative computational procedure for numerically solving a class of variational problems arising from rigorous upper-bound analysis of forced-dissipative infinite-dimensional nonlinear dynamical systems, including the Navier-Stokes and Oberbeck-Boussinesq equations, is analyzed and applied to Rayleigh-Bénard convection. A proof that the only steady state to which this numerical algorithm can converge is the required global optimum of the relevant variational problem is given for three canonical flow configurations. In contrast with most other numerical schemes for computing the optimal bounds on transported quantities (e.g., heat or momentum) within the "background field" variational framework, which employ variants of Newton's method and hence require very accurate initial iterates, the new computational method is easy to implement and, crucially, does not require numerical continuation. The algorithm is used to determine the optimal background-method bound on the heat transport enhancement factor, i.e., the Nusselt number (Nu), as a function of the Rayleigh number (Ra), Prandtl number (Pr), and domain aspect ratio L in two-dimensional Rayleigh-Bénard convection between stress-free isothermal boundaries (Rayleigh's original 1916 model of convection). The result of the computation is significant because analyses, laboratory experiments, and numerical simulations have suggested a range of exponents α and β in the presumed Nu ~ Pr^α Ra^β scaling relation. The computations clearly show that for Ra ≤ 10^10 at fixed L = 2√2, Nu ≤ 0.106 Pr^0 Ra^{5/12}, which indicates that molecular transport cannot generally be neglected in the "ultimate" high-Ra regime.
Time-Discrete Higher-Order ALE Formulations: Stability
Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.
2013-01-01
on the stability of the PDE but may influence that of a discrete scheme. We examine this critical issue for higher-order time stepping without space discretization. We propose time-discrete discontinuous Galerkin (dG) numerical schemes of any order for a time
STeP: A Tool for the Development of Provably Correct Reactive and Real-Time Systems
National Research Council Canada - National Science Library
Manna, Zohar
1999-01-01
This research is directed towards the implementation of a comprehensive toolkit for the development and verification of high assurance reactive systems, especially concurrent, real time, and hybrid systems...
A second-order iterative implicit-explicit hybrid scheme for hyperbolic systems of conservation laws
International Nuclear Information System (INIS)
Dai, Wenlong; Woodward, P.R.
1996-01-01
An iterative implicit-explicit hybrid scheme is proposed for hyperbolic systems of conservation laws. Each wave in a system may be treated implicitly, explicitly, or partially implicitly and partially explicitly, depending on its associated Courant number in each numerical cell, and the scheme is able to switch smoothly between implicit and explicit calculations. The scheme is of Godunov type in both the explicit and implicit regimes, is in strict conservation form, and is accurate to second order in both space and time for all Courant numbers. The computer code for the scheme is easy to vectorize. The multicolor ordering proposed in this paper may reduce the number of iterations required to reach a converged solution by several orders of magnitude for a large time step. The features of the scheme are shown through numerical examples. 38 refs., 12 figs
Resonance ionization scheme development for europium
Energy Technology Data Exchange (ETDEWEB)
Chrysalidis, K., E-mail: katerina.chrysalidis@cern.ch; Goodacre, T. Day; Fedosseev, V. N.; Marsh, B. A. [CERN (Switzerland); Naubereit, P. [Johannes Gutenberg-Universität, Institiut für Physik (Germany); Rothe, S.; Seiffert, C. [CERN (Switzerland); Kron, T.; Wendt, K. [Johannes Gutenberg-Universität, Institiut für Physik (Germany)
2017-11-15
Odd-parity autoionizing states of europium have been investigated by resonance ionization spectroscopy via two-step, two-resonance excitations. The aim of this work was to establish ionization schemes specifically suited for europium ion beam production using the ISOLDE Resonance Ionization Laser Ion Source (RILIS). 13 new RILIS-compatible ionization schemes are proposed. The scheme development was the first application of the Photo Ionization Spectroscopy Apparatus (PISA) which has recently been integrated into the RILIS setup.
Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.
2018-01-01
High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind-scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and provide high-order accuracy.
Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore further advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps: a full time step of a spatially high order non-dissipative base scheme, followed by an adaptive nonlinear filter containing shock-capturing dissipation. A good property of the filter scheme is that the base scheme and the filter are stand-alone modules in the design. Therefore, the idea of designing a well-balanced filter scheme is straightforward, i.e., choosing a well-balanced base scheme with a well-balanced filter (both with high order). A typical class of these schemes shown in this paper is the high order central difference/predictor-corrector (PC) schemes with a high order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of both filter methods and well-balanced schemes: it can preserve certain steady state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
Nielsen, Alex Christian Yde; Böttiger, Blenda; Midgley, Sofie Elisabeth; Nielsen, Lars Peter
2013-11-01
As the number of known enteroviruses and human parechoviruses keeps growing, there is a clear need for updated diagnostics. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. The multiplex assay was validated by analysing its sensitivity and specificity compared to the respective monoplex assays, and a good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel testing and routine usage. During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool samples, cerebrospinal fluids, blood samples, respiratory samples and autopsy samples) from 1546 patients, and detected enteroviruses and parechoviruses in 171 (8%) and 66 (3%) of the samples, respectively. 180 of the positive samples could be genotyped by PCR and sequencing, and the most common genotypes found were human parechovirus type 3, echovirus 9, enterovirus 71, Coxsackievirus A16, and echovirus 25. During 2009 in Denmark, both enterovirus and human parechovirus type 3 had a similar seasonal pattern, with a peak during the summer and autumn. Human parechovirus type 3 was almost invariably found in children less than 4 months of age. In conclusion, a multiplex assay was developed allowing simultaneous detection of 2 viruses, which can cause similar clinical symptoms. Copyright © 2013 Elsevier B.V. All rights reserved.
On some Approximation Schemes for Steady Compressible Viscous Flow
Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.
This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes-like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12], to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special "effective pressure" in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence, and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2], in an ongoing investigation.
Effect of different air-drying time on the microleakage of single-step self-etch adhesives
Moosavi, Horieh; Forghani, Maryam; Managhebi, Esmatsadat
2013-01-01
Objectives: This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods: Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: OptiBond All-in-One (OBAO), Clearfil S3 Bond (CSB), and Bond Force (BF). Each main group was divided into three subgroups regarding the air-drying time: without application of air stream...
Directory of Open Access Journals (Sweden)
Paula M Frew
2010-09-01
Paula M Frew, Mark J Mulligan, Su-I Hou, Kayshin Chan, Carlos del Rio (Emory University School of Medicine; Emory Center for AIDS Research; The Hope Clinic of the Emory Vaccine Center; Rollins School of Public Health, Emory University; College of Public Health, University of Georgia). Objective: This study examines whether men who have sex with men (MSM) and transgender (TG) persons' attitudes, beliefs, and risk perceptions toward human immunodeficiency virus (HIV) vaccine research have been altered as a result of the negative findings from a phase 2B HIV vaccine study. Design: We conducted a cross-sectional survey among MSM and TG persons (N = 176) recruited from community settings in Atlanta from 2007 to 2008. The first group was recruited during an active phase 2B HIV vaccine trial in which a candidate vaccine was being evaluated (the "Step Study"), and the second group was recruited after product futility was widely reported in the media. Methods: Descriptive statistics, t tests, and chi-square tests were conducted to ascertain differences between the groups, and ordinal logistic regressions examined the influences of the above-mentioned factors on a critical outcome, future HIV vaccine study participation. The ordinal regression outcomes evaluated the influences on disinclination, neutrality, and inclination to study participation. Results: Behavioral outcomes such as future recruitment, event attendance, study promotion, and community mobilization did not reveal any differences in participants' intentions between the groups. However, we observed
Implicit time accurate simulation of unsteady flow
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
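The idea of solving the nonlinear equations of an implicit step by marching a pseudo-time derivative to steady state can be sketched on a scalar ODE. This minimal Python example uses Crank-Nicolson with a plain explicit pseudo-time relaxation in place of the paper's quasi-Newton/Gauss-Seidel machinery; the function name and relaxation parameter are our own assumptions.

```python
import math

def crank_nicolson_dual_time(u0, f, dt, nsteps, dtau=0.5, tol=1e-12, max_inner=1000):
    """Crank-Nicolson for u' = f(u); each step's nonlinear equation is driven
    to steady state in a pseudo-time (dual time stepping)."""
    u = u0
    for _ in range(nsteps):
        un, fn, v = u, f(u), u
        for _ in range(max_inner):
            r = v - un - 0.5 * dt * (fn + f(v))  # residual of the implicit step
            if abs(r) < tol:
                break
            v -= dtau * r                        # explicit pseudo-time update
        u = v
    return u
```

For u' = -u the scheme reproduces the exponential decay to second-order accuracy in dt, independent of how many pseudo-time iterations each physical step needs.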
Selectively strippable paint schemes
Stein, R.; Thumm, D.; Blackford, Roger W.
1993-03-01
In order to meet the requirements of more environmentally acceptable paint stripping processes, many different removal methods are under evaluation. These new processes can be divided into mechanical and chemical methods. ICI has developed a paint scheme with an intermediate coat and a fluid-resistant polyurethane topcoat which can be stripped chemically in a short period of time with methylene-chloride-free and phenol-free paint strippers.
Simple Numerical Schemes for the Korteweg-deVries Equation
International Nuclear Information System (INIS)
McKinstrie, C. J.; Kozlov, M.V.
2000-01-01
Two numerical schemes, which simulate the propagation of dispersive non-linear waves, are described. The first is a split-step Fourier scheme for the Korteweg-de Vries (KdV) equation. The second is a finite-difference scheme for the modified KdV equation. The stability and accuracy of both schemes are discussed. These simple schemes can be used to study a wide variety of physical processes that involve dispersive nonlinear waves
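A split-step Fourier scheme of the kind described can be written in a few lines: the dispersive part of the KdV equation is advanced exactly in Fourier space, and the nonlinear part in physical space. The sketch below, with our own choice of a forward-Euler nonlinear substep and no dealiasing, is an illustration rather than the authors' exact discretization.

```python
import numpy as np

def kdv_split_step(u0, L, dt, nsteps):
    """Split-step Fourier solver for the KdV equation u_t + 6 u u_x + u_xxx = 0
    on a periodic domain [0, L)."""
    n = len(u0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    lin = np.exp(1j * k ** 3 * dt)        # exact solution factor of u_t + u_xxx = 0
    u = u0.astype(float).copy()
    for _ in range(nsteps):
        u = np.real(np.fft.ifft(lin * np.fft.fft(u)))       # linear substep
        ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))   # spectral derivative
        u = u - 6.0 * dt * u * ux                           # nonlinear substep
    return u
```

A standard check is the soliton u(x, 0) = (c/2) sech²(√c (x - x₀)/2), which should translate at speed c with its mass and amplitude preserved.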
Simple Numerical Schemes for the Korteweg-deVries Equation
Energy Technology Data Exchange (ETDEWEB)
C. J. McKinstrie; M. V. Kozlov
2000-12-01
Two numerical schemes, which simulate the propagation of dispersive non-linear waves, are described. The first is a split-step Fourier scheme for the Korteweg-de Vries (KdV) equation. The second is a finite-difference scheme for the modified KdV equation. The stability and accuracy of both schemes are discussed. These simple schemes can be used to study a wide variety of physical processes that involve dispersive nonlinear waves.
Elliott, Mark A; du Bois, Naomi
2017-01-01
From the point of view of the cognitive dynamicist the organization of brain circuitry into assemblies defined by their synchrony at particular (and precise) oscillation frequencies is important for the correct correlation of all independent cortical responses to the different aspects of a given complex thought or object. From the point of view of anyone operating complex mechanical systems, i.e., those comprising independent components that are required to interact precisely in time, it follows that the precise timing of such a system is essential - not only essential but measurable, and scalable. It must also be reliable over observations to bring about consistent behavior, whatever that behavior is. The catastrophic consequence of an absence of such precision, for instance that required to govern the interference engine in many automobiles, is indicative of how important timing is for the function of dynamical systems at all levels of operation. The dynamics and temporal considerations combined indicate that it is necessary to consider the operating characteristic of any dynamical, cognitive brain system in terms, superficially at least, of oscillation frequencies. These may, themselves, be forensic of an underlying time-related taxonomy. Currently there are only two sets of relevant and necessarily systematic observations in this field: one of these reports the precise dynamical structure of the perceptual systems engaged in dynamical binding across form and time; the second, derived both empirically from perceptual performance data, as well as obtained from theoretical models, demonstrates a timing taxonomy related to a fundamental operator referred to as the time quantum. In this contribution both sets of theory and observations are reviewed and compared for their predictive consistency. Conclusions about direct comparability are discussed for both theories of cognitive dynamics and time quantum models. Finally, a brief review of some experimental data
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-01-01
In the field of computer-aided mammographic mass detection, many different features and classifiers have been tested. Frequently, the relevant features and optimal topology for the artificial neural network (ANN)-based approaches at the classification stage are unknown, and thus determined by trial-and-error experiments. In this study, we analyzed a classifier that evolves ANNs using genetic algorithms (GAs), which combines feature selection with the learning task. The classifier named "Phased Searching with NEAT in a Time-Scaled Framework" was analyzed using a dataset with 800 malignant and 800 normal tissue regions in a 10-fold cross-validation framework. The classification performance measured by the area under a receiver operating characteristic (ROC) curve was 0.856 ± 0.029. The result was also compared with four other well-established classifiers that include fixed-topology ANNs, support vector machines (SVMs), linear discriminant analysis (LDA), and bagged decision trees. The results show that Phased Searching outperformed the LDA and bagged decision tree classifiers, and was only significantly outperformed by SVM. Furthermore, the Phased Searching method required fewer features and discarded superfluous structure or topology, thus incurring lower feature-computation and training/validation time requirements. Analyses performed on the network complexities evolved by Phased Searching indicate that it can evolve optimal network topologies based on its complexification and simplification parameter selection process. From the results, the study also concluded that three classifiers - SVM, fixed-topology ANN, and Phased Searching with NeuroEvolution of Augmenting Topologies (NEAT) in a Time-Scaled Framework - perform comparably well in our mammographic mass detection scheme.
Directory of Open Access Journals (Sweden)
Gaëlle Aeby
2014-06-01
Divorce and remarriage usually imply a redefinition of family boundaries, with consequences for the production and availability of social capital. This research shows that bonding and bridging social capitals are differentially made available by families. It first hypothesizes that bridging social capital is more likely to be developed in stepfamilies, and bonding social capital in first-time families. Second, the boundaries of family configurations are expected to vary within stepfamilies and within first-time families, creating a diversity of family configurations within both structures. Third, in both cases, social capital is expected to depend on the ways in which family boundaries are set up by individuals by including or excluding ex-partners, the new partner's children, siblings, and other family ties. The study is based on a sample of 300 female respondents who have at least one child of their own between 5 and 13 years, 150 from a stepfamily structure and 150 from a first-time family structure. Social capital is empirically operationalized as perceived emotional support in family networks. The results show that individuals in first-time families more often develop bonding social capital and individuals in stepfamilies bridging social capital. In both cases, however, individuals in family configurations based on close blood and conjugal ties more frequently develop bonding social capital, whereas individuals in family configurations based on in-law, stepfamily or friendship ties are more likely to develop bridging social capital.
Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A
2017-01-01
Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, and (b) optimization of the PCR against the various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared, and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second step (species-specific PCR) the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources including bacteria, viruses, yeasts, other Aspergillus species, other fungal species and human DNA, and there were no false-positive reactions. The analytical sensitivity of the PCR was found to be 10^2 CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.
A multigrid algorithm for the cell-centered finite difference scheme
Ewing, Richard E.; Shen, Jian
1993-01-01
In this article, we discuss a non-variational V-cycle multigrid algorithm based on the cell-centered finite difference scheme for solving a second-order elliptic problem with discontinuous coefficients. Due to the poor approximation property of piecewise constant spaces and the non-variational nature of our scheme, one step of symmetric linear smoothing in our V-cycle multigrid scheme may fail to be a contraction. Again, because of the simple structure of the piecewise constant spaces, prolongation and restriction are trivial; this saves significant computation time, and the computational results are very promising.
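The flavor of such a non-variational, cell-centered V-cycle can be conveyed with a one-dimensional Poisson sketch: the coarse operator is re-discretized rather than formed variationally, and the intergrid transfers are the trivial piecewise-constant ones. This is our own illustrative toy (Gauss-Seidel smoothing, constant coefficients), not the authors' 2D discontinuous-coefficient scheme.

```python
import numpy as np

def apply_A(u, h):
    """1D cell-centred Laplacian with u = 0 at both walls via ghost cells
    u[-1] = -u[0], u[n] = -u[n-1]."""
    up = np.concatenate(([-u[0]], u, [-u[-1]]))
    return (-up[:-2] + 2.0 * up[1:-1] - up[2:]) / h ** 2

def gauss_seidel(u, f, h, sweeps):
    n, h2 = len(u), h * h
    for _ in range(sweeps):
        for i in range(n):
            lo = u[i - 1] if i > 0 else 0.0
            hi = u[i + 1] if i < n - 1 else 0.0
            diag = 2.0 + (i == 0) + (i == n - 1)  # boundary cells see the ghost
            u[i] = (f[i] * h2 + lo + hi) / diag
    return u

def vcycle(u, f, h, nu=2):
    """Non-variational V-cycle: re-discretised coarse operator and
    piecewise-constant intergrid transfers."""
    u = gauss_seidel(u.copy(), f, h, nu)          # pre-smoothing
    if len(u) >= 4:
        r = f - apply_A(u, h)
        rc = 0.5 * (r[0::2] + r[1::2])            # restriction: cell averaging
        ec = vcycle(np.zeros(len(u) // 2), rc, 2.0 * h, nu)
        u = u + np.repeat(ec, 2)                  # prolongation: piecewise constant
    return gauss_seidel(u, f, h, nu)              # post-smoothing
```

Iterating the V-cycle on -u'' = π² sin(πx) drives the residual down by many orders of magnitude even though the transfers are only piecewise constant.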
Calatroni, Luca
2013-08-01
We present directional operator splitting schemes for the numerical solution of a fourth-order, nonlinear partial differential evolution equation which arises in image processing. This equation constitutes the H -1-gradient flow of the total variation and represents a prototype of higher-order equations of similar type which are popular in imaging for denoising, deblurring and inpainting problems. The efficient numerical solution of this equation is very challenging due to the stiffness of most numerical schemes. We show that the combination of directional splitting schemes with implicit time-stepping provides a stable and computationally cheap numerical realisation of the equation.
Calatroni, Luca; Düring, Bertram; Schönlieb, Carola-Bibiane
2013-01-01
We present directional operator splitting schemes for the numerical solution of a fourth-order, nonlinear partial differential evolution equation which arises in image processing. This equation constitutes the H -1-gradient flow of the total variation and represents a prototype of higher-order equations of similar type which are popular in imaging for denoising, deblurring and inpainting problems. The efficient numerical solution of this equation is very challenging due to the stiffness of most numerical schemes. We show that the combination of directional splitting schemes with implicit time-stepping provides a stable and computationally cheap numerical realisation of the equation.
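The benefit of directional splitting with implicit time-stepping is easiest to see on the classical model problem: a Peaceman-Rachford ADI step for the 2D heat equation, where each half step is implicit in one direction only, so only one-dimensional systems must be solved. This is a standard textbook illustration of the splitting idea, not the fourth-order H⁻¹ gradient-flow scheme of the paper.

```python
import numpy as np

def adi_heat_step(U, dt, h):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with homogeneous
    Dirichlet data on a uniform interior grid U[i, j] ~ u(x_i, y_j)."""
    n = U.shape[0]
    r = dt / (2.0 * h ** 2)
    T = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - r * T                    # implicit 1D operator (tridiagonal)
    B = np.eye(n) + r * T                    # explicit 1D operator
    U = np.linalg.solve(A, U @ B.T)          # half step: implicit in x, explicit in y
    U = np.linalg.solve(A, (B @ U).T).T      # half step: implicit in y, explicit in x
    return U
```

On the eigenmode sin(πx) sin(πy) the scheme reproduces the exact decay rate exp(-2π²t) to second-order accuracy while remaining unconditionally stable.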
Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part I: numerical scheme
Rõõm, Rein; Männik, Aarne; Luhamaa, Andres
2007-01-01
A two-time-level, semi-implicit, semi-Lagrangian (SISL) scheme is applied to the non-hydrostatic pressure coordinate equations, constituting a modified Miller–Pearce–White model, in a hybrid-coordinate framework. A neutral background is subtracted in the initial continuous dynamics, yielding modified equations for geopotential, temperature and logarithmic surface pressure fluctuation. Implicit Lagrangian marching formulae for a single time step are derived. A disclosure scheme is presented, which res...
Microsoft Office professional 2010 step by step
Cox, Joyce; Frye, Curtis
2011-01-01
Teach yourself exactly what you need to know about using Office Professional 2010, one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: create attractive documents, publications, and spreadsheets; manage your e-mail, calendar, meetings, and communications; put your business data to work; develop and deliver great presentations; organize your ideas and notes in one place; connect, share, and accom
Rouwet, Dmitri
2016-04-01
Tracking variations in the chemical composition, water temperature and pH of brines from peak-activity crater lakes is the most obvious way to forecast phreatic activity. Volcano monitoring intrinsically implies a time window of observation that should be synchronised with the kinetics of magmatic processes, such as degassing and magma intrusion. Deciphering "how long ago" a variation in degassing regime actually occurred before eventually being detected in a crater lake is key, and depends on the lake water residence time. The above reasoning assumes that gas is preserved as anions in the lake water (SO4, Cl, F anions), in other words, that scrubbing of acid gases is complete and irreversible. This is only partly true. Recent work has confirmed, by direct MultiGAS measurement from evaporative plumes, that even the strongest acid in liquid medium (i.e. SO2) degasses from hyper-acidic crater lakes. The weaker acid HCl has long been recognised as being more volatile than hydrophilic in extremely acidic solutions (pH near 0), through a long-term steady increase in SO4/Cl ratios in the vigorously evaporating crater lake of Poás volcano. We now know that acidic gases flush through hyper-acidic crater lake brines, but we do not know to what extent (completely or partially?) or at what speed. The chemical composition hence only reflects a transient phase of the gas flushing through the lake. In terms of volcanic surveillance this brings the advantage that the monitoring time window is definitely shorter than that defined by the water chemistry, but we do not yet know how much shorter. Empirical experiments by Capaccioni et al. (in press) have tried to tackle this kinetic problem for HCl degassing from a "lab-lake" on the short term (2 days). With this state of the art in mind, two new monitoring strategies can be proposed to seek precursory signals of phreatic eruptions from crater lakes: (1) Tracking variations in gas compositions, fluxes and ratios between species in
Getty, Stephanie A.; Brinckerhoff, William B.; Cornish, Timothy; Li, Xiang; Floyd, Melissa; Arevalo, Ricardo Jr.; Cook, Jamie Elsila; Callahan, Michael P.
2013-01-01
Laser desorption/ionization time-of-flight mass spectrometry (LD-TOF-MS) holds promise to be a low-mass, compact in situ analytical capability for future landed missions to planetary surfaces. The ability to analyze a solid sample for both mineralogical and preserved organic content with laser ionization could be compelling as part of a scientific mission payload that must be prepared for unanticipated discoveries. Targeted missions for this instrument capability include Mars, Europa, Enceladus, and small icy bodies, such as asteroids and comets.
International Nuclear Information System (INIS)
Busolo, F.; Conventi, L.; Grigolon, M.; Palu, G.
1991-01-01
The kinetics of [3H]-uridine uptake by murine peritoneal macrophages (pMφ) is altered early after exposure to a variety of stimuli. Alterations caused by Candida albicans, lipopolysaccharide (LPS) and recombinant interferon-gamma (rIFN-gamma) were similar in SAVO, C57BL/6, C3H/HeN and C3H/HeJ mice, and were not correlated with an activation process, as shown by the amount of tumor necrosis factor-alpha (TNF-alpha) being released. Short-time exposure to all stimuli resulted in an increased nucleoside uptake by SAVO pMφ, suggesting that the tumoricidal function of this cell depends either on the type of stimulus or on the time at which the specific interaction with the cell receptor takes place. Experiments with priming and triggering signals confirmed the above findings, indicating that the increase or decrease of nucleoside uptake into the cell depends essentially on the chemical nature of the priming stimulus. The triggering stimulus, on the other hand, is only able to amplify the primary response
Computational scheme for transient temperature distribution in PWR vessel wall
International Nuclear Information System (INIS)
Dedovic, S.; Ristic, P.
1980-01-01
The computer code TEMPNES is part of a joint effort made in Gosa Industries to develop techniques for the structural analysis of heavy pressure vessels. The transient heat conduction analysis is based on finite element discretization of the structure, a non-linear transient matrix formulation, and the step-by-step time integration scheme developed by Wilson. Convection boundary conditions and the effect of heat generation due to radioactive radiation are both considered. The computation of transient temperature distributions in the reactor vessel wall when the water temperature suddenly drops as a consequence of reactor cooling pump failure is presented. The vessel is treated as an axisymmetric body of revolution. The program has two time-increment options: a) a fixed predetermined increment, and b) an automatically optimized time increment for each step, dependent on the rate of change of the nodal temperatures. (author)
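The automatic time-increment option, in which the step is adapted to the rate of change of the nodal temperatures, can be sketched for a 1D heat conduction model with backward Euler. This is a hypothetical illustration of the idea (the function name, thresholds and growth factors are our own assumptions, not the TEMPNES implementation):

```python
import numpy as np

def backward_euler_heat_adaptive(u0, dt0, t_end, h, dmax=5.0, dmin=1.0):
    """Backward Euler for 1D transient heat conduction (u = 0 at both ends)
    with the time increment adapted to the rate of change of the nodal
    temperatures: shrink when the largest nodal change exceeds dmax,
    grow when it falls below dmin."""
    n = len(u0)
    T = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h ** 2
    u, t, dt = u0.astype(float).copy(), 0.0, dt0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        u_new = np.linalg.solve(np.eye(n) - dt * T, u)  # implicit, unconditionally stable
        change = np.abs(u_new - u).max()
        if change > dmax:            # temperatures changing too fast: halve and retry
            dt *= 0.5
            continue
        u, t = u_new, t + dt
        if change < dmin:            # slow transient: enlarge the next step
            dt *= 1.5
    return u
```

Starting from a small increment, the step grows automatically as the transient slows, while the implicit solve keeps the computed temperatures stable and non-negative.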
A Memory Efficient Network Encryption Scheme
El-Fotouh, Mohamed Abo; Diepold, Klaus
In this paper, we studied the two widely used encryption schemes in network applications. Shortcomings were found in both schemes, as they consume either more memory to gain high throughput or less memory at the cost of low throughput. As the number of internet users increases each day, the need has arisen for a scheme that has low memory requirements and at the same time possesses high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.
International Nuclear Information System (INIS)
Imam-Dahroni; Dwi-Herwidhi; NS, Kasilani
2000-01-01
The synthesis of matrix graphite at the baking step of the process was investigated, focusing on the influence of the time and flow-velocity variables of the inert gas. The investigation covered baking times ranging from 5 minutes to 55 minutes and inert gas velocities from 0.30 l/minute to 3.60 l/minute, which resulted in products of different matrix quality. Optimizing the operation time and the argon gas flow rate indicated that a baking time of 30 minutes with an argon flow rate of 2.60 l/minute gave the best matrix graphite, with a hardness of 11 kg/mm² and a ductility of 1800 Newton. (author)
Ficanha, Evandro M; Ribeiro, Guilherme A; Knop, Lauren; Rastgaar, Mo
2017-07-01
This paper describes the methods and experiment protocols for estimation of the human ankle impedance during turning and straight-line walking. The ankle impedance of two human subjects during the stance phase of walking in both dorsiflexion-plantarflexion (DP) and inversion-eversion (IE) was estimated. The impedance was estimated about 8 axes of rotation of the human ankle, combining different amounts of DP and IE rotations and differentiating between positive and negative rotations, at 5 instants of the stance length (SL): specifically, at 10%, 30%, 50%, 70% and 90% of the SL. The ankle impedance showed considerable variability across time and across the axes of rotation, with consistently larger stiffness and damping in DP than in IE. When comparing straight walking and turning, the main differences were in damping at 50%, 70%, and 90% of the SL, with an increase in damping about all axes of rotation during turning.
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid, and they play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing the relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second-order accuracy is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its general applicability to the given model equations. A two-dimensional simulation of a MESFET device is also performed with the KFVS method, producing results in good agreement with those obtained by the NT central scheme.
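The splitting idea described above can be illustrated on a much simpler problem. The following sketch is not the authors' KFVS scheme; it applies the same hyperbolic/relaxation splitting to a scalar toy equation u_t + a u_x = -(u - u_eq)/tau, with a first-order upwind transport step and a backward-Euler (hence stiff-stable, semi-implicit in spirit) relaxation step. All parameter values are illustrative.

```python
import numpy as np

def split_step(u, a, dx, dt, tau, u_eq):
    """One operator-splitting step: explicit upwind transport,
    then backward-Euler relaxation toward u_eq (stable for stiff tau)."""
    # hyperbolic step: first-order upwind, periodic boundaries (a > 0)
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # relaxation step, solved implicitly:
    # (u_new - u)/dt = -(u_new - u_eq)/tau  =>  closed-form update
    return (u + dt / tau * u_eq) / (1.0 + dt / tau)

# advect a smooth bump while relaxing it toward zero
nx, a, tau = 200, 1.0, 0.05
dx = 1.0 / nx
dt = 0.5 * dx / a                      # CFL limited by the transport step only
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)
for _ in range(100):
    u = split_step(u, a, dx, dt, tau, u_eq=0.0)
```

Because the relaxation step is solved implicitly, the time step is restricted only by the transport CFL condition, not by the relaxation time tau; this is the practical motivation for splitting in the stiff regime.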
A predictor-corrector scheme for solving the Volterra integral equation
Al Jarro, Ahmed
2011-08-01
The occurrence of late-time instabilities is a common problem of almost all time-marching methods developed for solving time-domain integral equations. Implicit marching algorithms are now considered stable, owing to the various techniques that have been developed for removing low- and high-frequency instabilities. On the other hand, literature on stabilizing explicit schemes, which might be considered more efficient since they do not require a matrix inversion at each time step, is practically non-existent. In this work, a stable but still explicit predictor-corrector scheme is proposed for solving the Volterra integral equation, and its efficacy is verified numerically. © 2011 IEEE.
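The predictor-corrector idea for a Volterra equation of the second kind can be sketched in a few lines. This is a generic illustration, not the authors' time-domain integral-equation solver: an explicit rectangle-rule predictor supplies the unknown endpoint value, and a trapezoidal corrector then closes the step without any matrix inversion. With f(t) = 1 and kernel K = 1 the exact solution is u(t) = exp(t).

```python
import numpy as np

def volterra_pc(f, K, t):
    """Predictor-corrector march for u(t) = f(t) + int_0^t K(t,s) u(s) ds
    on a uniform grid t, fully explicit (no matrix inversion per step)."""
    h = t[1] - t[0]
    u = np.empty_like(t)
    u[0] = f(t[0])
    for n in range(1, len(t)):
        w = K(t[n], t[:n]) * u[:n]               # kernel times known history
        pred = f(t[n]) + h * w.sum()             # predictor: rectangle rule
        hist = h * (0.5 * w[0] + w[1:].sum())    # trapezoid over the history
        u[n] = f(t[n]) + hist + 0.5 * h * K(t[n], t[n]) * pred  # corrector
    return u

t = np.linspace(0.0, 1.0, 201)
u = volterra_pc(lambda s: 1.0, lambda tn, s: np.ones_like(s), t)
```

The predictor is only first-order accurate, but since it enters the corrector with weight h/2, the overall march retains the second-order accuracy of the trapezoidal rule.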
Wierenga, Debbie; Engbers, Luuk H; van Empelen, Pepijn; Hildebrandt, Vincent H; van Mechelen, Willem
2012-08-07
Worksite health promotion programs (WHPPs) offer an attractive opportunity to improve the lifestyle of employees. Nevertheless, broad-scale and successful implementation of WHPPs in daily practice often fails. In the present study, called BRAVO@Work, a 7-step implementation strategy was used to develop, implement and embed a WHPP in two different worksites with a focus on multiple lifestyle interventions. This article describes the design and framework for the formative evaluation of this 7-step strategy under real-time conditions by an embedded scientist, with the purpose of gaining insight into whether this 7-step strategy is a useful and effective implementation strategy. Furthermore, we aim to gain insight into factors that either facilitate or hamper the implementation process, the quality of the implemented lifestyle interventions and the degree of adoption, implementation and continuation of these interventions. This study is a formative evaluation within two different worksites with an embedded scientist on site to continuously monitor the implementation process. Each worksite (i.e. a University of Applied Sciences and an Academic Hospital) will assign a participating faculty or a department to implement a WHPP focusing on lifestyle interventions using the 7-step strategy. The primary focus will be to describe the natural course of development, implementation and maintenance of a WHPP by studying [a] the use of and adherence to the 7-step strategy, [b] barriers and facilitators that influence the natural course of adoption, implementation and maintenance, and [c] the implementation process of the lifestyle interventions. All data will be collected using qualitative (i.e. real-time monitoring and semi-structured interviews) and quantitative methods (i.e. process evaluation questionnaires) applying data triangulation. Except for the real-time monitoring, the data collection will take place at baseline and after 6, 12 and 18 months. This is one of the few
A new parallelization algorithm of ocean model with explicit scheme
Fu, X. D.
2017-08-01
This paper focuses on the parallelization of an ocean model with an explicit scheme, one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of an explicit scheme is that the calculation is simple: the value at a given grid point depends only on values at the previous time step, which means that no sparse linear system needs to be solved when integrating the governing equations. Exploiting this characteristic, this paper designs a parallel algorithm, named halo cells update, that requires only tiny modifications of the original ocean model and leaves its space step and time step unchanged; the model is parallelized by adding a transmission module between sub-domains. The GRGO (Global Reduced Gravity Ocean) model is taken as an example to implement the parallelization with halo update. The results demonstrate that high speedup can be achieved at different problem sizes.
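The halo cells update described above can be sketched without MPI: two numpy arrays stand in for the local sub-domains, each padded with one ghost (halo) cell per side, and the "transmission module" is a pair of assignments that would be message exchanges in the real model. This is a generic illustration of the technique, not the GRGO code; the toy update is an explicit 1D diffusion step.

```python
import numpy as np

def step(u, r):
    """Explicit FTCS diffusion update on interior cells only;
    the RHS is evaluated before assignment (Jacobi-style)."""
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

N, r, steps = 16, 0.25, 10
serial = np.sin(2.0 * np.pi * np.arange(N) / N)
serial_work = np.r_[0.0, serial, 0.0]        # fixed zero outer boundaries

left = np.r_[0.0, serial[:N // 2], 0.0]      # local arrays with halo cells
right = np.r_[0.0, serial[N // 2:], 0.0]

for _ in range(steps):
    # halo exchange (in a real model: MPI send/recv between sub-domains)
    left[-1], right[0] = right[1], left[-2]
    step(left, r)
    step(right, r)
    step(serial_work, r)                     # serial reference run

parallel = np.r_[left[1:-1], right[1:-1]]
```

Because the halos are refreshed before every step and the update stencil only reaches one cell, the decomposed run reproduces the serial result exactly, which is the correctness property a halo-update parallelization must preserve.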
Directory of Open Access Journals (Sweden)
Vanessa Suin
2014-01-01
Full Text Available A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤ 1 TCID50 (50% tissue culture infectious dose). The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible.
Norlelawati, A T; Mohd Danial, G; Nora, H; Nadia, O; Zatur Rawihah, K; Nor Zamzila, A; Naznin, M
2016-04-01
Synovial sarcoma (SS) is a rare cancer, accounting for 5-10% of adult soft tissue sarcomas. Making an accurate diagnosis is difficult due to the overlapping histological features of SS with other types of sarcomas and the non-specific immunohistochemistry profile findings. Molecular testing is thus considered necessary to confirm the diagnosis, since more than 90% of SS cases carry the transcript of t(X;18)(p11.2;q11.2). The purpose of this study is to diagnose SS at the molecular level by testing for t(X;18) fusion-transcript expression through one-step reverse transcriptase real-time polymerase chain reaction (PCR). Formalin-fixed paraffin-embedded tissue blocks of 23 cases of soft tissue sarcomas, which included 5 cases reported with SS as the primary diagnosis and 8 with SS as a differential diagnosis, were retrieved from the Department of Pathology, Tengku Ampuan Afzan Hospital, Kuantan, Pahang. RNA was purified from the tissue block sections and then subjected to one-step reverse transcriptase real-time PCR using sequence-specific hydrolysis probes for simultaneous detection of either the SYT-SSX1 or the SYT-SSX2 fusion transcript. Of the 23 cases, 4 were found to be positive for the SYT-SSX fusion transcript; 2 of these had been diagnosed as SS, whereas in the 2 other cases SS was the differential diagnosis. Three cases were excluded due to failure of both the SYT-SSX and the control β-2-microglobulin amplification assays. The remaining 16 cases were negative for the fusion transcript. This study has shown that the application of one-step reverse transcriptase real-time PCR for the detection of the SYT-SSX transcript is feasible as an aid in confirming the diagnosis of synovial sarcoma.
Tsujimoto, A; Barkmeier, W W; Takamizawa, T; Latta, M A; Miyazaki, M
2016-01-01
The purpose of this study was to evaluate the effect of phosphoric acid pre-etching times on shear bond strength (SBS) and surface free energy (SFE) with single-step self-etch adhesives. The three single-step self-etch adhesives used were: 1) Scotchbond Universal Adhesive (3M ESPE), 2) Clearfil tri-S Bond (Kuraray Noritake Dental), and 3) G-Bond Plus (GC). Two no pre-etching groups, 1) untreated enamel and 2) enamel surfaces after ultrasonic cleaning with distilled water for 30 seconds to remove the smear layer, were prepared. There were four pre-etching groups: 1) enamel surfaces were pre-etched with phosphoric acid (Etchant, 3M ESPE) for 3 seconds, 2) enamel surfaces were pre-etched for 5 seconds, 3) enamel surfaces were pre-etched for 10 seconds, and 4) enamel surfaces were pre-etched for 15 seconds. Resin composite was bonded to the treated enamel surface to determine SBS. The SFEs of treated enamel surfaces were determined by measuring the contact angles of three test liquids. Scanning electron microscopy was used to examine the enamel surfaces and enamel-adhesive interface. The specimens with phosphoric acid pre-etching showed significantly higher SBS and SFEs than the specimens without phosphoric acid pre-etching regardless of the adhesive system used. SBS and SFEs did not increase for phosphoric acid pre-etching times over 3 seconds. There were no significant differences in SBS and SFEs between the specimens with and without a smear layer. The data suggest that phosphoric acid pre-etching of ground enamel improves the bonding performance of single-step self-etch adhesives, but these bonding properties do not increase for phosphoric acid pre-etching times over 3 seconds.
Jothiprakash, V.; Magar, R. B.
2012-07-01
In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various time-series, cause-effect and combined models are developed with lumped and distributed input data, and model performance is evaluated using various performance criteria. The results show that the LGP models are superior to the ANN and ANFIS models, especially in predicting peak inflows at both daily and hourly time steps. A detailed comparison of overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, which can be attributed to the reduced noise in the data as well as to the techniques and their training approach, the appropriate selection of network architecture, the required inputs, and the training-testing ratios of the data set. The slightly poorer performance with distributed data is due to larger variations and a smaller number of observed values.
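The time-series, cause-effect and combined model inputs described above all reduce to building a matrix of lagged observations. The sketch below is an illustration of that preprocessing step, not the study's actual pipeline; the `lag_matrix` helper, the toy rainfall/inflow values and the lag count of 3 are all assumptions made for the example.

```python
import numpy as np

def lag_matrix(series, lags):
    """Stack lagged copies of each input series into a design matrix.
    Row i holds [x(t-1), ..., x(t-lags)] for every series, where
    t = lags + i; the matching target is the next-step value."""
    n = len(series[0])
    cols = [s[lags - k - 1:n - k - 1] for s in series for k in range(lags)]
    return np.column_stack(cols)

rain = np.arange(10.0)                 # toy daily rainfall record
inflow = np.arange(10.0) * 2.0         # toy daily inflow record
X = lag_matrix([rain, inflow], lags=3) # combined (cause-effect) inputs
y = inflow[3:]                         # one-step-ahead inflow target
```

The same matrix feeds any of the model families mentioned (ANN, ANFIS, LGP); a multi-time-step-ahead model simply pairs the rows with a target shifted further into the future.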
Chen, Jingfang; Zhang, Rusheng; Ou, Xinhua; Yao, Dong; Huang, Zheng; Li, Linzhi; Sun, Biancheng
2017-06-01
A TaqMan based duplex one-step real time RT-PCR (rRT-PCR) assay was developed for the rapid detection of Coxsackievirus A10 (CV-A10) and other enterovirus (EVs) in clinical samples. The assay was fully evaluated and found to be specific and sensitive. When applied in 115 clinical samples, a 100% diagnostic sensitivity in CV-A10 detection and 97.4% diagnostic sensitivity in other EVs were found. Copyright © 2017 Elsevier Ltd. All rights reserved.
Construction of low dissipative high-order well-balanced filter schemes for non-equilibrium flows
International Nuclear Information System (INIS)
Wang Wei; Yee, H.C.; Sjoegreen, Bjoern; Magin, Thierry; Shu, Chi-Wang
2011-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. (2009) to a class of low dissipative high-order shock-capturing filter schemes and to explore more advantages of well-balanced schemes in reacting flows. More general 1D and 2D reacting flow models and new examples of shock turbulence interactions are provided to demonstrate the advantage of well-balanced schemes. The class of filter schemes developed by Yee et al. (1999), Sjoegreen and Yee (2004) and Yee and Sjoegreen (2007) consists of two steps: a full time step of a spatially high-order non-dissipative base scheme and an adaptive non-linear filter containing shock-capturing dissipation. A good property of the filter scheme is that the base scheme and the filter are stand-alone modules in the design. Therefore, the idea of designing a well-balanced filter scheme is straightforward, i.e. choosing a well-balanced base scheme with a well-balanced filter (both with high-order accuracy). A typical class of these schemes shown in this paper is the high-order central difference/predictor-corrector (PC) schemes with a high-order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of filter methods and well-balanced schemes: it can preserve certain steady-state solutions exactly; it is able to capture small perturbations, e.g. turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
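The two-step structure (non-dissipative base scheme, then a stand-alone nonlinear filter) can be sketched on 1D linear advection. This is a toy analogue, not the paper's WENO-filtered scheme: the base step is a MacCormack predictor-corrector, and the filter is a Jameson-type curvature sensor that switches on conservative second-difference dissipation only where the solution is non-smooth. The sensor form and the 0.25 coefficient are assumptions for the example.

```python
import numpy as np

def filtered_step(u, c):
    """One time step: non-dissipative base scheme, then a nonlinear filter."""
    # base scheme: MacCormack predictor-corrector (periodic), CFL number c
    up = u - c * (np.roll(u, -1) - u)                 # predictor: forward difference
    ub = 0.5 * (u + up - c * (up - np.roll(up, 1)))   # corrector: backward difference
    # filter: curvature sensor triggers conservative 2nd-difference dissipation
    d2 = np.abs(np.roll(ub, -1) - 2.0 * ub + np.roll(ub, 1))
    den = np.abs(np.roll(ub, -1)) + 2.0 * np.abs(ub) + np.abs(np.roll(ub, 1)) + 1e-12
    sensor = d2 / den                                 # ~0 in smooth regions
    eps = 0.25 * np.maximum(sensor, np.roll(sensor, -1))  # coefficient at i+1/2
    flux = eps * (np.roll(ub, -1) - ub)               # dissipative interface flux
    return ub + flux - np.roll(flux, 1)               # conservative update

c = 0.5
x = np.arange(100)
u = np.where((x > 20) & (x < 40), 1.0, 0.0)           # square wave
total0 = u.sum()
for _ in range(50):
    u = filtered_step(u, c)
```

Because both the base step and the filter are written in conservation form, the total is preserved exactly; and since the two stages are independent modules, either can be swapped (e.g. for a well-balanced base scheme and a well-balanced filter) without touching the other, which is the design freedom the paper exploits.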
How can conceptual schemes change teaching?
Wickman, Per-Olof
2012-03-01
Lundqvist, Almqvist and Östman describe a teacher's manner of teaching and the possible consequences it may have for students' meaning making. In doing this the article examines a teacher's classroom practice by systematizing the teacher's transactions with the students in terms of certain conceptual schemes, namely the epistemological moves, educational philosophies and the selective traditions of this practice. In connection to their study one may ask how conceptual schemes could change teaching. This article examines how the relationship of the conceptual schemes produced by educational researchers to educational praxis has developed from the middle of the last century to today. The relationship is described as having been transformed in three steps: (1) teacher deficit and social engineering, where conceptual schemes are little acknowledged, (2) reflecting practitioners, where conceptual schemes are mangled through teacher practice to aid the choices of already knowledgeable teachers, and (3) the mangling of the conceptual schemes by researchers through practice with the purpose of revising theory.
International Nuclear Information System (INIS)
Lee, T.V.; Rothstein, D.; Madey, R.
1986-01-01
The time-dependent concentration of a radioactive gas at the outlet of an adsorber bed for a step change in the input concentration is analyzed by the method of moments. This moment analysis yields analytical expressions for calculating the kinetic parameters of a gas adsorbed on a porous solid in terms of observables from a time-dependent transmission curve. Transmission is the ratio of the adsorbate outlet concentration to that at the inlet. The three nonequilibrium parameters are the longitudinal diffusion coefficient, the solid-phase diffusion coefficient, and the interfacial mass-transfer coefficient. Three quantities that can be extracted in principle from an experimental transmission curve are the equilibrium transmission, the average residence (or propagation) time, and the first moment relative to the propagation time. The propagation time for a radioactive gas is given by the time integral of one minus the transmission (expressed as a fraction of the steady-state transmission). The steady-state transmission, the propagation time, and the first moment are functions of the three kinetic parameters and the equilibrium adsorption capacity. The equilibrium adsorption capacity is extracted from an experimental transmission curve for a stable gaseous isotope. The three kinetic parameters can be obtained by solving the three analytical expressions simultaneously. No empirical correlations are required.
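The propagation-time integral defined above is easy to evaluate numerically. The sketch below uses a synthetic transmission curve T(t) = T_ss(1 - exp(-t/tau)), chosen only for illustration (it is not the paper's model), because for that curve the time integral of 1 - T/T_ss is exactly tau, so the result can be checked.

```python
import numpy as np

tau, T_ss = 12.0, 0.8                 # assumed time constant and steady-state level
t = np.linspace(0.0, 200.0, 4001)     # long enough for T(t) to plateau
T = T_ss * (1.0 - np.exp(-t / tau))   # synthetic transmission curve

# propagation time: integral of 1 - T/T_ss (trapezoidal rule)
f = 1.0 - T / T_ss
t_prop = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
```

With experimental data, T and t would come from the measured curve and T_ss from its plateau; the same one-line quadrature then yields the average residence time used in the three simultaneous expressions.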
New advection schemes for free surface flows
International Nuclear Information System (INIS)
Pavan, Sara
2016-01-01
The purpose of this thesis is to build higher-order and less diffusive schemes for pollutant transport in shallow water flows or 3D free surface flows. We want robust schemes which respect the main mathematical properties of the advection equation with relatively low numerical diffusion, and apply them to environmental industrial applications. Two techniques are tested in this work: a classical finite volume method and a residual distribution technique combined with a finite element method. For both methods we propose a decoupled approach, since it is the most advantageous in terms of accuracy and CPU time. Concerning the first technique, a vertex-centred finite volume method is used to solve the augmented shallow water system, where the numerical flux is computed with a Harten-Lax-van Leer-contact (HLLC) Riemann solver. Starting from this solution, a decoupled approach is formulated and preferred, since it allows the advection of a tracer to be computed with a larger time step. This idea was inspired by Audusse, E. and Bristeau, M.O. [13]. The Monotonic Upwind Scheme for Conservation Laws (MUSCL), combined with the decoupled approach, is then used for the second-order extension in space. The wetting and drying problem is also analysed and a possible solution is presented. In the second case, the shallow water system is entirely solved using the finite element technique and the residual distribution method is applied to the solution of the tracer equation, focusing on the case of time-dependent problems. However, for consistency reasons the resolution of the continuity equation must be considered in the numerical discretization of the tracer. In order to get second-order schemes for unsteady cases a predictor-corrector scheme is used in this work. A first-order but less diffusive version of the predictor-corrector scheme is also introduced. Moreover, we also present a new locally semi-implicit version of the residual distribution method which, in addition to good properties in
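The MUSCL second-order extension mentioned above can be illustrated on scalar 1D advection. This is a generic sketch under simplifying assumptions (constant positive speed, periodic boundaries, forward Euler with the classic Lax-Wendroff-type interface correction), not the thesis's shallow-water solver; the minmod limiter keeps the update TVD so the pollutant stays free of spurious over- and undershoots.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, c):
    """One step of limited second-order upwind advection (a > 0, periodic),
    c = a*dt/dx.  The interface state is the upwind cell value plus a
    minmod-limited correction, which keeps the scheme TVD for c <= 1."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    uL = u + 0.5 * (1.0 - c) * slope       # limited interface state at i+1/2
    flux = c * uL                          # upwind flux, already scaled by dt/dx
    return u - (flux - np.roll(flux, 1))

c = 0.5
x = np.arange(100)
u = np.where((x > 20) & (x < 40), 1.0, 0.0)  # square pollutant plume
total0 = u.sum()
for _ in range(60):
    u = muscl_step(u, c)
```

Conservation and boundedness (tracer mass preserved, no new extrema) are exactly the "main mathematical properties of the advection equation" that the thesis asks its schemes to respect.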
Getty, Stephanie; Brickerhoff, William; Cornish, Timothy; Ecelberger, Scott; Floyd, Melissa
2012-01-01
RATIONALE: A miniature time-of-flight mass spectrometer has been adapted to demonstrate two-step laser desorption-ionization (LDI) in a compact instrument package for enhanced organics detection. Two-step LDI decouples the desorption and ionization processes, relative to traditional laser desorption-ionization, in order to produce low-fragmentation conditions for complex organic analytes. Tuning the UV ionization laser energy allowed control of the degree of fragmentation, which may enable better identification of constituent species. METHODS: A reflectron time-of-flight mass spectrometer prototype measuring 20 cm in length was adapted to a two-laser configuration, with IR (1064 nm) desorption followed by UV (266 nm) postionization. A relatively low ion extraction voltage of 5 kV was applied at the sample inlet. Instrument capabilities and performance were demonstrated with analysis of a model polycyclic aromatic hydrocarbon (PAH), representing a class of compounds important to the fields of Earth and planetary science. RESULTS: Two-step laser mass spectrometry (L2MS) analysis of a model PAH standard, pyrene, has been demonstrated, including parent mass identification and the onset of tunable fragmentation as a function of ionizing laser energy. A mass resolution m/Δm = 380 at full width at half-maximum was achieved, which is notable for gas-phase ionization of desorbed neutrals in a highly compact mass analyzer. CONCLUSIONS: Achieving L2MS in a highly miniature instrument enables a powerful approach to the detection and characterization of aromatic organics in remote terrestrial and planetary applications. Tunable detection of parent and fragment ions with high mass resolution, diagnostic of molecular structure, is possible on such a compact L2MS instrument. Selectivity of L2MS against low-mass inorganic salt interferences is a key advantage when working with unprocessed, natural samples, and a mechanism for the observed selectivity is presented.
Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim
2017-02-01
A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
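The pass/fail logic can be sketched on a toy model. This is only an analogue of the idea, not the CAM test harness: the time-step sensitivity of a reference forward-Euler integration defines the threshold, a rounding-level perturbation stays below it, while a genuine parameter change (standing in for a compiler or physics change) exceeds it. All names and numbers are illustrative.

```python
import numpy as np

def run(dt, forcing=1.0, x0=1.0, t_end=1.0):
    """Toy 'model': forward-Euler integration of dx/dt = -forcing * x."""
    x = x0
    for _ in range(int(round(t_end / dt))):
        x += dt * (-forcing * x)
    return x

ref = run(0.01)
# known time-step sensitivity of the reference: halve dt, measure the change
sensitivity = abs(run(0.005) - ref)

def passes(candidate):
    # fail when the solution change exceeds the time-step sensitivity
    return abs(candidate - ref) <= sensitivity

rounding_level = run(0.01, x0=1.0 + 1e-12)   # rounding-level change
param_change = run(0.01, forcing=1.05)       # substantive parameter change
```

In the real procedure the scalar comparison becomes an ensemble of short simulations compared field by field, but the decision rule is the same: solution differences are judged against the model's own time-step convergence behavior.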
Matroids and quantum-secret-sharing schemes
International Nuclear Information System (INIS)
Sarvepalli, Pradeep; Raussendorf, Robert
2010-01-01
A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.
International Nuclear Information System (INIS)
Mirzaee, Hossein
2009-01-01
The Levenberg-Marquardt learning algorithm is applied to train a multilayer perceptron with three hidden layers, each of ten neurons, in order to carefully map the structure of a chaotic time series such as the Mackey-Glass series. First the MLP network is trained with 1000 data points, and then it is tested with the next 500 data points. The trained and tested network is then applied to long-term prediction of the 120 data points that follow the test data. The prediction proceeds as follows: the first network input is the four last values of the test data; each predicted value is then shifted into the regression vector that forms the network input, so that after the first four prediction steps the input regression vector consists entirely of predicted values, and each new prediction is shifted into the input vector for the subsequent prediction.
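The recursive prediction loop described above can be sketched with a much simpler predictor. In this hedged illustration an ordinary least-squares 4-lag linear model stands in for the trained MLP, and a two-tone sinusoid stands in for the Mackey-Glass series (so the closed-loop forecast can be checked against the true continuation); the mechanics of shifting each prediction into the regression vector are identical.

```python
import numpy as np

n_obs, lags, horizon = 1000, 4, 120
k = np.arange(n_obs)
series = np.sin(0.1 * k) + 0.5 * np.sin(0.23 * k)   # stand-in for the time series

# training data: each row holds 4 consecutive values, target is the next value
X = np.column_stack([series[j:n_obs - lags + j] for j in range(lags)])
y = series[lags:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)        # stand-in for MLP training

window = list(series[-lags:])        # the four last observed values
preds = []
for _ in range(horizon):
    nxt = float(np.dot(coef, window))
    preds.append(nxt)
    window = window[1:] + [nxt]      # shift the prediction into the regression vector
```

After the first four steps the window contains only predicted values, exactly as described for the MLP; with a chaotic series like Mackey-Glass the closed-loop errors would grow with the horizon, which is why careful training matters there.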
Directory of Open Access Journals (Sweden)
Shanming Wang
2015-01-01
Full Text Available Electric machines now integrate with power electronics to form inseparable systems in many high-performance applications. In such systems two kinds of nonlinearity coexist, the magnetic nonlinearity of the iron core and the circuit nonlinearity caused by the power electronics devices, which makes simulation time-consuming. In this paper, a multiloop model combined with an FE model of AC-DC synchronous generators, as one example of an electric machine with power electronics, is set up. The FE method handles the magnetic nonlinearity, and a variable-step, variable-topology simulation method handles the circuit nonlinearity. In order to improve the simulation speed, the incomplete Cholesky conjugate gradient (ICCG) method is used to solve the state equation. However, when a power electronics device switches off, a convergence difficulty occurs, so a straightforward approach to achieve convergence of the simulation is proposed. Finally, the simulation results are compared with the experiments.
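The ICCG solver named above is a standard building block that can be shown in isolation. The sketch below is not the paper's FE state equation; it runs preconditioned conjugate gradients on a small symmetric positive definite tridiagonal system, where the IC(0) incomplete factorization happens to coincide with the exact Cholesky factor, so convergence is immediate.

```python
import numpy as np

def ic0_tridiag(diag, off):
    """Incomplete Cholesky IC(0) of a tridiagonal SPD matrix
    (bidiagonal L; for a tridiagonal matrix IC(0) is exact)."""
    n = len(diag)
    L_d, L_o = np.empty(n), np.empty(n - 1)
    L_d[0] = np.sqrt(diag[0])
    for i in range(1, n):
        L_o[i - 1] = off[i - 1] / L_d[i - 1]
        L_d[i] = np.sqrt(diag[i] - L_o[i - 1] ** 2)
    return L_d, L_o

def apply_precond(L_d, L_o, r):
    """Solve L y = r (forward), then L^T z = y (backward)."""
    n = len(r)
    y, z = np.empty(n), np.empty(n)
    y[0] = r[0] / L_d[0]
    for i in range(1, n):
        y[i] = (r[i] - L_o[i - 1] * y[i - 1]) / L_d[i]
    z[-1] = y[-1] / L_d[-1]
    for i in range(n - 2, -1, -1):
        z[i] = (y[i] - L_o[i] * z[i + 1]) / L_d[i]
    return z

def iccg(diag, off, b, tol=1e-10, maxit=500):
    """Conjugate gradients preconditioned by IC(0)."""
    A = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    L_d, L_o = ic0_tridiag(diag, off)
    x = np.zeros(len(b))
    r = b - A @ x
    z = apply_precond(L_d, L_o, r)
    p, rz = z.copy(), r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_precond(L_d, L_o, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 50
diag, off = 2.0 * np.ones(n), -1.0 * np.ones(n - 1)
b = np.ones(n)
x = iccg(diag, off, b)
```

In the paper's setting the matrix comes from the coupled field-circuit state equation and IC(0) is only approximate, but the iteration structure (factor once, precondition every CG step) is the same.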
Charge-conserving FEM-PIC schemes on general grids
International Nuclear Information System (INIS)
Campos Pinto, M.; Jund, S.; Salmon, S.; Sonnendruecker, E.
2014-01-01
Particle-In-Cell (PIC) solvers are a major tool for the understanding of the complex behavior of a plasma or a particle beam in many situations. An important issue for electromagnetic PIC solvers, where the fields are computed using Maxwell's equations, is the problem of discrete charge conservation. In this article, we aim at proposing a general mathematical formulation for charge-conserving finite-element Maxwell solvers coupled with particle schemes. In particular, we identify the finite-element continuity equations that must be satisfied by the discrete current sources for several classes of time-domain Vlasov-Maxwell simulations to preserve the Gauss law at each time step, and propose a generic algorithm for computing such consistent sources. Since our results cover a wide range of schemes (namely curl-conforming finite element methods of arbitrary degree, general meshes in two or three dimensions, several classes of time discretization schemes, particles with arbitrary shape factors and piecewise polynomial trajectories of arbitrary degree), we believe that they provide a useful roadmap in the design of high-order charge-conserving FEM-PIC numerical schemes. (authors)
Zheng, Yang; Zhou, Jianzhong; Xu, Yanhe; Zhang, Yuncheng; Qian, Zhongdong
2017-05-01
This paper proposes a distributed model predictive control based load frequency control (MPC-LFC) scheme to improve control performance in the frequency regulation of power systems. In order to reduce the computational burden in the rolling optimization with a sufficiently large prediction horizon, the orthonormal Laguerre functions are utilized to approximate the predicted control trajectory. The closed-loop stability of the proposed MPC scheme is achieved by adding a terminal equality constraint to the online quadratic optimization and taking the cost function as the Lyapunov function. Furthermore, the treatments of some typical constraints in load frequency control have been studied based on the specific Laguerre-based formulations. Simulations have been conducted in two different interconnected power systems to validate the effectiveness of the proposed distributed MPC-LFC as well as its superiority over the comparative methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
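The Laguerre approximation works because a long predicted control trajectory can be written as a few coefficients on an orthonormal basis. The sketch below generates discrete orthonormal Laguerre functions via the standard state-space recurrence L(k+1) = A_l L(k); the pole a = 0.5 and the basis size are illustrative choices, not taken from the paper, and the orthonormality of the basis is verified numerically.

```python
import numpy as np

def laguerre_basis(a, N, samples):
    """Discrete orthonormal Laguerre functions, pole a (|a| < 1),
    generated by the state-space recurrence L(k+1) = A_l L(k)."""
    beta = 1.0 - a ** 2
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = a
        for j in range(i):
            A[i, j] = ((-a) ** (i - j - 1)) * beta   # strictly lower part
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(N)])  # L(0)
    out = np.empty((samples, N))
    for k in range(samples):
        out[k] = L
        L = A @ L
    return out

Phi = laguerre_basis(a=0.5, N=3, samples=400)
gram = Phi.T @ Phi     # should approximate the identity (orthonormal basis)
```

In the MPC-LFC setting, the predicted control increments are written as u(k) ≈ Phi[k] @ eta, so the rolling optimization solves for the small coefficient vector eta instead of the full horizon-length trajectory, which is what cuts the computational burden.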
International Nuclear Information System (INIS)
Kimura, Fumiko; Umezawa, Tatsuo; Asano, Tomonari; Chihara, Ruri; Nishi, Naoko; Nishimura, Shigeyoshi; Sakai, Fumikazu
2010-01-01
We compared stair-step artifacts and radiation dose between prospective electrocardiography (ECG)-gated coronary computed tomography angiography (prospective CCTA) and retrospective CCTA using 64-detector CT and determined the optimal padding time (PT) for prospective CCTA. We retrospectively evaluated 183 patients [mean heart rate (HR) <65 beats/min, maximum HR instability <5 beats/min] who had undergone CCTA. We scored stair-step artifacts from 1 (severe) to 5 (none) and evaluated the effective dose in 53 patients with retrospective CCTA and 130 with prospective CCTA (PT 200 ms, n=32; PT 50 ms, n=98). Mean artifact scores were 4.3 in both retrospective and prospective CCTAs. However, statistically more arteries scored <3 (nonassessable) on prospective CCTA (P<0.001). Mean scores for prospective CCTA with 200- and 50-ms PT were 4.1 and 4.3, respectively (no significant difference). The radiation dose of prospective CCTA was reduced by 59.1% to 80.7%. Prospective CCTA reduces the radiation dose and allows diagnostic imaging in most cases but shows more nonevaluable artifacts than retrospective CCTA. Use of 50-ms instead of 200-ms PT appears to maintain image quality in patients with a mean HR <65 beats/min and HR instability of <5 beats/min. (author)
Beltrame, R.; Bonito, A. B.; Celandroni, N.; Ferro, E.
1985-11-01
A FIFO Order based Demand Assignment (FODA) access scheme was designed to handle packetized data and voice traffic in a multiple-access satellite broadcast channel in the Mbit/s band. The channel is shared by as many as 64 simultaneously active stations in a range of 255 addressable stations. A sophisticated traffic environment is assumed, including different types of service requirements and an arbitrary load distribution among the stations. The results of 2 Mbit/s simulation tests for an existing hardware environment are presented.
International Nuclear Information System (INIS)
Carrander, Claes; Mousavi, Seyed Ali; Engdahl, Göran
2017-01-01
In many transformer applications, it is necessary to have a core magnetization model that takes into account both magnetic and electrical effects. This becomes particularly important in three-phase transformers, where the zero-sequence impedance is generally high, and therefore affects the magnetization very strongly. In this paper, we demonstrate a time-step topological simulation method that uses a lumped-element approach to accurately model both the electrical and magnetic circuits. The simulation method is independent of the hysteresis model used. In this paper, a hysteresis model based on the first-order reversal curve has been used. - Highlights: • A lumped-element method for modelling transformers is demonstrated. • The method can include hysteresis and arbitrarily complex geometries. • Simulation results for one power transformer are compared to measurements. • An analytical curve-fitting expression for static hysteresis loops is shown.
A Classification Scheme for Production System Processes
DEFF Research Database (Denmark)
Sørensen, Daniel Grud Hellerup; Brunø, Thomas Ditlev; Nielsen, Kjeld
2018-01-01
Manufacturing companies often have difficulties developing production platforms, partly due to the complexity of many production systems and difficulty determining which processes constitute a platform. Understanding production processes is an important step to identifying candidate processes for a production platform based on existing production systems. Reviewing a number of existing classifications and taxonomies, a consolidated classification scheme for processes in production of discrete products has been outlined. The classification scheme helps ensure consistency during mapping of existing...
A Practical Voter-Verifiable Election Scheme.
Chaum, D; Ryan, PYA; Schneider, SA
2005-01-01
We present an election scheme designed to allow voters to verify that their vote is accurately included in the count. The scheme provides a high degree of transparency whilst ensuring the secrecy of votes. Assurance is derived from close auditing of all the steps of the vote recording and counting process with minimal dependence on the system components. Thus, assurance arises from verification of the election rather than having to place trust in the correct behaviour of components of the vot...
Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.
2014-04-01
When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.
Fisher, Abi; Ucci, Marcella; Smith, Lee; Sawyer, Alexia; Spinney, Richard; Konstantatou, Marina; Marmot, Alexi
2018-06-01
Office-based workers spend a large proportion of the day sitting and tend to have low overall activity levels. Despite some evidence that features of the external physical environment are associated with physical activity, little is known about the influence of the spatial layout of the internal environment on movement, and the majority of data use self-report. This study investigated associations between objectively-measured sitting time and activity levels and the spatial layout of office floors in a sample of UK office-based workers. Participants wore activPAL accelerometers for at least three consecutive workdays. Primary outcomes were steps and proportion of sitting time per working hour. Primary exposures were office spatial layout, which was objectively-measured by deriving key spatial variables: 'distance from each workstation to key office destinations', 'distance from participant's workstation to all other workstations', 'visibility of co-workers', and workstation 'closeness'. 131 participants from 10 organisations were included. Fifty-four per cent were female, 81% were white, and the majority had a managerial or professional role (72%) in their organisation. The average proportion of the working hour spent sitting was 0.7 (SD 0.15); participants took on average 444 (SD 210) steps per working hour. Models adjusted for confounders revealed significant negative associations between step count and distance from each workstation to all other office destinations (e.g., B = -4.66, 95% CI: -8.12, -1.12, p < 0.05) and distance to key office destinations (e.g., B = -6.45, 95% CI: -11.88, -0.41, p < 0.05); that is, the further participants were from office destinations the less they walked, suggesting that changing the relative distance between workstations and other destinations on the same floor may not be the most fruitful target for promoting walking and reducing sitting in the workplace. However, reported effect sizes were very small and based on cross-sectional analyses. The approaches developed in this study could be applied to other
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re ≲ 189): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration
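As a minimal illustration of the trajectory equation dx/dt = v(x, t) and the fourth-order Runge-Kutta reference scheme mentioned above, the sketch below advects a parcel through an idealized solid-body rotation (a stand-in for the ECMWF wind fields, not the MPTRAC implementation):

```python
import numpy as np

def rk4_step(x, t, dt, wind):
    """One classical fourth-order Runge-Kutta step of dx/dt = wind(x, t)."""
    k1 = wind(x, t)
    k2 = wind(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = wind(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = wind(x + dt * k3, t + dt)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Idealized wind: solid-body rotation, one revolution per day.
omega = 2.0 * np.pi / 86400.0
wind = lambda x, t: omega * np.array([-x[1], x[0]])

x, t, dt = np.array([1.0, 0.0]), 0.0, 3.0 * 3600.0   # 3 h time steps
for _ in range(8):                                    # integrate one day
    x = rk4_step(x, t, dt, wind)
    t += dt
# The parcel should return (nearly) to its starting point; the residual
# distance is the kind of transport deviation quantified in the text.
```

Halving dt shrinks the residual by roughly a factor of 2⁴, which is how the truncation-error groups of different-order schemes separate in practice.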
Directory of Open Access Journals (Sweden)
S. Ghosh
1998-05-01
Full Text Available Many Large Eddy Simulation (LES) models use the classic Kessler parameterisation either as it is or in a modified form to model the process of cloud water autoconversion into precipitation. The Kessler scheme, being linear, is particularly useful and is computationally straightforward to implement. However, a major limitation with this scheme lies in its inability to predict different autoconversion rates for maritime and continental clouds. In contrast, the Berry formulation overcomes this difficulty, although it is cubic. Due to their different forms, it is difficult to match the two solutions to each other. In this paper we single out the processes of cloud conversion and accretion operating in a deep model cloud and neglect the advection terms for simplicity. This facilitates exact analytical integration and we are able to derive new expressions for the time of onset of precipitation using both the Kessler and Berry formulations. We then discuss the conditions when the two schemes are equivalent. Finally, we also critically examine the process of droplet evaporation within the framework of the classic Kessler scheme. We improve the existing parameterisation with an accurate estimation of the diffusional mass transport of water vapour. We then demonstrate the overall robustness of our calculations by comparing our results with the experimental observations of Beard and Pruppacher, and find excellent agreement.
Key words. Atmospheric composition and structure · Cloud physics and chemistry · Pollution · Meteorology and atmospheric dynamics · Precipitation
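The time-of-onset idea can be mimicked numerically with the linear Kessler autoconversion alone. The constants below are illustrative round numbers, not the paper's values, and advection is neglected as in the text:

```python
# Illustrative round-number constants (not the paper's values):
k_auto = 1e-3     # Kessler autoconversion rate coefficient [1/s]
q_crit = 0.5e-3   # autoconversion threshold for cloud water [kg/kg]
C_cond = 1e-6     # steady condensation supply of cloud water [kg/kg/s]

def kessler_onset(dt=1.0, t_max=3600.0, q_r_min=1e-6):
    """Forward-Euler integration of cloud water q_c and rain water q_r with
    the linear Kessler autoconversion A = k_auto * max(q_c - q_crit, 0),
    neglecting advection. Returns the time at which rain water first
    exceeds q_r_min, taken here as the onset of precipitation."""
    q_c = q_r = t = 0.0
    while t < t_max:
        A = k_auto * max(q_c - q_crit, 0.0)
        q_c += dt * (C_cond - A)
        q_r += dt * A
        t += dt
        if q_r >= q_r_min:
            return t
    return None

t_onset = kessler_onset()   # cloud water crosses the threshold at ~500 s;
                            # rain accumulates shortly afterwards
```

Substituting a Berry-type cubic rate for A in the same loop would give the numerical counterpart of the scheme comparison discussed above.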
Sman, van der R.G.M.
2006-01-01
In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the
Numerical schemes for explosion hazards
International Nuclear Information System (INIS)
Therme, Nicolas
2015-01-01
In nuclear facilities, internal or external explosions can cause confinement breaches and radioactive materials release in the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance at the limit. High order methods of MUSCL type are used in the discrete convective operators, based solely on material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow and we can furthermore derive a discrete entropy inequality. Under stability assumptions of the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality at the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called
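The MUSCL-type convective reconstruction invoked above can be sketched in its simplest 1-D scalar form. This is the generic minmod-limited reconstruction, not the thesis' staggered discretization; names and data are illustrative.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller slope when signs agree, zero
    otherwise, which keeps the reconstruction non-oscillatory."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_faces(q):
    """Second-order MUSCL reconstruction of face states from the cell
    averages q of a 1-D scalar field (interior cells only)."""
    dq = minmod(q[1:-1] - q[:-2], q[2:] - q[1:-1])   # limited slope per cell
    qL = q[1:-1] + 0.5 * dq     # state on the right face of each cell
    qR = q[1:-1] - 0.5 * dq     # state on the left face of each cell
    return qL, qR

# A step profile stays within its original bounds (no over/undershoot),
# which is the mechanism behind the positivity of density and internal
# energy mentioned in the text:
q = np.array([1.0, 1.0, 1.0, 0.1, 0.1, 0.1])
qL, qR = muscl_faces(q)
```

On smooth data the limited slopes reduce to centered differences, so second-order accuracy is retained away from discontinuities.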
Herrendoerfer, R.; van Dinther, Y.; Gerya, T.
2015-12-01
To explore the relationships between subduction dynamics and the megathrust earthquake potential, we have recently developed a numerical model that bridges the gap between processes on geodynamic and earthquake cycle time scales. In a self-consistent, continuum-based framework including a visco-elasto-plastic constitutive relationship, cycles of megathrust earthquake-like ruptures were simulated through a purely slip rate-dependent friction, albeit with very low slip rates (van Dinther et al., JGR, 2013). In addition to much faster earthquakes, a range of aseismic slip processes operate at different time scales in nature. These aseismic processes likely accommodate a considerable amount of the plate convergence and are thus relevant in order to estimate the long-term seismic coupling and related hazard in subduction zones. To simulate and resolve this wide spectrum of slip processes, we innovatively implemented rate- and state-dependent friction (RSF) and adaptive time-stepping into our continuum framework. The RSF formulation, in contrast to our previous friction formulation, takes the dependency of frictional strength on a state variable into account. It thereby allows for continuous plastic yielding inside rate-weakening regions, which leads to aseismic slip. In contrast to the conventional RSF formulation, we relate slip velocities to strain rates and use an invariant formulation. Thus we do not require the a priori definition of infinitely thin, planar faults in a homogeneous elastic medium. With this new implementation of RSF, we succeed to produce consistent cycles of frictional instabilities. By changing the frictional parameters a and b, and the characteristic slip distance, we observe a transition from stable sliding to stick-slip behaviour. This transition is in general agreement with predictions from theoretical estimates of the nucleation size, thereby validating our implementation to first order. By incorporating adaptive time-stepping based on a
International Nuclear Information System (INIS)
Balsara, D.S.
1999-01-01
In this paper we analyze some of the numerical issues that are involved in making time-implicit higher-order Godunov schemes for the equations of radiation hydrodynamics (and the Euler or Navier-Stokes equations). This is done primarily with the intent of incorporating such methods in the author's RIEMANN code. After examining the issues it is shown that the construction of a time-implicit higher-order Godunov scheme for radiation hydrodynamics would be benefited by our ability to evaluate exact Jacobians of the numerical flux that is based on Roe-type flux difference splitting. In this paper we show that this can be done analytically in a form that is suitable for efficient computational implementation. It is also shown that when multiple fluid species are used or when multiple radiation frequencies are used the computational cost in the evaluation of the exact Jacobians scales linearly with the number of fluid species or the number of radiation frequencies. Connections are made to other types of numerical fluxes, especially those based on flux difference splittings. It is shown that the evaluation of the exact Jacobian for such numerical fluxes is also benefited by the present strategy and the results given here. It is, however, pointed out that time-implicit schemes that are based on the evaluation of the exact Jacobians for flux difference splittings using the methods developed here are both computationally more efficient and numerically more stable than corresponding time-implicit schemes that are based on the evaluation of the exact or approximate Jacobians for flux vector splittings. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
Selecting registration schemes in case of interstitial lung disease follow-up in CT
International Nuclear Information System (INIS)
Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena
2015-01-01
Purpose: The primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near-optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near-optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in the case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the
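The first-stage singularity screening can be sketched as a determinant-of-Jacobian check on the deformation field. The 2-D finite-difference layout, array shapes, and example fields below are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def has_folding(disp):
    """Screen a 2-D displacement field disp of shape (H, W, 2) for
    deformation singularities: the mapping x -> x + disp folds over
    itself wherever det(Jacobian) <= 0."""
    du_dy, du_dx = np.gradient(disp[..., 0])   # derivatives of component 0
    dv_dy, dv_dx = np.gradient(disp[..., 1])   # derivatives of component 1
    det = (1.0 + du_dy) * (1.0 + dv_dx) - du_dx * dv_dy
    return bool((det <= 0.0).any())

yy, xx = np.mgrid[0:32, 0:32].astype(float)
# A smooth small displacement keeps det(J) near 1 everywhere:
smooth = np.stack([0.3 * np.sin(xx / 8), 0.3 * np.cos(yy / 8)], axis=-1)
# A compressive displacement of slope -2 drives det(J) negative:
folded = np.stack([-2.0 * yy, np.zeros_like(xx)], axis=-1)
```

Schemes whose output fails this check are discarded before the landmark-distance evaluation, exactly as in the two-stage selection described above.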
Energy Technology Data Exchange (ETDEWEB)
Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp [Earth and Planetary Sciences, Tokyo Institute of Technology, Tokyo 152-8551 (Japan)
2016-03-15
Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although progress in computational technology has been remarkable, the high numerical cost of such calculations is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and a radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions on the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.
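For reference, the NR scheme applied to a single implicit update can be sketched on a scalar toy version of gas-radiation energy exchange. The constants and the T⁴ relaxation form below are illustrative stand-ins, not the paper's full flux-limited diffusion system:

```python
def newton_solve(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration for a scalar equation f(x) = 0, as used to
    solve the nonlinear implicit update at each time step."""
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / dfdx(x)
        x += dx
        if abs(dx) < tol * max(1.0, abs(x)):
            return x
    raise RuntimeError("Newton iteration did not converge")

# Implicit Euler for a toy thermal relaxation dT/dt = c * (E_r - T**4):
dt, c, E_r, T_old = 0.1, 1.0, 2.0, 1.0
f = lambda T: T - T_old - dt * c * (E_r - T**4)
dfdT = lambda T: 1.0 + 4.0 * dt * c * T**3
T_new = newton_solve(f, dfdT, T_old)
```

Operator splitting would instead advance the stiff relaxation term separately from the transport terms; the trade-off measured in the paper is how the cost and accuracy of these choices scale with the time step size.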
Chertock, Alina; Cui, Shumo; Kurganov, Alexander; Özcan, Şeyma Nur; Tadmor, Eitan
2018-04-01
We develop a second-order well-balanced central-upwind scheme for the compressible Euler equations with a gravitational source term. Here, we advocate a new paradigm based on a purely conservative reformulation of the equations using global fluxes. The proposed scheme is capable of exactly preserving steady-state solutions expressed in terms of a nonlocal equilibrium variable. A crucial step in the construction of the second-order scheme is a well-balanced piecewise linear reconstruction of equilibrium variables combined with a well-balanced central-upwind evolution in time, which is adapted to reduce the amount of numerical viscosity when the flow is in the (near) steady-state regime. We show the performance of our newly developed central-upwind scheme and demonstrate the importance of a perfect balance between the fluxes and gravitational forces in a series of one- and two-dimensional examples.
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing
2017-05-01
We carry out full waveform inversion (FWI) in time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet to largely avoid redundant frequency information to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and step-length formulas on the multiscale FWI through several numerical tests. The investigations of up to eight versions of the nonlinear CG method with and without Gaussian white noise make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate results, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and the complexity of the model. When the initial velocity model deviates far from the real model or the
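The update-parameter (β) formulas being compared are standard and compact. The sketch below states them and runs a bare-bones nonlinear CG on a small quadratic; the secant step length is a stand-in for the Direct/Search/Interp rules, and nothing here is the FWI code itself:

```python
import numpy as np

def cg_beta(kind, g, g_prev, d_prev):
    """Update parameter beta for nonlinear conjugate gradients."""
    y = g - g_prev
    if kind == "HS":    # Hestenes-Stiefel (1952)
        return (g @ y) / (d_prev @ y)
    if kind == "PRP":   # Polak-Ribiere-Polyak (1969)
        return (g @ y) / (g_prev @ g_prev)
    if kind == "DY":    # Dai-Yuan (1999)
        return (g @ g) / (d_prev @ y)
    if kind == "CD":    # Fletcher's conjugate descent (1987)
        return -(g @ g) / (d_prev @ g_prev)
    raise ValueError(kind)

def cg_minimize(grad, x, kind="PRP", iters=50):
    """Bare-bones nonlinear CG; the step length comes from a secant
    estimate of the curvature along d (exact for quadratics)."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        alpha = -(g @ d) / (d @ (grad(x + d) - g))
        x = x + alpha * d
        g_new = grad(x)
        d = -g_new + cg_beta(kind, g_new, g, d) * d
        g = g_new
    return x

# Quadratic test problem: minimize 0.5 x'Ax - b'x, gradient A x - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
x_min = cg_minimize(lambda x: A @ x - b, np.zeros(2))   # exact answer: [0, 2]
```

On a quadratic with exact line search all four β variants coincide and terminate in n steps; their behavior only diverges on the nonlinear, noisy objectives studied in the text.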
Korir, Peter C.; Dejene, Francis B.
2018-04-01
In this work a two-step growth process was used to prepare Cu(In,Ga)Se2 thin films for solar cell applications. The first step involves deposition of Cu-In-Ga precursor films, followed by selenization under vacuum using elemental selenium vapor to form Cu(In,Ga)Se2 film. The growth process was done at a fixed temperature of 515 °C for 45, 60 and 90 min to control film thickness and gallium incorporation into the absorber layer film. The X-ray diffraction (XRD) pattern confirms single-phase Cu(In,Ga)Se2 film for all three samples and no secondary phases were observed. A shift in the diffraction peaks to higher 2θ values is observed for the thin films compared to that of pure CuInSe2. The surface morphology of the resulting film grown for 60 min was characterized by the presence of uniform large grain size particles, which are typical for device quality material. Photoluminescence spectra show the shifting of emission peaks to higher energies for longer durations of selenization, attributed to the incorporation of more gallium into the CuInSe2 crystal structure. Electron probe microanalysis (EPMA) revealed a uniform distribution of the elements through the surface of the film. The elemental ratios Cu/(In + Ga) and Se/(Cu + In + Ga) strongly depend on the selenization time. The Cu/(In + Ga) ratio for the 60 min film is 0.88, which is in the range of values (0.75-0.98) for best solar cell device performance.
Cakar, N; Tuŏrul, M; Demirarslan, A; Nahum, A; Adams, A; Akýncý, O; Esen, F; Telci, L
2001-04-01
To determine the time required for the partial pressure of arterial oxygen (PaO2) to reach equilibrium after a 0.20 increment or decrement in fractional inspired oxygen concentration (FIO2) during mechanical ventilation. A multi-disciplinary ICU in a university hospital. Twenty-five adult, non-COPD patients with stable blood gas values (PaO2/FIO2 > or = 180 on the day of the study) on pressure-controlled ventilation (PCV). Following a baseline PaO2 (PaO2b) measurement at FIO2 = 0.35, the FIO2 was increased to 0.55 for 30 min and then decreased to 0.35 without any other change in ventilatory parameters. Sequential blood gas measurements were performed at 3, 5, 7, 9, 11, 15, 20, 25 and 30 min in both periods. The PaO2 values measured at the 30th min after a step change in FIO2 (FIO2 = 0.55, PaO2[55] and FIO2 = 0.35, PaO2[35]) were accepted as representative of the equilibrium values for PaO2. Each patient's rise and fall in PaO2 over time, PaO2(t), were fitted to the following respective exponential equations: PaO2b + (PaO2[55] - PaO2b)(1 - e^(-kt)) and PaO2[55] + (PaO2[35] - PaO2[55])e^(-kt), where "t" refers to time, and PaO2[55] and PaO2[35] are the final PaO2 values obtained at a new FIO2 of 0.55 and 0.35, after a 0.20 increment and decrement in FIO2, respectively. The time constant "k" was determined by non-linear curve fitting, and 90% oxygenation times were defined as the time required to reach 90% of the final equilibrated PaO2, calculated using the non-linear fitted curves. Time constant values for the rise and fall periods were 1.01 +/- 0.71 min^-1 and 0.69 +/- 0.42 min^-1, respectively, and 90% oxygenation times for the rise and fall periods were 4.2 +/- 4.1 min and 5.5 +/- 4.8 min, respectively. There was no significant difference between the rise and fall periods for the two parameters (p > 0.05). We conclude that in stable patients ventilated with PCV, after a step change in FIO2 of 0.20, 5-10 min will be adequate for obtaining a blood gas sample to measure a Pa
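The exponential equilibration model fitted above is easy to sketch; the patient values below are illustrative, not data from the study, except the mean rise-time constant of 1.01 min^-1:

```python
import math

def pao2_rise(t, pao2_b, pao2_55, k):
    """Exponential approach to the new equilibrium after an FIO2 increase."""
    return pao2_b + (pao2_55 - pao2_b) * (1.0 - math.exp(-k * t))

def t90(k):
    """Time to reach 90% of the equilibrium change: solve 1 - exp(-k t) = 0.9,
    i.e. t90 = ln(10) / k."""
    return math.log(10.0) / k

# Hypothetical patient: baseline 80 mmHg rising towards 150 mmHg,
# with the mean rise-period time constant reported in the abstract.
k = 1.01                    # 1/min
t_ninety = t90(k)           # about 2.3 min for this single time constant
```

Note that the 4-5 min mean 90% times reported above are averages over patients with widely varying individual k values, so they need not equal ln(10) divided by the mean k.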
Agapitov, O. V.; Mozer, F.; Artemyev, A.; Krasnoselskikh, V.; Lejosne, S.
2014-12-01
A huge number of different non-linear structures (double layers, electron holes, non-linear whistlers, etc.) have been observed by the electric field experiment on the Van Allen Probes in conjunction with relativistic electron acceleration in the Earth's outer radiation belt. These structures, found as short duration (~0.1 msec) quasi-periodic bursts of electric field in the high time resolution electric field waveform, have been called Time Domain Structures (TDS). They can quite effectively interact with radiation belt electrons. Due to the trapping of electrons into these non-linear structures, they are accelerated up to ~10 keV and their pitch angles are changed, especially for low energies (~1 keV). Large amplitude electric field perturbations cause non-linear resonant trapping of electrons into the effective potential of the TDS and these electrons are then accelerated in the non-homogeneous magnetic field. These locally accelerated electrons create the "seed population" of several keV electrons that can be accelerated by coherent, large amplitude, upper band whistler waves to MeV energies in this two step acceleration process. All the elements of this chain acceleration mechanism have been observed by the Van Allen Probes.
Two-Step Load Balancing Scheme for Fairness Improvement in ...
African Journals Online (AJOL)
ABSTRACT: The problem of load imbalance in HetNets among wireless access technologies is addressed in this article. ... balancing could either mean balancing the transmit power or the radio ..... Weight for Multimedia Transmission.
International Nuclear Information System (INIS)
Gomez-Torres, Armando Miguel; Sanchez-Espinoza, Victor Hugo; Ivanov, Kostadin; Macian-Juan, Rafael
2012-01-01
Highlights: ► A fixed point iteration (FPI) is implemented in DYNSUB. ► Comparisons between the explicit scheme and the FPI are done. ► The FPI scheme allows moving from one time step to the other with a converged solution. ► FPI allows the use of larger time steps without compromising the accuracy of results. ► FPI results are promising and represent an option in order to optimize calculations. -- Abstract: DYNSUB is a novel two-way pin-based coupling of the simplified transport (SP3) version of DYN3D with the subchannel code SUBCHANFLOW. The new coupled code system allows for a more realistic description of the core behaviour under steady-state and transient conditions, and has been widely described in Part I of this paper. In addition to the explicit coupling developed and described in Part I, a nested loop iteration or fixed point iteration (FPI) is implemented in DYNSUB. An FPI is not an implicit scheme but approximates one by adding an iteration loop to the existing explicit scheme. The advantage of the method is that it allows the use of larger time steps; however, the nested loop iteration may take much longer to reach a converged solution, making it less efficient than the explicit scheme with small time steps. A comparison of the two temporal schemes is performed. The FPI results are very promising and represent a good option for optimizing computational time without losing accuracy. However, it is also shown that an FPI scheme can produce inaccurate results if the time step is not chosen in accordance with the analyzed transient.
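The nested-loop coupling described above can be sketched abstractly: within one time step, the two single-physics solvers exchange fields until the iterates stop changing. The solver functions and feedback coefficients below are invented stand-ins, not DYNSUB/SUBCHANFLOW models:

```python
def fixed_point_coupling(neutronics, thermal_hydraulics, p0, tol=1e-8, max_iter=100):
    """Fixed point iteration within one time step: alternate the two
    single-physics solves until the exchanged power field converges,
    approximating a fully implicit coupling."""
    p = p0
    for it in range(max_iter):
        t = thermal_hydraulics(p)       # TH solve given the current power
        p_new = neutronics(t)           # neutronics solve given TH feedback
        if abs(p_new - p) < tol:
            return p_new, it + 1
        p = p_new
    raise RuntimeError("FPI did not converge within max_iter")

# Toy linear feedback: fuel temperature rises with power, and a Doppler-like
# feedback lowers power with temperature (all coefficients are made up).
neutronics = lambda t: 100.0 - 0.05 * (t - 600.0)
thermal_hydraulics = lambda p: 500.0 + 2.0 * p

p_converged, n_iters = fixed_point_coupling(neutronics, thermal_hydraulics, p0=100.0)
```

The composed map here is a contraction (slope 0.1), so the iteration converges quickly; with stronger feedback the contraction weakens and more inner iterations are needed, which is the efficiency trade-off the abstract describes.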
International Nuclear Information System (INIS)
Fowler, Jack F.; Limbergen, Erik F.M. van
1997-01-01
Purpose: To explore the possible increase of radiation effect in tissues irradiated by pulsed brachytherapy (PDR) for local tissue dose rates between those 'averaged over the whole pulse' and the instantaneous high dose rates close to the dwell positions. Increased effect is more likely for tissues with short half-times of repair of the order of a few minutes, similar to pulse durations. Methods and Materials: Calculations were done assuming the linear quadratic formula for radiation damage, in which only the dose-squared term is subject to exponential repair. The situation with two components of T1/2 is addressed. A constant overall time of 140 h and a constant total dose of 70 Gy were assumed throughout, the continuous low dose rate of 0.5 Gy/h (CLDR) providing the unitary standard effects for each PDR condition. Effects of dose rates ranging from 4 Gy/h to 120 Gy/h (HDR at 2 Gy/min) were studied, covering the gap in an earlier publication. Four schedules were examined: doses per pulse of 0.5, 1, 1.5, and 2 Gy given at repetition frequencies of 1, 2, 3, and 4 h, respectively, each with a range of assumed half-times of repair of 4 min to 1.5 h. Results are presented for late-responding tissues, the differences from CLDR being two or three times greater than for early-responding tissues and most tumors. Results: Curves are presented relating the ratio of increased biological effect (proportional to log cell kill) calculated for PDR relative to CLDR. Ratios as high as 1.5 can be found for large doses per pulse (2 Gy) if the half-time of repair in tissues is as short as a few minutes. The major influences on effect are dose per pulse, half-time of repair in tissue, and, when T1/2 is short, the instantaneous dose rate. Maximum ratios of PDR/CLDR occur when the dose rate is such that pulse duration is approximately equal to T1/2. As dose rate in the pulse is increased, a plateau of effect is reached, for most T1/2 values, above 10 to 20 Gy/h, which is
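The role of repair during pulse delivery can be illustrated with the standard mono-exponential (Lea-Catcheside) dose-protraction factor for a single continuous pulse; this is a simplified one-component sketch, not the authors' full two-component PDR/CLDR calculation, and the numbers are illustrative:

```python
import math

def repair_factor_g(duration, t_half):
    """Lea-Catcheside protraction factor for one continuous pulse with
    mono-exponential repair: g = 2*(x - 1 + exp(-x)) / x**2, x = mu*T.
    g multiplies the dose-squared (beta) term of the linear quadratic model."""
    mu = math.log(2.0) / t_half
    x = mu * duration
    if x < 1e-9:
        return 1.0                 # acute limit: no repair during delivery
    return 2.0 * (x - 1.0 + math.exp(-x)) / x ** 2

# A 2 Gy pulse delivered at 12 Gy/h lasts 10 min; with T1/2 = 5 min a large
# fraction of the sublethal damage is repaired during delivery itself.
g_slow_pulse = repair_factor_g(10.0, 5.0)    # noticeably below 1
g_fast_pulse = repair_factor_g(1.0, 5.0)     # near-acute, close to 1
```

As the abstract notes, the effect per pulse is most sensitive to the relation between pulse duration and T1/2: when the pulse is short compared to T1/2 the factor approaches 1 (full dose-squared effect), and lengthening the pulse drives it down.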
Yamamoto, Yasue; Moriwaki, Shinichi; Kawasumi, Atsushi; Miyano, Shinji; Shinohara, Hirofumi
2016-04-01
We propose novel circuit techniques for 1 clock (1CLK) 1 read/1 write (1R/1W) 2-port static random-access memories (SRAMs) to improve read access time (tAC) and write margins at low voltages. Two-stage read boost (TSR-BST) and write word line boost (WWL-BST) after the read sensing schemes have been proposed. TSR-BST reduces the worst read bit line (RBL) delay by 61% and RBL amplitude by 10% at VDD = 0.5 V, which improves tAC by 39% and reduces energy dissipation by 11% at VDD = 0.55 V. The WWL-BST after read sensing scheme improves the minimum operating voltage (Vmin) by 140 mV. A 32 kbit 1CLK 1R/1W 2-port SRAM with TSR-BST and WWL-BST has been developed using a 40 nm CMOS process.
Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme
Energy Technology Data Exchange (ETDEWEB)
Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
A computer code based on the Differential Algebraic Cubic Interpolated Propagation scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in the phase space for one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It is successfully applied to a number of equations; the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher dimensional problems. (author)
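The core idea of the Cubic Interpolated Propagation (CIP) family, advecting both the profile and its first derivative with a cubic Hermite polynomial in the upwind cell, can be sketched in 1D for constant positive velocity (a minimal regular-grid sketch, not the thesis code, and without the Boltzmann collision terms):

```python
import numpy as np

def cip_advect(f, g, u, dx, dt, steps):
    """1D CIP advection of f and its derivative g for constant u > 0 with
    periodic boundaries. In each cell a cubic Hermite profile is built from
    (f, g) at node i and its upwind neighbour i-1, then evaluated at the
    departure point x_i - u*dt."""
    xi = -u * dt                  # upstream displacement, |xi| <= dx assumed
    D = -dx
    for _ in range(steps):
        fup, gup = np.roll(f, 1), np.roll(g, 1)       # upwind neighbour i-1
        a = (g + gup) / D**2 + 2.0 * (f - fup) / D**3
        b = 3.0 * (fup - f) / D**2 - (2.0 * g + gup) / D
        f, g = (a * xi**3 + b * xi**2 + g * xi + f,
                3.0 * a * xi**2 + 2.0 * b * xi + g)
    return f, g

n = 200
dx, u, dt = 1.0 / n, 1.0, 0.5 / n                     # CFL = 0.5
x = np.arange(n) * dx
f0 = np.exp(-200.0 * (x - 0.5) ** 2)
g0 = -400.0 * (x - 0.5) * f0                          # analytic derivative
f1, g1 = cip_advect(f0, g0, u, dx, dt, steps=200)     # advect by half a period
```

Because the derivative is advected alongside the function, the profile stays sharp with very little numerical diffusion, which is the property exploited for the phase-space distribution function in the plasma code above.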
On applications of chimera grid schemes to store separation
Dougherty, F. C.; Benek, J. A.; Steger, J. L.
1985-01-01
A finite difference scheme which uses multiple overset meshes to simulate the aerodynamics of aircraft/store interaction and store separation is described. In this chimera, or multiple mesh, scheme, a complex configuration is mapped using a major grid about the main component of the configuration, and minor overset meshes are used to map each additional component such as a store. As a first step in modeling the aerodynamics of store separation, two dimensional inviscid flow calculations were carried out in which one of the minor meshes is allowed to move with respect to the major grid. Solutions of calibrated two dimensional problems indicate that allowing one mesh to move with respect to another does not adversely affect the time accuracy of an unsteady solution. Steady, inviscid three dimensional computations demonstrate the capability to simulate complex configurations, including closely packed multiple bodies.
A fast resonance interference treatment scheme with subgroup method
International Nuclear Information System (INIS)
Cao, L.; He, Q.; Wu, H.; Zu, T.; Shen, W.
2015-01-01
A fast Resonance Interference Factor (RIF) scheme is proposed to treat the resonance interference effects between different resonance nuclides. This scheme utilizes the conventional subgroup method to evaluate the self-shielded cross sections of the dominant resonance nuclide in the heterogeneous system and the hyper-fine energy group method to represent the resonance interference effects in a simplified homogeneous model. In this paper, the newly implemented scheme is compared to the background iteration scheme, the Resonance Nuclide Group (RNG) scheme and the conventional RIF scheme. The numerical results show that the errors of the effective self-shielded cross sections are significantly reduced by the fast RIF scheme compared with the background iteration scheme and the RNG scheme. Besides, the fast RIF scheme consumes less computation time than the conventional RIF schemes. The speed-up ratio is ~4.5 for MOX pin cell problems. (author)
Energy Technology Data Exchange (ETDEWEB)
Laitinen, T.
2013-11-01
This thesis is based on the construction of a two-step laser desorption-ionization aerosol time-of-flight mass spectrometer (laser AMS), which is capable of measuring 10 to 50 nm aerosol particles collected from urban and rural air on site and in near real time. The operation and applicability of the instrument was tested with various laboratory measurements, including parallel measurements with filter collection/chromatographic analysis, and then in field experiments in an urban environment and a boreal forest. Ambient ultrafine aerosol particles are collected on a metal surface by electrostatic precipitation and introduced to the time-of-flight mass spectrometer (TOF-MS) with a sampling valve. Before MS analysis, particles are desorbed from the sampling surface with an infrared laser and ionized with a UV laser. The formed ions are guided to the TOF-MS by ion transfer optics, separated according to their m/z ratios, and detected with a micro channel plate detector. The laser AMS was used in urban air studies to quantify the carbon cluster content in 50 nm aerosol particles. Standards for the study were produced from 50 nm graphite particles, suspended in toluene, with 72 hours of high power sonication. The results showed the average amount of carbon clusters (winter 2012, Helsinki, Finland) in 50 nm particles to be 7.2% per sample. Several fullerenes/fullerene fragments were detected during the measurements. In boreal forest measurements, the laser AMS was capable of detecting several different organic species in 10 to 50 nm particles. These included nitrogen-containing compounds, carbon clusters, aromatics, aliphatic hydrocarbons, and oxygenated hydrocarbons. A most interesting event occurred during the boreal forest measurements in spring 2011, when the chemistry of the atmosphere clearly changed during snow melt. At that time, concentrations of laser AMS ions at m/z 143 and 185 (10 nm particles) increased dramatically. Exactly at the same time, quinoline concentrations
Kaald, Rune; Eggen, Trym; Ytterdal, Trond
2017-02-01
Fully digitized 2D ultrasound transducer arrays require one ADC per channel with a beamforming architecture consuming low power. We give design considerations for per-channel digitization and beamforming, and present the design and measurements of a continuous time delta-sigma modulator (CTDSM) for cardiac ultrasound applications. By integrating a mixer into the modulator frontend, the phase and frequency of the input signal can be shifted, thereby enabling both improved conversion efficiency and narrowband beamforming. To minimize the power consumption, we propose an optimization methodology using a simulated annealing framework combined with a C++ simulator solving linear electrical networks. The 3rd order single-bit feedback type modulator, implemented in a 65 nm CMOS process, achieves an SNR/SNDR of 67.8/67.4 dB across 1 MHz bandwidth consuming 131 [Formula: see text] of power. The achieved figure of merit of 34.2 fJ/step is comparable with state-of-the-art feedforward type multi-bit designs. We further demonstrate the influence on the dynamic range when performing dynamic receive beamforming on recorded delta-sigma modulated bit-stream sequences.
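The principle behind the single-bit modulator above can be shown with a behavioural first-order delta-sigma loop (a plain first-order discrete-time sketch, far simpler than the 3rd-order continuous-time design in the paper):

```python
def delta_sigma_first_order(samples):
    """Behavioural first-order single-bit delta-sigma modulator: integrate
    the difference between the input and the fed-back output bit, then
    quantize the integrator state to +/-1."""
    v, y_prev, bits = 0.0, 0.0, []
    for s in samples:
        v += s - y_prev                    # integrate the feedback error
        y = 1.0 if v >= 0.0 else -1.0      # single-bit quantizer
        bits.append(y)
        y_prev = y
    return bits

# A DC input of 0.3 (full scale +/-1): the running mean of the +/-1
# bit-stream approaches the input level as quantization noise is shaped away.
bits = delta_sigma_first_order([0.3] * 10000)
mean_out = sum(bits) / len(bits)
```

Averaging (decimation filtering) of such a bit-stream is also what dynamic receive beamforming operates on in the last experiment of the abstract.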
Scheme Program Documentation Tools
DEFF Research Database (Denmark)
Nørmark, Kurt
2004-01-01
are separate and intended for different documentation purposes, they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which---in a systematic way---makes an XML language available...... as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource....
International Nuclear Information System (INIS)
Zhao, S.; Lardjane, N.; Fedioun, I.
2014-01-01
Improved WENO schemes, Z, M, and their combination MZ, originally designed to capture sharper discontinuities than the classical fifth order Jiang-Shu scheme does, are evaluated for the purpose of implicit large eddy simulation of free shear flows. A 1D Fourier analysis of errors reveals the built-in filter and dissipative properties of the schemes, which are subsequently applied to the canonical Rayleigh-Taylor and Taylor-Green flows. Large eddy simulations of transonic non-reacting and supersonic reacting air/H2 jets are then performed at resolution 128 × 128 × 512, showing no significant difference in the flow statistics. However, the computational time varies from one scheme to the other, the Z scheme giving the smallest wall-clock time due to the larger allowed time steps. (authors)
Park, Jihoon
2015-01-01
We report a simple two-step annealing scheme for the fabrication of stable non-volatile memory devices employing poly(vinylidene fluoride) (PVDF) polymer thin-films. The proposed two-step annealing scheme comprises the crystallization of the ferroelectric gamma-phase during the first step and enhancement of the PVDF film dense morphology during the second step. Moreover, when we extended the processing time of the second step, we obtained good hysteresis curves down to 1 Hz, the first such report for ferroelectric PVDF films. The PVDF films also exhibit a coercive field of 113 MV m-1 and a ferroelectric polarization of 5.4 μC cm-2. © The Royal Society of Chemistry 2015.
International Nuclear Information System (INIS)
Popov, Pavel P.; Pope, Stephen B.
2014-01-01
This work addresses the issue of particle mass consistency in Large Eddy Simulation/Probability Density Function (LES/PDF) methods for turbulent reactive flows. Numerical schemes for the implicit and explicit enforcement of particle mass consistency (PMC) are introduced, and their performance is examined in a representative LES/PDF application, namely the Sandia–Sydney Bluff-Body flame HM1. A new combination of interpolation schemes for velocity and scalar fields is found to better satisfy PMC than multilinear and fourth-order Lagrangian interpolation. A second-order accurate time-stepping scheme for stochastic differential equations (SDE) is found to improve PMC relative to Euler time stepping, which is the first time that a second-order scheme is found to be beneficial, when compared to a first-order scheme, in an LES/PDF application. An explicit corrective velocity scheme for PMC enforcement is introduced, and its parameters are optimized to enforce a specified PMC criterion with minimal corrective velocity magnitudes.
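Why a second-order time-stepping scheme helps can be seen already in the zero-noise limit of an SDE integrator, where Euler-Maruyama reduces to forward Euler and the stochastic Heun scheme reduces to a trapezoidal predictor-corrector (an illustrative generic comparison, not the particular SDE scheme of the paper):

```python
import math

def euler_step(y, t, dt, f):
    """Forward Euler (the deterministic limit of Euler-Maruyama)."""
    return y + dt * f(t, y)

def heun_step(y, t, dt, f):
    """Trapezoidal predictor-corrector (the deterministic limit of a
    second-order stochastic Heun scheme)."""
    pred = y + dt * f(t, y)
    return y + 0.5 * dt * (f(t, y) + f(t + dt, pred))

def integrate(step, y0, t_end, dt, f):
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y = step(y, t, dt, f)
        t += dt
    return y

f = lambda t, y: -y                        # test equation y' = -y, y(0) = 1
exact = math.exp(-1.0)
err_euler = abs(integrate(euler_step, 1.0, 1.0, 0.1, f) - exact)
err_heun = abs(integrate(heun_step, 1.0, 1.0, 0.1, f) - exact)
```

At dt = 0.1 the second-order step is more than an order of magnitude more accurate for the same number of drift evaluations per step pair; in the LES/PDF setting the analogous reduction in particle-position error is what improves mass consistency.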
Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model
Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.
2017-10-01
We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems, for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
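The computational pattern described above, each implicit half-step reducing to tridiagonal solves, can be sketched on the plain 2D heat equation (a stand-in for the drift-diffusion osmosis operator; Peaceman-Rachford splitting with zero Dirichlet boundaries, all values illustrative):

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a = sub-diagonal,
    b = diagonal, c = super-diagonal, d = rhs (a[0], c[-1] unused)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, mu):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with zero
    Dirichlet boundaries: each half-step is implicit in one direction only,
    so the work is a set of independent tridiagonal solves."""
    n = u.shape[0]
    lo, di, up = np.full(n, -mu), np.full(n, 1 + 2 * mu), np.full(n, -mu)
    p = np.pad(u, 1)
    half = np.empty_like(u)
    for j in range(n):                 # implicit in x, explicit in y
        rhs = u[:, j] + mu * (p[1:-1, j] - 2 * u[:, j] + p[1:-1, j + 2])
        half[:, j] = thomas(lo, di, up, rhs)
    p = np.pad(half, 1)
    out = np.empty_like(u)
    for i in range(n):                 # implicit in y, explicit in x
        rhs = half[i, :] + mu * (p[i, 1:-1] - 2 * half[i, :] + p[i + 2, 1:-1])
        out[i, :] = thomas(lo, di, up, rhs)
    return out

x = np.linspace(-1.0, 1.0, 32)
u0 = np.exp(-10.0 * (x[:, None] ** 2 + x[None, :] ** 2))
u1 = adi_step(u0, mu=0.2)
```

Each solve is O(n), so one ADI step costs O(n^2) for an n×n image, which is the efficiency gain over factorising the full 2D implicit operator.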
International Nuclear Information System (INIS)
1980-10-01
This book, which covers the practical use of stepping motors, is divided into three parts. The first part has six chapters, covering the stepping motor itself, the classification of stepping motors, basic theory, characteristics and basic terminology, types and characteristics of hybrid stepping motors, and basic control of stepping motors. The second part deals with applications: stepping motor control hardware, stepping motor control by microcomputer, and control software. The last part covers selection of a stepping motor system, examples of stepping motors, measurement of stepping motors, and practical cases of stepping motor application.
Allsop, Thomas; Bhamber, Ranjeet; Lloyd, Glynn; Miller, Martin R.; Dixon, Andrew; Webb, David; Ania Castañón, Juan Diego; Bennion, Ian
2012-11-01
An array of in-line curvature sensors on a garment is used to monitor the thoracic and abdominal movements of a human during respiration. The results are used to obtain volumetric changes of the human torso in agreement with a spirometer used simultaneously at the mouth. The array of 40 in-line fiber Bragg gratings is used to produce 20 curvature sensors at different locations, each sensor consisting of two fiber Bragg gratings. The 20 curvature sensors and adjoining fiber are encapsulated into a low-temperature-cured synthetic silicone. The sensors are wavelength interrogated by a commercially available system from Moog Insensys, and the wavelength changes are calibrated to recover curvature. A three-dimensional algorithm is used to generate shape changes during respiration that allow the measurement of absolute volume changes at various sections of the torso. It is shown that the sensing scheme yields a volumetric error of 6%. Comparing the volume data obtained from the spirometer with the volume estimated from the synchronous data of the shape-sensing array yielded a Pearson correlation coefficient of 0.86 (p<0.01).
Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes
International Nuclear Information System (INIS)
Kotlyar, D.; Shwageraus, E.
2016-01-01
Highlights: • Discretization of time in coupled MC codes determines the results’ accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time-step size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence with no major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in the BGCore code and subsequently used to analyze a series of test cases. The results indicate that a computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
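The benefit of updating rates between sub-steps can be illustrated on a single-nuclide toy depletion problem whose reaction rate ramps over the step (the rate model is invented; in the actual method the cheap per-sub-step update comes from depletion/TH recalculation, not a new transport solve):

```python
import math

def deplete_constant_rate(n0, rate, dt):
    """One full step assuming the beginning-of-step rate stays constant."""
    return n0 * math.exp(-rate * dt)

def deplete_substeps(n0, rate_of_t, dt, nsub):
    """Sub-step method: re-evaluate the reaction rate at the start of each
    sub-step, with no additional transport solutions inside the step."""
    n, h = n0, dt / nsub
    for k in range(nsub):
        n *= math.exp(-rate_of_t(k * h) * h)
    return n

# Toy case: rate ramps linearly from 1.0 to 2.0 over a unit step.
rate = lambda t: 1.0 + t
exact = math.exp(-1.5)                 # integral of the rate over [0, 1] is 1.5
coarse = deplete_constant_rate(1.0, rate(0.0), 1.0)
fine = deplete_substeps(1.0, rate, 1.0, nsub=10)
```

The constant-rate step uses only beginning-of-step information and misses the ramp entirely; ten sub-steps recover most of the lost accuracy at a small fraction of the cost of shrinking the transport time step itself.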
Directory of Open Access Journals (Sweden)
Delogu Mauro
2006-05-01
Background: Avian influenza viruses (AIVs) are endemic in wild birds, and their introduction and conversion to highly pathogenic avian influenza virus in domestic poultry is a cause of serious economic losses as well as a risk of potential transmission to humans. The ability to rapidly recognise AIVs in biological specimens is critical for limiting further spread of the disease in poultry. The advent of molecular methods such as real time polymerase chain reaction has allowed improvement of detection methods currently used in laboratories, although not all of these methods include an Internal Positive Control (IPC) to monitor for false negative results. Therefore we developed a one-step reverse transcription real time PCR (RRT-PCR) with a Minor Groove Binder (MGB) probe for the detection of different subtypes of AIVs. This technique also includes an IPC. Methods: RRT-PCR was developed using an improved TaqMan technology with an MGB probe to detect AI from reference viruses. Primers and probe were designed based on the matrix gene sequences from most animal and human influenza A virus subtypes. The specificity of the RRT-PCR was assessed by detecting influenza A virus isolates belonging to subtypes H1–H13 isolated from avian, human, swine and equine hosts. The analytical sensitivity of the RRT-PCR assay was determined using serial dilutions of in vitro transcribed matrix gene RNA. A rodent RNA was used as the IPC so as not to reduce the efficiency of the assay. Results: The RRT-PCR assay is capable of detecting all tested influenza A viruses. The detection limit of the assay was shown to be between 5 and 50 RNA copies per reaction, and the standard curve demonstrated a linear range from 5 to 5 × 10^8 copies as well as excellent reproducibility. The analytical sensitivity of the assay is 10–100 times higher than conventional RT-PCR. Conclusion: The high sensitivity, rapidity, reproducibility and specificity of the AIV RRT-PCR with
Sorger, Bettina; Kamp, Tabea; Weiskopf, Nikolaus; Peters, Judith Caroline; Goebel, Rainer
2018-05-15
Brain-computer interfaces (BCIs) based on real-time functional magnetic resonance imaging (rtfMRI) are currently explored in the context of developing alternative (motor-independent) communication and control means for the severely disabled. In such BCI systems, the user encodes a particular intention (e.g., an answer to a question or an intended action) by evoking specific mental activity resulting in a distinct brain state that can be decoded from fMRI activation. One goal in this context is to increase the degrees of freedom in encoding different intentions, i.e., to allow the BCI user to choose from as many options as possible. Recently, the ability to voluntarily modulate spatial and/or temporal blood oxygenation level-dependent (BOLD)-signal features has been explored implementing different mental tasks and/or different encoding time intervals, respectively. Our two-session fMRI feasibility study systematically investigated for the first time the possibility of using magnitudinal BOLD-signal features for intention encoding. Particularly, in our novel paradigm, participants (n=10) were asked to alternately self-regulate their regional brain-activation level to 30%, 60% or 90% of their maximal capacity by applying a selected activation strategy (i.e., performing a mental task, e.g., inner speech) and modulation strategies (e.g., using different speech rates) suggested by the experimenters. In a second step, we tested the hypothesis that the additional availability of feedback information on the current BOLD-signal level within a region of interest improves the gradual-self regulation performance. Therefore, participants were provided with neurofeedback in one of the two fMRI sessions. Our results show that the majority of the participants were able to gradually self-regulate regional brain activation to at least two different target levels even in the absence of neurofeedback. When provided with continuous feedback on their current BOLD-signal level, most
Park, Jihoon; Kurra, Narendra; AlMadhoun, M. N.; Odeh, Ihab N.; Alshareef, Husam N.
2015-01-01
We report a simple two-step annealing scheme for the fabrication of stable non-volatile memory devices employing poly(vinylidene fluoride) (PVDF) polymer thin-films. The proposed two-step annealing scheme comprises the crystallization
Multiresolution signal decomposition schemes
J. Goutsias (John); H.J.A.M. Heijmans (Henk)
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis
Consolidation of the health insurance scheme
Association du personnel
2009-01-01
In the last issue of Echo, we highlighted CERN’s obligation to guarantee a social security scheme for all employees, pensioners and their families. In that issue we talked about the first component: pensions. This time we shall discuss the other component: the CERN Health Insurance Scheme (CHIS).
Quantum signature scheme based on a quantum search algorithm
International Nuclear Information System (INIS)
Yoon, Chun Seok; Kang, Min Sung; Lim, Jong In; Yang, Hyung Jin
2015-01-01
We present a quantum signature scheme based on a two-qubit quantum search algorithm. For secure transmission of signatures, we use a quantum search algorithm that has not been used in previous quantum signature schemes. A two-step protocol secures the quantum channel, and a trusted center guarantees non-repudiation, similar to other quantum signature schemes. We discuss the security of our protocol. (paper)
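The two-qubit search primitive underlying the scheme can be sketched by statevector simulation (a generic Grover iteration on 2 qubits, not the signature protocol itself):

```python
import numpy as np

def grover_two_qubit(marked):
    """One Grover iteration on 2 qubits (N = 4): oracle phase flip on the
    marked basis state, then inversion about the mean. For N = 4 a single
    iteration maps the uniform superposition exactly onto |marked>."""
    n = 4
    psi = np.full(n, 0.5)                          # uniform superposition
    oracle = np.eye(n)
    oracle[marked, marked] = -1.0                  # phase-flip the target
    diffusion = 2.0 * np.full((n, n), 1.0 / n) - np.eye(n)
    return diffusion @ (oracle @ psi)

probs = np.abs(grover_two_qubit(marked=2)) ** 2    # measurement probabilities
```

The determinism of the two-qubit case (success probability exactly 1 after one iteration) is what makes this search size convenient as a building block in protocols.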
Step out - Step in Sequencing Games
Musegaas, M.; Borm, P.E.M.; Quant, M.
2014-01-01
In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.
Step out-step in sequencing games
Musegaas, Marieke; Borm, Peter; Quant, Marieke
2015-01-01
In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,
Convection-diffusion lattice Boltzmann scheme for irregular lattices
Sman, van der R.G.M.; Ernst, M.H.
2000-01-01
In this paper, a lattice Boltzmann (LB) scheme for convection diffusion on irregular lattices is presented, which is free of any interpolation or coarse graining step. The scheme is derived using the axiom that the velocity moments of the equilibrium distribution equal those of the
Lattice Boltzmann scheme for diffusion on triangular grids
Sman, van der R.G.M.
2003-01-01
In this paper we present a Lattice Boltzmann scheme for diffusion on unstructured triangular grids. In this formulation of an LB scheme for irregular grids there is no need for interpolation, which is required in other LB schemes on irregular grids. At the end of the propagation step the lattice gas
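The collide-and-stream structure these diffusion LB schemes generalize can be shown on the simplest regular-lattice case, a two-velocity (D1Q2) model on a 1D periodic lattice (a regular-grid sketch only; the papers above are precisely about removing the regular-lattice restriction):

```python
import numpy as np

def lb_diffusion_d1q2(rho0, omega, steps):
    """D1Q2 lattice Boltzmann diffusion on a 1D periodic lattice.
    Each node carries a right-moving (fp) and left-moving (fm) population;
    collision relaxes both towards the equilibrium rho/2, then streaming
    shifts them one cell. Diffusivity ~ (1/omega - 1/2) in lattice units."""
    fp = 0.5 * rho0.copy()
    fm = 0.5 * rho0.copy()
    for _ in range(steps):
        rho = fp + fm
        fp += omega * (0.5 * rho - fp)     # collide
        fm += omega * (0.5 * rho - fm)
        fp = np.roll(fp, 1)                # stream right
        fm = np.roll(fm, -1)               # stream left
    return fp + fm

rho0 = np.zeros(64)
rho0[32] = 1.0                             # initial point source
rho = lb_diffusion_d1q2(rho0, omega=1.0, steps=50)
```

Mass is conserved exactly by construction (collision preserves rho, streaming only relocates it), and the point source spreads into the expected diffusive profile.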
Conservative numerical schemes for Euler-Lagrange equations
Energy Technology Data Exchange (ETDEWEB)
Vazquez, L. [Universidad Complutense, Madrid (Spain). Dept. de Matematica Aplicada; Jimenez, S. [Universidad Alfonso X El Sabio, Madrid (Spain). Dept. de Matematica Aplicada
1999-05-01
As a preliminary step to study magnetic field lines, the authors seek numerical schemes that reproduce at the discrete level the significant features of the continuous model, based on an underlying Lagrangian structure. The resulting schemes give discrete counterparts of the variation law for the energy, as well as of the Euler-Lagrange equations and their symmetries.
Quantum signature scheme for known quantum messages
International Nuclear Information System (INIS)
Kim, Taewan; Lee, Hyang-Sook
2015-01-01
When we want to sign a quantum message that we create, we can use arbitrated quantum signature schemes, which can sign not only known quantum messages but also unknown quantum messages. However, since arbitrated quantum signature schemes need the help of a trusted arbitrator in each verification of the signature, they are known to be inconvenient in practical use. If we consider only known quantum messages, as in the above situation, a quantum signature scheme with a more efficient structure can exist. In this paper, we present a new quantum signature scheme for known quantum messages without the help of an arbitrator. Unlike arbitrated quantum signature schemes, which are based on the quantum one-time pad with a symmetric key, our scheme is based on quantum public-key cryptosystems, so the validity of the signature can be verified by a receiver without the help of an arbitrator. Moreover, we show that our scheme provides the functions of quantum message integrity, user authentication and non-repudiation of origin, as in digital signature schemes. (paper)
Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin
2018-01-01
The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications, focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of diagonally implicit two-stage and explicit three-stage parts. This scheme enjoys the Strong Stability Preserving (SSP) property for both parts. The new scheme is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements: (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied scheme in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the level of accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with satisfaction of the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.
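The IMEX idea itself can be shown with a first-order sketch: a plain IMEX Euler split (not the paper's IMEX-SSP2(2,3,2) tableau) applied to the stiff Prothero-Robinson test problem, whose exact solution is y = cos t:

```python
import math

def imex_euler(y0=1.0, lam=-1000.0, dt=0.01, t_end=1.0):
    """First-order IMEX Euler for y' = lam*(y - cos t) - sin t.
    The stiff linear term lam*y is treated implicitly; the
    remaining forcing is treated explicitly."""
    y, t = y0, 0.0
    for _ in range(round(t_end / dt)):
        g = -lam * math.cos(t) - math.sin(t)   # non-stiff (explicit) part
        y = (y + dt * g) / (1.0 - dt * lam)    # solve the scalar implicit update
        t += dt
    return y

y = imex_euler()
```

At this step size a fully explicit Euler step would have amplification factor |1 + lam*dt| = 9 and blow up, while the IMEX split remains stable and tracks cos t; higher-order IMEX Runge-Kutta schemes such as the one in the abstract refine this same splitting.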
Threshold Signature Schemes Application
Directory of Open Access Journals (Sweden)
Anastasiya Victorovna Beresneva
2015-10-01
This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generation and verification of threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation were given, which could reduce the level of counterfeit electronic documents signed by a group of users.
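The Lagrange-interpolation construction mentioned above underlies Shamir-style threshold sharing, the usual starting point for threshold signatures. A minimal sketch over a prime field (illustrative secret sharing only, not a full threshold signature scheme):

```python
import random

P = 2**61 - 1  # a Mersenne prime, used here as the field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Fermat inverse: den^(P-2) mod P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In a threshold signature, participants hold such shares of the signing key and combine partial signatures instead of ever reconstructing the key in one place; this sketch shows only the interpolation mechanics.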
Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina
2015-01-01
Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infection. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples, consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals, were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA VL of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with the two-step PCR method (r = 0.89; p Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.
A Hybrid DGTD-MNA Scheme for Analyzing Complex Electromagnetic Systems
Li, Peng
2015-01-07
A hybrid electromagnetics (EM)-circuit simulator for analyzing complex systems consisting of EM devices loaded with nonlinear multi-port lumped circuits is described. The proposed scheme splits the computational domain into two subsystems: EM and circuit subsystems, where field interactions are modeled using Maxwell and Kirchhoff equations, respectively. Maxwell equations are discretized using a discontinuous Galerkin time domain (DGTD) scheme while Kirchhoff equations are discretized using a modified nodal analysis (MNA)-based scheme. The coupling between the EM and circuit subsystems is realized at the lumped ports, where related EM fields and circuit voltages and currents are allowed to "interact" via numerical flux. To account for nonlinear lumped circuit elements, the standard Newton-Raphson method is applied at every time step. Additionally, a local time-stepping scheme is developed to improve the efficiency of the hybrid solver. Numerical examples consisting of EM systems loaded with single and multiport linear/nonlinear circuit networks are presented to demonstrate the accuracy, efficiency, and applicability of the proposed solver.
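The per-time-step Newton-Raphson treatment of nonlinear lumped elements can be illustrated in isolation. The following hypothetical example solves a single nonlinear nodal equation, a resistor feeding a Shockley-model diode, which is the scalar analogue of what an MNA-based solver does at each step (element values are made up for illustration):

```python
import math

def solve_diode_node(vs=5.0, r=1000.0, i_s=1e-12, vt=0.025, tol=1e-12):
    """Newton-Raphson for the nodal equation (vs - v)/r = i_s*(exp(v/vt) - 1)."""
    v = 0.6  # initial guess near a typical silicon diode drop
    for _ in range(100):
        f = (vs - v) / r - i_s * (math.exp(v / vt) - 1.0)   # KCL residual
        df = -1.0 / r - (i_s / vt) * math.exp(v / vt)       # its derivative
        dv = -f / df                                        # Newton update
        v += dv
        if abs(dv) < tol:
            break
    return v

v = solve_diode_node()
```

In the full hybrid solver the same linearize-solve-update loop runs over the MNA system of all circuit unknowns, with the DGTD fields entering through the port voltages and currents.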
Error Analysis of Explicit Partitioned Runge–Kutta Schemes for Conservation Laws
Hundsdorfer, Willem; Ketcheson, David I.; Savostianov, Igor
2014-08-27
An error analysis is presented for explicit partitioned Runge–Kutta methods and multirate methods applied to conservation laws. The interfaces, across which different methods or time steps are used, lead to order reduction of the schemes. Along with cell-based decompositions, also flux-based decompositions are studied. In the latter case mass conservation is guaranteed, but it will be seen that the accuracy may deteriorate.
Numerical method for solving the three-dimensional time-dependent neutron diffusion equation
International Nuclear Information System (INIS)
Khaled, S.M.; Szatmary, Z.
2005-01-01
A numerical time-implicit method has been developed for solving the coupled three-dimensional time-dependent multi-group neutron diffusion and delayed neutron precursor equations. The numerical stability of the implicit computation scheme and the convergence of the associated iterative processes have been evaluated. The computational scheme requires the solution of large linear systems at each time step. For this purpose, the point over-relaxation Gauss-Seidel method was chosen. A new scheme was introduced instead of the usual source iteration scheme. (author)
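The point over-relaxation Gauss-Seidel (SOR) iteration mentioned above can be sketched generically. This toy applies it to a 1D Poisson matrix rather than the 3D multi-group diffusion operator, but the sweep structure is the same:

```python
import numpy as np

def sor(a, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Point successive over-relaxation for A x = b (nonzero diagonal assumed)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated values x[:i] and old values x_old[i+1:].
            sigma = a[i, :i] @ x[:i] + a[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / a[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Test matrix: the 1D Poisson tridiagonal [-1, 2, -1], a stand-in for
# the diffusion operator that arises at each implicit time step.
n = 20
a = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = sor(a, b)
```

Setting omega = 1 recovers plain Gauss-Seidel; for diffusion-type matrices a relaxation factor between 1 and 2 typically accelerates convergence substantially.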
Construction of a knowledge classification scheme for sharing and usage of knowledge
International Nuclear Information System (INIS)
Yoo, Jae Bok; Oh, Jeong Hoon; Lee, Ji Ho; Ko, Young Chul
2003-12-01
To share knowledge efficiently among our members on the basis of a knowledge management system, we first need to design systematically a knowledge classification scheme under which this knowledge can be classified well. The objective of this project is to construct the most suitable knowledge classification scheme, one that all of us can share, at the Korea Atomic Energy Research Institute (KAERI). To construct a knowledge classification scheme covering our whole organization, we established a few design principles and examined many related classification schemes. We then carried out three steps to complete the most desirable KAERI knowledge classification scheme: 1) designing a draft of the scheme, 2) revising the draft, and 3) verifying the revised scheme and deciding on it. The scheme completed as a result of this project consists of 218 items in total: 8 sections, 43 classes and 167 sub-classes. We expect that the knowledge classification scheme designed in this project will play an important role as the frame for sharing knowledge among our members when we introduce a knowledge management system in our organization. In addition, we expect that both the scheme itself and the methods used to design it can be applied when designing knowledge classification schemes at other R and D institutes and enterprises.
DEFF Research Database (Denmark)
Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela
2013-01-01
Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit...
Energy Technology Data Exchange (ETDEWEB)
Willcock, J J; Lumsdaine, A; Quinlan, D J
2008-08-19
Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs and for making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
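A rough flavor of tabling (sketched in Python rather than Scheme, and without the paper's continuation-passing machinery) is a memo table plus a set of currently active calls, iterated to a fixed point. That is enough to make reachability over a cyclic graph terminate where naive recursion would loop forever:

```python
def tabled_reach(graph):
    """Tabled transitive closure over a possibly cyclic graph (dict of
    node -> successor list). Naive recursion would not terminate on cycles."""
    table = {}       # node -> set of nodes known reachable so far
    active = set()   # calls currently being evaluated

    def reach(node):
        if node in active:               # re-entrant call: answers so far
            return table.get(node, set())
        active.add(node)
        result = table.setdefault(node, set())
        changed = True
        while changed:                   # iterate to a fixed point
            changed = False
            for succ in graph.get(node, []):
                new = {succ} | reach(succ)
                if not new <= result:
                    result |= new
                    changed = True
        active.discard(node)
        return result

    return reach

reach = tabled_reach({"a": ["b"], "b": ["c", "a"], "c": []})
```

The `active` set plays the role of the tabling system's call table: a recursive call on an in-progress goal consumes the answers derived so far instead of recursing, and the outer fixed-point loop fills in the rest.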
Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2016-06-01
This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
Evaluating statistical cloud schemes
Grützun, Verena; Quaas, Johannes; Morcrette , Cyril J.; Ament, Felix
2015-01-01
Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based re...
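The core diagnostic of a statistical cloud scheme can be sketched in a few lines. Assuming a Gaussian subgrid PDF of total water (a common but by no means the only choice of distribution), the cloud fraction is the probability of exceeding saturation:

```python
import math

def cloud_fraction_gaussian(q_mean, q_sat, sigma):
    """Cloud fraction for a Gaussian subgrid PDF of total water q_t:
    C = P(q_t > q_sat) = 0.5 * erfc((q_sat - q_mean) / (sigma * sqrt(2)))."""
    return 0.5 * math.erfc((q_sat - q_mean) / (sigma * math.sqrt(2.0)))
```

When the grid-mean water equals the saturation value the box is half cloudy; the schemes discussed in the abstract make the PDF (and hence sigma and its shape) prognostic, which is exactly what high-resolution water vapor and cloud water observations would be needed to evaluate.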
Gamma spectrometry; level schemes
International Nuclear Information System (INIS)
Blachot, J.; Bocquet, J.P.; Monnand, E.; Schussler, F.
1977-01-01
The research presented dealt with: a new beta emitter, an isomer of 131 Sn; the 136 I levels fed through the radioactive decay of 136 Te (20.9 s); the A=145 chain (β decay of Ba, La and Ce, and level schemes for 145 La, 145 Ce, 145 Pr); the A=147 chain (β decay of La and Ce, and the level schemes of 147 Ce and 147 Pr) [fr]
International Nuclear Information System (INIS)
2002-04-01
This scheme defines the objectives for renewable energies and the rational use of energy within the framework of the national energy policy. It evaluates the needs and potential of the regions and recommends joint actions between the government and the territorial organizations. The document is presented in four parts: the situation, stakes and forecasts; possible actions for new measures; scheme management; and an analysis of the regional contributions. (A.L.B.)