WorldWideScience

Sample records for time step computations

  1. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
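
    The core idea summarised above (a CFL-limited stable step per element, with small elements sub-stepping more often) can be illustrated with a minimal Python sketch. This is not the authors' Newmark-LTS implementation; the function name, the power-of-two level rule, and the parameter values are illustrative assumptions only.

    ```python
    import numpy as np

    def lts_levels(element_sizes, wave_speed, cfl=0.5):
        """Group mesh elements into local-time-stepping (LTS) levels.

        Level-p elements take 2**p sub-steps of size dt_global / 2**p per
        global step, so each element runs near its own CFL-stable step size.
        """
        h = np.asarray(element_sizes, dtype=float)
        dt_elem = cfl * h / wave_speed            # element-wise CFL-stable step
        dt_global = dt_elem.max()                 # set by the coarsest elements
        levels = np.ceil(np.log2(dt_global / dt_elem)).astype(int)
        return dt_global, np.maximum(levels, 0)

    # A mesh with a 100x element-size contrast: the smallest elements land on
    # level 7 (128 sub-steps) instead of forcing the whole mesh to their step.
    dt0, lev = lts_levels([1.0, 0.5, 0.01], wave_speed=3000.0)
    print(dt0, lev)   # -> ~1.67e-4, [0 1 7]
    ```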

  2. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  3. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  4. Diffeomorphic image registration with automatic time-step adjustment

    DEFF Research Database (Denmark)

    Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst

    2015-01-01

    In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler's step required to r...... accuracy as a fixed time-step scheme however at a much less computational cost....

  5. An explicit multi-time-stepping algorithm for aerodynamic flows

    OpenAIRE

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows speedups in the order of five with respect to single time stepping are obtained.

  6. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for

  7. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects the efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler (SIE) based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations become more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated for by the decrease in computing cost per time step needed for achieving a given accuracy.

  8. Computational Abstraction Steps

    DEFF Research Database (Denmark)

    Thomsen, Lone Leth; Thomsen, Bent; Nørmark, Kurt

    2010-01-01

    and class instantiations. Our teaching experience shows that many novice programmers find it difficult to write programs with abstractions that materialise to concrete objects later in the development process. The contribution of this paper is the idea of initiating a programming process by creating...... or capturing concrete values, objects, or actions. As the next step, some of these are lifted to a higher level by computational means. In the object-oriented paradigm the target of such steps is classes. We hypothesise that the proposed approach primarily will be beneficial to novice programmers or during...... the exploratory phase of a program development process. In some specific niches it is also expected that our approach will benefit professional programmers....

  9. Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation

    International Nuclear Information System (INIS)

    Boer, J. de; Dannhaueser, G.

    1982-01-01

    The way in which the multiple E2 steps in the Coulomb-excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays for the excitation of a given final state is measured by a quantity named ''importance function''. It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to similar deflection functions and cross sections as those obtained from a symmetrization procedure approximately accounting for the transfer of angular momentum and energy. (Auth.)

  10. Minimal features of a computer and its basic software to execute the NEPTUNIX 2 numerical step

    International Nuclear Information System (INIS)

    Roux, Pierre.

    1982-12-01

    NEPTUNIX 2 is a package which carries out the simulation of complex processes described by numerous non-linear algebro-differential equations. Its main features are: non-linear or time-dependent parameters, implicit form, stiff systems, and dynamic change of equations leading to discontinuities in some variables. Thus the mathematical model is built with an equation set F(x,x',t,l) = 0, where t is the independent variable, x' the derivative of x, and l an ''algebrized'' logical variable. The NEPTUNIX 2 package is divided into two successive major steps: a non-numerical step and a numerical step. The non-numerical step must be executed on an IBM 370-series computer or a compatible computer. This step generates a FORTRAN-language model picture fitted for the computer carrying out the numerical step. The numerical step consists in building and running a mathematical model simulator. This execution step of NEPTUNIX 2 has been designed to be portable to many computers. The present manual describes the minimal features of such a host computer used for executing the NEPTUNIX 2 numerical step [fr]

  11. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
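
    For readers unfamiliar with multiple time stepping, the sketch below shows a standard reversible RESPA-style splitting in which the cheap, fast forces are integrated with an inner loop while the expensive, slow forces are applied only at the outer step. It illustrates the baseline scheme the abstract contrasts against, not the resonance-free isokinetic Nosé-Hoover methods of the paper; all names and parameters are illustrative.

    ```python
    import numpy as np

    def respa_step(x, v, m, fast_force, slow_force, dt_outer, n_inner):
        """One reversible reference-system propagator (r-RESPA) step.

        fast_force : cheap, rapidly varying force, integrated with dt_outer/n_inner
        slow_force : expensive, slowly varying force, applied only at the outer step
        """
        dt_inner = dt_outer / n_inner

        # Half kick from the slow force (outer level)
        v = v + 0.5 * dt_outer * slow_force(x) / m

        # Inner velocity-Verlet loop driven by the fast force only
        for _ in range(n_inner):
            v = v + 0.5 * dt_inner * fast_force(x) / m
            x = x + dt_inner * v
            v = v + 0.5 * dt_inner * fast_force(x) / m

        # Closing half kick from the slow force
        v = v + 0.5 * dt_outer * slow_force(x) / m
        return x, v
    ```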

  12. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    Science.gov (United States)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to

  13. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    Science.gov (United States)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as the propellant feedline system, are complex processes involving fast phases followed by slow phases. Therefore, their time-accurate computation requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time-stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chill-down in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
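
    The abstract does not spell out the feedback rule. A minimal sketch of one common choice, adjusting the step from the relative change of monitored key variables, is given below; the function name, thresholds, and growth/shrink factors are illustrative assumptions, not the authors' algorithm.

    ```python
    def next_time_step(dt, change, target=0.05, dt_min=1e-6, dt_max=1.0,
                       grow=1.25, shrink=0.5):
        """Adapt the time step from the relative change of a monitored variable.

        change : largest relative change of key variables (e.g. pressure,
                 temperature) observed over the last accepted step
        target : desired relative change per step
        """
        if change > 2.0 * target:        # fast transient: cut the step
            dt = max(dt * shrink, dt_min)
        elif change < 0.5 * target:      # slow phase: allow a larger step
            dt = min(dt * grow, dt_max)
        # otherwise keep dt: the solution changes at roughly the target rate
        return dt
    ```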

  14. The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays

    Energy Technology Data Exchange (ETDEWEB)

    Gadella, M. [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain); Kuru, Ş. [Department of Physics, Faculty of Science, Ankara University, 06100 Ankara (Turkey); Negro, J., E-mail: jnegro@fta.uva.es [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain)

    2017-04-15

    We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners, which form the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and these SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.

  15. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.

  16. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mather, Barry

    2017-08-24

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.

  17. Solving point reactor kinetic equations by time step-size adaptable numerical methods

    International Nuclear Information System (INIS)

    Liao Chaqing

    2007-01-01

    Based on an analysis of the effects of time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The point reactor kinetic equations (PRKEs) were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
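
    As a rough illustration of error-based step-size adaptation of the kind studied above, the sketch below applies the implicit Euler method with a step-doubling error estimate to a linear one-delayed-group point-kinetics system. It is a generic sketch, not the paper's Two-Step Method, and the matrix entries are illustrative numbers only.

    ```python
    import numpy as np

    def implicit_euler(A, y, h):
        """One backward-Euler step for y' = A y: solve (I - h*A) y_new = y."""
        return np.linalg.solve(np.eye(len(y)) - h * A, y)

    def adaptive_step(A, y, h, tol=1e-8):
        """Advance one accepted step; two half steps vs. one full step give a
        local error estimate used to adapt h."""
        while True:
            y_full = implicit_euler(A, y, h)
            y_half = implicit_euler(A, implicit_euler(A, y, h / 2), h / 2)
            err = np.max(np.abs(y_half - y_full))
            if err <= tol:
                h_next = h * min(2.0, 0.9 * (tol / max(err, 1e-30)) ** 0.5)
                return y_half, h, h_next     # accepted value, step used, next step
            h *= 0.5                         # reject and retry with a smaller step

    # Illustrative one-delayed-group point-kinetics matrix (made-up values)
    rho, beta, Lam, lam = 0.001, 0.0065, 1e-4, 0.08
    A = np.array([[(rho - beta) / Lam, lam],
                  [beta / Lam, -lam]])
    y, h, t = np.array([1.0, beta / (Lam * lam)]), 1e-5, 0.0
    while t < 1.0:
        h = min(h, 1.0 - t)
        y, h_used, h = adaptive_step(A, y, h)
        t += h_used
    print(t, y)
    ```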

  18. Symplectic integrators with adaptive time steps

    Science.gov (United States)

    Richardson, A. S.; Finn, J. M.

    2012-01-01

    In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
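
    For reference, the time transformation mentioned above can be written generically as follows; only the generic form is shown here, not the paper's extended-phase-space or generating-function formulations.

    ```latex
    % With dt = \Delta(q,p)\, d\tau, Hamilton's equations for H(q,p) become the
    % (no longer Hamiltonian) system integrated in the new time variable \tau:
    \frac{dq}{d\tau} = \Delta(q,p)\,\frac{\partial H}{\partial p}, \qquad
    \frac{dp}{d\tau} = -\,\Delta(q,p)\,\frac{\partial H}{\partial q}.
    ```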

  19. Multiple time step integrators in ab initio molecular dynamics

    International Nuclear Information System (INIS)

    Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.

    2014-01-01

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy

  20. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea

    2014-10-31

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  1. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun

    2014-01-01

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  2. Fast algorithms for computing phylogenetic divergence time.

    Science.gov (United States)

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primates taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.

  3. Coronary computed tomography angiography using prospective electrocardiography-gated axial scans with 64-detector computed tomography. Evaluation of stair-step artifacts and padding time

    International Nuclear Information System (INIS)

    Kimura, Fumiko; Umezawa, Tatsuo; Asano, Tomonari; Chihara, Ruri; Nishi, Naoko; Nishimura, Shigeyoshi; Sakai, Fumikazu

    2010-01-01

    We compared stair-step artifacts and radiation dose between prospective electrocardiography (ECG)-gated coronary computed tomography angiography (prospective CCTA) and retrospective CCTA using 64-detector CT and determined the optimal padding time (PT) for prospective CCTA. We retrospectively evaluated 183 patients [mean heart rate (HR) <65 beats/min, maximum HR instability <5 beats/min] who had undergone CCTA. We scored stair-step artifacts from 1 (severe) to 5 (none) and evaluated the effective dose in 53 patients with retrospective CCTA and 130 with prospective CCTA (PT 200 ms, n=32; PT 50 ms, n=98). Mean artifact scores were 4.3 in both retrospective and prospective CCTAs. However, statistically more arteries scored <3 (nonassessable) on prospective CCTA (P<0.001). Mean scores for prospective CCTA with 200- and 50-ms PT were 4.1 and 4.3, respectively (no significant difference). The radiation dose of prospective CCTA was reduced by 59.1% to 80.7%. Prospective CCTA reduces the radiation dose and allows diagnostic imaging in most cases but shows more nonevaluable artifacts than retrospective CCTA. Use of 50-ms instead of 200-ms PT appears to maintain image quality in patients with a mean HR <65 beats/min and HR instability of <5 beats/min. (author)

  4. Time step MOTA thermostat simulation

    International Nuclear Information System (INIS)

    Guthrie, G.L.

    1978-09-01

    The report details the logic, program layout, and operating procedures for the time-step MOTA (Materials Open Test Assembly) thermostat simulation program known as GYRD. It will enable prospective users to understand the operation of the program, run it, and interpret the results. The time-step simulation analysis was the approach chosen to determine the maximum value gain that could be used to minimize steady temperature offset without risking undamped thermal oscillations. The advantage of the GYRD program is that it directly shows hunting, ringing phenomenon, and similar events. Programs BITT and CYLB are faster, but do not directly show ringing time

  5. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming of a molecular dynamics simulation program is carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using a level of parallel programming that decomposes the calculation according to the indices of do-loops onto each processor, on both the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. It is also found that the VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization. The time-consuming parts of the program are then concentrated in fewer parts that can be accelerated by do-loop-level parallel programming. This report presents the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on the VPP500 and Paragon. (author)

  6. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    Science.gov (United States)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.

  7. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    Science.gov (United States)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.

  8. Computer aided virtual manufacturing using Creo parametric easy to learn step by step guide

    CERN Document Server

    Kanife, Paul Obiora

    2016-01-01

    Providing a step-by-step guide for the implementation of virtual manufacturing using Creo Parametric software (formerly known as Pro-Engineer), this book creates an engaging and interactive learning experience for manufacturing engineering students. Featuring graphic illustrations of simulation processes and operations, and written in accessible English to promote user-friendliness, the book covers key topics in the field including: the engraving machining process, face milling, profile milling, surface milling, volume rough milling, expert machining, electric discharge machining (EDM), and area turning using the lathe machining process. Maximising reader insights into how to simulate material removal processes, and how to generate cutter location data and G-codes data, this valuable resource equips undergraduate, postgraduate, BTech and HND students in the fields of manufacturing engineering, computer aided design (CAD) and computer aided engineering (CAE) with transferable skills and knowledge. This book is...

  9. Noise-constrained switching times for heteroclinic computing

    Science.gov (United States)

    Neves, Fabio Schittler; Voit, Maximilian; Timme, Marc

    2017-03-01

    Heteroclinic computing offers a novel paradigm for universal computation by collective system dynamics. In such a paradigm, input signals are encoded as complex periodic orbits approaching specific sequences of saddle states. Without inputs, the relevant states together with the heteroclinic connections between them form a network of states—the heteroclinic network. Systems of pulse-coupled oscillators or spiking neurons naturally exhibit such heteroclinic networks of saddles, thereby providing a substrate for general analog computations. Several challenges need to be resolved before it becomes possible to effectively realize heteroclinic computing in hardware. The time scales on which computations are performed crucially depend on the switching times between saddles, which in turn are jointly controlled by the system's intrinsic dynamics and the level of external and measurement noise. The nonlinear dynamics of pulse-coupled systems often strongly deviate from that of time-continuously coupled (e.g., phase-coupled) systems. The factors impacting switching times in pulse-coupled systems are still not well understood. Here we systematically investigate switching times in dependence of the levels of noise and intrinsic dissipation in the system. We specifically reveal how local responses to pulses coact with external noise. Our findings confirm that, like in time-continuous phase-coupled systems, piecewise-continuous pulse-coupled systems exhibit switching times that transiently increase exponentially with the number of switches up to some order of magnitude set by the noise level. Complementarily, we show that switching times may constitute a good predictor for the computation reliability, indicating how often an input signal must be reiterated. By characterizing switching times between two saddles in conjunction with the reliability of a computation, our results provide a first step beyond the coding of input signal identities toward a complementary coding for

  10. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture, e.g., steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.

  11. Time step size selection for radiation diffusion calculations

    International Nuclear Information System (INIS)

    Rider, W.J.; Knoll, D.A.

    1999-01-01

    The purpose of this note is to describe a time step control technique as applied to radiation diffusion. Standard practice only provides a heuristic criterion related to the relative change in the dependent variables. The authors propose an alternative based on relatively simple physical principles. This time step control applies to methods of solution that are unconditionally stable and that converge the nonlinearities in the governing equations within a time step. Commonly, nonlinearities in the governing equations are evaluated using existing (old-time) data. The authors refer to this as the semi-implicit (SI) method. When a method converges the nonlinearities within a time step, the entire governing equation, including all nonlinearities, is self-consistently evaluated using advanced-time data (with appropriate time centering for accuracy).

  12. Dose field simulation for products irradiated by electron beams: formulation of the problem and its step by step solution with EGS4 computer code

    International Nuclear Information System (INIS)

    Rakhno, I.L.; Roginets, L.P.

    1999-01-01

    When performing radiation treatment of products using an electron beam much time and money should be spent for numerous measurements to make optimal choice of treatment mode. Direct radiation treatment simulation by means of the EGS4 computer code fails to describe such measurement results correctly. In the paper a multi-step radiation treatment planning procedure is suggested which consists in fitting the EGS4 simulation results to reference measurement results, and using the fitted electron beam parameters and other ones in subsequent computer simulations. It is shown that the fitting procedure should be performed separately for each material or product type. The procedure suggested allows to replace measurements by computer simulations and therefore reduces significantly time and money required for such measurements. (author)

  13. The importance of time-stepping errors in ocean models

    Science.gov (United States)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
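
    A minimal sketch of leapfrog time stepping with the RAW filter is given below, following the published displacement form in which alpha = 1 recovers the classical RA filter. It is an illustrative toy (a scalar oscillator), not the Met Office or any ocean-model implementation, and the parameter values are assumptions.

    ```python
    import numpy as np

    def leapfrog_raw(f, x0, dt, n_steps, nu=0.2, alpha=0.53):
        """Leapfrog integration of dx/dt = f(x) with the Robert-Asselin-Williams
        (RAW) filter; alpha = 1 gives the classical Robert-Asselin filter."""
        xs = [x0]
        x_prev = x0
        x_curr = x0 + dt * f(x0)                    # forward-Euler start step
        xs.append(x_curr)
        for _ in range(n_steps - 1):
            x_next = x_prev + 2.0 * dt * f(x_curr)  # leapfrog step
            d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
            x_prev = x_curr + alpha * d             # filter the middle level...
            x_curr = x_next + (alpha - 1.0) * d     # ...and nudge the newest level
            xs.append(x_curr)
        return np.array(xs)

    # Example: a linear oscillator written as a complex scalar, dx/dt = i*omega*x
    omega = 1.0
    sol = leapfrog_raw(lambda x: 1j * omega * x, 1.0 + 0j, dt=0.1, n_steps=200)
    print(abs(sol[-1]))   # amplitude stays near 1 with the RAW filter
    ```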

  14. Development of a real time activity monitoring Android application utilizing SmartStep.

    Science.gov (United States)

    Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward

    2016-08-01

    Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented developmental aspects of an insole-based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real-time feedback. The development of the activity classification methods was based on data from a human study involving four participants. Participants were asked to perform the activities of sitting, standing, walking, and cycling while they wore the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of a machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross-validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real-time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.

  15. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Science.gov (United States)

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e−aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Therefore, radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, the use of conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. Actually, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique for calculating radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparingly because they are computationally time-consuming. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for 2-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.

  16. Efficiently computing exact geodesic loops within finite steps.

    Science.gov (United States)

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, and so the resultant loops are restricted only to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finite steps. Our proposed algorithm takes only an O(k) space complexity and an O(mk) time complexity (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to the existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and can be applied directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape could also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  17. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series; it estimates the proper parameters of phase-space reconstruction and optimizes the structure of the recurrent neural network by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method can dynamically combine the embedding method with the capability of a recurrent neural network to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series, and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.

  18. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    Science.gov (United States)

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
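
    A minimal sketch of the first (pre-alignment) step only is shown below: estimating the temporal offset between two one-dimensional chromatographic profiles by FFT-based cross-correlation. This is not ChromAlign's code; the function name and the synthetic profiles are assumptions made for illustration.

    ```python
    import numpy as np

    def profile_offset(reference, sample):
        """Estimate the offset (in scans) between two chromatographic profiles
        (e.g. total-ion-current traces) via FFT-based cross-correlation."""
        n = len(reference) + len(sample) - 1
        nfft = 1 << (n - 1).bit_length()            # next power of two
        R = np.fft.rfft(reference - np.mean(reference), nfft)
        S = np.fft.rfft(sample - np.mean(sample), nfft)
        xcorr = np.fft.irfft(R * np.conj(S), nfft)  # circular cross-correlation
        xcorr = np.concatenate((xcorr[-(len(sample) - 1):], xcorr[:len(reference)]))
        lags = np.arange(-(len(sample) - 1), len(reference))
        return lags[np.argmax(xcorr)]               # lag maximizing the overlap

    # Synthetic test: the sample peak elutes 7 scans later than the reference
    t = np.linspace(0.0, 10.0, 500)
    ref = np.exp(-(t - 4.0) ** 2 / 0.05)
    sam = np.exp(-(t - 4.0 - 7 * (t[1] - t[0])) ** 2 / 0.05)
    print(profile_offset(ref, sam))   # -7 under this sign convention
    ```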

  19. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Jonás D.

    2010-04-01

    We investigate the stability of some high-order finite element methods, namely the spectral element method and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation that have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.

  20. Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET

    Directory of Open Access Journals (Sweden)

    B. Ghahraman

    2016-02-01

    Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management issues. Actual ET depends on an estimate of a water stress index and the average soil water in the crop root zone, and so depends on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by using different numerical methods. Overall, many factors influence actual evapotranspiration: crop potential evapotranspiration, available root-zone water content, time step, crop sensitivity, and soil. In this paper, different numerical methods are compared for different soil textures and different crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content is equal to the evapotranspiration, ET. In this approach, deep percolation is generally ignored owing to a deep water table and negligible unsaturated hydraulic conductivity below the rooting depth. This differential equation may be solved analytically or numerically with different algorithms. We adopted four different numerical methods (explicit, implicit, and modified Euler, the midpoint method, and the third-order Heun method) to approximate the differential equation. Three general soil types (sand, silt, and clay) and three crop types (sensitive, moderate, and resistant) under the Nishaboor plain were used. The standard soil water depletion fraction, pstd (corresponding to ETc = 5 mm d-1), below which the crop faces water stress, is adopted for crop sensitivity. Three values of pstd were considered in this study to cover the common crops of the area, including winter wheat and barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistive crops with pstd=0
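
    The governing equation is not written out in the record above. A minimal sketch is given below under the common FAO-56-style assumption that actual ET equals ETc scaled by a stress coefficient Ks which falls linearly below the depletion threshold pstd; it compares two of the named methods (explicit Euler and a Heun predictor-corrector), and all parameter values are illustrative assumptions.

    ```python
    def ks(W, TAW=100.0, pstd=0.5):
        """FAO-56-style stress coefficient from remaining root-zone water W (mm)."""
        threshold = (1.0 - pstd) * TAW       # below this, the crop is water stressed
        return 1.0 if W >= threshold else max(W, 0.0) / threshold

    def dWdt(W, ETc=5.0):
        return -ks(W) * ETc                  # soil water depleted by actual ET only

    def euler_step(W, dt):                   # explicit Euler
        return W + dt * dWdt(W)

    def heun_step(W, dt):                    # Heun predictor-corrector (2nd order)
        Wp = W + dt * dWdt(W)
        return W + 0.5 * dt * (dWdt(W) + dWdt(Wp))

    W_e = W_h = 100.0                        # start with the full TAW available
    for day in range(30):                    # daily step, no rainfall or irrigation
        W_e, W_h = euler_step(W_e, 1.0), heun_step(W_h, 1.0)
    print(W_e, W_h)                          # methods diverge once Ks drops below 1
    ```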

  1. Elasto-plastic benchmark calculations. Step 1: verification of the numerical accuracy of the computer programs

    International Nuclear Information System (INIS)

    Corsi, F.

    1985-01-01

    In connection with the design of nuclear reactor components operating at elevated temperature, design criteria need a level of realism in the prediction of inelastic structural behaviour. This leads to the need to develop non-linear computer programmes and, as a consequence, to the problems of verification and qualification of these tools. Benchmark calculations allow these two actions to be carried out, while at the same time increasing confidence in the analysis of complex phenomena and in inelastic design calculations. With the financial and programmatic support of the Commission of the European Communities (CEE), a programme of elasto-plastic benchmark calculations relevant to the design of structural components for LMFBRs has been undertaken by those Member States which are developing a fast reactor project. Four principal progressive aims were initially identified, which led to the decision to subdivide the benchmark effort into a series of four sequential calculation steps: Steps 1 to 4. The present document summarizes Step 1 of the benchmark exercise, derives some conclusions on Step 1 by comparing the results obtained with the various codes, and offers some concluding comments on this first action. It should be pointed out that even though the work was designed to test the capabilities of the computer codes, another aim was to increase the skill of the users concerned.

  2. A Multi-step and Multi-level approach for Computer Aided Molecular Design

    DEFF Research Database (Denmark)

    . The problem formulation step incorporates a knowledge base for the identification and setup of the design criteria. Candidate compounds are identified using a multi-level generate and test CAMD solution algorithm capable of designing molecules having a high level of molecular detail. A post solution step...... using an Integrated Computer Aided System (ICAS) for result analysis and verification is included in the methodology. Keywords: CAMD, separation processes, knowledge base, molecular design, solvent selection, substitution, group contribution, property prediction, ICAS Introduction The use of Computer...... Aided Molecular Design (CAMD) for the identification of compounds having specific physic...

  3. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    CSIR Research Space (South Africa)

    Wilke, DN

    2012-07-01

    Full Text Available problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...

  4. Positivity-preserving dual time stepping schemes for gas dynamics

    Science.gov (United States)

    Parent, Bernard

    2018-05-01

    A new approach to discretizing the temporal derivative of the Euler equations is presented here which can be used with dual time stepping. The temporal discretization stencil is derived along the lines of the Cauchy-Kowalevski procedure, resulting in cross differences in spacetime, but with some novel modifications which ensure the positivity of the discretization coefficients. It is then shown that the so-obtained spacetime cross differences result in changes to the wave speeds and can thus be incorporated within Roe or Steger-Warming schemes (with and without reconstruction-evolution) simply by altering the eigenvalues. The proposed approach has the advantage over alternatives that it is positivity-preserving for the Euler equations. Further, it yields monotone solutions near discontinuities while exhibiting a truncation error in smooth regions smaller than that of the second- or third-order accurate backward-difference formula (BDF) for either small or large time steps. The high resolution and positivity preservation of the proposed discretization stencils are independent of the convergence acceleration technique, which can be set to multigrid, preconditioning, Jacobian-free Newton-Krylov, block-implicit, etc. Thus, the current paper also offers the first implicit integration of the time-accurate Euler equations that is positivity-preserving in the strict sense (that is, the density and temperature are guaranteed to remain positive). This is in contrast to all previous positivity-preserving implicit methods, which only guaranteed the positivity of the density, not of the temperature or pressure. Several stringent reacting and inert test cases confirm the positivity-preserving property of the proposed method as well as its higher resolution and higher computational efficiency over other second-order and third-order implicit temporal discretization strategies.
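
For readers unfamiliar with dual time stepping, the sketch below shows the basic idea on a scalar ODE du/dt = f(u) rather than the Euler equations of the paper: each physical BDF2 step is converged by marching an unsteady residual in pseudo-time. The decay rate, step sizes and tolerances are assumptions chosen purely for the demonstration.

    # Minimal dual-time-stepping sketch for du/dt = f(u): BDF2 in physical time,
    # explicit pseudo-time iteration of the unsteady residual at each step.
    def f(u):
        return -10.0 * u                   # assumed linear decay test problem

    def bdf2_residual(u, un, unm1, dt):
        return (3.0 * u - 4.0 * un + unm1) / (2.0 * dt) - f(u)

    def dual_time_step(un, unm1, dt, dtau=5e-3, tol=1e-10, max_iter=10000):
        u = un                             # initial guess for the new physical level
        for _ in range(max_iter):
            r = bdf2_residual(u, un, unm1, dt)
            if abs(r) < tol:
                break
            u -= dtau * r                  # pseudo-time march toward zero residual
        return u

    u_prev = u_curr = 1.0                  # cold start (first step is BDF1-like)
    dt = 0.1
    for _ in range(10):
        u_prev, u_curr = u_curr, dual_time_step(u_curr, u_prev, dt)
    print(u_curr)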

  5. Ovarian tissue cryopreservation by stepped vitrification and monitored by X-ray computed tomography.

    Science.gov (United States)

    Corral, Ariadna; Clavero, Macarena; Gallardo, Miguel; Balcerzyk, Marcin; Amorim, Christiani A; Parrado-Gallego, Ángel; Dolmans, Marie-Madeleine; Paulini, Fernanda; Morris, John; Risco, Ramón

    2018-04-01

    Ovarian tissue cryopreservation is, in most cases, the only fertility preservation option available for female patients soon to undergo gonadotoxic treatment. To date, cryopreservation of ovarian tissue has been carried out by both the traditional slow freezing method and vitrification, but even with the best techniques, there is still a considerable loss of follicle viability. In this report, we investigated a stepped cryopreservation procedure which combines features of slow cooling and vitrification (hereafter called stepped vitrification). Bovine ovarian tissue was used as a tissue model. Stepwise increments of the Me2SO concentration, coupled with stepwise drops in temperature in a device specifically designed for this purpose, were combined with X-ray computed tomography to investigate the loading times at each step, by monitoring the attenuation of the radiation, which is proportional to Me2SO permeation. Viability analysis was performed in warmed tissues by immunohistochemistry. Although further viability tests should be conducted after transplantation, preliminary results are very promising. Four protocols were explored. Two of them showed a poor permeation of the vitrification solution (P1 and P2). The other two (P3 and P4), with higher permeation, were studied in greater detail. Of these two protocols, P4, with a longer permeation time at -40 °C, showed the same histological integrity after warming as fresh controls. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Real-Time Accumulative Computation Motion Detectors

    Directory of Open Access Journals (Sweden)

    Saturnino Maldonado-Bascón

    2009-12-01

    Full Text Available The neurally inspired accumulative computation (AC) method and its application to motion detection have been introduced in the past years. This paper revisits the fact that many researchers have explored the relationship between neural networks and finite state machines. Indeed, finite state machines constitute the best characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The article shows how to reach real-time performance after using a model described as a finite state machine. This paper introduces two steps towards that direction: (a) a simplification of the general AC method is performed by formally transforming it into a finite state machine; (b) a hardware implementation in FPGA of such a designed AC module, as well as an 8-AC motion detector, is provided, showing promising performance results. We also offer two case studies of the use of AC motion detectors in surveillance applications, namely infrared-based people segmentation and color-based people tracking, respectively.

  7. Improving stability of stabilized and multiscale formulations in flow simulations at small time steps

    KAUST Repository

    Hsu, Ming-Chen

    2010-02-01

    The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395. © 2009 Elsevier B.V. All rights reserved.

  8. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Sarrut, David; Guigues, Laurent

    2008-01-01

    Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations. We describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down the simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transportation engine. The speedup is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps with removal of all unnecessary steps between adjacent voxels sharing a similar material. However, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to the large number of material boundaries, such a method is not considered suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory potentially increases computing efficiency. We used the GEANT4 toolkit, but we could potentially use other Monte Carlo simulation codes. The method introduces a tradeoff between speed and geometry accuracy, allowing computational time gain. However, simulations with GEANT4 remain slow and further work is needed to speed up the procedure while preserving the desired accuracy.

  9. Mesh and Time-Step Independent Computational Fluid Dynamics (CFD) Solutions

    Science.gov (United States)

    Nijdam, Justin J.

    2013-01-01

    A homework assignment is outlined in which students learn Computational Fluid Dynamics (CFD) concepts of discretization, numerical stability and accuracy, and verification in a hands-on manner by solving physically realistic problems of practical interest to engineers. The students solve a transient-diffusion problem numerically using the common…

  10. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented that is applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.
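
The report's ADI-specific selector is not reproduced here, but the sketch below captures the same spirit of aggressive step selection with rejection of oversized steps, using a step-doubling error estimate on an assumed scalar test problem.

    # Aggressive step-size controller (illustration only, not the ADI algorithm):
    # grow dt whenever the estimated error is small, discard and retry any step
    # whose error exceeds the tolerance.
    def advance(u, dt):
        return u + dt * (-u * u)           # one Euler step of du/dt = -u**2 (assumed)

    def adaptive_march(u, t_end, dt=1e-3, tol=1e-6):
        t = 0.0
        while t_end - t > 1e-12:
            dt = min(dt, t_end - t)
            full = advance(u, dt)
            half = advance(advance(u, 0.5 * dt), 0.5 * dt)
            err = abs(full - half)         # step-doubling error estimate
            if err > tol:
                dt *= 0.5                  # step rejected: discard and retry
                continue
            u, t = half, t + dt            # step accepted
            if err < 0.1 * tol:
                dt *= 2.0                  # error comfortably small: be aggressive
        return u

    print(adaptive_march(1.0, 5.0))        # exact solution 1/(1+t) gives ~0.1667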

  11. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Science.gov (United States)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
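
A minimal sketch of the hierarchical (block) time step idea follows, assuming independent test equations instead of gravitational forces and ignoring all of the GPU-specific optimizations described above: each "particle" is assigned a power-of-two fraction of the shared largest step and is advanced only on the ticks that its individual step divides.

    import math

    # Block/hierarchical time steps (illustration only): dy/dt = -k*y per particle.
    DT_MAX = 1.0
    particles = [{"y": 1.0, "k": k} for k in (1.0, 4.0, 16.0)]
    for p in particles:
        dt_req = 0.2 / p["k"]                                      # accuracy-based request
        p["lvl"] = max(0, math.ceil(math.log2(DT_MAX / dt_req)))   # power-of-two level
        p["dt"] = DT_MAX / 2 ** p["lvl"]

    max_lvl = max(p["lvl"] for p in particles)
    n_ticks = 2 ** max_lvl
    for block in range(5):                       # five shared large steps
        for tick in range(n_ticks):
            for p in particles:
                stride = 2 ** (max_lvl - p["lvl"])
                if tick % stride == 0:           # this particle's own step begins here
                    p["y"] += p["dt"] * (-p["k"] * p["y"])
    print([round(p["y"], 4) for p in particles])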

  12. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    Science.gov (United States)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important thing in the mitigation of accidents in nuclear power plant accidents is time management. The accidents should be resolved as soon as possible in order to prevent the core melting and the release of radioactive material to the environment. In this case, operators should follow the emergency operating procedure related with the accident, in step by step order and in allowable time. Nowadays, the advanced main control rooms are equipped with computer-based procedures (CBPs) which is make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most of the CBPs do not include the time remaining display feature which informs operators of time available for them to execute procedure steps and warns them if the they reach the time limit. Furthermore, the feature will increase the awareness of operators about their current situation in the procedure. This paper investigates this issue. The simplified of emergency operating procedure (EOP) of steam generator tube rupture (SGTR) accident of PWR plant is applied. In addition, the sequence of actions on each step of the procedure is modelled using multilevel flow modelling (MFM) and influenced propagation rule. The prediction of action time on each step is acquired based on similar case accidents and the Support Vector Regression. The derived time will be processed and then displayed on a CBP user interface.

  13. Smart Wireless Power Transfer Operated by Time-Modulated Arrays via a Two-Step Procedure

    Directory of Open Access Journals (Sweden)

    Diego Masotti

    2015-01-01

    Full Text Available The paper introduces a novel method for agile and precise wireless power transmission operated by a time-modulated array. The unique, almost real-time reconfiguration capability of these arrays is fully exploited by a two-step procedure: first, a two-element time-modulated subarray is used for localization of tagged sensors to be energized; the entire 16-element TMA then provides the power to the detected tags, by exploiting the fundamental and first-sideband harmonic radiation. An investigation on the best array architecture is carried out, showing the importance of the adopted nonlinear/full-wave computer-aided-design platform. Very promising simulated energy transfer performance of the entire nonlinear radiating system is demonstrated.

  14. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (Tp:I→I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
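
For reference, the single step (immediate consequence) operator itself is easy to state; the sketch below iterates T_P to its fixed point for a small, made-up propositional program. The radial basis function network training and particle swarm optimization described in the abstract are not reproduced.

    # T_P for a propositional normal logic program, iterated to a fixed point.
    # Each clause is (head, positive body atoms, negated body atoms); the program
    # below is a made-up example.
    program = [
        ("p", {"q"}, set()),        # p :- q.
        ("q", set(), {"r"}),        # q :- not r.
        ("s", {"p", "q"}, set()),   # s :- p, q.
    ]

    def tp(interpretation):
        """One application of T_P: heads of clauses whose bodies are true in I."""
        return {
            head
            for head, pos, neg in program
            if pos <= interpretation and not (neg & interpretation)
        }

    I = set()
    while True:                     # iterate from the empty interpretation
        nxt = tp(I)
        if nxt == I:
            break
        I = nxt
    print(sorted(I))                # fixed point: ['p', 'q', 's']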

  15. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (Tp:I→I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  16. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (Tp:I→I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  17. Intake flow and time step analysis in the modeling of a direct injection Diesel engine

    Energy Technology Data Exchange (ETDEWEB)

    Zancanaro Junior, Flavio V.; Vielmo, Horacio A. [Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Mechanical Engineering Dept.], E-mails: zancanaro@mecanica.ufrgs.br, vielmoh@mecanica.ufrgs.br

    2010-07-01

    This paper discusses the effects of the time step on the turbulent flow structure in the intake and in-cylinder systems of a Diesel engine during the intake process, under motored conditions. The three-dimensional model of a reciprocating engine geometry comprises a bowl-in-piston combustion chamber, an intake port of the shallow-ramp helical type, and an exhaust port of the conventional type. The equations are solved numerically, including a transient analysis with valve and piston movements, for an engine speed of 1500 rpm, using a commercial finite-volume CFD code. A parallel computation is employed. For the purpose of examining the in-cylinder turbulence characteristics, two parameters are observed: the discharge coefficient and the swirl ratio. These two parameters quantify the fluid flow characteristics inside the cylinder during the intake stroke; therefore, their study and understanding are very important. Additionally, the evolution of the discharge coefficient and swirl ratio with crank angle is correlated and compared, with the objective of clarifying the physical mechanisms. Regarding turbulence, computations are performed with the k-ω SST eddy viscosity model in its low-Reynolds approach, with standard near-wall treatment. The system of partial differential equations to be solved consists of the Reynolds-averaged compressible Navier-Stokes equations with the constitutive relations for an ideal gas, using a segregated solution algorithm. The enthalpy equation is also solved. A moving hexahedral trimmed-mesh independence study is presented. In the same way, many convergence tests are performed and a reliable criterion established. The results for the pressure fields are shown on a vertical plane that passes through the valves. Areas of low pressure can be seen in the valve curtain region, due to strong jet flows. It is also possible to note differences between the time steps, mainly for the smaller time step. (author)

  18. Four Steps to Fabulous Computer Furniture.

    Science.gov (United States)

    Sturgeon, Julie

    2001-01-01

    Explores how one university saved money and avoided wasted time when buying computer desks for dorm rooms. Buying considerations discussed include how the desks were to be used, the space required, desk durability, cable friendly features, and accessory necessity. (GR)

  19. Computationally determining the salience of decision points for real-time wayfinding support

    Directory of Open Access Journals (Sweden)

    Makoto Takemiya

    2012-06-01

    Full Text Available This study introduces the concept of computational salience to explain the discriminatory efficacy of decision points, which in turn may have applications to providing real-time assistance to users of navigational aids. This research compared algorithms for calculating the computational salience of decision points and validated the results via three methods: high-salience decision points were used to classify wayfinders; salience scores were used to weight a conditional probabilistic scoring function for real-time wayfinder performance classification; and salience scores were correlated with wayfinding-performance metrics. As an exploratory step to linking computational and cognitive salience, a photograph-recognition experiment was conducted. Results reveal a distinction between algorithms useful for determining computational and cognitive saliences. For computational salience, information about the structural integration of decision points is effective, while information about the probability of decision-point traversal shows promise for determining cognitive salience. Limitations from only using structural information and motivations for future work that include non-structural information are elicited.

  20. An adaptive time-stepping strategy for solving the phase field crystal model

    International Nuclear Information System (INIS)

    Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua

    2013-01-01

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long-time simulations.
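
An energy-based time step adaptivity of the general form described here can be sketched as follows; the constants and the stand-in energy history are assumptions for illustration only, and the unconditionally energy stable PFC solver itself is not reproduced.

    import math

    # Adaptive dt from the energy slope: large steps when the free energy is nearly
    # flat, small steps when it changes rapidly. All constants are assumed.
    DT_MIN, DT_MAX, ALPHA = 1e-3, 1.0, 1e5

    def adaptive_dt(dE_dt):
        return max(DT_MIN, DT_MAX / (1.0 + ALPHA * dE_dt ** 2) ** 0.5)

    # toy usage: pretend the free energy decays as E(t) = exp(-t), standing in for
    # the energy computed by the PFC scheme
    t, dt, E_prev = 0.0, DT_MIN, 1.0
    steps = 0
    while t < 10.0:
        t += dt
        E = math.exp(-t)
        dE_dt = (E - E_prev) / dt
        dt = adaptive_dt(dE_dt)          # next step chosen from the energy slope
        E_prev = E
        steps += 1
    print(steps, round(t, 3))            # far fewer steps than 10.0/DT_MIN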

  1. Computer aided planning of orthopaedic surgeries: the definition of generic planning steps for bone removal procedures.

    Science.gov (United States)

    Putzer, David; Moctezuma, Jose Luis; Nogler, Michael

    2017-11-01

    An increasing number of orthopaedic surgeons are using computer aided planning tools for bone removal applications. The aim of the study was to consolidate a set of generic functions to be used for a 3D computer assisted planning or simulation. A limited subset of 30 surgical procedures was analyzed and verified in 243 surgical procedures of a surgical atlas. Fourteen generic functions to be used in 3D computer assisted planning and simulations were extracted. Our results showed that the average procedure comprises 14 ± 10 (SD) steps with ten different generic planning steps and four generic bone removal steps. In conclusion, the study shows that with a limited number of 14 planning functions it is possible to perform 243 surgical procedures out of Campbell's Operative Orthopedics atlas. The results may be used as a basis for versatile generic intraoperative planning software.

  2. Coherent states for the time dependent harmonic oscillator: the step function

    International Nuclear Information System (INIS)

    Moya-Cessa, Hector; Fernandez Guasti, Manuel

    2003-01-01

    We study the time evolution for the quantum harmonic oscillator subjected to a sudden change of frequency. It is based on an approximate analytic solution to the time dependent Ermakov equation for a step function. This approach allows for a continuous treatment that differs from former studies that involve the matching of two time independent solutions at the time when the step occurs

  3. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    International Nuclear Information System (INIS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful.
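
A minimal sketch of an RKL1-style superstep for 1D diffusion is given below, assuming the commonly quoted RKL1 coefficients μ_j = (2j-1)/j, ν_j = (1-j)/j and μ̃_j = 2μ_j/(s²+s), so that one superstep covers (s²+s)/2 explicit diffusion steps; the grid, diffusivity and value of s are arbitrary demonstration choices.

    import numpy as np

    # RKL1-style super-time-step for u_t = D u_xx on a 1D grid (illustrative sketch).
    D, N, DX = 1.0, 101, 0.01
    DT_EXPL = 0.5 * DX * DX / D                  # explicit diffusive stability limit

    def L(u):
        """Discrete diffusion operator; boundary values held fixed."""
        out = np.zeros_like(u)
        out[1:-1] = D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / DX ** 2
        return out

    def rkl1_superstep(u, s):
        dt_super = DT_EXPL * (s * s + s) / 2.0   # one superstep = (s^2+s)/2 explicit steps
        w = 2.0 / (s * s + s)
        y_prev, y = u.copy(), u + w * dt_super * L(u)            # Y_0, Y_1
        for j in range(2, s + 1):
            mu, nu = (2.0 * j - 1.0) / j, (1.0 - j) / j
            y_prev, y = y, mu * y + nu * y_prev + mu * w * dt_super * L(y)
        return y, dt_super

    x = np.linspace(0.0, 1.0, N)
    u = np.exp(-((x - 0.5) ** 2) / 0.01)         # Gaussian initial data
    u, dt = rkl1_superstep(u, s=10)              # one superstep worth 55 explicit steps
    print(round(dt, 6), float(u.max()))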

  4. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    Science.gov (United States)

    Hoepfer, Matthias

    co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.

  5. Investigation of the slice sensitivity profile for step-and-shoot mode multi-slice computed tomography

    International Nuclear Information System (INIS)

    Hsieh Jiang

    2001-01-01

    Multislice computed tomography (MCT) is one of the recent technology advancements in CT. Compared to single slice CT, MCT significantly improves examination time, x-ray tube efficiency, and contrast material utilization. Although the scan mode of MCT is predominantly helical, step-and-shoot (axial) scans continue to be an important part of routine clinical protocols. In this paper, we present a detailed investigation of the slice sensitivity profile (SSP) of MCT in the step-and-shoot mode. Our investigation shows that, unlike single slice CT, the SSP for MCT exhibits multiple peaks and valleys resulting from intercell gaps between detector rows. To fully understand the characteristics of the SSP, we developed an analytical model to predict the behavior of MCT. We propose a simple experimental technique that can quickly and accurately measure the SSP. The impact of the SSP on image artifacts and low contrast detectability is also investigated.

  6. The symmetric MSD encoder for one-step adder of ternary optical computer

    Science.gov (United States)

    Kai, Song; LiPing, Yan

    2016-08-01

    The symmetric Modified Signed-Digit (MSD) encoding is important for achieving the one-step MSD adder of Ternary Optical Computer (TOC). The paper described the symmetric MSD encoding algorithm in detail, and developed its truth table which has nine rows and nine columns. According to the truth table, the state table was developed, and the optical-path structure and circuit-implementation scheme of the symmetric MSD encoder (SME) for one-step adder of TOC were proposed. Finally, a series of experiments were designed and performed. The observed results of the experiments showed that the scheme to implement SME was correct, feasible and efficient.

  7. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improves the stability of MTS and allows larger step sizes to be achieved in the simulation of complex systems.
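
The force-splitting idea that MTS builds on can be illustrated on a single particle with a stiff "fast" spring and a weak "slow" spring (impulse/r-RESPA style); the GSHMC momentum updates and shadow Hamiltonians of the paper are not reproduced, and all constants below are assumptions.

    # Impulse-style multiple time stepping: slow force kicked on the outer step,
    # fast force integrated with velocity Verlet on an inner sub-step.
    K_FAST, K_SLOW, MASS = 100.0, 1.0, 1.0

    def f_fast(x): return -K_FAST * x
    def f_slow(x): return -K_SLOW * x

    def mts_step(x, v, dt_outer, n_inner):
        dt_inner = dt_outer / n_inner
        v += 0.5 * dt_outer * f_slow(x) / MASS          # slow half-kick
        for _ in range(n_inner):                        # fast velocity Verlet sub-steps
            v += 0.5 * dt_inner * f_fast(x) / MASS
            x += dt_inner * v
            v += 0.5 * dt_inner * f_fast(x) / MASS
        v += 0.5 * dt_outer * f_slow(x) / MASS          # slow half-kick
        return x, v

    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = mts_step(x, v, dt_outer=0.05, n_inner=10)
    energy = 0.5 * MASS * v * v + 0.5 * (K_FAST + K_SLOW) * x * x
    print(round(energy, 3))                             # should stay near 50.5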

  8. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T& D, St. Bruno, QC (Canada)

    2017-06-01

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.

  9. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step

  10. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines, for disaster mitigation purposes. To obtain real benefits from these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of the linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation, and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational cost of solving the non-linear shallow water equations for inundation predictions is large, this has become feasible through the recent developments of high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolution ranges from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
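
A 1D sketch of the staggered-grid leap-frog (forward-backward) update used by such codes is shown below for the linear long-wave equations only; grid nesting, bottom friction and wetting/drying, which a real inundation model needs, are omitted, and the depth, grid spacing and time step are assumed values.

    import numpy as np

    # Linear long-wave equations on a 1D staggered grid: update flux from the
    # surface gradient, then the surface from the flux divergence.
    G, H, DX, DT, N = 9.81, 4000.0, 5000.0, 5.0, 400     # assumed deep-ocean values
    assert DT * np.sqrt(G * H) / DX < 1.0                # CFL check

    eta = np.exp(-((np.arange(N) - N / 2) ** 2) / 50.0)  # initial sea-surface hump
    q = np.zeros(N + 1)                                  # volume flux, staggered points

    for step in range(500):
        q[1:-1] -= DT * G * H * (eta[1:] - eta[:-1]) / DX   # momentum equation
        eta -= DT * (q[1:] - q[:-1]) / DX                   # continuity equation

    print(float(eta.max()))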

  11. 12 CFR 1102.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1102.27 Section 1102.27 Banks... for Proceedings § 1102.27 Computing time. (a) General rule. In computing any period of time prescribed... time begins to run is not included. The last day so computed is included, unless it is a Saturday...

  12. [Collaborative application of BEPS at different time steps.

    Science.gov (United States)

    Lu, Wei; Fan, Wen Yi; Tian, Tian

    2016-09-01

    BEPSHourly is designed to simulate the ecological and physiological processes of vegetation at hourly time steps; because of its more complex model structure and time-consuming solution process, it is often applied at the site scale to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP). However, the daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, not involving many iterative processes. It is suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposed a method for the collaborative application of BEPS at daily and hourly time steps. Firstly, BEPSHourly was used to optimize the main photosynthetic parameters, the maximum carboxylation rate (Vcmax) and the maximum rate of photosynthetic electron transport (Jmax), at the site scale, and then the two optimized parameters were introduced into the BEPSDaily model to estimate NPP at the regional scale. The results showed that optimizing the main photosynthesis parameters based on the flux data could improve the simulation ability of the model. In 2011, the primary productivity of the different forest types, in descending order, was: deciduous broad-leaved forest, mixed forest, coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study could effectively optimize the main photosynthesis parameters Vcmax and Jmax, simulate the monthly averaged diurnal GPP and NPP, calculate the regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.

  13. Micro-computed tomography characterization of tissue engineering scaffolds: effects of pixel size and rotation step.

    Science.gov (United States)

    Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L

    2017-08-01

    Quantitative assessment of the micro-structure of materials is of key importance in many fields, including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, acquisition parameters such as the pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds, including examples of natural and synthetic polymers and ceramics, was analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of pixel size and rotation step. The results showed that the acquisition parameters can affect the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds in a statistically significant way. The effects are also practically important, since the differences can be as high as 24% in mean porosity on average, and up to 19.5 h of characterization time and 166 GB of data storage per sample of relatively small volume. This study showed in a quantitative manner the effects of such a wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. Herein, a clear picture of the effects of the pixel size and rotation step on the results is provided, which can be notably useful for refining the practice of µ-CT characterization of scaffolds and economizing the related resources.

  14. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Science.gov (United States)

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  15. 12 CFR 622.21 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Computing time. 622.21 Section 622.21 Banks and... Formal Hearings § 622.21 Computing time. (a) General rule. In computing any period of time prescribed or... run is not to be included. The last day so computed shall be included, unless it is a Saturday, Sunday...

  16. Studies on steps affecting tritium residence time in solid blanket

    International Nuclear Information System (INIS)

    Tanaka, Satoru

    1987-01-01

    For a self-sustaining CTR fuel cycle, effective tritium recovery from blankets is essential. This means not only that the tritium breeding ratio must be larger than 1.0, but also that a high recovery speed is required so that the residence time of tritium in the blanket is short. A short residence time means that the tritium inventory in the blanket is small. In this paper, the tritium residence time and tritium inventory in a solid blanket are modeled by considering the steps constituting tritium release. Some of these tritium migration processes were evaluated experimentally. The tritium migration steps in a solid blanket using sintered breeding materials consist of diffusion in grains, desorption at grain edges, diffusion and permeation through grain boundaries, desorption at particle edges, diffusion and percolation through interconnected pores to the purge stream, and convective mass transfer into the stream. Corresponding to these steps, diffusive, soluble, adsorbed, and trapped tritium inventories, as well as the tritium in the gas phase, are conceivable. The code named TTT was written to calculate these tritium inventories and the residence time of tritium. An example of the results of the calculation is shown. The blanket is that of REPUTER-1, which is the conceptual design of a commercial reversed field pinch fusion reactor studied at the University of Tokyo. The experimental studies on the migration steps of tritium are reported. (Kako, I.)

  17. COMPUTATIONAL ANALYSIS OF BACKWARD-FACING STEP FLOW

    Directory of Open Access Journals (Sweden)

    Erhan PULAT

    2001-01-01

    Full Text Available In this study, the backward-facing step flows that are encountered in electronic systems cooling, heat exchanger design, and gas turbine cooling are investigated computationally. Steady, incompressible, two-dimensional air flow is analyzed. The inlet velocity is assumed uniform and is obtained from a parabolic profile by using the maximum velocity. In the analysis, the effects of the channel expansion ratio and the Reynolds number on the reattachment length are investigated. In addition, the pressure distribution along the channel length is also obtained, and the flow is analyzed for Reynolds number values of 50 and 150 and channel expansion ratios of 1.5 and 2. The governing equations are solved by using the Galerkin finite element method of the ANSYS-FLOTRAN code. The obtained results are compared with the solutions of the lattice BGK method, which is a relatively new method in fluid dynamics, and with other numerical and experimental results. It is concluded that the reattachment length increases with increasing Reynolds number and that, at the same Reynolds number, it decreases with increasing channel expansion ratio.

  18. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
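
For context, the standard Aarseth criterion referred to here combines the acceleration and its first three time derivatives; a direct transcription (with an assumed accuracy parameter η and made-up derivative values) looks like this.

    import numpy as np

    # Standard Aarseth time-step criterion for one particle; a, a1, a2, a3 are the
    # acceleration and its first three time derivatives (3-vectors), eta ~ 0.01-0.02.
    def aarseth_dt(a, a1, a2, a3, eta=0.02):
        norm = np.linalg.norm
        num = norm(a) * norm(a2) + norm(a1) ** 2
        den = norm(a1) * norm(a3) + norm(a2) ** 2
        return np.sqrt(eta * num / den)

    # toy usage with made-up magnitudes
    a  = np.array([1.0, 0.0, 0.0])
    a1 = np.array([0.1, 0.0, 0.0])
    a2 = np.array([0.01, 0.0, 0.0])
    a3 = np.array([0.001, 0.0, 0.0])
    print(aarseth_dt(a, a1, a2, a3))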

  19. Finding Multi-step Attacks in Computer Networks using Heuristic Search and Mobile Ambients

    NARCIS (Netherlands)

    Nunes Leal Franqueira, V.

    2009-01-01

    An important aspect of IT security governance is the proactive and continuous identification of possible attacks in computer networks. This is complicated due to the complexity and size of networks, and due to the fact that usually network attacks are performed in several steps. This thesis proposes

  20. Astronomical sketching a step-by-step introduction

    CERN Document Server

    Handy, Richard; Perez, Jeremy; Rix, Erika; Robbins, Sol

    2007-01-01

    This book presents the amateur with fine examples of astronomical sketches and step-by-step tutorials in each medium, from pencil to computer graphics programs. This unique book can teach almost anyone to create beautiful sketches of celestial objects.

  1. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  2. Time dependent theory of two-step absorption of two pulses

    Energy Technology Data Exchange (ETDEWEB)

    Rebane, Inna, E-mail: inna.rebane@ut.ee

    2015-09-25

    The time dependent theory of two-step absorption of two different light pulses with arbitrary duration in an electronic three-level model is proposed. The probability that the third level is excited at the moment t is found as a function of the time delay between the pulses, the spectral widths of the pulses, and the energy relaxation constants of the excited electronic levels. Time dependent perturbation theory is applied without using the “doorway–window” approach. The time and spectral behavior of the spectrum is analyzed using a model that is as simple as possible in the calculations. - Highlights: • A time dependent theory of two-step absorption in the three-level model is proposed. • Two different light pulses with arbitrary duration are considered. • Time dependent perturbation theory is applied without the “doorway–window” approach. • The time and spectral behavior of the spectra is analyzed for several cases.

  3. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    Science.gov (United States)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feedback to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.

  4. FRANTIC: a computer code for time dependent unavailability analysis

    International Nuclear Information System (INIS)

    Vesely, W.E.; Goldberg, F.F.

    1977-03-01

    The FRANTIC computer code evaluates the time dependent and average unavailability for any general system model. The code is written in FORTRAN IV for the IBM 370 computer. Non-repairable components, monitored components, and periodically tested components are handled. One unique feature of FRANTIC is the detailed, time dependent modeling of periodic testing which includes the effects of test downtimes, test overrides, detection inefficiencies, and test-caused failures. The exponential distribution is used for the component failure times and periodic equations are developed for the testing and repair contributions. Human errors and common mode failures can be included by assigning an appropriate constant probability for the contributors. The output from FRANTIC consists of tables and plots of the system unavailability along with a breakdown of the unavailability contributions. Sensitivity studies can be simply performed and a wide range of tables and plots can be obtained for reporting purposes. The FRANTIC code represents a first step in the development of an approach that can be of direct value in future system evaluations. Modifications resulting from use of the code, along with the development of reliability data based on operating reactor experience, can be expected to provide increased confidence in its use and potential application to the licensing process
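
A much-simplified illustration of the kind of quantity FRANTIC evaluates is the time-dependent unavailability of a single periodically tested component with exponential failure times; the sketch below ignores test downtime, overrides, detection inefficiencies and test-caused failures, all of which the code itself models, and the failure rate and test interval are assumed values.

    import math

    # Unavailability of a periodically and perfectly tested component.
    LAMBDA = 1.0e-4          # failure rate per hour (assumed)
    TEST_INTERVAL = 720.0    # hours between tests (assumed)

    def unavailability(t):
        """q(t) rises between tests and is reset to zero by each test."""
        time_since_test = t % TEST_INTERVAL
        return 1.0 - math.exp(-LAMBDA * time_since_test)

    # average unavailability over one test interval by simple quadrature;
    # for small lambda*T this approaches the familiar lambda*T/2 estimate
    samples = [unavailability(t) for t in range(int(TEST_INTERVAL))]
    print(round(sum(samples) / len(samples), 5), LAMBDA * TEST_INTERVAL / 2.0)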

  5. Implementation of a variable-step integration technique for nonlinear structural dynamic analysis

    International Nuclear Information System (INIS)

    Underwood, P.; Park, K.C.

    1977-01-01

    The paper presents the implementation of a recently developed unconditionally stable implicit time integration method into a production computer code for the transient response analysis of nonlinear structural dynamic systems. The time integrator is packaged with two significant features: a variable step size that is automatically determined, and this is accomplished without additional matrix refactorizations. The equations of motion solved by the time integrator must be cast in the pseudo-force form, and this provides the mechanism for controlling the step size. Step size control is accomplished by extrapolating the pseudo-force to the next time (the predicted pseudo-force), then performing the integration step, and then recomputing the pseudo-force based on the current solution (the correct pseudo-force); from these data an error norm is constructed, the value of which determines the step size for the next step. To avoid refactoring the required matrix with each step size change, a matrix scaling technique is employed, which allows step sizes to change by a factor of 100 without refactoring. If during a computer run the integrator determines it can run with a step size larger than 100 times the original minimum step size, the matrix is refactored to take advantage of the larger step size. The strategy for effecting these features is discussed in detail. (Auth.)

  6. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    Science.gov (United States)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise, and low hull vibration. Compared with the single-screw system, open water performance prediction is more difficult for CRPs because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and the sliding mesh method, considering the effects of the computational time step size and the turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R & D Center. Compared with the experimental data, it is shown that RANS with the sliding mesh method and the SST k-ω turbulence model gives good precision in the open water performance prediction of contra-rotating propellers, and that a small time step size can improve the level of accuracy for CRPs with the same number of blades on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  7. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-01-01

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  8. 12 CFR 908.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 908.27 Section 908.27 Banks and... PRACTICE AND PROCEDURE IN HEARINGS ON THE RECORD General Rules § 908.27 Computing time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event...

  9. Design and implementation of the one-step MSD adder of optical computer.

    Science.gov (United States)

    Song, Kai; Yan, Liping

    2012-03-01

    On the basis of the symmetric encoding algorithm for the modified signed-digit (MSD) representation, a 7×7 truth table that can be realized with optical methods was developed. Based on this truth table, the optical path structures and circuit implementations of the one-step MSD adder of the ternary optical computer (TOC) were designed. Experiments show that the scheme is correct, feasible, and efficient. © 2012 Optical Society of America

  10. Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2016-01-01

    Highlights: • Discretization of time in coupled MC codes determines the results’ accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time step size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence with no major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in the BGCore code and subsequently used to analyze a series of test cases. The results indicate that a computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
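
    A minimal sketch of the sub-step idea for a single nuclide: instead of freezing the one-group reaction rate over the whole coupled time step, the rate is re-evaluated at every sub-step as a function of the current density (standing in for the density/TH dependence described above), with no extra transport solutions. The rate magnitude and its linear dependence on density are purely hypothetical.

```python
import numpy as np

def deplete_constant_rate(n0, rate, dt):
    """Conventional scheme: reaction rate frozen over the whole time step."""
    return n0 * np.exp(-rate * dt)

def deplete_substeps(n0, rate_of, dt, n_sub=10):
    """Sub-step scheme: the rate is re-evaluated from the current density at
    every sub-step, without any additional transport solution."""
    n = n0
    h = dt / n_sub
    for _ in range(n_sub):
        n *= np.exp(-rate_of(n) * h)
    return n

if __name__ == "__main__":
    n0, dt = 1.0e21, 30 * 24 * 3600.0          # atoms/cm3, one 30-day step
    # hypothetical feedback: the effective rate rises as the nuclide depletes
    rate_of = lambda n: 1.0e-6 * (2.0 - n / n0)
    print("frozen rate :", deplete_constant_rate(n0, rate_of(n0), dt))
    print("sub-stepped :", deplete_substeps(n0, rate_of, dt, n_sub=20))
```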

  11. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process. In the first step, the vorticity at the new time level is computed using the velocity at the previous time level. In the second step, the velocity at the new time level is computed using the new vorticity. We discuss here the second part which is by far the most time‐consuming. The numerical problem...

  12. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    International Nuclear Information System (INIS)

    Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.

    2017-01-01

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.

  13. Grief: Difficult Times, Simple Steps.

    Science.gov (United States)

    Waszak, Emily Lane

    This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…

  14. Numerical characterisation of one-step and three-step solar air heating collectors used for cocoa bean solar drying.

    Science.gov (United States)

    Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel; La Madrid, Raúl

    2017-12-01

    In the northern coastal and jungle areas of Peru, cocoa beans are dried using artisan methods, such as direct exposure to sunlight. This traditional process is time intensive, leading to a reduction in productivity and, therefore, delays in delivery times. The present study was intended to numerically characterise the thermal behaviour of three configurations of solar air heating collectors in order to determine which demonstrated the best thermal performance under several controlled operating conditions. For this purpose, a computational fluid dynamics model was developed to describe the simultaneous convective and radiative heat transfer phenomena under several operation conditions. The constructed computational fluid dynamics model was firstly validated through comparison with the data measurements of a one-step solar air heating collector. We then simulated two further three-step solar air heating collectors in order to identify which demonstrated the best thermal performance in terms of outlet air temperature and thermal efficiency. The numerical results show that under the same solar irradiation area of exposition and operating conditions, the three-step solar air heating collector with the collector plate mounted between the second and third channels was 67% more thermally efficient compared to the one-step solar air heating collector. This is because the air's exposure to the collector plate surface in this three-step device was twice that in the one-step solar air heating collector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Microprocessor controller for stepping motors

    International Nuclear Information System (INIS)

    Strait, B.G.; Thuot, M.E.

    1977-01-01

    A new concept for digital computer control of multiple stepping motors which operate in a severe electromagnetic pulse environment is presented. The motors position mirrors in the beam-alignment system of a 100-kJ CO2 laser. An asynchronous communications channel of a computer is used to send coded messages, containing the motor address and stepping-command information, to the stepping-motor controller in a bit serial format over a fiber-optics communications link. The addressed controller responds by transmitting to the computer its address and other motor information, thus confirming the received message. Each controller is capable of controlling three stepping motors. The controller contains the fiber-optics interface, a microprocessor, and the stepping-motor drive circuits. The microprocessor program, which resides in an EPROM, decodes the received messages, transmits responses, performs the stepping-motor sequence logic, maintains motor-position information, and monitors the motor's reference switch. For multiple stepping-motor applications, the controllers are connected in a daisy chain providing control of many motors from one asynchronous communications channel of the computer
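
    The record only states that coded messages carrying a motor address and stepping-command information are sent serially and echoed back by the addressed controller; the concrete frame layout below (start byte, address, signed step count, checksum) is entirely hypothetical and is meant only to show what such an encode/verify round trip could look like.

```python
import struct

START = 0x7E  # hypothetical frame delimiter

def encode_step_command(motor_addr: int, steps: int) -> bytes:
    """Pack a hypothetical frame: start byte, address, signed 16-bit step count, checksum."""
    body = struct.pack(">Bh", motor_addr, steps)
    checksum = (START + sum(body)) & 0xFF
    return bytes([START]) + body + bytes([checksum])

def decode_and_ack(frame: bytes) -> bytes:
    """Controller side: verify the checksum and echo the command back as confirmation."""
    if frame[0] != START or (sum(frame[:-1]) & 0xFF) != frame[-1]:
        raise ValueError("corrupted frame")
    addr, steps = struct.unpack(">Bh", frame[1:4])
    # ... a real controller would now run its stepping-sequence logic ...
    return encode_step_command(addr, steps)   # echo as acknowledgement

if __name__ == "__main__":
    frame = encode_step_command(motor_addr=2, steps=-150)
    assert decode_and_ack(frame) == frame
    print(frame.hex())
```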

  16. A parallel nearly implicit time-stepping scheme

    OpenAIRE

    Botchev, Mike A.; van der Vorst, Henk A.

    2001-01-01

    Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large scale problems. One of the major problems here is that implicit time stepping is often difficult to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they are also very well parallelizable. The purpose of this article is to give an overall assessment of the pa...

  17. 12 CFR 1780.11 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1780.11 Section 1780.11 Banks... time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event that commences the designated period of time is not included. The last day so...

  18. Towards a real time computation of the dose in a phantom segmented into homogeneous meshes

    International Nuclear Information System (INIS)

    Blanpain, B.

    2009-10-01

    Automatic radiation therapy treatment planning necessitates a very fast computation of the dose delivered to the patient. We propose to compute the dose by segmenting the patient's phantom into homogeneous meshes and by associating, to the meshes, projections to dose distributions pre-computed in homogeneous phantoms, along with weights managing heterogeneities. The dose computation is divided into two steps. The first step operates on the meshes: projections and weights are set according to physical and geometrical criteria. The second step operates on the voxels: the dose is computed by evaluating the functions previously associated with their mesh. This method is very fast, in particular when there are few points of interest (several hundred). In this case, results are obtained in less than one second. With such performance, automatic treatment planning becomes practically feasible. (author)

  19. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.

  20. 6 CFR 13.27 - Computation of time.

    Science.gov (United States)

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Computation of time. 13.27 Section 13.27 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY PROGRAM FRAUD CIVIL REMEDIES § 13.27 Computation of time. (a) In computing any period of time under this part or in an order issued...

  1. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    Science.gov (United States)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
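
    A hedged sketch of the kind of neighbour constraint described above: each patch first receives its locally constrained CFL step, then an iterative sweep caps every patch's step at (at most) twice the step of any neighbour, so no patch can outrun the information arriving from a more restricted neighbour. The factor-of-two cap and the patch graph are illustrative assumptions, not the authors' exact rule.

```python
def local_cfl_steps(dx, umax, cfl=0.5):
    """Locally constrained time steps per patch (1D convention dt = cfl*dx/|u|)."""
    return [cfl * d / abs(u) for d, u in zip(dx, umax)]

def enforce_neighbour_constraint(dt, neighbours, ratio=2.0):
    """Iteratively cap each patch's step at `ratio` times the smallest neighbouring step."""
    dt = list(dt)
    changed = True
    while changed:
        changed = False
        for i, nbrs in neighbours.items():
            cap = ratio * min(dt[j] for j in nbrs)
            if dt[i] > cap:
                dt[i] = cap
                changed = True
    return dt

if __name__ == "__main__":
    dx   = [1.0, 1.0, 0.01, 1.0]        # one heavily refined patch
    umax = [1.0, 1.0, 1.0, 1.0]
    dt0  = local_cfl_steps(dx, umax)
    nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print("local CFL steps   :", dt0)
    print("after enforcement :", enforce_neighbour_constraint(dt0, nbrs))
```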

  2. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    Science.gov (United States)

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  3. Use of time space Green's functions in the computation of transient eddy current fields

    International Nuclear Information System (INIS)

    Davey, K.; Turner, L.

    1988-01-01

    The utility of integral equations to solve eddy current problems has been borne out by numerous computations in the past few years, principally in sinusoidal steady-state problems. This paper attempts to examine the applicability of the integral approaches in both time and space for the more generic transient problem. The basic formulation for the time space Green's function approach is laid out. A technique employing Gauss-Laguerre integration is employed to realize the temporal solution, while Gauss-Legendre integration is used to resolve the spatial field character. The technique is then applied to the Fusion Electromagnetic Induction Experiment (FELIX) cylinder experiments in both two and three dimensions. It is found that quite accurate solutions can be obtained using rather coarse time steps and very few unknowns; the three-dimensional field solution worked out in this context used basically only four unknowns. The solution appears to be somewhat sensitive to the choice of time step, a consequence of a numerical instability imbedded in the Green's function near the origin
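
    The quadrature machinery named above is standard; as a quick orientation, the snippet below checks numpy's Gauss-Laguerre and Gauss-Legendre rules on simple integrands. This is only the quadrature step, not the eddy-current Green's function formulation itself.

```python
import numpy as np

# Gauss-Laguerre: integral over [0, inf) of exp(-t) * f(t) dt
t, wt = np.polynomial.laguerre.laggauss(20)
laguerre_result = np.sum(wt * np.cos(t))          # exact value is 1/2

# Gauss-Legendre: integral over [-1, 1] of g(x) dx (rescaled here to [0, pi])
x, wx = np.polynomial.legendre.leggauss(10)
a, b = 0.0, np.pi
legendre_result = 0.5 * (b - a) * np.sum(wx * np.sin(0.5 * (b - a) * x + 0.5 * (b + a)))

print(laguerre_result)   # ~0.5
print(legendre_result)   # ~2.0 (integral of sin over [0, pi])
```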

  4. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    Science.gov (United States)

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  5. Development and validation of a local time stepping-based PaSR solver for combustion and radiation modeling

    DEFF Research Database (Denmark)

    Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad

    2013-01-01

    In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in default assembly i...... library in the edcSimpleFoam solver which was introduced during the 6th OpenFOAM workshop is modified and coupled with the current solver. One of the main amendments made is the integration of soot radiation submodel since this is significant in rich flames where soot particles are formed. The new solver...

  6. Generalized Runge-Kutta method for two- and three-dimensional space-time diffusion equations with a variable time step

    International Nuclear Information System (INIS)

    Aboanber, A.E.; Hamada, Y.M.

    2008-01-01

    An extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, and that requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems as well as static and transient problems, demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods
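
    The integrator described above is fourth order with an embedded third-order solution for automatic step control; as a generic illustration of that error-estimation pattern (not the authors' scheme), the sketch below uses the well-known Bogacki-Shampine 3(2) embedded pair with a simple step-size controller.

```python
import numpy as np

def bs23_step(f, t, y, h):
    """One Bogacki-Shampine 3(2) step: third-order solution plus an error estimate."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
    y3 = y + h * (2.0 * k1 + 3.0 * k2 + 4.0 * k3) / 9.0              # 3rd-order solution
    k4 = f(t + h, y3)
    y2 = y + h * (7.0 * k1 / 24.0 + k2 / 4.0 + k3 / 3.0 + k4 / 8.0)  # embedded 2nd-order
    return y3, np.linalg.norm(y3 - y2)

def solve_adaptive(f, y0, t0, t_end, h=1e-2, tol=1e-6):
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = bs23_step(f, t, y, h)
        if err <= tol:                        # accept the step
            t, y = t + h, y_new
        # simple controller: third-order method -> exponent 1/3
        h *= min(5.0, max(0.2, 0.9 * (tol / (err + 1e-300)) ** (1.0 / 3.0)))
    return t, y

if __name__ == "__main__":
    # moderately stiff linear test problem y' = -15 y, exact solution exp(-15 t)
    f = lambda t, y: -15.0 * y
    print(solve_adaptive(f, [1.0], 0.0, 1.0))   # ~ exp(-15) ≈ 3.06e-7
```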

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  8. The large discretization step method for time-dependent partial differential equations

    Science.gov (United States)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  9. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ. di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. di Napoli Federico II, Napoli (Italy); Crisanti, F. [Associazione EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Asociacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  10. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  11. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  12. Binary Factorization in Hopfield-Like Neural Networks: Single-Step Approximation and Computer Simulations

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Sirota, A.M.; Húsek, Dušan; Muraviev, I. P.

    2004-01-01

    Roč. 14, č. 2 (2004), s. 139-152 ISSN 1210-0552 R&D Projects: GA ČR GA201/01/1192 Grant - others:BARRANDE(EU) 99010-2/99053; Intellectual computer Systems(EU) Grant 2.45 Institutional research plan: CEZ:AV0Z1030915 Keywords : nonlinear binary factor analysis * feature extraction * recurrent neural network * Single-Step approximation * neurodynamics simulation * attraction basins * Hebbian learning * unsupervised learning * neuroscience * brain function modeling Subject RIV: BA - General Mathematics

  13. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently for one-step prediction, that is, predicting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and uncertainty or error accumulation. The main existing approaches, iterative and independent, either use a one-step model recursively or treat the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multi-steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in its component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
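
    The "iterative" and "independent" baselines mentioned above can be made concrete with a plain least-squares AR model. This sketch is not the VLM mixture model itself; the AR order, horizon and synthetic series are arbitrary choices used only to contrast the two strategies.

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares AR(order) fit: coefficients for one-step-ahead prediction."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_iterative(series, order, horizon):
    """Iterative strategy: reuse the one-step model, feeding predictions back in."""
    coef = fit_ar(series, order)
    hist = list(series[-order:])
    out = []
    for _ in range(horizon):
        nxt = float(np.dot(coef, hist[-order:]))
        out.append(nxt)
        hist.append(nxt)
    return out

def forecast_independent(series, order, horizon):
    """Independent strategy: fit a separate direct model for each horizon h."""
    out = []
    for h in range(1, horizon + 1):
        X = np.column_stack([series[i:len(series) - order - h + 1 + i] for i in range(order)])
        y = series[order + h - 1:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        out.append(float(np.dot(coef, series[-order:])))
    return out

if __name__ == "__main__":
    t = np.arange(300)
    series = np.sin(0.2 * t) + 0.05 * np.random.default_rng(0).standard_normal(300)
    print("iterative  :", np.round(forecast_iterative(series, order=5, horizon=4), 3))
    print("independent:", np.round(forecast_independent(series, order=5, horizon=4), 3))
```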

  14. Effects of computing time delay on real-time control systems

    Science.gov (United States)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.

  15. Constructing an exposure chart: step by step (based on standard procedures)

    International Nuclear Information System (INIS)

    David, Jocelyn L; Cansino, Percedita T.; Taguibao, Angileo P.

    2000-01-01

    An exposure chart is very important in conducting radiographic inspection of materials. By using an accurate exposure chart, an inspector is able to avoid a trial and error way of determining the correct time to expose a specimen, thereby producing a radiograph that has an acceptable density based on a standard. The chart gives the following information: x-ray machine model and brand, distance of the x-ray tube from the film, type and thickness of intensifying screens, film type, radiograph density, and film processing conditions. The methods of preparing an exposure chart are available in existing radiographic testing manuals. These methods are presented as step-by-step procedures, covering the actual laboratory set-up, data gathering, computations, and transformation of derived data into a Characteristic Curve and Exposure Chart

  16. Comparison between time-step-integration and probabilistic methods in seismic analysis of a linear structure

    International Nuclear Information System (INIS)

    Schneeberger, B.; Breuleux, R.

    1977-01-01

    Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step-integration using earthquake time histories (TH). A given structure was analysed both by PSD and TH methods computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors and these were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)

  17. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    International Nuclear Information System (INIS)

    Reichelt, Stephan; Leister, Norbert

    2013-01-01

    In dynamic computer-generated holography that utilizes spatial light modulators, both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel transform based or point-source based raytracing methods can be applied. In the encoding step, the complex wave-field has to be optimally represented by the SLM with its given modulation capability. Proper hologram reconstruction implies a simultaneous and independent amplitude and phase modulation of the input wave-field by the SLM. In this paper, we discuss full complex hologram representation methods on SLMs by considering inherent SLM parameters such as modulation type and bit depth and their effect on reconstruction performance measures such as diffraction efficiency and SNR. We review the three implementation schemes of Burckhardt amplitude-only representation, phase-only macro-pixel representation, and two-phase interference representation. Besides the optical performance we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable a fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.

  18. Effect of exposure time reduction towards sensitivity and SNR for computed radiography (CR) application in NDT

    International Nuclear Information System (INIS)

    Sapizah Rahim; Khairul Anuar Mohd Salleh; Noorhazleena Azaman; Shaharudin Sayuti; Siti Madiha Muhammad Amir; Arshad Yassin; Abdul Razak Hamzah

    2010-01-01

    A signal-to-noise ratio (SNR) and sensitivity study of a Computed Radiography (CR) system with reduced exposure time is presented. The purposes of this research are to determine the behavior of the SNR for three different thicknesses (step wedge; 5, 10 and 15 mm) and the ability of the CR system to recognize a hole-type penetrameter when the exposure time is decreased by up to 80% from the value given by the exposure chart (D7; ISOVOLT Titan E). It is shown that the SNR decreases with decreasing exposure time, but a high quality image is still achieved at up to an 80% reduction of the exposure time. (author)

  19. Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    H. Vincent Poor

    2008-05-01

    Full Text Available In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA and angle-of-arrival (AOA, in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by considering a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. The simulation results are presented to analyze the performance of the estimator.
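
    A toy version of the two-step idea on synthetic samples: a coarse stage picks the highest-energy block from low-rate energy measurements, and a fine stage searches that block for the first sample whose energy exceeds a noise-calibrated threshold (a plain threshold test standing in for the paper's hypothesis-testing stage). The waveform, block size, sampling rate and threshold factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 2e9                                   # 2 GS/s sampling (illustrative)
n = 4000
signal = 0.05 * rng.standard_normal(n)     # background noise
true_toa = 2317                            # first-path arrival (sample index)
signal[true_toa:true_toa + 8] += np.array([0.3, 0.9, 0.6, -0.4, 0.25, -0.15, 0.1, -0.05])

# Step 1: coarse TOA from block energies (low-rate energy outputs)
block = 200
sq_blocks = signal[: n // block * block].reshape(-1, block) ** 2
block_energy = sq_blocks.sum(axis=1)
coarse_block = int(np.argmax(block_energy))

# Step 2: first sample in the chosen block exceeding a noise-calibrated threshold
noise_power = np.median(block_energy) / block
threshold = 20.0 * noise_power
start = coarse_block * block
fine_offset = int(np.argmax(signal[start:start + block] ** 2 > threshold))
toa_estimate = start + fine_offset

print(f"coarse block {coarse_block}, TOA estimate {toa_estimate} (true {true_toa})")
print(f"timing error: {(toa_estimate - true_toa) / fs * 1e9:.2f} ns")
```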

  20. Analysis of factors influencing the integrated bolus peak timing in contrast-enhanced brain computed tomographic angiography

    International Nuclear Information System (INIS)

    Son, Soon Yong; Choi, Kwan Woo; Jeong, Hoi Woun; Jang, Seo Goo; Jung, Jae Young; Yun, Jung Soo; Kim, Ki Won; Lee, Young Ah; Son, Jin Hyun; Min, Jung Whan

    2016-01-01

    The objective of this study was to analyze the factors influencing integrated bolus peak timing in contrast-enhanced computed tomographic angiography (CTA) and to determine a method of calculating personal peak time. The optimal time was calculated by performing multiple linear regression analysis, after finding the influence factors through correlation analysis between the integrated peak time of contrast medium and personal measured values from monitoring CTA scans. The radiation exposure dose in CTA was 716.53 mGy·cm and the radiation exposure dose in the monitoring scan was 15.52 mGy (2 - 34 mGy). The results were statistically significant (p < .01). Regression analysis revealed a -0.160 times decrease with a one-step increase in heart rate in males, and -0.004, -0.174, and 0.006 times decreases with a one-step increase in DBP, heart rate, and blood sugar, respectively, in females. In a consistency test comparing the measured peak time with the peak time calculated using the regression equation, the consistency was determined to be very high for both males and females. This study could prevent unnecessary dose exposure by encouraging in-clinic calculation of the personal integrated peak time of contrast medium prior to examination

  1. Rotor Cascade Shape Optimization with Unsteady Passing Wakes Using Implicit Dual-Time Stepping and a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Eun Seok Lee

    2003-01-01

    Full Text Available An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using an unsteady flow, Reynolds-averaged Navier-Stokes equations solver that was based on explicit, finite difference; Runge-Kutta multistage time marching; and the diagonalized alternating direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full approximation storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing airfoil leading edge, camber, stagger angle, and inter-row spacing. The genetic algorithm was used as an optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously by using a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.

  2. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
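
    The optimal algorithm drives a parametric search with a decision procedure, and the decision version is easy to state on its own: given an error bound eps, sweep the points in x-order and greedily extend the current step while the intervals [y_i - eps/w_i, y_i + eps/w_i] still share a common value. The sketch below implements only this greedy feasibility test, not the O(n log n) parametric-search algorithm.

```python
def steps_needed(points, eps):
    """Greedy count of steps needed so every point i satisfies w_i*|y_i - value| <= eps.

    `points` is a list of (x, y, w) with w > 0; a new step starts whenever the
    running intersection of the intervals [y - eps/w, y + eps/w] becomes empty.
    """
    pts = sorted(points)                       # sweep in x-order
    steps, lo, hi = 0, float("-inf"), float("inf")
    for _, y, w in pts:
        new_lo, new_hi = max(lo, y - eps / w), min(hi, y + eps / w)
        if new_lo > new_hi:                    # cannot extend the current step
            steps += 1
            lo, hi = y - eps / w, y + eps / w
        else:
            lo, hi = new_lo, new_hi
    return steps + 1 if pts else 0

def feasible(points, k, eps):
    """Decision version: does a k-step function within weighted error eps exist?"""
    return steps_needed(points, eps) <= k

if __name__ == "__main__":
    pts = [(0, 1.0, 1.0), (1, 1.2, 2.0), (2, 5.0, 1.0), (3, 5.3, 1.0), (4, 0.0, 0.5)]
    for eps in (0.3, 1.0, 3.0):
        print(eps, steps_needed(pts, eps))
```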

  3. Some Comments on the Behavior of the RELAP5 Numerical Scheme at Very Small Time Steps

    International Nuclear Information System (INIS)

    Tiselj, Iztok; Cerne, Gregor

    2000-01-01

    The behavior of the RELAP5 code at very short time steps is described, i.e., δt ≈ 0.01 δx/c. First, the property of the RELAP5 code to trace acoustic waves with 'almost' second-order accuracy is demonstrated. Quasi-second-order accuracy is usually achieved for acoustic waves at very short time steps but can never be achieved for the propagation of nonacoustic temperature and void fraction waves. While this feature may be beneficial for the simulations of fast transients describing pressure waves, it also has an adverse effect: The lack of numerical diffusion at very short time steps can cause typical second-order numerical oscillations near steep pressure jumps. This behavior explains why an automatic halving of the time step, which is used in RELAP5 when numerical difficulties are encountered, in some cases leads to the failure of the simulation. Second, the integration of the stiff interphase exchange terms in RELAP5 is studied. For transients with flashing and/or rapid condensation as the main phenomena, results strongly depend on the time step used. Poor accuracy is achieved with 'normal' time steps (δt ≈ δx/v) because of the very short characteristic timescale of the interphase mass and heat transfer sources. In such cases significantly different results are predicted with very short time steps because of the more accurate integration of the stiff interphase exchange terms

  4. Multi-time-step domain coupling method with energy control

    DEFF Research Database (Denmark)

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...... by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd....

  5. Sharing Steps in the Workplace: Changing Privacy Concerns Over Time

    DEFF Research Database (Denmark)

    Jensen, Nanna Gorm; Shklovski, Irina

    2016-01-01

    study of a Danish workplace participating in a step counting campaign. We find that concerns of employees who choose to participate and those who choose not to differ. Moreover, privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers...

  6. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  7. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol

  8. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    International Nuclear Information System (INIS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-01-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
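
    For orientation only, the underlying multiple-time-step idea (expensive slow forces applied as impulses at large outer steps, cheap fast forces integrated at small inner steps) can be written as the standard two-level r-RESPA velocity-Verlet splitting below. This is not the OIN-thermostatted, 3D-RISM-KH-driven ASFE scheme of the paper; the force split and parameters are generic placeholders.

```python
import numpy as np

def mts_respa(x, v, fast_force, slow_force, mass, dt_outer, n_inner, n_outer):
    """Two-level r-RESPA integrator (velocity-Verlet style impulse splitting)."""
    dt_inner = dt_outer / n_inner
    f_slow = slow_force(x)
    for _ in range(n_outer):
        v += 0.5 * dt_outer * f_slow / mass          # outer half-kick (expensive force)
        f_fast = fast_force(x)
        for _ in range(n_inner):                     # inner loop: cheap force only
            v += 0.5 * dt_inner * f_fast / mass
            x += dt_inner * v
            f_fast = fast_force(x)
            v += 0.5 * dt_inner * f_fast / mass
        f_slow = slow_force(x)                       # re-evaluate the expensive force once
        v += 0.5 * dt_outer * f_slow / mass          # outer half-kick
    return x, v

if __name__ == "__main__":
    # toy split: stiff bond-like term (fast) plus a weak background term (slow)
    fast = lambda x: -100.0 * x
    slow = lambda x: -0.5 * x
    x, v = mts_respa(np.array([1.0]), np.array([0.0]), fast, slow,
                     mass=1.0, dt_outer=0.05, n_inner=10, n_outer=200)
    print(x, v)
```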

  9. Insect-computer hybrid legged robot with user-adjustable speed, step length and walking gait.

    Science.gov (United States)

    Cao, Feng; Zhang, Chao; Choo, Hao Yu; Sato, Hirotaka

    2016-03-01

    We have constructed an insect-computer hybrid legged robot using a living beetle (Mecynorrhina torquata; Coleoptera). The protraction/retraction and levation/depression motions in both forelegs of the beetle were elicited by electrically stimulating eight corresponding leg muscles via eight pairs of implanted electrodes. To perform a defined walking gait (e.g., gallop), different muscles were individually stimulated in a predefined sequence using a microcontroller. Different walking gaits were performed by reordering the applied stimulation signals (i.e., applying different sequences). By varying the duration of the stimulation sequences, we successfully controlled the step frequency and hence the beetle's walking speed. To the best of our knowledge, this paper presents the first demonstration of living insect locomotion control with a user-adjustable walking gait, step length and walking speed. © 2016 The Author(s).
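
    The control idea (a fixed per-gait ordering of muscle stimulations, with the sequence duration setting the step frequency) can be sketched abstractly. The muscle names, gait orderings and timing values below are hypothetical placeholders; the actual electrode mapping and stimulation sequences are specific to the paper's experimental setup.

```python
import time

# hypothetical mapping of the eight stimulated foreleg muscles
MUSCLES = ["L_protract", "L_retract", "L_levate", "L_depress",
           "R_protract", "R_retract", "R_levate", "R_depress"]

# hypothetical stimulation orderings; a different ordering corresponds to a different gait
GAITS = {
    "gallop":     ["L_levate", "L_protract", "L_depress", "L_retract",
                   "R_levate", "R_protract", "R_depress", "R_retract"],
    "alternating": ["L_levate", "R_retract", "L_protract", "R_levate",
                    "L_depress", "R_protract", "L_retract", "R_depress"],
}

def run_gait(gait: str, step_period_s: float, n_steps: int, stimulate=print):
    """Issue the gait's stimulation sequence; step_period_s sets the step frequency."""
    sequence = GAITS[gait]
    slot = step_period_s / len(sequence)        # time budget per muscle stimulation
    for _ in range(n_steps):
        for muscle in sequence:
            stimulate(f"stimulate {muscle}")
            time.sleep(slot)                    # placeholder for the pulse-train duration

if __name__ == "__main__":
    run_gait("gallop", step_period_s=0.8, n_steps=2)   # slower walking
    run_gait("gallop", step_period_s=0.4, n_steps=2)   # roughly twice the step frequency
```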

  10. NEPTUNIX 2: Operating on computers network - Catalogued procedures

    International Nuclear Information System (INIS)

    Roux, Pierre.

    1982-06-01

    NEPTUNIX 2 is a package which carries out the simulation of complex processes described by numerous non linear algebro-differential equations. Main features are: non linear or time dependent parameters, implicit form, stiff systems, dynamic change of equations leading to discontinuities on some variables. Thus the mathematical model is built with an equation set F(x,x',l,t), where t is the independent variable, x' the derivative of x and l an ''algebrized'' logical variable. The NEPTUNIX 2 package is divided into two successive major steps: a non numerical step and a numerical step. The numerical step, using results from a picture of the model translated into FORTRAN language, in a form fitted for the executive computer, carries out the simulations; in this way, the NEPTUNIX 2 numerical step is portable. By contrast, the non numerical step must be executed on an IBM 370 series computer or on a compatible computer. The present manual describes NEPTUNIX 2 operating procedures when the two steps are executed on the same computer and also when the numerical step is executed on another computer connected or not to the same computing network [fr

  11. Measuring border delay and crossing times at the US-Mexico border : part II. Step-by-step guidelines for implementing a radio frequency identification (RFID) system to measure border crossing and wait times.

    Science.gov (United States)

    2012-06-01

    The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...

  12. 5 CFR 890.101 - Definitions; time computations.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Definitions; time computations. 890.101....101 Definitions; time computations. (a) In this part, the terms annuitant, carrier, employee, employee... in section 8901 of title 5, United States Code, and supplement the following definitions: Appropriate...

  13. Displacement in the parameter space versus spurious solution of discretization with large time step

    International Nuclear Information System (INIS)

    Mendes, Eduardo; Letellier, Christophe

    2004-01-01

    In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretization of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics

  14. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics in a network connecting a parallel computing server and a client terminal was developed. Using the system, a user can visualize the results of a CFD (Computational Fluid Dynamics) simulation running on the parallel computer from the client terminal during the actual computation on the server. Using the GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization in real time during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer, and compresses the data. Therefore, the amount of data sent from the parallel computer to the client is much smaller than without compression, and the user can comfortably enjoy swift image updates. Parallelization of the image data generation is based on the Owner Computation Rule. The GUI on the client is built on a Java applet. Real-time visualization is thus possible on the client PC provided a Web browser is installed on it. (author)

  15. 29 CFR 1921.22 - Computation of time.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Computation of time. 1921.22 Section 1921.22 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... WORKERS' COMPENSATION ACT Miscellaneous § 1921.22 Computation of time. Sundays and holidays shall be...

  16. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé ; Vigneron, Antoine E.

    2013-01-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance

  17. Integration of FULLSWOF2D and PeanoClaw: Adaptivity and Local Time-Stepping for Complex Overland Flows

    KAUST Repository

    Unterweger, K.

    2015-01-01

    © Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D—Full Shallow Water Overland Flows—here is our software of choice though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex “real-world” scenarios.

  18. Automated computation of arbor densities: a step toward identifying neuronal cell types

    Directory of Open Access Journals (Sweden)

    Uygar eSümbül

    2014-11-01

    Full Text Available The shape and position of a neuron convey information regarding its molecular and functional identity. The identification of cell types from structure, a classic method, relies on the time-consuming step of arbor tracing. However, as genetic tools and imaging methods make data-driven approaches to neuronal circuit analysis feasible, the need for automated processing increases. Here, we first establish that mouse retinal ganglion cell types can be as precise about distributing their arbor volumes across the inner plexiform layer as they are about distributing the skeletons of the arbors. Then, we describe an automated approach to computing the spatial distribution of the dendritic arbors, or arbor density, with respect to a global depth coordinate based on this observation. Our method involves three-dimensional reconstruction of neuronal arbors by a supervised machine learning algorithm, post-processing of the enhanced stacks to remove somata and isolate the neuron of interest, and registration of neurons to each other using automatically detected arbors of the starburst amacrine interneurons as fiducial markers. In principle, this method could be generalizable to other structures of the CNS, provided that they allow sparse labeling of the cells and contain a reliable axis of spatial reference.

  19. Soil Erosion Estimation Using Grid-based Computation

    Directory of Open Access Journals (Sweden)

    Josef Vlasák

    2005-06-01

    quick computation while the precision degradation is smaller than in the one-step computation. The above-described two-step method is suitable mainly for the combination of larger areas (several cadastral areas) and high-precision sources at the same time. It is evident that at present such a combination is rather rare, which means that in most cases the one-step computation with 30 m–50 m cell sizes is adequate. It is expected that in the future all sources used for the determination of the factors will have a higher accuracy and, therefore, the two-step computation will be needed.

  20. Stepping Stones through Time

    Directory of Open Access Journals (Sweden)

    Emily Lyle

    2012-03-01

    Full Text Available Indo-European mythology is known only through written records but it needs to be understood in terms of the preliterate oral-cultural context in which it was rooted. It is proposed that this world was conceptually organized through a memory-capsule consisting of the current generation and the three before it, and that there was a system of alternate generations with each generation taking a step into the future under the leadership of a white or red king.

  1. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
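
    For orientation, the classical explicit SSPRK(3,3) scheme of Shu and Osher, which the perturbed and implicit methods developed in this thesis generalize, is easy to state in code: every stage is a convex combination of forward-Euler steps, so forward-Euler stability properties carry over. The Python sketch below is an illustration under assumed test settings (first-order upwind advection on an arbitrary periodic grid), not material from the thesis itself.

      import numpy as np

      # Classical explicit SSPRK(3,3) (Shu-Osher) step: a convex combination of
      # forward-Euler steps, so any strong-stability property of forward Euler
      # is preserved under the same step-size restriction.
      def ssp_rk3_step(u, dt, f):
          u1 = u + dt * f(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
          return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

      # First-order upwind semi-discretization of u_t + u_x = 0 on a periodic grid.
      nx = 200
      dx, c = 1.0 / nx, 1.0
      x = np.arange(nx) * dx
      u = np.exp(-200.0 * (x - 0.5) ** 2)

      def rhs(u):
          return -c * (u - np.roll(u, 1)) / dx

      dt = 0.5 * dx / c                        # within the SSP (CFL) limit
      for _ in range(400):
          u = ssp_rk3_step(u, dt, rhs)
      print("max/min after advection:", u.max(), u.min())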

  2. General purpose computers in real time

    International Nuclear Information System (INIS)

    Biel, J.R.

    1989-01-01

    I see three main trends in the use of general purpose computers in real time. The first is more processing power. The second is the use of higher speed interconnects between computers (allowing more data to be delivered to the processors). The third is the use of larger programs running in the computers. Although there is still work that needs to be done, all indications are that general purpose computers suitable for the online needs of the SCC and LHC machines should be available. 2 figs

  3. Rotor cascade shape optimization with unsteady passing wakes using implicit dual time stepping method

    Science.gov (United States)

    Lee, Eun Seok

    2000-10-01

    An improved aerodynamic performance of a turbine cascade shape can be achieved by an understanding of the flow-field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence modeling. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving the low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate the unsteady flow-fields. For the unsteady code validation, Stokes's second problem and Poiseuille flow were chosen, and the computed results were compared with the analytic solutions. To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with
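
    The dual time stepping idea itself can be illustrated on a scalar model problem: physical time is advanced with an implicit formula (here BDF2), and within each physical step a pseudo-time relaxation of the unsteady residual is marched until the residual vanishes. The Python sketch below is a minimal illustration under these assumptions and is not the DADI-preconditioned Navier-Stokes implementation described in the abstract; the model equation, step sizes and tolerances are invented for the example.

      import numpy as np

      # Implicit dual time stepping for the scalar model ODE  du/dt = f(u) = -u.
      # Each physical step solves the BDF2 residual R(u) = 0 by marching in a
      # pseudo-time tau until the residual is (nearly) driven to zero.
      def f(u):
          return -u

      def dual_time_step(u_n, u_nm1, dt, dtau=0.05, tol=1e-10, max_iter=10000):
          u = u_n                                  # initial guess for u^{n+1}
          for _ in range(max_iter):
              r = (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt) - f(u)
              if abs(r) < tol:
                  break
              u = u - dtau * r                     # explicit pseudo-time relaxation
          return u

      dt, u_prev, u_curr = 0.1, 1.0, np.exp(-0.1)  # start from the exact solution
      for _ in range(20):
          u_prev, u_curr = u_curr, dual_time_step(u_curr, u_prev, dt)
      print("computed:", u_curr, "exact:", np.exp(-0.1 * 21))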

  4. Effects of computer-based graphic organizers to solve one-step word problems for middle school students with mild intellectual disability: A preliminary study.

    Science.gov (United States)

    Sheriff, Kelli A; Boon, Richard T

    2014-08-01

    The purpose of this study was to examine the effects of computer-based graphic organizers, using Kidspiration 3© software, to solve one-step word problems. Participants included three students with mild intellectual disability enrolled in a functional academic skills curriculum in a self-contained classroom. A multiple probe single-subject research design (Horner & Baer, 1978) was used to evaluate the effectiveness of computer-based graphic organizers to solving mathematical one-step word problems. During the baseline phase, the students completed a teacher-generated worksheet that consisted of nine functional word problems in a traditional format using a pencil, paper, and a calculator. In the intervention and maintenance phases, the students were instructed to complete the word problems using a computer-based graphic organizer. Results indicated that all three of the students improved in their ability to solve the one-step word problems using computer-based graphic organizers compared to traditional instructional practices. Limitations of the study and recommendations for future research directions are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.
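
    To make the one-Poisson-solve-per-time-step structure concrete, the Python sketch below performs a single pressure-projection step on a doubly periodic grid using FFTs, assuming constant density; the variable-density fractional-step scheme analyzed in the paper is more involved, and the grid, time step and tentative velocity field here are arbitrary illustrative choices.

      import numpy as np

      # One pressure-projection step on a doubly periodic grid: given a tentative
      # velocity (u_star, v_star), solve a single Poisson equation
      # lap(p) = (rho/dt) * div(u_star) and correct the velocity so that it
      # becomes divergence-free.  Constant density rho is assumed here.
      def wavenumbers(n, dx):
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
          return np.meshgrid(k, k, indexing="ij")

      def project(u_star, v_star, dx, dt, rho=1.0):
          kx, ky = wavenumbers(u_star.shape[0], dx)
          u_hat, v_hat = np.fft.fft2(u_star), np.fft.fft2(v_star)
          div_hat = 1j * kx * u_hat + 1j * ky * v_hat
          k2 = kx**2 + ky**2
          k2[0, 0] = 1.0                       # avoid dividing the mean mode
          p_hat = -(rho / dt) * div_hat / k2   # the single Poisson solve
          p_hat[0, 0] = 0.0
          u = np.real(np.fft.ifft2(u_hat - (dt / rho) * 1j * kx * p_hat))
          v = np.real(np.fft.ifft2(v_hat - (dt / rho) * 1j * ky * p_hat))
          return u, v

      n, dx, dt = 64, 1.0 / 64, 1e-2
      xg = np.arange(n) * dx
      X, Y = np.meshgrid(xg, xg, indexing="ij")
      u_star = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # not divergence-free
      v_star = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
      u, v = project(u_star, v_star, dx, dt)
      kx, ky = wavenumbers(n, dx)
      div = np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v))
      print("max |div u| after projection:", np.abs(div).max())   # ~ machine zero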

  6. First step in optimization doses in computed tomography

    International Nuclear Information System (INIS)

    Mecca, Fernando; Nascimeto, Vitor; Dias, K. Simone

    2008-01-01

    Full text: Introduction: The evolution of computed tomography over the last 10 years has made this imaging modality of utmost importance for the analysis and diagnosis of a broad range of pathologies. Thus, a significant increase in the number of examinations using CT can be observed, and the radiation doses in such examinations became a factor of concern because they increase the collective dose over the population. The use of the 'ALARA' principle in computed tomography became a necessity, and the first step in applying it is to know the doses delivered in each examination and then to build a methodology for reducing their values without losing diagnostic information. Methodology: In the dose optimization process for CT scans at INCA (National Institute of Cancer, Rio de Janeiro, Brazil), examinations carried out on two distinct pieces of equipment were analyzed. For each room, samples of 10 patients were taken for each examination type, for both adult and child patients: thorax (including high-resolution exams), abdomen, pelvis and skull. The values of C{sub VOL} and P{sub KL} were estimated from the tabulated values of {sub n}C{sub w} as well as from the values established in the dosimetry carried out with head and abdomen phantoms. Results: In adult thorax examinations, the C{sub VOL} values ranged between 14 and 21 mGy and the P{sub KL} values from 230 to 590 mGy*cm. For head examinations the ranges were 8 to 16 mGy and 350 to 600 mGy*cm. For abdomen, they were 6 to 16 mGy and 200 to 440 mGy*cm. For child patients the results are in the same range as for adults in all examinations. Conclusion: This work made evident the need to optimize the dose protocols for children: because their doses are currently the same as those of adult patients, specific protocols for this kind of patient need to be studied at the very least. (author)

  7. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with the implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  8. Recent achievements in real-time computational seismology in Taiwan

    Science.gov (United States)

    Lee, S.; Liang, W.; Huang, B.

    2012-12-01

    Real-time computational seismology is now achievable, but it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and a moment tensor inversion (CMT) technique. A real-time online earthquake simulation service (ROS) is also ready to be opened to researchers and for public earthquake science education. By combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs (RMT obtains the point-source information and ROS completes a 3D simulation in real time). For more information, visit the real-time computational seismology earthquake report webpage (RCS).

  9. 7 CFR 1.603 - How are time periods computed?

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false How are time periods computed? 1.603 Section 1.603... Licenses General Provisions § 1.603 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2...

  10. Automated selection of brain regions for real-time fMRI brain-computer interfaces

    Science.gov (United States)

    Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio

    2017-02-01

    Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach best performances, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real-time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerating the translation of rt-fMRI BCIs from research to clinical applications.

  11. Step by Step Microsoft Office Visio 2003

    CERN Document Server

    Lemke, Judy

    2004-01-01

    Experience learning made easy, and quickly teach yourself how to use Visio 2003, the Microsoft Office business and technical diagramming program. With STEP BY STEP, you can take just the lessons you need, or work from cover to cover. Either way, you drive the instruction, building and practicing the skills you need, just when you need them! Produce computer network diagrams, organization charts, floor plans, and more; use templates to create new diagrams and drawings quickly; add text, color, and 1-D and 2-D shapes; insert graphics and pictures, such as company logos; connect shapes to create a basic f

  12. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.

    2009-01-01

    We show that the wave equation solution using a conventional finite‐difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomials terms in the rapid expansion method we can obtain the second order time finite‐difference scheme that is frequently used in more conventional finite‐difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite‐difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post and pre stack migration results.
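
    As a point of reference, the conventional second-order scheme that the abstract identifies as the two-term special case of the REM is sketched below in Python for the 1D constant-velocity acoustic wave equation; this is an illustration only (periodic boundaries and an arbitrary toy grid and velocity), and the higher-order Chebyshev expansion itself is not implemented here.

      import numpy as np

      # Conventional second-order time stepping for the 1D wave equation
      # u_tt = c^2 u_xx:  u^{n+1} = 2 u^n - u^{n-1} + (c*dt/dx)^2 * D2 u^n,
      # i.e. the scheme recovered from the first two Chebyshev terms of REM.
      nx, dx, c = 400, 5.0, 2000.0             # toy grid spacing (m) and velocity (m/s)
      dt = 0.4 * dx / c                        # CFL-limited conventional step
      x = np.arange(nx) * dx
      u_prev = np.exp(-((x - 1000.0) / 50.0) ** 2)    # initial Gaussian pulse
      u_curr = u_prev.copy()                          # zero initial velocity
      r2 = (c * dt / dx) ** 2

      for _ in range(500):
          lap = np.roll(u_curr, 1) - 2.0 * u_curr + np.roll(u_curr, -1)
          u_prev, u_curr = u_curr, 2.0 * u_curr - u_prev + r2 * lap
      print("peak amplitude after 500 steps:", u_curr.max())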

  13. Instruction timing for the CDC 7600 computer

    International Nuclear Information System (INIS)

    Lipps, H.

    1975-01-01

    This report provides timing information for all instructions of the Control Data 7600 computer, except for instructions of type 01X, to enable the optimization of 7600 programs. The timing rules serve as background information for timing charts which are produced by a program (TIME76) of the CERN Program Library. The rules that co-ordinate the different sections of the CPU are stated in as much detail as is necessary to time the flow of instructions for a given sequence of code. Instruction fetch, instruction issue, and access to small core memory are treated at length, since details are not available from the computer manuals. Annotated timing charts are given for 24 examples, chosen to display the full range of timing considerations. (Author)

  14. Computational fluid dynamics a practical approach

    CERN Document Server

    Tu, Jiyuan; Liu, Chaoqun

    2018-01-01

    Computational Fluid Dynamics: A Practical Approach, Third Edition, is an introduction to CFD fundamentals and commercial CFD software to solve engineering problems. The book is designed for a wide variety of engineering students new to CFD, and for practicing engineers learning CFD for the first time. Combining an appropriate level of mathematical background, worked examples, computer screen shots, and step-by-step processes, this book walks the reader through modeling and computing, as well as interpreting CFD results. This new edition has been updated throughout, with new content and improved figures, examples and problems.

  15. 50 CFR 221.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false How are time periods computed? 221.3... Provisions § 221.3 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2) The last day of the...

  16. Real-time dynamics of lattice gauge theories with a few-qubit quantum computer

    Science.gov (United States)

    Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer

    2016-06-01

    Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.

  17. Computers are stepping stones to improved imaging.

    Science.gov (United States)

    Freiherr, G

    1991-02-01

    Never before has the radiology industry embraced the computer with such enthusiasm. Graphics supercomputers as well as UNIX- and RISC-based computing platforms are turning up in every digital imaging modality and especially in systems designed to enhance and transmit images, says author Greg Freiherr on assignment for Computers in Healthcare at the Radiological Society of North America conference in Chicago.

  18. Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio

    International Nuclear Information System (INIS)

    Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik; Suzuki, Mitsutoshi

    2014-01-01

    The criticality condition of a reactor is one of the important factors for evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the evaluated criticality condition of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is specified in days and is varied from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles of reactor operation. In addition, the calculation efficiency for different numbers of computer processors used to run the analysis (time efficiency of the calculation) has also been investigated. The optimization method for reactor design analysis, using a large fast breeder reactor type as the reference case, was performed by adopting the established reactor design code JOINT-FR. The results show that the criticality becomes higher for a smaller burnup step (in days), while the breeding ratio becomes smaller for a smaller burnup step. Some nuclides contribute to a better criticality at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the additional work required for more detailed step calculations, although the computing time is not directly proportional to the number of divisions of the burnup time step.

  19. One-step electrodeposition process of CuInSe2: Deposition time effect

    Indian Academy of Sciences (India)

    Administrator

    CuInSe2 thin films were prepared by a one-step electrodeposition process using a simplified two-electrode system. ... homojunctions or heterojunctions (Rincon et al 1983). Efficiency of ... deposition times onto indium tin oxide (ITO)-covered.

  20. A six step approach for developing computer based assessment in medical education.

    Science.gov (United States)

    Hassanien, Mohammed Ahmed; Al-Hayani, Abdulmoneam; Abu-Kamer, Rasha; Almazrooa, Adnan

    2013-01-01

    Assessment, which entails the systematic evaluation of student learning, is an integral part of any educational process. Computer-based assessment (CBA) techniques provide a valuable resource to students seeking to evaluate their academic progress through instantaneous, personalized feedback. CBA reduces examination, grading and reviewing workloads and facilitates training. This paper describes a six step approach for developing CBA in higher education and evaluates student perceptions of computer-based summative assessment at the College of Medicine, King Abdulaziz University. A set of questionnaires was distributed to 341 third year medical students (161 female and 180 male) immediately after examinations in order to assess the adequacy of the system for the exam program. The respondents expressed high satisfaction with the first Saudi experience of CBA for final examinations. However, about 50% of them preferred the use of a pilot CBA before its formal application; hence, many did not recommend its use for future examinations. Both male and female respondents reported that the range of advantages offered by CBA outweighed any disadvantages. Further studies are required to monitor the extended employment of CBA technology for larger classes and for a variety of subjects at universities.

  1. Age-related differences in lower-limb force-time relation during the push-off in rapid voluntary stepping.

    Science.gov (United States)

    Melzer, I; Krasovsky, T; Oddsson, L I E; Liebermann, D G

    2010-12-01

    This study investigated the force-time relationship during the push-off stage of a rapid voluntary step in young and older healthy adults, to study the assumption that when balance is lost a quick step may preserve stability. The ability to achieve peak propulsive force within a short time is critical for the performance of such a quick powerful step. We hypothesized that older adults would achieve peak force and power in significantly longer times compared to young people, particularly during the push-off preparatory phase. Fifteen young and 15 older volunteers performed rapid forward steps while standing on a force platform. Absolute anteroposterior and body weight normalized vertical forces during the push-off in the preparation and swing phases were used to determine time to peak and peak force, and step power. Two-way analyses of variance ('Group' [young-older] by 'Phase' [preparation-swing]) were used to assess our hypothesis (P ≤ 0.05). Older people exerted lower peak forces (anteroposterior and vertical) than young adults, but not necessarily lower peak power. More significantly, they showed a longer time to peak force, particularly in the vertical direction during the preparation phase. Older adults generate propulsive forces slowly and reach lower magnitudes, mainly during step preparation. The time to achieve a peak force and power, rather than its actual magnitude, may account for failures in quickly performing a preventive action. Such delay may be associated with the inability to react and recruit muscles quickly. Thus, training elderly to step fast in response to relevant cues may be beneficial in the prevention of falls. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
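
    The model-selection step described above can be prototyped in a few lines. The Python sketch below uses synthetic data (all coefficients and noise levels are invented for illustration) to fit the simple log-turbidity regression and the turbidity-streamflow multiple regression and to compare their residual standard errors, standing in for the report's MSPE criterion, significance testing and bias-correction steps, which are not reproduced here.

      import numpy as np

      # Choose between a simple regression  log(SSC) ~ log(turbidity)  and a
      # multiple regression that also includes log(streamflow), using synthetic data.
      rng = np.random.default_rng(0)
      n = 120
      log_turb = rng.normal(2.0, 0.6, n)
      log_q = rng.normal(3.0, 0.5, n)
      log_ssc = 0.2 + 0.9 * log_turb + 0.3 * log_q + rng.normal(0, 0.15, n)

      def fit(X, y):
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          rse = np.sqrt(resid @ resid / (len(y) - X1.shape[1]))
          return beta, rse

      beta_s, rse_s = fit(log_turb, log_ssc)                            # simple model
      beta_m, rse_m = fit(np.column_stack([log_turb, log_q]), log_ssc)  # multiple model
      print("simple RSE:", rse_s, "multiple RSE:", rse_m)
      # If the streamflow term is statistically significant and clearly reduces the
      # model error, the multiple regression is used to compute the SSC time series.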

  3. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan

    2011-05-14

    We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one dimensional results presented in this work feature a change of four orders of magnitude in the time step over the entire simulation.
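
    A generic version of such error-controlled time adaptivity is sketched below in Python using an embedded Euler/Heun pair: the local error estimate decides whether a step is accepted and how the next step size is scaled. The estimator, tolerances and test equation here are illustrative assumptions and not the economical estimator proposed in the paper.

      import numpy as np

      # Error-controlled adaptive time stepping with an embedded Heun/Euler pair
      # for u' = f(t, u): the local error estimate controls acceptance and the
      # next step size, so dt can vary widely over a simulation.
      def adaptive_integrate(f, t0, u0, t_end, dt0=1e-3, tol=1e-6):
          t, u, dt = t0, u0, dt0
          times = [t]
          while t < t_end:
              dt = min(dt, t_end - t)
              k1 = f(t, u)
              k2 = f(t + dt, u + dt * k1)
              u_low = u + dt * k1                  # explicit Euler (1st order)
              u_high = u + 0.5 * dt * (k1 + k2)    # Heun (2nd order)
              err = abs(u_high - u_low)
              if err <= tol:                       # accept the step
                  t, u = t + dt, u_high
                  times.append(t)
              # elementary step-size controller with clipped growth/shrinkage
              dt *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
          return np.array(times), u

      # fast initial transient followed by a slow, smooth tail
      ts, u_end = adaptive_integrate(lambda t, u: -50.0 * (u - np.cos(t)), 0.0, 0.0, 5.0)
      print(len(ts), "steps; first dt:", ts[1] - ts[0], "last dt:", ts[-1] - ts[-2])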

  4. Real-time computational photon-counting LiDAR

    Science.gov (United States)

    Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles

    2018-03-01

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.

  5. Elderly fallers enhance dynamic stability through anticipatory postural adjustments during a choice stepping reaction time

    Directory of Open Access Journals (Sweden)

    Romain Tisserand

    2016-11-01

    Full Text Available In the case of disequilibrium, the capacity to step quickly is critical for the elderly to avoid falling. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), where elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F elongate their stepping time remain unclear. The purpose of this study is to assess the characteristics of the anticipatory postural adjustments (APA) that elderly F develop in a stepping context and their consequences on dynamic stability. 44 community-dwelling elderly subjects (20 F and 22 NF) performed a CSRT where kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated measures ANOVAs. Results for F compared to NF showed that stepping time is elongated, due to a longer APA phase. During APA, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off, in the respective direction. By elongating their APA, elderly F use a safer balance strategy that prioritizes dynamic stability conditions over the objective of the task. Such a choice of balance strategy probably comes from muscular limitations and/or a higher fear of falling and paradoxically indicates an increased risk of falling.

  6. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    Science.gov (United States)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

    In the summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth of 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering nearly a 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. Stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D location of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.

  7. Bayesian emulation for optimization in multi-step portfolio decisions

    OpenAIRE

    Irie, Kaoru; West, Mike

    2016-01-01

    We discuss the Bayesian emulation approach to computational solution of multi-step portfolio studies in financial time series. "Bayesian emulation for decisions" involves mapping the technical structure of a decision analysis problem to that of Bayesian inference in a purely synthetic "emulating" statistical model. This provides access to standard posterior analytic, simulation and optimization methods that yield indirect solutions of the decision problem. We develop this in time series portf...

  8. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
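
    For contrast with the data-based formulation, the Python sketch below shows the standard model-based receding-horizon computation it builds on: prediction matrices are assembled from a known (A, B) pair, the quadratic cost over the horizon is minimized in one least-squares solve, and only the first input is applied. The system matrices, weights and horizon are arbitrary illustrative choices, and the paper's method instead derives its predictor from input-output data and uses a multirate prediction step, which is not shown here.

      import numpy as np

      # Standard model-based receding-horizon control for a known linear system
      # x_{k+1} = A x_k + B u_k: build prediction matrices over a horizon N,
      # minimize a quadratic cost in one solve, then apply only the first input.
      def mpc_first_move_gain(A, B, Q, R, N):
          n, m = B.shape
          F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
          S = np.zeros((N * n, N * m))
          for i in range(N):
              for j in range(i + 1):
                  S[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
          Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
          H = S.T @ Qbar @ S + Rbar
          G = S.T @ Qbar @ F
          return np.linalg.solve(H, G)[:m, :]   # u_0* = -K x_0 (receding horizon)

      A = np.array([[1.0, 0.1], [0.0, 1.0]])    # toy double-integrator model
      B = np.array([[0.005], [0.1]])
      K = mpc_first_move_gain(A, B, Q=np.eye(2), R=0.1 * np.eye(1), N=20)

      x = np.array([1.0, 0.0])
      for _ in range(80):
          u = -K @ x                            # first move of the optimal plan
          x = A @ x + B @ u
      print("state after 80 steps:", x)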

  9. TimeSet: A computer program that accesses five atomic time services on two continents

    Science.gov (United States)

    Petrakis, P. L.

    1993-01-01

    TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings .01 second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.

  10. TV time but not computer time is associated with cardiometabolic risk in Dutch young adults.

    Science.gov (United States)

    Altenburg, Teatske M; de Kroon, Marlou L A; Renders, Carry M; Hirasing, Remy; Chinapaw, Mai J M

    2013-01-01

    TV time and total sedentary time have been positively related to biomarkers of cardiometabolic risk in adults. We aim to examine the association of TV time and computer time separately with cardiometabolic biomarkers in young adults. Additionally, the mediating role of waist circumference (WC) is studied. Data of 634 Dutch young adults (18-28 years; 39% male) were used. Cardiometabolic biomarkers included indicators of overweight, blood pressure, blood levels of fasting plasma insulin, cholesterol, glucose, triglycerides and a clustered cardiometabolic risk score. Linear regression analyses were used to assess the cross-sectional association of self-reported TV and computer time with cardiometabolic biomarkers, adjusting for demographic and lifestyle factors. Mediation by WC was checked using the product-of-coefficient method. TV time was significantly associated with triglycerides (B = 0.004; CI = [0.001;0.05]) and insulin (B = 0.10; CI = [0.01;0.20]). Computer time was not significantly associated with any of the cardiometabolic biomarkers. We found no evidence for WC to mediate the association of TV time or computer time with cardiometabolic biomarkers. We found a significantly positive association of TV time with cardiometabolic biomarkers. In addition, we found no evidence for WC as a mediator of this association. Our findings suggest a need to distinguish between TV time and computer time within future guidelines for screen time.

  11. Implementation of Real-Time Machining Process Control Based on Fuzzy Logic in a New STEP-NC Compatible System

    Directory of Open Access Journals (Sweden)

    Po Hu

    2016-01-01

    Full Text Available Implementing real-time machining process control on the shop floor has great significance for raising the efficiency and quality of product manufacturing. A framework and implementation methods for real-time machining process control based on STEP-NC are presented in this paper. A data model compatible with the ISO 14649 standard is built to transfer high-level real-time machining process control information between CAPP systems and CNC systems, in which the EXPRESS language is used to define new STEP-NC entities. Methods for implementing real-time machining process control on the shop floor are studied and realized on an open STEP-NC controller, which is developed using object-oriented, multithread, and shared-memory technologies in combination. The cutting force in a specific direction of a machining feature in side milling is chosen as the controlled object, and a fuzzy control algorithm with a self-adjusting factor is designed and embedded in the software CNC kernel of the STEP-NC controller. Experiments are carried out to verify the proposed framework, STEP-NC data model, and implementation methods for real-time machining process control. The experimental results prove that real-time machining process control tasks can be interpreted and executed correctly by the STEP-NC controller on the shop floor, with the actual cutting force kept around the ideal value whether the axial cutting depth changes suddenly or continuously.

  12. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2009-01-01

    an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomials terms in the rapid expansion method we can obtain the second

  13. Time-of-Flight Cameras in Computer Graphics

    DEFF Research Database (Denmark)

    Kolb, Andreas; Barth, Erhardt; Koch, Reinhard

    2010-01-01

    Computer Graphics, Computer Vision and Human Machine Interaction (HMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real-time geometry...

  14. 29 CFR 4245.8 - Computation of time.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Computation of time. 4245.8 Section 4245.8 Labor Regulations Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION INSOLVENCY, REORGANIZATION, TERMINATION, AND OTHER RULES APPLICABLE TO MULTIEMPLOYER PLANS NOTICE OF INSOLVENCY § 4245.8 Computation of...

  15. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  16. The Non–Symmetric s–Step Lanczos Algorithm: Derivation of Efficient Recurrences and Synchronization–Reducing Variants of BiCG and QMR

    Directory of Open Access Journals (Sweden)

    Feuerriegel Stefan

    2015-12-01

    Full Text Available The Lanczos algorithm is among the most frequently used iterative techniques for computing a few dominant eigenvalues of a large sparse non-symmetric matrix. At the same time, it serves as a building block within biconjugate gradient (BiCG) and quasi-minimal residual (QMR) methods for solving large sparse non-symmetric systems of linear equations. It is well known that, when implemented on distributed-memory computers with a huge number of processes, the synchronization time spent on computing dot products increasingly limits the parallel scalability. Therefore, we propose synchronization-reducing variants of the Lanczos, as well as BiCG and QMR methods, in an attempt to mitigate these negative performance effects. These so-called s-step algorithms are based on grouping dot products for joint execution and replacing time-consuming matrix operations by efficient vector recurrences. The purpose of this paper is to provide a rigorous derivation of the recurrences for the s-step Lanczos algorithm, introduce s-step BiCG and QMR variants, and compare the parallel performance of these new s-step versions with previous algorithms.

  17. Computation Offloading for Frame-Based Real-Time Tasks under Given Server Response Time Guarantees

    Directory of Open Access Journals (Sweden)

    Anas S. M. Toma

    2014-11-01

    Full Text Available Computation offloading has been adopted to improve the performance of embedded systems by offloading the computation of some tasks, especially computation-intensive tasks, to servers or clouds. This paper explores computation offloading for real-time tasks in embedded systems, given response time guarantees from the servers, to decide which tasks should be offloaded to get the results in time. We consider frame-based real-time tasks with the same period and relative deadline. When the execution order of the tasks is given, the problem can be solved in linear time. However, when the execution order is not specified, we prove that the problem is NP-complete. We develop a pseudo-polynomial-time algorithm for deriving feasible schedules, if they exist. An approximation scheme is also developed to trade off the error made by the algorithm against the complexity. Our algorithms are extended to minimize the period/relative deadline of the tasks for performance maximization. The algorithms are evaluated with a case study for a surveillance system and synthesized benchmarks.

  18. One-step reduced kinetics for lean hydrogen-air deflagration

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Galisteo, D.; Sanchez, A.L. [Area de Mecanica de Fluidos, Univ. Carlos III de Madrid, Leganes 28911 (Spain); Linan, A. [ETSI Aeronauticos, Pl. Cardenal Cisneros 3, Madrid 28040 (Spain); Williams, F.A. [Dept. of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093-0411 (United States)

    2009-05-15

    A short mechanism consisting of seven elementary reactions, of which only three are reversible, is shown to provide good predictions of hydrogen-air lean-flame burning velocities. This mechanism is further simplified by noting that over a range of conditions of practical interest, near the lean flammability limit all reaction intermediaries have small concentrations in the important thin reaction zone that controls the hydrogen-air laminar burning velocity and therefore follow a steady state approximation, while the main species react according to the global irreversible reaction 2H{sub 2} + O{sub 2} {yields} 2H{sub 2}O. An explicit expression for the non-Arrhenius rate of this one-step overall reaction for hydrogen oxidation is derived from the seven-step detailed mechanism, for application near the flammability limit. The one-step results are used to calculate flammability limits and burning velocities of planar deflagrations. Furthermore, implications concerning radical profiles in the deflagration and reasons for the success of the approximations are clarified. It is also demonstrated that adding only two irreversible direct recombination steps to the seven-step mechanism accurately reproduces burning velocities of the full detailed mechanism for all equivalence ratios at normal atmospheric conditions and that an eight-step detailed mechanism, constructed from the seven-step mechanism by adding to it the fourth reversible shuffle reaction, improves predictions of O and OH profiles. The new reduced-chemistry descriptions can be useful for both analytical and computational studies of lean hydrogen-air flames, decreasing required computation times. (author)

  19. Stochastic nonlinear time series forecasting using time-delay reservoir computers: performance and universality.

    Science.gov (United States)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2014-07-01

    Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performances in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs that are constructed out of the sampling of the solution of a time-delay differential equation and show their good performance in the forecasting of the conditional covariances associated to multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type as well as in the prediction of factual daily market realized volatilities computed with intraday quotes, using as training input daily log-return series of moderate size. We tackle some problems associated to the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. Copyright © 2014 Elsevier Ltd. All rights reserved.
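
    As background, a minimal classical reservoir computer (an echo state network with a ridge-regression readout) can be written in a few lines of Python. The sketch below uses an invented toy task and hyperparameters; it is not the time-delay reservoir, the VEC-GARCH forecasting setup, or the parallel reservoir arrays studied in the paper.

      import numpy as np

      # Minimal echo state network (a basic reservoir computer) with a ridge
      # regression readout, trained to predict the next sample of a toy series.
      rng = np.random.default_rng(1)
      n_res, spectral_radius, ridge = 200, 0.9, 1e-6

      W_in = rng.uniform(-0.5, 0.5, n_res)
      W = rng.normal(0.0, 1.0, (n_res, n_res))
      W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

      u = np.sin(0.1 * np.arange(3000)) + 0.05 * rng.normal(size=3000)   # toy input
      x, states = np.zeros(n_res), []
      for t in range(len(u) - 1):
          x = np.tanh(W @ x + W_in * u[t])
          states.append(x.copy())

      X = np.array(states[200:])               # drop a washout period
      y = u[201:]                              # one-step-ahead targets
      W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
      nrmse = np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y)
      print("training NRMSE:", nrmse)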

  20. Computational plasticity algorithm for particle dynamics simulations

    Science.gov (United States)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  1. Adaptive Step Size Gradient Ascent ICA Algorithm for Wireless MIMO Systems

    Directory of Open Access Journals (Sweden)

    Zahoor Uddin

    2018-01-01

    Full Text Available Independent component analysis (ICA) is a technique of blind source separation (BSS) used for separation of the mixed received signals. ICA algorithms are classified into adaptive and batch algorithms. Adaptive algorithms perform well in time-varying scenarios but with high computational complexity, while batch algorithms have better separation performance in quasistatic channels with low computational complexity. Amongst batch algorithms, the gradient-based ICA algorithms perform well, but step size selection is critical in these algorithms. In this paper, an adaptive step size gradient ascent ICA (ASS-GAICA) algorithm is presented. The proposed algorithm is free from selection of the step size parameter, with improved convergence and separation performance. Different performance evaluation criteria are used to verify the effectiveness of the proposed algorithm. Performance of the proposed algorithm is compared with the FastICA and optimum block adaptive ICA (OBAICA) algorithms for quasistatic and time-varying wireless channels. Simulation is performed over quadrature amplitude modulation (QAM) and binary phase shift keying (BPSK) signals. Results show that the proposed algorithm outperforms the FastICA and OBAICA algorithms for a wide range of signal-to-noise ratios (SNR) and input data block lengths.

  2. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-01-01

    of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation

  3. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  4. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally-intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  5. Virus Alert: Ten Steps to Safe Computing.

    Science.gov (United States)

    Gunter, Glenda A.

    1997-01-01

    Discusses computer viruses and explains how to detect them; discusses virus protection and the need to update antivirus software; and offers 10 safe computing tips, including scanning floppy disks and commercial software, how to safely download files from the Internet, avoiding pirated software copies, and backing up files. (LRW)

  6. Single-step digital backpropagation for nonlinearity mitigation

    DEFF Research Database (Denmark)

    Secondini, Marco; Rommel, Simon; Meloni, Gianluca

    2015-01-01

    Nonlinearity mitigation based on the enhanced split-step Fourier method (ESSFM) for the implementation of low-complexity digital backpropagation (DBP) is investigated and experimentally demonstrated. After reviewing the main computational aspects of DBP and of the conventional split-step Fourier...... in the computational complexity, power consumption, and latency with respect to a simple feed-forward equalizer for bulk dispersion compensation....

  7. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant relationship between the P-loss predictions of the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  8. The Point Zoro Symmetric Single-Step Procedure for Simultaneous Estimation of Polynomial Zeros

    Directory of Open Access Journals (Sweden)

    Mansor Monsi

    2012-01-01

    Full Text Available The point symmetric single-step procedure (PSS1) has R-order of convergence at least 3. This procedure is modified by adding another single step, which is the third step in PSS1. This modified procedure is called the point zoro symmetric single-step procedure (PZSS1). It is proven that the R-order of convergence of PZSS1 is at least 4, which is higher than the R-order of convergence of PT1, PS1, and PSS1. Hence, computational time is reduced, since this procedure is more efficient for bounding simple zeros simultaneously.
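
    The PZSS1 iteration itself is not given in this abstract. As a rough illustration of what simultaneous estimation of polynomial zeros means, the sketch below uses the classical Weierstrass/Durand-Kerner iteration, which likewise refines all zero approximations at once; it is not the PT1, PS1, PSS1, or PZSS1 procedure, and the starting guesses are the conventional ones for Durand-Kerner.

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=100):
    """Simultaneously estimate all zeros of a polynomial given by 'coeffs'
    (highest degree first) with the Weierstrass/Durand-Kerner iteration."""
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]                 # make the polynomial monic
    n = len(coeffs) - 1
    # Standard starting guesses spread on a circle in the complex plane
    z = 0.4 + 0.9j
    roots = z ** np.arange(1, n + 1)
    for _ in range(max_iter):
        p = np.polyval(coeffs, roots)
        # Product of differences to all *other* current approximations
        diffs = roots[:, None] - roots[None, :]
        np.fill_diagonal(diffs, 1.0)
        corrections = p / diffs.prod(axis=1)
        roots = roots - corrections
        if np.max(np.abs(corrections)) < tol:
            break
    return roots

# Example: zeros of x^3 - 6x^2 + 11x - 6 are 1, 2, 3
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```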

  9. Computation of a long-time evolution in a Schroedinger system

    International Nuclear Information System (INIS)

    Girard, R.; Kroeger, H.; Labelle, P.; Bajzer, Z.

    1988-01-01

    We compare different techniques for the computation of a long-time evolution and the S matrix in a Schroedinger system. As an application we consider a two-nucleon system interacting via the Yamaguchi potential. We suggest computation of the time evolution for a very short time using Pade approximants, the long-time evolution being obtained by iterative squaring. Within the technique of strong approximation of Moller wave operators (SAM) we compare our calculation with computation of the time evolution in the eigenrepresentation of the Hamiltonian and with the standard Lippmann-Schwinger solution for the S matrix. We find numerical agreement between these alternative methods for time-evolution computation up to half the number of digits of internal machine precision, and fairly rapid convergence of both techniques towards the Lippmann-Schwinger solution
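
    A minimal sketch of the short-time/squaring idea described above, under stated assumptions: a small random Hermitian matrix stands in for the two-nucleon Hamiltonian, and scipy.linalg.expm (itself a Padé-based algorithm) plays the role of the Padé approximant for the very short time step; the loop then builds the long-time propagator by repeated squaring.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                  # toy Hermitian "Hamiltonian"

dt = 1e-3                          # very short time step
n_squarings = 12                   # total time T = dt * 2**12

# Short-time propagator via a Pade-type approximation of the exponential
U = expm(-1j * H * dt)

# Long-time evolution by iterative squaring: U(2t) = U(t) @ U(t)
for _ in range(n_squarings):
    U = U @ U

# Compare with the directly exponentiated long-time propagator
U_direct = expm(-1j * H * dt * 2 ** n_squarings)
print(np.max(np.abs(U - U_direct)))   # should be close to machine precision
```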

  10. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Jonás D.; Sen, Mrinal K.

    2010-01-01

    popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM

  11. Sharp Penalty Term and Time Step Bounds for the Interior Penalty Discontinuous Galerkin Method for Linear Hyperbolic Problems

    NARCIS (Netherlands)

    Geevers, Sjoerd; van der Vegt, J.J.W.

    2017-01-01

    We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured

  12. An evaluation of the accuracy of modeled and computed streamflow time-series data for the Ohio River at Hannibal Lock and Dam and at a location upstream from Sardis, Ohio

    Science.gov (United States)

    Koltun, G.F.

    2015-01-01

    Between July 2013 and June 2014, the U.S. Geological Survey (USGS) made 10 streamflow measurements on the Ohio River about 1.5 miles (mi) downstream from the Hannibal Lock and Dam (near Hannibal, Ohio) and 11 streamflow measurements near the USGS Sardis gage (station number 03114306) located approximately 2.4 mi upstream from Sardis, Ohio. The measurement results were used to assess the accuracy of modeled or computed instantaneous streamflow time series created and supplied by the USGS, U.S. Army Corps of Engineers (USACE), and National Weather Service (NWS) for the Ohio River at Hannibal Lock and Dam and (or) at the USGS streamgage. Hydraulic or hydrologic models were used to create the modeled time series; index-velocity methods or gate-opening ratings coupled with hydropower operation data were used to create the computed time series. The time step of the various instantaneous streamflow time series ranged from 15 minutes to 24 hours (once-daily values at 12:00 Coordinated Universal Time [UTC]). The 15-minute time-series data, computed by the USGS for the Sardis gage, also were downsampled to 1-hour and 24-hour time steps to permit more direct comparisons with other streamflow time series.
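
    The downsampling step mentioned at the end of this record can be illustrated with pandas. The series below is synthetic, and whether the coarser series should hold interval means or on-the-hour instantaneous picks is an assumption of this sketch rather than something stated in the abstract.

```python
import numpy as np
import pandas as pd

# Synthetic 15-minute instantaneous streamflow series (not USGS data)
idx = pd.date_range("2013-07-01", "2014-06-30", freq="15min", tz="UTC")
q15 = pd.Series(10000 + 500 * np.sin(np.arange(len(idx)) / 40.0), index=idx)

# 1-hour time step: either average the four 15-minute values in each hour...
q_hourly_mean = q15.resample("1h").mean()
# ...or keep the on-the-hour instantaneous value
q_hourly_inst = q15.resample("1h").first()

# 24-hour time step: once-daily values at 12:00 UTC
q_daily_1200 = q15[(q15.index.hour == 12) & (q15.index.minute == 0)]
```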

  13. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    Science.gov (United States)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only a few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and, subsequently, the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses, official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for

  14. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solving each local sub-problem through very fast linear network programming algorithms, and (c) the substantial

  15. Parallel algorithms and architectures for computational structural mechanics

    Science.gov (United States)

    Patrick, Merrell; Ma, Shing; Mahajan, Umesh

    1989-01-01

    The determination of the fundamental (lowest) natural vibration frequencies and associated mode shapes is a key step used to uncover and correct potential failures or problem areas in most complex structures. However, the computation time taken by finite element codes to evaluate these natural frequencies is significant, often the most computationally intensive part of structural analysis calculations. There is continuing need to reduce this computation time. This study addresses this need by developing methods for parallel computation.

  16. Step-to-step reproducibility and asymmetry to study gait auto-optimization in healthy and cerebral palsied subjects.

    Science.gov (United States)

    Descatoire, A; Femery, V; Potdevin, F; Moretto, P

    2009-05-01

    The purpose of our study was to compare plantar pressure asymmetry and step-to-step reproducibility in both able-bodied persons and two groups of hemiplegics. The relevance of the research was to determine the efficiency of asymmetry and reproducibility as indexes for diagnosis and rehabilitation processes. This study comprised 31 healthy young subjects and 20 young subjects suffering from cerebral palsy hemiplegia assigned to two groups of 10 subjects according to the severity of their musculoskeletal disorders. The peaks of plantar pressure and the time to peak pressure were recorded with an in-shoe measurement system. The intra-individual coefficient of variability was calculated to indicate the consistency of plantar pressure during walking and to define gait stability. The effect size was computed to quantify the asymmetry and measurements were conducted at eight footprint locations. Results indicated few differences in step-to-step reproducibility between the healthy group and the less spastic group while the most affected group showed a more asymmetrical and unstable gait. From the concept of self-optimisation and depending on the neuromotor disorders the organism could make priorities based on pain, mobility, stability or energy expenditure to develop the best gait auto-optimisation.

  17. Imprecise results: Utilizing partial computations in real-time systems

    Science.gov (United States)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project is described which supports imprecise computations using these techniques. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
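
    A minimal sketch of the milestone approach on an iterative computation (the sieve approach and the Concord system mechanisms are not shown): the routine records its best result after every iteration and returns the last milestone, flagged as imprecise, if the deadline arrives first. The function name and the 1 ms budget are illustrative.

```python
import time

def iterative_sqrt(x, deadline, max_iters=1_000_000):
    """Newton iteration for sqrt(x) that records a milestone after every
    step and returns the last milestone if the deadline is reached."""
    milestone = x / 2.0                        # initial (imprecise) result
    for _ in range(max_iters):
        if time.monotonic() >= deadline:
            return milestone, False            # imprecise result, deadline hit
        milestone = 0.5 * (milestone + x / milestone)
    return milestone, True                     # precise (converged) result

deadline = time.monotonic() + 0.001            # 1 ms budget
value, precise = iterative_sqrt(2.0, deadline)
print(value, "precise" if precise else "imprecise (deadline reached)")
```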

  18. Real-Time Thevenin Impedance Computation

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Jóhannsson, Hjörtur

    2013-01-01

    operating state, and strict time constraints are difficult to adhere to as the complexity of the grid increases. Several suggested approaches for real-time stability assessment require Thevenin impedances to be determined for the observed system conditions. By combining matrix factorization, graph reduction......, and parallelization, we develop an algorithm for computing Thevenin impedances an order of magnitude faster than previous approaches. We test the factor-and-solve algorithm with data from several power grids of varying complexity, and we show how the algorithm allows real-time stability assessment of complex power...
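
    The graph reduction and parallelization that give the paper its speed-up are not reproduced here; the sketch below only illustrates the factor-and-solve idea on an invented 3-bus admittance matrix: factor the sparse matrix once, then recover each bus's Thevenin impedance as a diagonal entry of the inverse with one cheap solve per bus.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Invented 3-bus admittance matrix (per unit), purely for illustration
Y = csc_matrix(np.array([[10.5 - 31j, -5 + 15j, -5 + 15j],
                         [-5 + 15j, 10.5 - 31j, -5 + 15j],
                         [-5 + 15j, -5 + 15j, 10.5 - 31j]]))

lu = splu(Y)                           # factor once (the expensive step)

n = Y.shape[0]
z_thevenin = np.empty(n, dtype=complex)
for k in range(n):                     # one cheap solve per bus
    e_k = np.zeros(n, dtype=complex)
    e_k[k] = 1.0
    z_thevenin[k] = lu.solve(e_k)[k]   # k-th diagonal entry of inv(Y)

print(z_thevenin)
```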

  19. How many steps/day are enough? for adults

    Directory of Open Access Journals (Sweden)

    Rowe David A

    2011-07-01

    Full Text Available Abstract Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. Computed steps/day translations of time in

  20. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
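
    The report develops its own family of third-order Runge-Kutta methods; as a generic illustration of what a third-order RK step looks like, the sketch below applies the classical Kutta third-order scheme to a simple linear test equation. It is not one of the report's five derived methods.

```python
import numpy as np

def rk3_step(f, t, y, h):
    """One step of the classical (Kutta) third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + (h / 6.0) * (k1 + 4.0 * k2 + k3)

# Linear test problem y' = lam * y with a moderately stiff eigenvalue
lam = -50.0
f = lambda t, y: lam * y

h, y, t = 0.01, 1.0, 0.0
for _ in range(100):                       # integrate to t = 1
    y = rk3_step(f, t, y, h)
    t += h

print(y, np.exp(lam * t))                  # numerical vs exact solution
```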

  1. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.

  2. An appraisal of computational techniques for transient heat conduction equation

    International Nuclear Information System (INIS)

    Kant, T.

    1983-01-01

    A semi-discretization procedure in which the ''space'' dimension is discretized by the finite element method is emphasized for transient problems. This standard methodology transforms the space-time partial differential equation (PDE) system into a set of ordinary differential equations (ODE) in time. Existing methods for transient heat conduction calculations are then reviewed. The existence of two general classes of time integration schemes - implicit and explicit - is noted. The numerical stability characteristics of these two methods are elucidated. Implicit methods are noted to be numerically stable, permitting large time steps, but the cost per step is high. On the other hand, explicit schemes are noted to be inexpensive per step, but a small step size is required. The low computational cost of explicit schemes makes them very attractive for nonlinear problems. However, numerical stability considerations requiring the use of very small time steps stand in the way of their general adoption. The effectiveness of the fourth-order Runge-Kutta-Gill explicit integrator is then numerically evaluated. Finally, we discuss some very recent work on the development of computational algorithms which not only achieve unconditional stability, high accuracy and convergence but involve computations on matrix equations of elements only. This development is considered to be very significant in the light of our experience gained for simple heat conduction calculations. We conclude that such algorithms have the potential for further developments leading to economical methods for general transient analysis of complex physical systems. (orig.)
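
    The implicit/explicit trade-off described above can be seen on a toy problem. The sketch uses a 1D finite-difference discretization rather than finite elements, purely for brevity: explicit Euler is cheap per step but bound by the stability limit Δt ≤ Δx²/(2α), while backward Euler takes much larger steps at the price of a linear solve per step.

```python
import numpy as np

n, L, alpha = 50, 1.0, 1.0
dx = L / (n + 1)
x = np.linspace(dx, L - dx, n)
u0 = np.sin(np.pi * x)                  # initial temperature profile

# Second-difference (1D Laplacian) matrix with fixed (zero) end temperatures
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2

def explicit_euler(u, dt, steps):
    for _ in range(steps):
        u = u + dt * alpha * (A @ u)    # cheap per step, needs dt <= dx^2/(2*alpha)
    return u

def implicit_euler(u, dt, steps):
    M = np.eye(n) - dt * alpha * A      # unconditionally stable, one solve per step
    for _ in range(steps):
        u = np.linalg.solve(M, u)
    return u

dt_crit = dx**2 / (2 * alpha)
print("critical explicit step:", dt_crit)
u_exp = explicit_euler(u0.copy(), 0.9 * dt_crit, 200)   # stable
u_imp = implicit_euler(u0.copy(), 50 * dt_crit, 4)      # large steps, still stable
```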

  3. Step-by-Step Simulation of Radiation Chemistry Using Green Functions for Diffusion-Influenced Reactions

    Science.gov (United States)

    Plante, Ianik; Cucinotta, Francis A.

    2011-01-01

    Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and chemically react with other radiolytic species and neighboring biological molecules, leading to various forms of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance to understand how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult, because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators of these Green functions [5], which will allow us to use them in radiation chemistry codes. Moreover, simulating chemistry using the Green functions is computationally very demanding, because the probabilities of reactions between each pair of particles should be evaluated at each time step [2]. This kind of problem is well adapted to General Purpose Graphics Processing Units (GPGPU), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes, and to improve the calculation time. This code should be of importance to link radiation track structure simulations and DNA damage models.

  4. Spying on real-time computers to improve performance

    International Nuclear Information System (INIS)

    Taff, L.M.

    1975-01-01

    The sampled program-counter histogram, an established technique for shortening the execution times of programs, is described for a real-time computer. The use of a real-time clock allows particularly easy implementation. (Auth.)

  5. Multivariate statistical analysis of a multi-step industrial processes

    DEFF Research Database (Denmark)

    Reinikainen, S.P.; Høskuldsson, Agnar

    2007-01-01

    Monitoring and quality control of industrial processes often produce information on how the data have been obtained. In batch processes, for instance, the process is carried out in stages; some process or control parameters are set at each stage. However, the obtained data might not be utilized...... efficiently, even if this information may reveal significant knowledge about process dynamics or ongoing phenomena. When studying the process data, it may be important to analyse the data in the light of the physical or time-wise development of each process step. In this paper, a unified approach to analyse...... multivariate multi-step processes, where results from each step are used to evaluate future results, is presented. The methods presented are based on Priority PLS Regression. The basic idea is to compute the weights in the regression analysis for given steps, but adjust all data by the resulting score vectors...

  6. Time step size limitation introduced by the BSSN Gamma Driver

    Energy Technology Data Exchange (ETDEWEB)

    Schnetter, Erik, E-mail: schnetter@cct.lsu.ed [Department of Physics and Astronomy, Louisiana State University, LA (United States)

    2010-08-21

    Many mesh refinement simulations currently performed in numerical relativity counteract instabilities near the outer boundary of the simulation domain either by changes to the mesh refinement scheme or by changes to the gauge condition. We point out that the BSSN Gamma Driver gauge condition introduces a time step size limitation in a similar manner as a Courant-Friedrichs-Lewy condition, but which is independent of the spatial resolution. We give a didactic explanation of this issue, show why, especially, mesh refinement simulations suffer from it, and point to a simple remedy. (note)

  7. DC Motor Parameter Identification Using Speed Step Responses

    Directory of Open Access Journals (Sweden)

    Wei Wu

    2012-01-01

    Full Text Available Based on the DC motor speed response measurement under a step voltage input, important motor parameters such as the electrical time constant, the mechanical time constant, and the friction can be estimated. A power series expansion of the motor speed response is presented, whose coefficients are related to the motor parameters. These coefficients can be easily computed using existing curve-fitting methods. Experimental results are presented to demonstrate the application of this approach. In these experiments, the approach was readily implemented and gave more accurate estimates than conventional methods.
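
    Only the curve-fitting step is sketched here: a low-order power series is fitted to the early part of a (synthetic) speed step response by ordinary least squares. The mapping from the fitted coefficients back to the electrical and mechanical time constants and the friction follows the paper and is not reproduced; the response model, time constants, and noise level below are invented for illustration.

```python
import numpy as np

# Synthetic "measured" speed response to a step voltage (illustrative only):
# a second-order rise with time constants tau_e << tau_m plus noise.
tau_e, tau_m, w_final = 0.01, 0.1, 100.0
t = np.linspace(0, 0.05, 200)
w = w_final * (1 - (tau_m * np.exp(-t / tau_m) - tau_e * np.exp(-t / tau_e))
               / (tau_m - tau_e))
w += np.random.default_rng(0).normal(0, 0.2, t.size)

# Fit a power series w(t) ~ c1*t + c2*t^2 + c3*t^3 over the early response.
# Omitting a constant term enforces w(0) = 0 for a step started at rest.
X = np.column_stack([t, t**2, t**3])
c, *_ = np.linalg.lstsq(X, w, rcond=None)
print("fitted power-series coefficients:", c)
```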

  8. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    [OCR-garbled report front matter; recoverable details: Naval Underwater Systems Center, Newport, RI; technical document TD 5932, "A Distributed Computing Network for Real-Time Systems," November 1980.]

  9. Variable Neighborhood Search for Parallel Machines Scheduling Problem with Step Deteriorating Jobs

    Directory of Open Access Journals (Sweden)

    Wenming Cheng

    2012-01-01

    Full Text Available In many real scheduling environments, a job processed later needs a longer time than the same job started earlier. This phenomenon, known as scheduling with deteriorating jobs, arises in many industrial applications. In this paper, we study a scheduling problem of minimizing the total completion time on identical parallel machines where the processing time of a job is a step function of its starting time and a deteriorating date that is individual to each job. First, a mixed integer programming model is presented for the problem. Then, a modified weight-combination search algorithm and a variable neighborhood search are employed to yield optimal or near-optimal schedules. To evaluate the performance of the proposed algorithms, computational experiments are performed on randomly generated test instances. Finally, computational results show that the proposed approaches obtain near-optimal solutions in a reasonable computational time even for large-sized problems.
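
    Neither the mixed integer programming model nor the two metaheuristics is reproduced here; the sketch below only evaluates the objective for a given assignment, using the step processing-time rule the abstract describes (a job takes its basic time if it starts no later than its deteriorating date, and a penalized time otherwise). The instance data and function name are illustrative.

```python
def total_completion_time(machine_jobs, basic, penalty, deadline):
    """Total completion time on identical parallel machines, where job j
    takes basic[j] if it starts no later than deadline[j], and
    basic[j] + penalty[j] otherwise (step-deteriorating processing times)."""
    total = 0.0
    for jobs in machine_jobs:              # one sequenced job list per machine
        t = 0.0
        for j in jobs:
            p = basic[j] if t <= deadline[j] else basic[j] + penalty[j]
            t += p
            total += t
    return total

# Illustrative instance: 4 jobs on 2 machines
basic    = {0: 3.0, 1: 2.0, 2: 4.0, 3: 1.0}
penalty  = {0: 2.0, 1: 1.0, 2: 3.0, 3: 0.5}
deadline = {0: 1.0, 1: 5.0, 2: 0.0, 3: 2.0}
schedule = [[0, 1], [2, 3]]                # jobs assigned and sequenced per machine
print(total_completion_time(schedule, basic, penalty, deadline))
```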

  10. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    Science.gov (United States)

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination ($R^2$) of the return map, and the step time variabilities were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had a higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
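
    Two of the reported quantities can be computed in a few lines; the sketch below does so for an invented step-time series: the intra-individual coefficient of variation, and the coefficient of determination $R^2$ of the return map obtained by regressing step time n+1 on step time n. The exact definitions used in the paper may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
step_times = 0.55 + 0.02 * rng.standard_normal(200)   # synthetic step times (s)

# Intra-individual coefficient of variation (%)
cv = 100.0 * step_times.std(ddof=1) / step_times.mean()

# Return map: regress step time n+1 on step time n and take R^2
x, y = step_times[:-1], step_times[1:]
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r2 = 1.0 - residuals.var() / y.var()

print(f"CV = {cv:.2f}%, return-map R^2 = {r2:.3f}")
```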

  11. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.

  12. 43 CFR 45.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false How are time periods computed? 45.3... IN FERC HYDROPOWER LICENSES General Provisions § 45.3 How are time periods computed? (a) General... run is not included. (2) The last day of the period is included. (i) If that day is a Saturday, Sunday...
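
    A sketch of the counting rule excerpted above, under stated assumptions: the day that triggers the period is excluded, the last day is included, and a last day falling on a Saturday or Sunday rolls forward; the regulation's treatment of Federal holidays is truncated in this record and ignored in the sketch.

```python
from datetime import date, timedelta

def period_end(trigger_day: date, days: int) -> date:
    """Last day of a period of 'days' days: the trigger day itself is not
    counted, the last day is counted, and a last day that falls on a
    Saturday or Sunday rolls forward to the next weekday.
    (Federal holidays are ignored in this sketch.)"""
    end = trigger_day + timedelta(days=days)   # excludes the trigger day
    while end.weekday() >= 5:                  # 5 = Saturday, 6 = Sunday
        end += timedelta(days=1)
    return end

# A 30-day period triggered on 1 October 2010
print(period_end(date(2010, 10, 1), 30))       # 2010-10-31 is a Sunday -> 2010-11-01
```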

  13. Development and evaluation of a real-time one step Reverse-Transcriptase PCR for quantitation of Chandipura Virus

    Directory of Open Access Journals (Sweden)

    Tandale Babasaheb V

    2008-12-01

    Full Text Available Abstract Background Chandipura virus (CHPV), a member of the family Rhabdoviridae, was attributed to an explosive outbreak of acute encephalitis in children in Andhra Pradesh, India in 2003 and a small outbreak among tribal children from Gujarat, Western India in 2004. The case-fatality rate ranged from 55–75%. Considering the rapid progression of the disease and high mortality, a highly sensitive method for quantifying CHPV RNA by real-time one step reverse transcriptase PCR (real-time one step RT-PCR) using TaqMan technology was developed for rapid diagnosis. Methods Primers and a probe for the P gene were designed and used to standardize the real-time one step RT-PCR assay for CHPV RNA quantitation. Standard RNA was prepared by PCR amplification, TA cloning and run-off transcription. The optimized real-time one step RT-PCR assay was compared with the diagnostic nested RT-PCR and different virus isolation systems [in vivo (mice), in ovo (eggs), in vitro (Vero E6, PS, RD and sand fly cell lines)] for the detection of CHPV. Sensitivity and specificity of the real-time one step RT-PCR assay were evaluated against the diagnostic nested RT-PCR, which is considered the gold standard. Results Real-time one step RT-PCR was optimized using in vitro transcribed (IVT) RNA. The standard curve showed a linear relationship over the wide range of 10²–10¹⁰ (r² = 0.99), with a maximum coefficient of variation (CV) of 5.91% for IVT RNA. The newly developed real-time RT-PCR was on par with nested RT-PCR in sensitivity and superior to the cell lines and other living systems (embryonated eggs and infant mice) used for the isolation of the virus. The detection limit of both real-time one step RT-PCR and nested RT-PCR was found to be 1.2 × 10⁰ PFU/ml. RD cells, sand fly cells, infant mice, and embryonated eggs showed almost equal sensitivity (1.2 × 10² PFU/ml). Vero and PS cell lines (1.2 × 10³ PFU/ml) were least sensitive to CHPV infection. Specificity of the assay was found to be 100% when RNA from other viruses or healthy

  14. Development of a computer-controlled tensiometer for real-time measurements of tension in tubular organs.

    Science.gov (United States)

    Gregersen, H; Barlow, J; Thompson, D

    1999-04-01

    A computer-controlled tensiometer for studying wall tension in tubular organs has been developed. The system consisted of a probe with an inflatable balloon, an impedance planimeter, pressure transducer and amplifier, a pump with RS232 interface and a PC with dedicated software. Circumferential wall tension was computed in real time from pressure and cross-sectional area measurements (tension measurement mode). Wall tension can be maintained on a preset level or be changed as a step or ramp function by a feedback control of the infusion/withdrawal pump (tension control mode). A software regulator adjusted the volume rate (low volume rate when the computed tension was close to the preset value) to minimize overshoot and oscillation. Validation tests were performed and the technique was applied in the human oesophagus. Volume- and tension-controlled balloon distensions elicited secondary peristalsis of increasing intensity that was decreased significantly by the antimuscarinic agent Hyoscine butyl bromide. In tension control mode Hyoscine butyl bromide caused oesophageal relaxation, i.e. CSA to increase and pressure to decay. Furthermore, pronounced pressure relaxation and tension relaxation were observed during volume-controlled distension after administration of Hyoscine butyl bromide.
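
    The abstract does not give the tension formula; a common choice for a thin-walled tubular organ, assumed in this sketch, is Laplace's law T = P·r with the radius recovered from the measured cross-sectional area. Units and example values are illustrative, not taken from the paper.

```python
import math

def circumferential_tension(pressure_kpa: float, csa_mm2: float) -> float:
    """Circumferential wall tension (N/m) of a thin-walled cylinder from
    luminal pressure and cross-sectional area, assuming Laplace's law
    T = P * r with r = sqrt(CSA / pi). Units: kPa and mm^2 in, N/m out."""
    radius_m = math.sqrt(csa_mm2 * 1e-6 / math.pi)
    return pressure_kpa * 1e3 * radius_m

# Example: 3 kPa balloon pressure at a cross-sectional area of 400 mm^2
print(f"{circumferential_tension(3.0, 400.0):.1f} N/m")
```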

  15. Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng

    2014-04-01

    A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.

  16. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    Science.gov (United States)

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.

  17. Relativistic Photoionization Computations with the Time Dependent Dirac Equation

    Science.gov (United States)

    2016-10-12

    [Report front matter garbled during extraction; recoverable details: Naval Research Laboratory, Washington, DC 20375-5320; report NRL/MR/6795--16-9698, "Relativistic Photoionization Computations with the Time Dependent Dirac Equation," by Daniel F. Gordon and Bahman Hafizi. The surviving abstract fragment reads: "Tunneling Photoionization Ionization of inner shell electrons by laser ...".]

  18. Stepping reaction time and gait adaptability are significantly impaired in people with Parkinson's disease: Implications for fall risk.

    Science.gov (United States)

    Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C

    2018-02-01

    Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Strong coupling in electromechanical computation

    CERN Document Server

    Fuezi, J

    2000-01-01

    A method is presented to carry out simultaneously electromagnetic field and force computation, electrical circuit analysis and mechanical computation to simulate the dynamic operation of electromagnetic actuators. The equation system is solved by a predictor-corrector scheme containing a Powell error minimization algorithm which ensures that every differential equation (coil current, field strength rate, flux rate, speed of the keeper) is fulfilled within the same time step.

  20. Strong coupling in electromechanical computation

    Energy Technology Data Exchange (ETDEWEB)

    Fuezi, J. E-mail: fuzi@leda.unitbv.ro; fuzi@evtsz.bme.hu

    2000-06-02

    A method is presented to carry out simultaneously electromagnetic field and force computation, electrical circuit analysis and mechanical computation to simulate the dynamic operation of electromagnetic actuators. The equation system is solved by a predictor-corrector scheme containing a Powell error minimization algorithm which ensures that every differential equation (coil current, field strength rate, flux rate, speed of the keeper) is fulfilled within the same time step.

  1. Cloud Computing: A model Construct of Real-Time Monitoring for Big Dataset Analytics Using Apache Spark

    Science.gov (United States)

    Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer

    2018-01-01

    The volume of data being collected, analyzed, and stored has exploded in recent years, in particular in relation to activity in cloud computing. Large-scale data processing, analysis, and storage platforms such as cloud computing have become increasingly common. Today, the major challenge is to address how to monitor and control these massive amounts of data and perform analysis in real time at scale. Traditional methods and systems are unable to cope with these quantities of data in real time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms and Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during fine-grained monitoring and yields the most effective and cost-saving fault repair through three control steps: (I) data collection; (II) an analysis engine; and (III) a decision engine. We found that running this methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.

  2. Computing closest saddle node bifurcations in a radial system via conic programming

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon); Pal, B.C. [Department of Electrical and Electronic Engineering, Imperial College London, SW7 2BT (United Kingdom)

    2009-07-15

    This paper considers the problem of computing the loading limits in a radial system which are (i) locally closest to current operating load powers and (ii) at which saddle node bifurcation occurs. The procedure is based on a known technique which requires iterating between two computational steps until convergence. In essence, step 1 produces a vector normal to the real and/or reactive load solution space boundary, whereas step 2 computes the bifurcation point along that vector. The paper shows that each of the above computational steps can be formulated as a second-order cone program for which polynomial time interior-point methods and efficient implementations exist. The proposed conic programming approach is used to compute the closest bifurcation points and the corresponding worst case load power margins of eleven different distribution systems. The approach is validated graphically and the existence of multiple load power margins is investigated. (author)

  3. Numerical analysis of resonances induced by s wave neutrons in transmission time-of-flight experiments with a computer IBM 7094 II

    International Nuclear Information System (INIS)

    Corge, Ch.

    1969-01-01

    Numerical analysis of transmission resonances induced by s wave neutrons in time-of-flight experiments can be achieved in a fairly automatic way on an IBM 7094/II computer. The involved computations are carried out following a four-step scheme: 1 - experimental raw data are processed to obtain the resonant transmissions; 2 - values of experimental quantities for each resonance are derived from the above transmissions; 3 - resonance parameters are determined using a least-squares method to solve the overdetermined system obtained by equating theoretical functions to the corresponding experimental values (four analysis methods are gathered in the same code); 4 - graphical control of the results is performed. (author) [fr]

  4. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks to mainly solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set a complete, scalable compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.

  5. Computer-controlled neutron time-of-flight spectrometer. Part II

    International Nuclear Information System (INIS)

    Merriman, S.H.

    1979-12-01

    A time-of-flight spectrometer for neutron inelastic scattering research has been interfaced to a PDP-15/30 computer. The computer is used for experimental data acquisition and analysis and for apparatus control. This report was prepared to summarize the functions of the computer and to act as a users' guide to the software system

  6. Computational scheme for transient temperature distribution in PWR vessel wall

    International Nuclear Information System (INIS)

    Dedovic, S.; Ristic, P.

    1980-01-01

    Computer code TEMPNES is part of a joint effort made at Gosa Industries to establish techniques for the structural analysis of heavy pressure vessels. The analysis of transient heat conduction problems is based on a finite element discretization of the structure, a non-linear transient matrix formulation, and the step-by-step time integration scheme developed by Wilson. Convection boundary conditions and the effect of heat generation due to radioactive radiation are both considered. The computation of transient temperature distributions in the reactor vessel wall when the water temperature suddenly drops as a consequence of a reactor cooling pump failure is presented. The vessel is treated as an axisymmetric body of revolution. The program has two time-increment options: a) a fixed, predetermined increment, and b) an automatically optimized time increment for each step, dependent on the rate of change of the nodal temperatures. (author)

  7. One-Step Leapfrog LOD-BOR-FDTD Algorithm with CPML Implementation

    Directory of Open Access Journals (Sweden)

    Yi-Gang Wang

    2016-01-01

    Full Text Available An unconditionally stable one-step leapfrog locally one-dimensional finite-difference time-domain (LOD-FDTD) algorithm towards body of revolution (BOR) is presented. The equations of the proposed algorithm are obtained by the algebraic manipulation of those used in the conventional LOD-BOR-FDTD algorithm. The equations for z-direction electric and magnetic fields in the proposed algorithm should be treated specially. The new algorithm obtains a higher computational efficiency while preserving the properties of the conventional LOD-BOR-FDTD algorithm. Moreover, the convolutional perfectly matched layer (CPML) is introduced into the one-step leapfrog LOD-BOR-FDTD algorithm. The equation of the one-step leapfrog CPML is concise. Numerical results show that its reflection error is small. It can be concluded that the similar CPML scheme can also be easily applied to the one-step leapfrog LOD-FDTD algorithm in the Cartesian coordinate system.

  8. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  9. Heterogeneous real-time computing in radio astronomy

    Science.gov (United States)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous x86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphics Processing Units (GPUs). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPUs, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.

  10. Ubiquitous computing technology for just-in-time motivation of behavior change.

    Science.gov (United States)

    Intille, Stephen S

    2004-01-01

    This paper describes a vision of health care where "just-in-time" user interfaces are used to transform people from passive to active consumers of health care. Systems that use computational pattern recognition to detect points of decision, behavior, or consequences automatically can present motivational messages to encourage healthy behavior at just the right time. Further, new ubiquitous computing and mobile computing devices permit information to be conveyed to users at just the right place. In combination, computer systems that present messages at the right time and place can be developed to motivate physical activity and healthy eating. Computational sensing technologies can also be used to measure the impact of the motivational technology on behavior.

  11. Replacement strategy for obsolete plant computers

    International Nuclear Information System (INIS)

    Schaefer, J.P.

    1985-01-01

    The plant computers of the first generation of larger nuclear power plants are reaching the end of their useful lifetime with respect to the hardware. The software alone would be no reason for a system exchange, but new tasks for the supervisory computer system, availability questions concerning maintenance personnel and spare parts, and the demand for improved operating procedures for the computer users have stimulated consideration of how to exchange a computer system in a nuclear power plant without extending plant outage times due to the exchange work. In the Federal Republic of Germany the planning phase of such backfitting projects is well under way, and some projects are about to be implemented. The basis for these backfitting projects is a modular supervisory computer concept which has been designated for the new line of KWU PWRs. The main characteristic of this computer system is the splitting of the system into a data acquisition level and a data processing level. This principle allows an extension of the processing level or even repeated replacements of the processing computers. With the existing computer system still in operation, the new system can be installed in a step-by-step procedure. As soon as the first of the redundant process computers of the data processing level is in operation and the data link to the data acquisition computers is established, the old computer system can be taken out of service. Then the back-up processing computer can be commissioned to complete the new system. (author)

  12. Picture processing computer to control movement by computer provided vision

    Energy Technology Data Exchange (ETDEWEB)

    Graefe, V

    1983-01-01

    The author introduces a multiprocessor system which has been specially developed to enable mechanical devices to interpret pictures presented in real time. The separate processors within this system operate simultaneously and independently. By means of freely moveable windows the processors can concentrate on those parts of the picture that are relevant to the control problem. If a machine is to make a correct response to its observation of a picture of moving objects, it must be able to follow the picture sequence, step by step, in real time. As the usual serially operating processors are too slow for such a task, the author describes three models of a special picture processing computer which it has been necessary to develop. 3 references.

  13. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    Science.gov (United States)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

    An important ingredient in numerical modelling of high-temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms will considerably reduce the simulation time step due to its dependence on the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10⁴ processors.
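
    The stability constraint mentioned above can be made concrete with a short sketch. The Python fragment below (illustrative only; it implements plain forward-Euler diffusion, not the RKL scheme of the record) shows how the explicit parabolic time step shrinks with the square of the grid spacing, which is exactly the cost that super-time-stepping methods are designed to reduce.

```python
import numpy as np

def explicit_diffusion(u, kappa, dx, t_end):
    """Forward-Euler heat diffusion on a 1-D periodic grid.

    The explicit scheme is stable only for dt <= dx**2 / (2 * kappa),
    so halving dx roughly quadruples the number of time steps.
    """
    dt = 0.5 * dx**2 / (2.0 * kappa)          # half the stability limit, for margin
    n_steps = int(np.ceil(t_end / dt))
    for _ in range(n_steps):
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        u = u + dt * kappa * lap
    return u, n_steps

if __name__ == "__main__":
    for nx in (64, 128, 256):
        dx = 1.0 / nx
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        _, n = explicit_diffusion(np.sin(2 * np.pi * x), kappa=1.0, dx=dx, t_end=0.01)
        print(nx, n)   # step count grows roughly fourfold per grid refinement
```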

  14. Continuous-Time Symmetric Hopfield Nets are Computationally Universal

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Vol. 15, No. 3 (2003), pp. 693-733 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords: continuous-time Hopfield network * Liapunov function * analog computation * computational power * Turing universality Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  15. Microsoft Office professional 2010 step by step

    CERN Document Server

    Cox, Joyce; Frye, Curtis

    2011-01-01

    Teach yourself exactly what you need to know about using Office Professional 2010-one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: create attractive documents, publications, and spreadsheets; manage your e-mail, calendar, meetings, and communications; put your business data to work; develop and deliver great presentations; organize your ideas and notes in one place; connect, share, and accom

  16. The enhancement of time-stepping procedures in SYVAC A/C

    International Nuclear Information System (INIS)

    Broyd, T.W.

    1986-01-01

    This report summarises the work carried out on SYVAC A/C between February and May 1985 aimed at improving the way in which time-stepping procedures are handled. The majority of the work was concerned with three types of problem, viz: i) Long vault release, short geosphere response ii) Short vault release, long geosphere response iii) Short vault release, short geosphere response The report contains details of changes to the logic and structure of SYVAC A/C, as well as the results of code implementation tests. It has been written primarily for members of the UK SYVAC development team, and should not be used or referred to in isolation. (author)

  17. Performance of an attention-demanding task during treadmill walking shifts the noise qualities of step-to-step variation in step width.

    Science.gov (United States)

    Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R

    2018-06-01

    The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which the fractal scaling of step width, step width, and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials, a control trial of undisturbed walking and a trial during which they performed a mental arithmetic/texting task. Motion capture data were converted to step width time series, the fractal scaling of which was determined from their power spectra. Fractal scaling decreased by 22% during the texting condition. Step width and step width variability increased by 19% and five percent, respectively. The change of the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward the understanding of the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, the sensitivity of the results to progressive increases in cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    Science.gov (United States)

    2015-09-13

    Topics reported include: ST-SI thermo-fluid analysis of a ground vehicle and its tires; ST-SI computational analysis of a vertical-axis wind turbine; multiscale compressible-flow computation with particle tracking; and space-time VMS computation of wind-turbine rotor and tower aerodynamics. Contributors named in the report include Tezduyar, Spenser McIntyre, Nikolay Kostov, Ryan Kolesar and Casey Habluetzel.

  19. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    Science.gov (United States)

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.

  20. Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.

    Science.gov (United States)

    van den Tillaar, Roland; Gamble, Paul

    2018-03-26

    This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, contact and flight times of each step were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered for each step under the resisted and assisted sprint conditions (η² ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost each step were different between 'fast' vs. 'slow' sub-groups (η² ≥ 0.22). Nevertheless overall both groups responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective condition. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with possibilities of direct feedback.

  1. Random Walks with Anti-Correlated Steps

    OpenAIRE

    Wagner, Dirk; Noga, John

    2005-01-01

    We conjecture the expected value of random walks with anti-correlated steps to be exactly 1. We support this conjecture with 2 plausibility arguments and experimental data. The experimental analysis includes the computation of the expected values of random walks for steps up to 22. The result shows the expected value asymptotically converging to 1.

  2. Steps of Supercritical Fluid Extraction of Natural Products and Their Characteristic Times

    OpenAIRE

    Sovová, H. (Helena)

    2012-01-01

    Kinetics of supercritical fluid extraction (SFE) from plants is variable due to different micro-structure of plants and their parts, different properties of extracted substances and solvents, and different flow patterns in the extractor. Variety of published mathematical models for SFE of natural products corresponds to this diversification. This study presents simplified equations of extraction curves in terms of characteristic times of four single extraction steps: internal diffusion, exter...

  3. A one-step, real-time PCR assay for rapid detection of rhinovirus.

    Science.gov (United States)

    Do, Duc H; Laus, Stella; Leber, Amy; Marcon, Mario J; Jordan, Jeanne A; Martin, Judith M; Wadowsky, Robert M

    2010-01-01

    One-step, real-time PCR assays for rhinovirus have been developed for a limited number of PCR amplification platforms and chemistries, and some exhibit cross-reactivity with genetically similar enteroviruses. We developed a one-step, real-time PCR assay for rhinovirus by using a sequence detection system (Applied Biosystems; Foster City, CA). The primers were designed to amplify a 120-base target in the noncoding region of picornavirus RNA, and a TaqMan (Applied Biosystems) degenerate probe was designed for the specific detection of rhinovirus amplicons. The PCR assay had no cross-reactivity with a panel of 76 nontarget nucleic acids, which included RNAs from 43 enterovirus strains. Excellent lower limits of detection relative to viral culture were observed for the PCR assay by using 38 of 40 rhinovirus reference strains representing different serotypes, which could reproducibly detect rhinovirus serotype 2 in viral transport medium containing 10 to 10,000 TCID(50) (50% tissue culture infectious dose endpoint) units/ml of the virus. However, for rhinovirus serotypes 59 and 69, the PCR assay was less sensitive than culture. Testing of 48 clinical specimens from children with cold-like illnesses for rhinovirus by the PCR and culture assays yielded detection rates of 16.7% and 6.3%, respectively. For a batch of 10 specimens, the entire assay was completed in 4.5 hours. This real-time PCR assay enables detection of many rhinovirus serotypes with the Applied Biosystems reagent-instrument platform.

  4. Non-Causal Computation

    Directory of Open Access Journals (Sweden)

    Ämin Baumeler

    2017-07-01

    Computation models such as circuits describe sequences of computation steps that are carried out one after the other. In other words, algorithm design is traditionally subject to the restriction imposed by a fixed causal order. We address a novel computing paradigm beyond quantum computing, replacing this assumption by mere logical consistency: We study non-causal circuits, where a fixed time structure within a gate is locally assumed whilst the global causal structure between the gates is dropped. We present examples of logically consistent non-causal circuits outperforming all causal ones; they imply that suppressing loops entirely is more restrictive than just avoiding the contradictions they can give rise to. That fact is already known for correlations as well as for communication, and we here extend it to computation.

  5. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    International Nuclear Information System (INIS)

    Finn, John M.

    2015-01-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012

  6. Practical Steps toward Computational Unification: Helpful Perspectives for New Systems, Adding Functionality to Existing Ones

    Science.gov (United States)

    Troy, R. M.

    2005-12-01

    With ever increasing amounts of Earth-Science funding being diverted to the war in Iraq, the Earth-Science community must now more than ever wring every bit of utility out of every dollar. We're not likely to get funded any projects perceived by others as "pie in the sky", so we have to look at already funded programs within our community and directing new programs in a unifying direction. We have not yet begun the transition to a computationally unifying, general-purpose Earth Science computing paradigm, though it was proposed at the Fall 2002 AGU meeting in San Francisco, and perhaps earlier. Encouragingly, we do see a recognition that more commonality is needed as various projects have as funded goals the addition of the processing and dissemination of new datatypes, or data-sets, if you prefer, to their existing repertoires. Unfortunately, the timelines projected for adding a datatype to an existing system are typically estimated at around two years each. Further, many organizations have the perception that they can only use their dollars to support exclusively their own needs as they don't have the money to support the goals of others, thus overlooking opportunities to satisfy their own needs while at the same time aiding the creation of a global GeoScience cyber-infrastructure. While Computational Unification appears to be an unfunded, impossible dream, at least for now, individual projects can take steps that are compatible with a unified community and can help build one over time. This session explores these opportunities. The author will discuss the issues surrounding this topic, outlining alternative perspectives on the points of difficulty, and proposing straight-forward solutions which every Earth Science data processing system should consider. Sub-topics include distributed meta-data, distributed processing, distributed data objects, interdisciplinary concerns, and scientific defensibility with an overall emphasis on how previously written processes

  7. Fast parallel algorithms that compute transitive closure of a fuzzy relation

    Science.gov (United States)

    Kreinovich, Vladik YA.

    1993-01-01

    The notion of a transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires the computation time O(n⁴), where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n²) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a not equal to b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in less than O(n²) steps. So, Dunn's algorithm is in this sense optimal. For small n, it is ok. However, for big n (e.g., for big databases), it is still a lot, so it would be desirable to decrease the computation time (this problem was formulated by J. Bezdek). Since this decrease cannot be done on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, transitive closure can be computed in time O((log₂ n)²).
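
    As an illustration of the max-min algebra involved, the sketch below computes the transitive closure of a small fuzzy relation by repeated max-min squaring. It is a plain NumPy example for exposition only; it is neither Dunn's O(n²) algorithm nor the parallel O((log₂ n)²) scheme of the record, but the squaring step is the same path-doubling idea that a parallel machine can exploit.

```python
import numpy as np

def maxmin_compose(r, s):
    """Max-min composition: (r o s)[i, j] = max_k min(r[i, k], s[k, j])."""
    return np.max(np.minimum(r[:, :, None], s[None, :, :]), axis=1)

def transitive_closure(r):
    """Max-min transitive closure by repeated squaring.

    Each squaring doubles the path length accounted for, so about
    log2(n) compositions suffice for an n-element relation.
    """
    closure = r.copy()
    for _ in range(int(np.ceil(np.log2(max(len(r), 2))))):
        squared = np.maximum(closure, maxmin_compose(closure, closure))
        if np.array_equal(squared, closure):
            break
        closure = squared
    return closure

if __name__ == "__main__":
    r = np.array([[1.0, 0.8, 0.0],
                  [0.0, 1.0, 0.6],
                  [0.0, 0.0, 1.0]])
    print(transitive_closure(r))   # closure[0, 2] becomes min(0.8, 0.6) = 0.6
```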

  8. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  9. A step-defined sedentary lifestyle index: <5000 steps/day.

    Science.gov (United States)

    Tudor-Locke, Catrine; Craig, Cora L; Thyfault, John P; Spence, John C

    2013-02-01

    Step counting (using pedometers or accelerometers) is widely accepted by researchers, practitioners, and the general public. Given the mounting evidence of the link between low steps/day and time spent in sedentary behaviours, how few steps/day some populations actually perform, and the growing interest in the potentially deleterious effects of excessive sedentary behaviours on health, an emerging question is "How many steps/day are too few?" This review examines the utility, appropriateness, and limitations of using a recurring candidate for a step-defined sedentary lifestyle index: <5000 steps/day. A <5000 steps/day sedentary lifestyle index for adults is appropriate for researchers and practitioners and for communicating with the general public. There is little evidence to advocate any specific value indicative of a step-defined sedentary lifestyle index in children and adolescents.

  10. 22 CFR 1429.21 - Computation of time for filing papers.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2, 2010-04-01. Computation of time for filing papers. 1429.21... MISCELLANEOUS AND GENERAL REQUIREMENTS, General Requirements, § 1429.21 Computation of time for filing papers. In... subchapter requires the filing of any paper, such document must be received by the Board or the officer or...

  11. Comparison of step-by-step kinematics in repeated 30m sprints in female soccer players.

    Science.gov (United States)

    van den Tillaar, Roland

    2018-01-04

    The aim of this study was to compare kinematics in repeated 30-m sprints in female soccer players. Seventeen subjects performed seven 30-m sprints every 30 s in one session. Kinematics were measured with an infrared contact mat and laser gun, and running times with an electronic timing device. The main findings were that sprint times increased in the repeated sprint ability test. The main changes in kinematics during the repeated sprint ability test were increased contact time and decreased step frequency, while no change in step length was observed. The step velocity increased in almost each step until the 14th step, which occurred around 22 m. After this, the velocity was stable until the last step, when it decreased. This increase in step velocity was mainly caused by the increased step length and decreased contact times. It was concluded that the fatigue induced in repeated 30-m sprints in female soccer players resulted in decreased step frequency and increased contact time. Employing this approach in combination with a laser gun and infrared mat for 30 m makes it very easy to analyse running kinematics in repeated sprints in training. This extra information gives the athlete, coach and sports scientist the opportunity to give more detailed feedback and help to target these changes in kinematics better to enhance repeated sprint performance.

  12. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends regarding the network architecture and the various protocols that govern data transfers and guarantee reliable communication among all nodes. A hierarchical network structure has been proposed to provide a simple and inexpensive way for the realization of a reliable real-time computer network. In such an architecture all computers in the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of computer network can be considered as a local area computer network (LACN) that can be used in a nuclear power plant control system, since such a plant has geographically dispersed subsystems. Network expansion would be straightforward: each added computer (HOST) is attached to the common channel. All the nodes are designed around a microprocessor chip to provide the required intelligence. The node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer. This part would naturally tend to have some variations in the hardware details to match the requirements of individual host computers. fig 7

  13. Traffic safety and step-by-step driving licence for young people

    DEFF Research Database (Denmark)

    Tønning, Charlotte; Agerholm, Niels

    2017-01-01

    This paper presents a review of safety effects from step-by-step driving licence schemes. Most of the investigated schemes consist of a step-by-step driving licence with Step 1) various tests and education, Step 2) a period where driving is only allowed together with an experienced driver and Step 3) driving without a companion is allowed but with various restrictions and, in some cases, additional driving education and tests. In general, a step-by-step driving licence improves traffic safety even though the young people are permitted to drive a car earlier on. The effects from driving with an experienced driver vary. Young novice car drivers are much more accident-prone than other drivers - up to 10 times that of their parents' generation. A central solution to improve the traffic safety for this group is implementation of a step-by-step driving licence. A number of countries have introduced a step-by-step driving licence.

  14. Computer simulations of long-time tails: what's new?

    NARCIS (Netherlands)

    Hoef, van der M.A.; Frenkel, D.

    1995-01-01

    Twenty five years ago Alder and Wainwright discovered, by simulation, the 'long-time tails' in the velocity autocorrelation function of a single particle in fluid [1]. Since then, few qualitatively new results on long-time tails have been obtained by computer simulations. However, within the

  15. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.

  16. Bio-inspired step-climbing in a hexapod robot

    International Nuclear Information System (INIS)

    Chou, Ya-Cheng; Yu, Wei-Shun; Huang, Ke-Jung; Lin, Pei-Chun

    2012-01-01

    Inspired by the observation that the cockroach changes from a tripod gait to a different gait for climbing high steps, we report on the design and implementation of a novel, fully autonomous step-climbing maneuver, which enables a RHex-style hexapod robot to reliably climb a step up to 230% higher than the length of its leg. Similar to the climbing strategy most used by cockroaches, the proposed maneuver is composed of two stages. The first stage is the ‘rearing stage,’ inclining the body so the front side of the body is raised and it is easier for the front legs to catch the top of the step, followed by the ‘rising stage,’ maneuvering the body's center of mass to the top of the step. Two infrared range sensors are installed on the front of the robot to detect the presence of the step and its orientation relative to the robot's heading, so that the robot can perform automatic gait transition, from walking to step-climbing, as well as correct its initial tilt approaching posture. An inclinometer is utilized to measure body inclination and to compute step height, thus enabling the robot to adjust its gait automatically, in real time, and to climb steps of different heights and depths successfully. The algorithm is applicable for the robot to climb various rectangular obstacles, including a narrow bar, a bar and a step (i.e. a bar of infinite width). The performance of the algorithm is evaluated experimentally, and the comparison of climbing strategies and climbing behaviors in biological and robotic systems is discussed. (paper)

  17. INTRANS. A computer code for the non-linear structural response analysis of reactor internals under transient loads

    International Nuclear Information System (INIS)

    Ramani, D.T.

    1977-01-01

    The 'INTRANS' system is a general-purpose computer code, designed to perform linear and non-linear structural stress and deflection analysis of impacting or non-impacting nuclear reactor internals components coupled with the reactor vessel, shield building and external as well as internal gapped spring support systems. This paper describes, in general terms, a unique computational procedure for evaluating the dynamic response of reactor internals, discretised as a beam and lumped-mass structural system and subjected to external transient loads such as seismic and LOCA time-history forces. The computational procedure is outlined in the INTRANS code, which computes component flexibilities of a discrete lumped-mass planar model of reactor internals by idealising an assemblage of finite elements consisting of linear elastic beams with bending, torsional and shear stiffnesses interacting with an external or internal linear as well as non-linear multi-gapped spring support system. The method of analysis is based on the displacement method, and the code uses the fourth-order Runge-Kutta numerical integration technique as the basis for the solution of the dynamic equilibrium equations of motion for the system. During the computing process, the dynamic response of each lumped mass is calculated at a specific instant of time using a well-known step-by-step procedure. At any instant of time, the transient dynamic motions of the system are held stationary and based on the predicted motions and internal forces of the previous instant, from which the complete response at any time step of interest may then be computed. Using this iterative process, the relationship between motions and internal forces is satisfied step by step throughout the time interval

  18. PID controller auto-tuning based on process step response and damping optimum criterion.

    Science.gov (United States)

    Pavković, Danijel; Polak, Siniša; Zorc, Davor

    2014-01-01

    This paper presents a novel method of PID controller tuning suitable for higher-order aperiodic processes and aimed at step response-based auto-tuning applications. The PID controller tuning is based on the identification of so-called n-th order lag (PTn) process model and application of damping optimum criterion, thus facilitating straightforward algebraic rules for the adjustment of both the closed-loop response speed and damping. The PTn model identification is based on the process step response, wherein the PTn model parameters are evaluated in a novel manner from the process step response equivalent dead-time and lag time constant. The effectiveness of the proposed PTn model parameter estimation procedure and the related damping optimum-based PID controller auto-tuning have been verified by means of extensive computer simulations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
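
    To give a flavour of step-response-based auto-tuning, the hedged Python sketch below fits a first-order-plus-dead-time model to a recorded step response with the classical two-point (28.3%/63.2%) method and then applies a SIMC-style PI rule. This is not the PTn/damping-optimum procedure of the record; the function names, the two-point fit and the tuning rule are assumptions of this example.

```python
import numpy as np

def fit_fopdt(t, y, u_step=1.0):
    """Fit gain, lag time constant and dead time of K*exp(-L*s)/(T*s + 1)
    from a monotone step response, using the two-point (28.3 % / 63.2 %) method."""
    y_final = y[-1]
    gain = y_final / u_step
    t28 = t[np.searchsorted(y, 0.283 * y_final)]
    t63 = t[np.searchsorted(y, 0.632 * y_final)]
    lag = 1.5 * (t63 - t28)
    dead_time = max(t63 - lag, 0.0)
    return gain, lag, dead_time

def simc_pi(gain, lag, dead_time, tau_c=None):
    """SIMC-style PI gains; the closed-loop time constant defaults to the dead time."""
    tau_c = dead_time if tau_c is None else tau_c
    kp = lag / (gain * (tau_c + dead_time))
    ti = min(lag, 4.0 * (tau_c + dead_time))
    return kp, ti

if __name__ == "__main__":
    # Synthetic step response of 1/(10 s + 1) with 2 s dead time, sampled at 0.1 s.
    t = np.arange(0.0, 60.0, 0.1)
    y = np.where(t < 2.0, 0.0, 1.0 - np.exp(-(t - 2.0) / 10.0))
    model = fit_fopdt(t, y)
    print(model)              # roughly (1.0, 10.0, 2.0)
    print(simc_pi(*model))    # PI gains for that model
```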

  19. 5 CFR 831.703 - Computation of annuities for part-time service.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2, 2010-01-01. Computation of annuities for part-time... part-time service. (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. In this...

  20. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    Science.gov (United States)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then proceed to detail the algorithms embodied in the code EXSHALL in this paper, particularly algorithms related to the efficiency and stability of T-Z scheme and the quadratic constraint restoration method which is based on a variational approach. In particular we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code with emphasis on algorithms implemented in the code and present the flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.

  1. Viking Afterbody Heating Computations and Comparisons to Flight Data

    Science.gov (United States)

    Edquist, Karl T.; Wright, Michael J.; Allen, Gary A., Jr.

    2006-01-01

    Computational fluid dynamics predictions of Viking Lander 1 entry vehicle afterbody heating are compared to flight data. The analysis includes a derivation of heat flux from temperature data at two base cover locations, as well as a discussion of available reconstructed entry trajectories. Based on the raw temperature-time history data, convective heat flux is derived to be 0.63-1.10 W/cm² for the aluminum base cover at the time of thermocouple failure. Peak heat flux at the fiberglass base cover thermocouple is estimated to be 0.54-0.76 W/cm², occurring 16 seconds after peak stagnation point heat flux. Navier-Stokes computational solutions are obtained with two separate codes using an 8-species Mars gas model in chemical and thermal non-equilibrium. Flowfield solutions using local time-stepping did not result in converged heating at either thermocouple location. A global time-stepping approach improved the computational stability, but steady state heat flux was not reached for either base cover location. Both thermocouple locations lie within a separated flow region of the base cover that is likely unsteady. Heat flux computations averaged over the solution history are generally below the flight data and do not vary smoothly over time for both base cover locations. Possible reasons for the mismatch between flight data and flowfield solutions include underestimated conduction effects and limitations of the computational methods.

  2. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS

  3. Modified random hinge transport mechanics and multiple scattering step-size selection in EGS5

    International Nuclear Information System (INIS)

    Wilderman, S.J.; Bielajew, A.F.

    2005-01-01

    The new transport mechanics in EGS5 allows for significantly longer electron transport step sizes and hence shorter computation times than required for identical problems in EGS4. But as with all Monte Carlo electron transport algorithms, certain classes of problems exhibit step-size dependencies even when operating within recommended ranges, sometimes making selection of step-sizes a daunting task for novice users. Further contributing to this problem, because of the decoupling of multiple scattering and continuous energy loss in the dual random hinge transport mechanics of EGS5, there are two independent step sizes in EGS5, one for multiple scattering and one for continuous energy loss, each of which influences speed and accuracy in a different manner. Further, whereas EGS4 used a single value of fractional energy loss (ESTEPE) to determine step sizes at all energies, to increase performance by decreasing the amount of effort expended simulating lower energy particles, EGS5 permits the fractional energy loss values which are used to determine both the multiple scattering and continuous energy loss step sizes to vary with energy. This results in requiring the user to specify four fractional energy loss values when optimizing computations for speed. Thus, in order to simplify step-size selection and to mitigate step-size dependencies, a method has been devised to automatically optimize step-size selection based on a single material dependent input related to the size of problem tally region. In this paper we discuss the new transport mechanics in EGS5 and describe the automatic step-size optimization algorithm. (author)

  4. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    Science.gov (United States)

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
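
    For readers who want to see what such a splitting looks like in code, here is a minimal Python sketch of one widely used Langevin splitting, the BAOAB scheme of Leimkuhler and Matthews, which is closely related to velocity Verlet. It is shown only as a representative member of the family of integrators the record discusses, not as the specific time-step-rescaled scheme identified by the authors; the toy parameters are illustrative.

```python
import numpy as np

def baoab_step(x, v, force, dt, mass=1.0, gamma=1.0, kT=1.0, rng=None):
    """One BAOAB step: half kick (B), half drift (A), exact Ornstein-Uhlenbeck
    velocity update (O), half drift (A), half kick (B)."""
    rng = np.random.default_rng() if rng is None else rng
    v = v + 0.5 * dt * force(x) / mass
    x = x + 0.5 * dt * v
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt((1.0 - c1 * c1) * kT / mass)
    v = c1 * v + c2 * rng.standard_normal()
    x = x + 0.5 * dt * v
    v = v + 0.5 * dt * force(x) / mass
    return x, v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    force = lambda x: -x                       # harmonic well with unit stiffness
    x, v, samples = 1.0, 0.0, []
    for _ in range(200000):
        x, v = baoab_step(x, v, force, dt=0.1, rng=rng)
        samples.append(x)
    print(np.var(samples))                     # should be close to kT / k = 1
```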

  5. Step responses of a torsional system with multiple clearances: Study of vibro-impact phenomenon using experimental and computational methods

    Science.gov (United States)

    Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra

    2018-01-01

    Recently Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend the prior work with a focus on the examination of vibro-impact phenomena observed under step responses in a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both time and time-frequency domains. Minimal order non-linear models of the experiment are successfully constructed, using piecewise linear stiffness and Coulomb friction elements; eight cases of the generic system are examined though only three are experimentally studied. Measured and predicted responses for single and dual clearance configurations exhibit double sided impacts and time varying periods suggest softening trends under the step down torque. Non-linear models are experimentally validated by comparing results with new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak to peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.
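
    As a small, hedged illustration of the kinds of non-linear elements mentioned (piecewise-linear stiffness with clearance and Coulomb friction), the Python fragment below defines the corresponding restoring and friction torques. The parameter names and values are purely illustrative and are not those of the experiment or of the minimal-order models in the record.

```python
import numpy as np

def clearance_torque(theta, k, backlash):
    """Torsional spring with a symmetric clearance (dead zone) of half-width
    `backlash`: zero torque inside the gap, linear with stiffness k outside."""
    engagement = np.maximum(np.abs(theta) - backlash, 0.0)
    return -k * np.sign(theta) * engagement

def coulomb_torque(omega, t_friction):
    """Coulomb (dry) friction torque opposing the relative angular velocity."""
    return -t_friction * np.sign(omega)

if __name__ == "__main__":
    for theta in (-0.02, -0.005, 0.0, 0.005, 0.02):
        print(theta, clearance_torque(theta, k=500.0, backlash=0.01))
```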

  6. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
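
    A toy sketch of the distribution-function idea (not the authors' formulation): if the within-day rainfall intensity is assumed to follow an exponential distribution around its mean, the daily infiltration-excess runoff has a closed form in terms of the daily total, the rain duration and an infiltration capacity. The exponential assumption and all names below are illustrative.

```python
import math

def infiltration_excess_runoff(daily_rain_mm, rain_hours, f_capacity_mm_per_h):
    """Daily infiltration-excess runoff (mm), assuming within-day intensity I
    is exponentially distributed with mean mu = daily_rain / duration.

    For an exponential I, E[max(I - fc, 0)] = mu * exp(-fc / mu), so the
    runoff depth is duration * mu * exp(-fc / mu) = daily_rain * exp(-fc / mu).
    """
    if daily_rain_mm <= 0.0 or rain_hours <= 0.0:
        return 0.0
    mean_intensity = daily_rain_mm / rain_hours
    return daily_rain_mm * math.exp(-f_capacity_mm_per_h / mean_intensity)

if __name__ == "__main__":
    # Same daily total: a short, intense storm yields far more runoff
    # than a long drizzle, which a plain daily model cannot distinguish.
    print(infiltration_excess_runoff(20.0, rain_hours=10.0, f_capacity_mm_per_h=5.0))
    print(infiltration_excess_runoff(20.0, rain_hours=2.0, f_capacity_mm_per_h=5.0))
```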

  7. Time adaptivity in the diffusive wave approximation to the shallow water equations

    KAUST Repository

    Collier, Nathan; Radwan, Hany; Dalcí n, Lisandro D.; Calo, Victor M.

    2013-01-01

    We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one dimensional results presented in this work feature a change of four orders of magnitudes for the time step over the entire simulation. © 2011 Elsevier B.V.
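
    The general pattern behind such adaptivity can be sketched with a generic step-doubling error estimator and a standard proportional step-size controller. The Python fragment below (plain forward Euler on a scalar ODE) is not the estimator of the record, only the common pattern: the estimated local error is kept below a user tolerance while the step size is free to vary by orders of magnitude.

```python
import numpy as np

def adaptive_euler(f, t0, y0, t_end, dt0, tol):
    """Forward Euler with step-doubling error estimation and step-size control."""
    t, y, dt = t0, y0, dt0
    ts, ys = [t], [y]
    while t < t_end:
        dt = min(dt, t_end - t)
        y_full = y + dt * f(t, y)                         # one full step
        y_half = y + 0.5 * dt * f(t, y)                   # two half steps
        y_two = y_half + 0.5 * dt * f(t + 0.5 * dt, y_half)
        err = abs(y_two - y_full)                         # local error estimate
        if err <= tol or dt < 1e-12:
            t, y = t + dt, y_two
            ts.append(t)
            ys.append(y)
        # Controller for a first-order method (local error ~ dt**2), with limits.
        dt *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return np.array(ts), np.array(ys)

if __name__ == "__main__":
    # Fast initial transient followed by slow forcing: dt adapts over the run.
    f = lambda t, y: -50.0 * (y - np.cos(t))
    ts, _ = adaptive_euler(f, 0.0, 0.0, 5.0, dt0=1e-3, tol=1e-4)
    print(len(ts), ts[1] - ts[0], ts[-1] - ts[-2])
```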

  8. Time adaptivity in the diffusive wave approximation to the shallow water equations

    KAUST Repository

    Collier, Nathan

    2013-05-01

    We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one dimensional results presented in this work feature a change of four orders of magnitudes for the time step over the entire simulation. © 2011 Elsevier B.V.

  9. A formalization of computational trust

    NARCIS (Netherlands)

    Güven - Ozcelebi, C.; Holenderski, M.J.; Ozcelebi, T.; Lukkien, J.J.

    2018-01-01

    Computational trust aims to quantify trust and is studied by many disciplines including computer science, social sciences and business science. We propose a formal computational trust model, including its parameters and operations on these parameters, as well as a step by step guide to compute trust

  10. A positive and multi-element conserving time stepping scheme for biogeochemical processes in marine ecosystem models

    Science.gov (United States)

    Radtke, H.; Burchard, H.

    2015-01-01

    In this paper, an unconditionally positive and multi-element conserving time stepping scheme for systems of non-linearly coupled ODEs is presented. These systems of ODEs are used to describe biogeochemical transformation processes in marine ecosystem models. The numerical scheme is a positive-definite modification of the Runge-Kutta method; it can have arbitrarily high order of accuracy and does not require time step adaption. If the scheme is combined with a modified Patankar-Runge-Kutta method from Burchard et al. (2003), it also gains the ability to solve a certain class of stiff numerical problems, but the accuracy is then restricted to second order. The performance of the new scheme on two test case problems is shown.
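
    For readers unfamiliar with the Patankar trick referenced above (Burchard et al., 2003), the Python sketch below shows its simplest member, a first-order modified Patankar-Euler step for a production-destruction system. Production and destruction terms are weighted by the ratio of new to old concentrations, which keeps the solution positive and mass-conserving at the cost of one small linear solve per step. The two-component test problem is illustrative only; the scheme in the record is a higher-order extension of this idea.

```python
import numpy as np

def modified_patankar_euler_step(c, dt, p, d):
    """One modified Patankar-Euler step for dc_i/dt = sum_j p_ij(c) - sum_j d_ij(c),
    with p_ij = d_ji so that the total mass is conserved."""
    n = len(c)
    pm, dm = p(c), d(c)                       # production/destruction matrices at c^n
    a = np.eye(n)
    for i in range(n):
        a[i, i] += dt * dm[i, :].sum() / c[i]
        for j in range(n):
            if i != j:
                a[i, j] -= dt * pm[i, j] / c[j]
    return np.linalg.solve(a, c)              # solve A c^{n+1} = c^n

if __name__ == "__main__":
    # Toy exchange: species 0 converts to species 1 at a fast rate of 5 per day.
    p = lambda c: np.array([[0.0, 0.0], [5.0 * c[0], 0.0]])
    d = lambda c: np.array([[0.0, 5.0 * c[0]], [0.0, 0.0]])
    c = np.array([0.9, 0.1])
    for _ in range(10):
        c = modified_patankar_euler_step(c, dt=1.0, p=p, d=d)
    print(c, c.sum())   # remains positive and sums to 1.0 even with dt >> 1/5
```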

  11. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-01-01

    The EnKF with one-step-ahead smoothing (EnKF-OSA) exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation.

  12. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    Science.gov (United States)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  13. Stability of one-step methods in transient nonlinear heat conduction

    International Nuclear Information System (INIS)

    Hughes, J.R.

    1977-01-01

    The purpose of the present work is to ascertain practical stability conditions for one-step methods commonly used in transient nonlinear heat conduction analyses. In this paper the concepts of stability appropriate to the nonlinear problem are thoroughly discussed. They of course reduce to the usual stability criterion for the linear, constant-coefficient case. However, for nonlinear problems there are differences, and these ideas are of key importance in obtaining practical stability conditions. Of particular importance is a recent result which indicates that, in a sense, the trapezoidal and midpoint families are equivalent. Thus, stability results for one family may be translated into results for the other. The main results obtained are: The stability behaviour of the explicit Euler method in the nonlinear regime is analogous to that for linear problems. In particular, an a priori step size restriction may be determined for each time step. The precise time step restriction on implicit conditionally stable members of the trapezoidal and midpoint families is shown not to be determinable a priori. Of considerable practical significance, unconditionally stable members of the trapezoidal and midpoint families are identified. All notions of stability employed are motivated and defined, and their interpretations in practical computing are indicated. (Auth.)
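
    The one-step family discussed above can be written compactly as the generalized trapezoidal rule; the hedged Python sketch below applies it to a scalar nonlinear heat-conduction-like equation, with θ = 0 the explicit Euler method, θ = 1/2 the trapezoidal rule and θ = 1 backward Euler. The implicit members are solved here with simple fixed-point iteration, and all parameter values are illustrative only.

```python
def theta_step(u, dt, f, theta, iters=50, tol=1e-12):
    """One step of u' = f(u) with the generalized trapezoidal rule:
    u_new = u + dt * ((1 - theta) * f(u) + theta * f(u_new)).
    theta = 0 is explicit Euler; in the linear case theta >= 1/2 is
    unconditionally stable."""
    explicit_part = u + dt * (1.0 - theta) * f(u)
    u_new = u
    for _ in range(iters):                     # fixed-point iteration for the implicit part
        u_next = explicit_part + dt * theta * f(u_new)
        if abs(u_next - u_new) < tol:
            return u_next
        u_new = u_next
    return u_new

if __name__ == "__main__":
    # Nonlinear decay u' = -k(u) * u with temperature-dependent k(u) = 1 + u**2.
    f = lambda u: -(1.0 + u * u) * u
    for theta in (0.0, 0.5, 1.0):
        u = 1.0
        for _ in range(100):
            u = theta_step(u, dt=0.05, f=f, theta=theta)
        print(theta, u)
```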

  14. Time to pause before the next step

    International Nuclear Information System (INIS)

    Siemon, R.E.

    1998-01-01

    Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than ''demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...'' which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward and the counter arguments are discussed here

  15. Discrete computational mechanics for stiff phenomena

    KAUST Repository

    Michels, Dominik L.

    2016-11-28

    Many natural phenomena which occur in the realm of visual computing and computational physics, like the dynamics of cloth, fibers, fluids, and solids as well as collision scenarios are described by stiff Hamiltonian equations of motion, i.e. differential equations whose solution spectra simultaneously contain extremely high and low frequencies. This usually impedes the development of physically accurate and at the same time efficient integration algorithms. We present a straightforward computationally oriented introduction to advanced concepts from classical mechanics. We provide an easy to understand step-by-step introduction from variational principles over the Euler-Lagrange formalism and the Legendre transformation to Hamiltonian mechanics. Based on such solid theoretical foundations, we study the underlying geometric structure of Hamiltonian systems as well as their discrete counterparts in order to develop sophisticated structure preserving integration algorithms to efficiently perform high fidelity simulations.
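
    A concrete instance of such a structure-preserving integrator is the Störmer-Verlet (leapfrog) scheme, which can be derived from a discrete variational principle. The minimal Python sketch below applies it to a stiff harmonic oscillator; the system and parameters are illustrative and are not taken from the record.

```python
import numpy as np

def stormer_verlet(q, p, grad_v, dt, mass=1.0):
    """One Stormer-Verlet step for H(q, p) = p^2 / (2 m) + V(q).
    The map is symplectic, so the energy error stays bounded over long
    runs instead of drifting."""
    p_half = p - 0.5 * dt * grad_v(q)
    q_new = q + dt * p_half / mass
    p_new = p_half - 0.5 * dt * grad_v(q_new)
    return q_new, p_new

if __name__ == "__main__":
    k = 1000.0                                  # stiff spring constant
    grad_v = lambda q: k * q
    q, p, energies = 1.0, 0.0, []
    dt = 0.001                                  # well below the stability limit 2 / sqrt(k)
    for _ in range(100000):
        q, p = stormer_verlet(q, p, grad_v, dt)
        energies.append(0.5 * p * p + 0.5 * k * q * q)
    print(min(energies), max(energies))         # stays close to the initial energy 500
```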

  16. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    International Nuclear Information System (INIS)

    Glasser, A.H.; Miller, K.; Carlson, N.

    1991-01-01

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving node adaptive grid method which has a tendency to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply it to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive grid methods have been developed which share some of these desirable properties, this is the only method which combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware

  17. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  18. Stepping out: dare to step forward, step back, or just stand still and breathe.

    Science.gov (United States)

    Waisman, Mary Sue

    2012-01-01

    It is important to step out and make a difference. We have one of the most unique and diverse professions that allows for diversity in thought and practice, permitting each of us to grow in our unique niches and make significant contributions. I was frightened to 'step out' to go to culinary school at the age of 46, but it changed forever the way I look at my profession and I have since experienced the most enjoyable and innovative career. There are also times when it is important to 'step back' to relish the roots of our profession; to help bring food back into nutrition; to translate all of our wonderful science into a language of food that Canadians understand. We all need to take time to 'just stand still and breathe': to celebrate our accomplishments, reflect on our actions, ensure we are heading toward our vision, keep the profession vibrant and relevant, and cherish one another.

  19. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan; Radwan, Hany; Dalcin, Lisandro; Calo, Victor M.

    2011-01-01

    We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity
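
    The record is truncated before any details of the estimator are given, so the sketch below only illustrates the generic pattern such an estimator enables: take each step once with dt and again with two half steps, use the difference as a local error estimate, then accept and resize the step. The forward Euler scheme and the decay test problem are illustrative assumptions, not the discretization of the paper.

      import math

      def adaptive_march(f, u0, t_end, dt0, tol=1e-4, safety=0.9):
          # Step-doubling error control: compare one dt step against two dt/2 steps
          # and use the difference to accept the step and choose the next dt.
          def euler(u, t, dt):
              return u + dt * f(t, u)

          t, u, dt = 0.0, float(u0), dt0
          while t < t_end:
              dt = min(dt, t_end - t)
              coarse = euler(u, t, dt)
              half = euler(u, t, 0.5 * dt)
              fine = euler(half, t + 0.5 * dt, 0.5 * dt)
              err = abs(fine - coarse) + 1e-30
              if err <= tol:                               # accept and enlarge
                  t, u = t + dt, fine
                  dt *= min(2.0, safety * math.sqrt(tol / err))
              else:                                        # reject and shrink
                  dt *= max(0.1, safety * math.sqrt(tol / err))
          return u

      # usage: du/dt = -u over one time unit; the controller picks the step sizes
      print(adaptive_march(lambda t, u: -u, 1.0, 1.0, dt0=0.5))   # close to exp(-1) ~ 0.368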

  20. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting the continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely Java 2 Platform Enterprise edition (J2EE) by Sun Microsystems, and the Microsoft Dot-Net (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  1. Model Checking Quantified Computation Tree Logic

    NARCIS (Netherlands)

    Rensink, Arend; Baier, C; Hermanns, H.

    2006-01-01

    Propositional temporal logic is not suitable for expressing properties on the evolution of dynamically allocated entities over time. In particular, it is not possible to trace such entities through computation steps, since this requires the ability to freely mix quantification and temporal operators.

  2. Long-wave model for strongly anisotropic growth of a crystal step.

    Science.gov (United States)

    Khenner, Mikhail

    2013-08-01

    A continuum model for the dynamics of a single step with the strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and the desorption is nonnegligible (the "one-sided" model). Via a multiscale expansion, we derived a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We performed the linear stability analysis and computed the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of the step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which independently coarsens. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of a maximum step stiffness, the increasing anisotropy strength, and the varying atomic flux.

  3. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Full Text Available Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D model trees because the tree model itself has a very complicated structure and many sophisticated factors need to be considered during the simulation. Though there are some works on simulating 3D trees and their motion, few of them are used in computer games due to the high demand for real-time performance. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees in wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the high requirement for real-time performance in computer games.

  4. Step training improves reaction time, gait and balance and reduces falls in older people: a systematic review and meta-analysis.

    Science.gov (United States)

    Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R

    2017-04-01

    To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles from inception to March 2015. Randomised (RCT) or clinical controlled trials (CCT) of volitional and reactive stepping interventions that included older (minimum age 60) people providing data on falls or fall risk factors. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68). A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance, and timed up and go performance. Overall, the findings indicate that stepping interventions reduce falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. CRD42015017357. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  5. Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism

    International Nuclear Information System (INIS)

    Stolterfoht, N.

    1993-01-01

    The semiclassical approximation is applied in second order to describe time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interferences between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. Combining these paths within the independent-particle frozen orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering and the capability for interference are regained, as one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments of single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)].

  6. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields-from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers.Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib

  7. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, marked as Tc) when applying a double precision computation of a variable parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...

  8. PVT: an efficient computational procedure to speed up next-generation sequence analysis.

    Science.gov (United States)

    Maji, Ranjan Kumar; Sarkar, Arijita; Khatua, Sunirmal; Dasgupta, Subhasis; Ghosh, Zhumur

    2014-06-04

    High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data which puts up a major challenge to the scientists for an efficient, cost and time effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time-consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat) where we take up a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single-end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during 'spliced alignment' and breaks the job into a pipeline of multiple stages (each comprising different step(s)) to improve its resource utilization, thus reducing the execution time. PVT provides an improvement over TopHat for spliced alignment of NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single-end read dataset. Further, PVT designed for paired end reads showed an
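
    PVT itself wraps TopHat, but the underlying idea, breaking a serial job into stages that overlap on successive chunks of reads, can be shown with a tiny generic pipeline. The two stage functions below are hypothetical placeholders standing in for real alignment steps; none of this is PVT or TopHat code.

      from multiprocessing import Process, Queue

      def stage(func, inbox, outbox):
          # One pipeline stage: read chunks, process them, pass results downstream.
          while True:
              chunk = inbox.get()
              if chunk is None:            # sentinel: propagate it and stop
                  outbox.put(None)
                  break
              outbox.put(func(chunk))

      def align(chunk):                    # placeholder for a real alignment stage
          return [read.upper() for read in chunk]

      def splice(chunk):                   # placeholder for a real splice-detection stage
          return [read[::-1] for read in chunk]

      if __name__ == "__main__":
          q_in, q_mid, q_out = Queue(), Queue(), Queue()
          workers = [Process(target=stage, args=(align, q_in, q_mid)),
                     Process(target=stage, args=(splice, q_mid, q_out))]
          for w in workers:
              w.start()
          for chunk in (["acgt", "ttga"], ["ggca"], None):   # feed chunks so stages overlap
              q_in.put(chunk)
          results = []
          while (item := q_out.get()) is not None:
              results.append(item)
          for w in workers:
              w.join()
          print(results)                   # [['TGCA', 'AGTT'], ['ACGG']]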

  9. New Approaches to the Computer Simulation of Amorphous Alloys: A Review.

    Science.gov (United States)

    Valladares, Ariel A; Díaz-Celaya, Juan A; Galván-Colín, Jonathan; Mejía-Mendoza, Luis M; Reyes-Retana, José A; Valladares, Renela M; Valladares, Alexander; Alvarez-Ramirez, Fernando; Qu, Dongdong; Shen, Jun

    2011-04-13

    In this work we review our new methods to computer generate amorphous atomic topologies of several binary alloys: SiH, SiN, CN; binary systems based on group IV elements like SiC; the GeSe 2 chalcogenide; aluminum-based systems: AlN and AlSi, and the CuZr amorphous alloy. We use an ab initio approach based on density functionals and computationally thermally-randomized periodically-continued cells with at least 108 atoms. The computational thermal process to generate the amorphous alloys is the undermelt-quench approach, or one of its variants, that consists in linearly heating the samples to just below their melting (or liquidus) temperatures, and then linearly cooling them afterwards. These processes are carried out from initial crystalline conditions using short and long time steps. We find that a step four-times the default time step is adequate for most of the simulations. Radial distribution functions (partial and total) are calculated and compared whenever possible with experimental results, and the agreement is very good. For some materials we report studies of the effect of the topological disorder on their electronic and vibrational densities of states and on their optical properties.

  10. Patent law for computer scientists steps to protect computer-implemented inventions

    CERN Document Server

    Closa, Daniel; Giemsa, Falk; Machek, Jörg

    2010-01-01

    Written from over 70 years of experience, this overview explains patent laws across Europe, the US and Japan, and teaches readers how to think from a patent examiner's perspective. Over 10 detailed case studies are presented from different computer science applications.

  11. On the Convexity of Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2016-01-01

    The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an arbitrary coalition.

  12. Soft Real-Time PID Control on a VME Computer

    Science.gov (United States)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

    microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating at real-time priority, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
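
    The real microPID loop runs in C against VME hardware; the sketch below only mirrors its loop structure (read the position, evaluate the profile, update the PID, command the D/A) in Python. The gains, the read_position/write_voltage callables and the toy plant in the usage lines are illustrative assumptions, not the instrument's values.

      import time

      class PID:
          # Textbook PID update; the gains here are illustrative placeholders.
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral, self.prev_err = 0.0, 0.0

          def update(self, setpoint, measurement):
              err = setpoint - measurement
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      DT = 125e-6                              # 8 kHz sampling period, as in the abstract

      def control_loop(read_position, write_voltage, profile, steps):
          # One iteration per sampling period: measure, compare to the profile, command.
          pid = PID(kp=2.0, ki=50.0, kd=0.001, dt=DT)
          for k in range(steps):
              target = profile(k * DT)         # ideal position from the motion profile
              measured = read_position()       # e.g. the laser-positioning board readout
              write_voltage(pid.update(target, measured))   # command to the D/A driver
              time.sleep(DT)                   # stands in for the real-time scheduler tick

      # usage with a trivial simulated stage; the plant model is purely illustrative
      pos = [0.0]
      control_loop(lambda: pos[0],
                   lambda v: pos.__setitem__(0, pos[0] + 0.01 * v),
                   lambda t: 1.0,
                   steps=200)
      print(round(pos[0], 2))                  # approaches the 1.0 set point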

  13. A discrete classical space-time could require 6 extra-dimensions

    Science.gov (United States)

    Guillemant, Philippe; Medale, Marc; Abid, Cherifa

    2018-01-01

    We consider a discrete space-time in which conservation laws are computed in such a way that the density of information is kept bounded. We use a 2D billiard as a toy model to compute the uncertainty propagation in ball positions after every shock and the corresponding loss of phase information. Our main result is the computation of a critical time step above which billiard calculations are no longer deterministic, meaning that a multiverse of distinct billiard histories begins to appear, caused by the lack of information. Then, we highlight unexpected properties of this critical time step and the subsequent exponential evolution of the number of histories with time, to observe that after a certain duration all billiard states could become possible final states, independent of initial conditions. We conclude that if our space-time is really a discrete one, one would need to introduce extra-dimensions in order to provide supplementary constraints that specify which history should be played.
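
    The notion of a critical time step can be motivated with a one-line estimate: if every shock multiplies the positional uncertainty by a roughly constant factor, the number of shocks after which the stored precision no longer determines the state follows from a logarithm. All numbers below are illustrative assumptions, not values from the paper.

      import math

      def shocks_until_indeterminate(delta0, growth, resolution):
          # Number of shocks before the position uncertainty, multiplied by a constant
          # factor per shock, exceeds the resolution at which the state is stored.
          return math.ceil(math.log(resolution / delta0) / math.log(growth))

      # usage: double-precision-scale initial uncertainty, growth of ~10 per shock,
      # positions stored at millimetre resolution (all numbers illustrative)
      print(shocks_until_indeterminate(delta0=1e-16, growth=10.0, resolution=1e-3))   # 13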

  14. Quantum transport with long-range steps on Watts-Strogatz networks

    Science.gov (United States)

    Wang, Yan; Xu, Xin-Jian

    2016-07-01

    We study transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN) which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using the localized initial condition, we compute the time-averaged occupation probability of the initial site, which is related to the nonlinearity, the long-range steps and rewiring links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ɛ=-1 (1), as long-range interactions are intensified. The structure disorder induced by random rewiring, however, has dual effects for ɛ=-1 and inhibits the self-trapping behavior for ɛ=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring ruled by the discrete linear Schrödinger (DLS) equation. It is found that only the presence of the long-range steps does not affect the efficiency of the coherent exciton transport, while only the allowance of random rewiring enhances the partial localization. If both factors are considered simultaneously, localization is greatly strengthened, and the transport becomes worse.
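
    A minimal numerical counterpart to the first part of the study is sketched below: a DNLS ring with nearest-neighbour coupling only (no long-range steps, no rewiring), integrated with RK4 from a single-site excitation, returning the time-averaged occupation probability of that site. The sign convention, parameter values and integration settings are assumptions chosen for illustration, not those of the paper.

      import numpy as np

      def dnls_initial_site_occupation(N=64, eps=1.0, gamma=4.0, dt=1e-3, steps=10000):
          # Assumed convention: i dpsi_n/dt = -eps (psi_{n+1} + psi_{n-1}) - gamma |psi_n|^2 psi_n
          psi = np.zeros(N, dtype=complex)
          psi[0] = 1.0                                   # localized initial condition

          def rhs(p):
              hop = np.roll(p, 1) + np.roll(p, -1)
              return -1j * (-eps * hop - gamma * np.abs(p)**2 * p)

          occ_sum = 0.0
          for _ in range(steps):
              k1 = rhs(psi)                              # classical RK4 integrator
              k2 = rhs(psi + 0.5 * dt * k1)
              k3 = rhs(psi + 0.5 * dt * k2)
              k4 = rhs(psi + dt * k3)
              psi += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
              occ_sum += np.abs(psi[0])**2
          return occ_sum / steps

      # strong nonlinearity keeps the excitation self-trapped on the initial site,
      # while the linear ring lets it spread: the first number is much larger.
      print(round(dnls_initial_site_occupation(gamma=6.0), 2),
            round(dnls_initial_site_occupation(gamma=0.0), 2))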

  15. SENSITIVITY OF HELIOSEISMIC TRAVEL TIMES TO THE IMPOSITION OF A LORENTZ FORCE LIMITER IN COMPUTATIONAL HELIOSEISMOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Moradi, Hamed; Cally, Paul S., E-mail: hamed.moradi@monash.edu [Monash Centre for Astrophysics, School of Mathematical Sciences, Monash University, Clayton, Victoria 3800 (Australia)

    2014-02-20

    The rapid exponential increase in the Alfvén wave speed with height above the solar surface presents a serious challenge to physical modeling of the effects of magnetic fields on solar oscillations, as it introduces a significant Courant-Friedrichs-Lewy time-step constraint for explicit numerical codes. A common approach adopted in computational helioseismology, where long simulations in excess of 10 hr (hundreds of wave periods) are often required, is to cap the Alfvén wave speed by artificially modifying the momentum equation when the ratio between the Lorentz and hydrodynamic forces becomes too large. However, recent studies have demonstrated that the Alfvén wave speed plays a critical role in the MHD mode conversion process, particularly in determining the reflection height of the upwardly propagating helioseismic fast wave. Using numerical simulations of helioseismic wave propagation in constant inclined (relative to the vertical) magnetic fields we demonstrate that the imposition of such artificial limiters significantly affects time-distance travel times unless the Alfvén wave-speed cap is chosen comfortably in excess of the horizontal phase speeds under investigation.
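
    The limiter itself is simple to state. The sketch below computes the kind of scaling factor such a cap implies: wherever the local Alfvén speed exceeds a chosen cap, the Lorentz force is multiplied by (cap / v_A)^2, which is equivalent to capping v_A. The precise functional form used in production helioseismology codes may differ, and all profile numbers are illustrative.

      import numpy as np

      def capped_lorentz_factor(rho, B, c_cap, mu0=4e-7 * np.pi):
          # Factor applied to the Lorentz force so the effective Alfven speed
          # v_A = B / sqrt(mu0 rho) never exceeds c_cap (a sketch of the limiter idea).
          v_a = B / np.sqrt(mu0 * rho)
          return np.minimum(1.0, (c_cap / v_a)**2)

      # usage: density falls exponentially with height, so v_A grows and the limiter
      # kicks in high in the atmosphere (all numbers illustrative, SI units)
      z = np.linspace(0.0, 2.0e6, 5)                     # height above the surface [m]
      rho = 3e-4 * np.exp(-z / 1.5e5)                    # mass density [kg m^-3]
      B = 1e-2                                           # a 100 G field [T]
      print(capped_lorentz_factor(rho, B, c_cap=8.0e4))  # factors fall below 1 aloft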

  16. Timing of the steps in transformation of C3H 10T1/2 cells by X-irradiation

    International Nuclear Information System (INIS)

    Kennedy, A.R.; Cairns, J.; Little, J.B.

    1984-01-01

    Transformation of cells in culture by chemical carcinogens or X-rays seems to require at least two steps. The initial step is a frequent event; for example, after transient exposure to either methylcholanthrene or X-rays. It has been hypothesized that the second step behaves like a spontaneous mutation in having a constant but small probability of occurring each time an initiated cell divides. We show here that the clone size distribution of transformed cells in growing cultures initiated by X-rays, is, indeed, exactly what would be expected on that hypothesis. (author)

  17. Computing return times or return periods with rare event algorithms

    Science.gov (United States)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities, rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, saving several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
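
    The block-maximum construction the abstract alludes to is easy to sketch: cut a long trajectory into blocks, and for each threshold estimate the return time from the fraction of block maxima that exceed it. The estimator form and every parameter below are illustrative assumptions; the paper's estimator may differ in detail.

      import numpy as np

      def return_times_from_blocks(x, dt, block_len, thresholds):
          # Split the series into blocks of duration tau_b, keep each block maximum,
          # and estimate r(a) = -tau_b / log(1 - p(a)) from the fraction p(a) of
          # block maxima exceeding the threshold a.
          n_blocks = len(x) // block_len
          maxima = x[:n_blocks * block_len].reshape(n_blocks, block_len).max(axis=1)
          tau_b = block_len * dt
          p = np.array([(maxima > a).mean() for a in thresholds])
          with np.errstate(divide="ignore"):
              return -tau_b / np.log(1.0 - p)

      # usage: an Ornstein-Uhlenbeck process dx = -x dt + sqrt(2) dW (Euler-Maruyama)
      rng = np.random.default_rng(0)
      dt, n = 1e-2, 1_000_000
      x = np.empty(n)
      x[0] = 0.0
      kicks = rng.standard_normal(n - 1) * np.sqrt(2.0 * dt)
      for i in range(n - 1):
          x[i + 1] = x[i] * (1.0 - dt) + kicks[i]
      # the higher threshold gets the longer estimated return time
      print(return_times_from_blocks(x, dt, block_len=5_000, thresholds=[2.5, 3.0]))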

  18. Near real-time digital holographic microscope based on GPU parallel computing

    Science.gov (United States)

    Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan

    2018-01-01

    A transmission near real-time digital holographic microscope with in-line and off-axis light path is presented, in which the parallel computing technology based on compute unified device architecture (CUDA) and digital holographic microscopy are combined. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with the parallel computing technology based on CUDA, so it is especially suitable for measurements of particle fields on the micrometer and nanometer scale. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field on the micrometer scale, and the average velocity error is lower than 10%. With the graphics processing unit (GPU), the computing time for 100 reconstruction planes (512×512 grids) is lower than 120 ms, while it is 4.9 s using the traditional CPU-based reconstruction method. The reconstruction speed has thus been raised by a factor of 40. In other words, the system can handle holograms at 8.3 frames per second, and near real-time measurement and display of the particle velocity field are realized. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of software and hardware.

  19. Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential

    International Nuclear Information System (INIS)

    Zhang Ying; Liang Haozhao; Meng Jie

    2009-01-01

    The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus ¹²C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by the ITS evolution for the corresponding Schroedinger-like equation without localization, which gives convergent results exactly the same as those obtained iteratively by the shooting method with localized effective potentials.
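
    The general ITS idea, damping out everything but the lowest state by propagating in imaginary time and renormalising, is easy to demonstrate on a simple local Schroedinger operator. The harmonic-oscillator example below (hbar = m = omega = 1) is purely illustrative and does not touch the nonlocal Dirac machinery of the paper.

      import numpy as np

      def imaginary_time_ground_state(n=400, L=20.0, dtau=1e-3, sweeps=20000):
          # Repeatedly apply psi <- psi - dtau * H psi and renormalise; excited
          # components decay faster, leaving the ground state of H.
          x = np.linspace(-L / 2, L / 2, n)
          dx = x[1] - x[0]
          v = 0.5 * x**2                                 # harmonic potential
          psi = np.exp(-np.abs(x))                       # arbitrary even initial guess
          for _ in range(sweeps):
              lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
              psi -= dtau * (-0.5 * lap + v * psi)       # H = -1/2 d^2/dx^2 + V
              psi /= np.sqrt(np.sum(psi**2) * dx)        # renormalise after each step
          lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
          return np.sum(psi * (-0.5 * lap + v * psi)) * dx

      print(round(imaginary_time_ground_state(), 3))     # 0.5, the exact ground-state energy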

  20. Microsoft® Visual Basic® 2010 Step by Step

    CERN Document Server

    Halvorson, Michael

    2010-01-01

    Your hands-on, step-by-step guide to learning Visual Basic® 2010. Teach yourself the essential tools and techniques for Visual Basic® 2010-one step at a time. No matter what your skill level, you'll find the practical guidance and examples you need to start building professional applications for Windows® and the Web. Discover how to: Work in the Microsoft® Visual Studio® 2010 Integrated Development Environment (IDE)Master essential techniques-from managing data and variables to using inheritance and dialog boxesCreate professional-looking UIs; add visual effects and print supportBuild com

  1. Analog computing for a new nuclear reactor dynamic model based on a time-dependent second order form of the neutron transport equation

    International Nuclear Information System (INIS)

    Pirouzmand, Ahmad; Hadad, Kamal; Suh, Kune Y.

    2011-01-01

    This paper considers the concept of analog computing based on a cellular neural network (CNN) paradigm to simulate nuclear reactor dynamics using a time-dependent second order form of the neutron transport equation. Instead of solving nuclear reactor dynamic equations numerically, which is time-consuming and suffers from such weaknesses as vulnerability to transient phenomena, accumulation of round-off errors and floating-point overflows, use is made of a new method based on a cellular neural network. The state-of-the-art shows the CNN as being an alternative solution to the conventional numerical computation method. Indeed CNN is an analog computing paradigm that performs ultra-fast calculations and provides accurate results. In this study use is made of the CNN model to simulate the space-time response of scalar flux distribution in steady state and transient conditions. The CNN model also is used to simulate step perturbation in the core. The accuracy and capability of the CNN model are examined in 2D Cartesian geometry for two fixed source problems, a mini-BWR assembly, and a TWIGL Seed/Blanket problem. We also use the CNN model concurrently for a typical small PWR assembly to simulate the effect of temperature feedback, poisons, and control rods on the scalar flux distribution

  2. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    Directory of Open Access Journals (Sweden)

    Craig Cora L

    2011-06-01

    Full Text Available Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent television watching were estimated using logistic regression for complex samples. Results Girls had a lower median steps/day (10682 versus 11059 for boys) and also a narrower variation in steps/day (interquartile range, 4410 versus 5309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Discussion Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and therefore can be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by promoting excess energy intake. Conclusions In this large nationally representative sample, pedometer-determined steps/day was associated with reduced odds of being obese (but not overweight), whereas each parent-reported hour spent watching television between school and dinner increased the odds of both overweight and obesity.

  3. Time reversibility, computer simulation, algorithms, chaos

    CERN Document Server

    Hoover, William Graham

    2012-01-01

    A small army of physicists, chemists, mathematicians, and engineers has joined forces to attack a classic problem, the "reversibility paradox", with modern tools. This book describes their work from the perspective of computer simulation, emphasizing the author's approach to the problem of understanding the compatibility, and even inevitability, of the irreversible second law of thermodynamics with an underlying time-reversible mechanics. Computer simulation has made it possible to probe reversibility from a variety of directions and "chaos theory" or "nonlinear dynamics" has supplied a useful vocabulary and a set of concepts, which allow a fuller explanation of irreversibility than that available to Boltzmann or to Green, Kubo and Onsager. Clear illustration of concepts is emphasized throughout, and reinforced with a glossary of technical terms from the specialized fields which have been combined here to focus on a common theme. The book begins with a discussion, contrasting the idealized reversibility of ba...

  4. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-01

    A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
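
    The adaptivity described here (as summarised above) is driven by flux or power convergence rather than by a local truncation error estimate; a minimal controller of that general kind is sketched below. The target, growth and shrink factors and bounds are illustrative assumptions, not the PHISICS/RELAP5-3D settings.

      def adapt_neutronics_step(dt, rel_change, target=1e-3, grow=1.5, shrink=0.5,
                                dt_min=1e-3, dt_max=10.0):
          # Grow the step while the relative power change stays well under the target,
          # cut it back when the change overshoots the target, and clamp to bounds.
          if rel_change < 0.5 * target:
              dt *= grow
          elif rel_change > target:
              dt *= shrink
          return min(max(dt, dt_min), dt_max)

      # usage: during a slow LOFC-like transient the power barely changes, so the
      # controller walks the step up until a faster excursion forces it back down
      dt = 0.1
      for rel_change in [2e-4, 1e-4, 5e-5, 3e-3, 1e-4]:
          dt = adapt_neutronics_step(dt, rel_change)
          print(round(dt, 3))                # 0.15, 0.225, 0.338, 0.169, 0.253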

  5. Attention demanding tasks during treadmill walking reduce step width variability in young adults

    Directory of Open Access Journals (Sweden)

    Troy Karen L

    2005-08-01

    Full Text Available Abstract Background The variability of step time and step width is associated with falls by older adults. Further, step time is significantly influenced when performing attention demanding tasks while walking. Without exception, step time variability has been reported to increase in normal and pathologically aging older adults. Because of the role of step width in managing frontal plane dynamic stability, documenting the influence of attention-demanding tasks on step width variability may provide insight into events that can disturb dynamic stability during locomotion and increase fall risk. Preliminary evidence suggests performance of an attention demanding task significantly decreases step width variability of young adults walking on a treadmill. The purpose of the present study was to confirm or refute this finding by characterizing the extent and direction of the effects of a widely used attention demanding task (Stroop test) on the step width variability of young adults walking on a motorized treadmill. Methods Fifteen healthy young adults walked on a motorized treadmill at a self-selected velocity for 10 minutes under two conditions: without performing an attention demanding task and while performing the Stroop test. Step width of continuous and consecutive steps during the collection was derived from the data recorded using a motion capture system. Step width variability was computed as the standard deviation of all recorded steps. Results Step width decreased four percent during performance of the Stroop test but the effect was not significant (p = 0.10). In contrast, the 16 percent decrease in step width variability during the Stroop test condition was significant (p = 0.029). Conclusion The results support those of our previous work in which a different attention demanding task also decreased step width variability of young subjects while walking on a treadmill. The decreased step width variability observed while performing an attention

  6. PIXAN: the Lucas Heights PIXE analysis computer package

    International Nuclear Information System (INIS)

    Clayton, E.

    1986-11-01

    To fully utilise the multielement capability and short measurement time of PIXE it is desirable to have an automated computer evaluation of the measured spectra. Because of the complex nature of PIXE spectra, a critical step in the analysis is the data reduction, in which the areas of characteristic peaks in the spectrum are evaluated. In this package the computer program BATTY is presented for such an analysis. The second step is to determine element concentrations, knowing the characteristic peak areas in the spectrum. This requires a knowledge of the expected X-ray yield for that element in the sample. The computer program THICK provides that information for both thick and thin PIXE samples. Together, these programs form the package PIXAN used at Lucas Heights for PIXE analysis

  7. A two-step method for developing a control rod program for boiling water reactors

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

    This paper reports on a two-step method that is established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, the BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method showed an improvement in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift

  8. Toward a web-based real-time radiation treatment planning system in a cloud computing environment.

    Science.gov (United States)

    Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei

    2013-09-21

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam(TM) STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are

  9. Toward a web-based real-time radiation treatment planning system in a cloud computing environment

    International Nuclear Information System (INIS)

    Na, Yong Hum; Kapp, Daniel S; Xing, Lei; Suh, Tae-Suk

    2013-01-01

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam™ STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are

  10. Computation of reactor control rod drop time under accident conditions

    International Nuclear Information System (INIS)

    Dou Yikang; Yao Weida; Yang Renan; Jiang Nanyan

    1998-01-01

    The computational method of reactor control rod drop time under accident conditions lies mainly in establishing forced vibration equations for the components under action of outside forces on control rod driven line and motion equation for the control rod moving in vertical direction. The above two kinds of equations are connected by considering the impact effects between control rod and its outside components. Finite difference method is adopted to make discretization of the vibration equations and Wilson-θ method is applied to deal with the time history problem. The non-linearity caused by impact is iteratively treated with modified Newton method. Some experimental results are used to validate the validity and reliability of the computational method. Theoretical and experimental testing problems show that the computer program based on the computational method is applicable and reliable. The program can act as an effective tool of design by analysis and safety analysis for the relevant components

  11. Spin-wave utilization in a quantum computer

    Science.gov (United States)

    Khitun, A.; Ostroumov, R.; Wang, K. L.

    2001-12-01

    We propose a quantum computer scheme using spin waves for quantum-information exchange. We demonstrate that spin waves in the antiferromagnetic layer grown on silicon may be used to perform single-qubit unitary transformations together with two-qubit operations during the cycle of computation. The most attractive feature of the proposed scheme is the possibility of random access to any qubit and, consequently, the ability to realize two-qubit gates between any two distant qubits. Also, spin waves allow us to eliminate the use of a strong external magnetic field and microwave pulses. By estimate, the proposed scheme has a ratio as high as 10^4 between the quantum system coherence time and the time of a single computational step.

  12. Television viewing, computer use and total screen time in Canadian youth.

    Science.gov (United States)

    Mark, Amy E; Boyce, William F; Janssen, Ian

    2006-11-01

    Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. To determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.

  13. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  14. Automated Generation of User Guidance by Combining Computation and Deduction

    Directory of Open Access Journals (Sweden)

    Walther Neuper

    2012-02-01

    Full Text Available Herewith, a fairly old concept is published for the first time and named "Lucas Interpretation". This has been implemented in a prototype, which has been proved useful in educational practice and has gained academic relevance with an emerging generation of educational mathematics assistants (EMA) based on Computer Theorem Proving (CTP). Automated Theorem Proving (ATP), i.e. deduction, is the most reliable technology used to check user input. However ATP is inherently weak in automatically generating solutions for arbitrary problems in applied mathematics. This weakness is crucial for EMAs: when ATP checks user input as incorrect and the learner gets stuck then the system should be able to suggest possible next steps. The key idea of Lucas Interpretation is to compute the steps of a calculation following a program written in a novel CTP-based programming language, i.e. computation provides the next steps. User guidance is generated by combining deduction and computation: the latter is performed by a specific language interpreter, which works like a debugger and hands over control to the learner at breakpoints, i.e. tactics generating the steps of calculation. The interpreter also builds up logical contexts providing ATP with the data required for checking user input, thus combining computation and deduction. The paper describes the concepts underlying Lucas Interpretation so that open questions can adequately be addressed, and prerequisites for further work are provided.

  15. 12 CFR 516.10 - How does OTS compute time periods under this part?

    Science.gov (United States)

    2010-01-01

    Banks and Banking, Office of Thrift Supervision, Department of the Treasury, Application Processing Procedures, § 516.10: How does OTS compute time periods under this part? In computing...

  16. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
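
    A PANTEX-like index can be prototyped in a few lines before worrying about HPC: compute a grey-level co-occurrence matrix in a moving window for several directions and keep the minimum contrast, which is what makes the measure rotation-robust. The sketch assumes a recent scikit-image (graycomatrix/graycoprops names) and is not the operational PANTEX definition nor the blade/GPU implementations benchmarked in the paper.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def pantex_like_map(img, win=9, levels=32):
          # Moving-window, rotation-invariant GLCM contrast: the minimum over several
          # co-occurrence directions is kept at each pixel (illustrative only).
          q = (img.astype(float) / img.max() * (levels - 1)).astype(np.uint8)
          angles = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
          half = win // 2
          out = np.zeros(q.shape, dtype=float)
          for r in range(half, q.shape[0] - half):
              for c in range(half, q.shape[1] - half):
                  patch = q[r - half:r + half + 1, c - half:c + half + 1]
                  glcm = graycomatrix(patch, distances=[1], angles=angles,
                                      levels=levels, symmetric=True, normed=True)
                  out[r, c] = graycoprops(glcm, "contrast").min()
          return out

      # usage: a synthetic image in which a noisy "built-up" block stands out
      rng = np.random.default_rng(1)
      img = np.full((40, 40), 60, dtype=np.uint8)
      img[10:30, 10:30] = rng.integers(0, 255, size=(20, 20))
      tex = pantex_like_map(img)
      print(tex[20, 20] > tex[5, 5])   # True: the texture index is larger inside the block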

  17. Effect of the MCNP model definition on the computation time

    International Nuclear Information System (INIS)

    Šunka, Michal

    2017-01-01

    The presented work studies the influence of the method of defining the geometry in the MCNP transport code and its impact on the computational time, including the difficulty of preparing an input file describing the given geometry. Cases using different geometric definitions including the use of basic 2-dimensional and 3-dimensional objects and theirs combinations were studied. The results indicate that an inappropriate definition can increase the computational time by up to 59% (a more realistic case indicates 37%) for the same results and the same statistical uncertainty. (orig.)

  18. An Implementation of Parallel and Networked Computing Schemes for the Real-Time Image Reconstruction Based on Electrical Tomography

    International Nuclear Information System (INIS)

    Park, Sook Hee

    2001-02-01

    boost the reliability and high computation speed of basic primitive matrix operations. The DLL (Dynamic Link Library) is a good candidate for a Matlab programmer to conveniently call the new library, since the original Matlab code does not need to be changed. The DLL library receives a Matlab array represented as mxArray, and converts it into the appropriate C language structure after partitioning the array for the parallel operation. Then the DLL calls Matlab's efficient C language library, which is enabled by creating the definition files as well as including the Matlab library into the Visual C 6.0 project file. Finally, the partial results are merged at the shared memory, so the DLL integrates them to pass the final result to the caller residing in Matlab code. In this procedure, the speed-up comes from the elimination of the complex Matlab interpreting step, in addition to the parallel programming. According to the implementation described above, matrix multiplication, inverse, pseudo inverse, and Jacobian are implemented. The first two DLLs speed up the computation by the effect of pure parallel processing. Pseudo inverse can enhance the performance based on the previous parallel procedures if and only if the given matrix is a full-rank one, as data dependency hinders the parallel computing otherwise. The enhancement of the Jacobian code owes to the elimination of unnecessary code rather than parallel processing, as the operation contains so much overhead. Also implemented are the network version libraries. However, the speed is not as good as that of the original code because of the network speed limitation. With a better network interface, a speed-up can be expected. The performance of the implemented parallel libraries has been assessed by directly measuring the execution time compared with the original Matlab code. The calculation times of matrix multiplication, inverse, and pseudo inverse have been reduced to 59.4 %, 34.8 % and 52 %, respectively. The execution time of Jacobian is

  19. Biomechanical influences on balance recovery by stepping.

    Science.gov (United States)

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.

  20. Small Steps: Preliminary effectiveness and feasibility of an incremental goal-setting intervention to reduce sitting time in older adults.

    Science.gov (United States)

    Lewis, L K; Rowlands, A V; Gardiner, P A; Standage, M; English, C; Olds, T

    2016-03-01

    This study aimed to evaluate the preliminary effectiveness and feasibility of a theory-informed program to reduce sitting time in older adults. Pre-experimental (pre-post) study. Thirty non-working adult (≥ 60 years) participants attended a one hour face-to-face intervention session and were guided through: a review of their sitting time; normative feedback on sitting time; and setting goals to reduce total sitting time and bouts of prolonged sitting. Participants chose six goals and integrated one per week incrementally for six weeks. Participants received weekly phone calls. Sitting time and bouts of prolonged sitting (≥ 30 min) were measured objectively for seven days (activPAL3c inclinometer) pre- and post-intervention. During these periods, a 24-h time recall instrument was administered by computer-assisted telephone interview. Participants completed a post-intervention project evaluation questionnaire. Paired t tests with sequential Bonferroni corrections and Cohen's d effect sizes were calculated for all outcomes. Twenty-seven participants completed the assessments (71.7 ± 6.5 years). Post-intervention, objectively-measured total sitting time was significantly reduced by 51.5 min per day (p=0.006; d=-0.58) and number of bouts of prolonged sitting by 0.8 per day (p=0.002; d=-0.70). Objectively-measured standing increased by 39 min per day (p=0.006; d=0.58). Participants self-reported spending 96 min less per day sitting (p<0.001; d=-0.77) and 32 min less per day watching television (p=0.005; d=-0.59). Participants were highly satisfied with the program. The 'Small Steps' program is a feasible and promising avenue for behavioral modification to reduce sitting time in older adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Scalable space-time adaptive simulation tools for computational electrocardiology

    OpenAIRE

    Krause, Dorian; Krause, Rolf

    2013-01-01

    This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming morta...

  2. The association between choice stepping reaction time and falls in older adults--a path analysis model

    NARCIS (Netherlands)

    Pijnappels, M.A.G.M.; Delbaere, K.; Sturnieks, D.L.; Lord, S.R.

    2010-01-01

    Background: choice stepping reaction time (CSRT) is a functional measure that has been shown to significantly discriminate older fallers from non-fallers. Objective: to investigate how physiological and cognitive factors mediate the association between CSRT performance and multiple falls by use of

  3. NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation

    Energy Technology Data Exchange (ETDEWEB)

    Nikkel, D J

    2009-07-21

    This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3

  4. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collaborations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...

  5. Real-time digital simulation of power electronics systems with Neutral Point Piloted multilevel inverter using FPGA

    Energy Technology Data Exchange (ETDEWEB)

    Rakotozafy, Mamianja [Groupe de Recherches en Electrotechnique et Electronique de Nancy (GREEN), Faculte des Sciences et Techniques, BP 70239, 54506 Vandoeuvre Cedex (France); CONVERTEAM SAS, Parc d' activites Techn' hom, 24 avenue du Marechal Juin, BP 40437, 90008 Belfort Cedex (France); Poure, Philippe [Laboratoire d' Instrumentation Electronique de Nancy (LIEN), Faculte des Sciences et Techniques, BP 70239, 54506 Vandoeuvre Cedex (France); Saadate, Shahrokh [Groupe de Recherches en Electrotechnique et Electronique de Nancy (GREEN), Faculte des Sciences et Techniques, BP 70239, 54506 Vandoeuvre Cedex (France); Bordas, Cedric; Leclere, Loic [CONVERTEAM SAS, Parc d' activites Techn' hom, 24 avenue du Marechal Juin, BP 40437, 90008 Belfort Cedex (France)

    2011-02-15

    Most current real-time simulation platforms are limited to a minimum calculation time step of about ten microseconds, mainly due to computational limits such as processing speed, architecture adequacy and modeling complexity. Simulating the instantaneous models of fast-switching converters therefore requires a smaller computing time step. The approach presented in this paper addresses the limited modeling accuracy and computational bandwidth of currently available digital simulators. As an example, the authors present a low-cost, flexible and high-performance FPGA-based real-time digital simulator for a complete power system with a Neutral Point Piloted (NPP) three-level inverter. The proposed real-time simulator can model the complete power system accurately and efficiently, reducing cost and physical space and avoiding any damage to the actual equipment in case of a malfunction of the digital controller prototype. The converter model is computed at a small fixed time step as low as 100 ns. Such a computation time step allows high-precision accounting of the gating signals and thus avoids averaging methods and event compensation. Moreover, a novel high-performance model of the NPP three-level inverter is also proposed for FPGA implementation. The proposed FPGA-based simulator models the environment of the NPP converter: the dc link, the RLE load, and the digital controller and gating signals. FPGA-based real-time simulation results are presented and compared with offline results obtained using PLECS software. They validate the efficiency and accuracy of the modeling for the proposed high-performance FPGA-based real-time simulation approach. This paper also introduces new potential FPGA-based applications, such as low-cost real-time simulators for power systems, by developing a library of flexible and portable models for power converters, electrical machines and drives. (author)
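
    To make the role of the fixed 100 ns step concrete, here is a minimal sketch (not the paper's FPGA model) that advances a series R-L load fed by an idealized two-level inverter leg with a forward-Euler update at that step size; every gating edge is then resolved to within one step, which is what removes the need for averaging or event compensation. All numerical values are illustrative placeholders.

    # Minimal sketch (not the paper's FPGA model): forward-Euler update of a
    # series R-L load driven by an idealized two-level inverter leg, advanced
    # at a fixed 100 ns time step. All values are illustrative placeholders.
    DT  = 100e-9     # fixed simulation time step [s]
    R   = 0.5        # load resistance [ohm]
    L   = 2e-3       # load inductance [H]
    VDC = 600.0      # dc-link voltage [V]

    def step_current(i_prev, gate_high):
        """One fixed-step update of the load current for the given gating state."""
        v_out = VDC if gate_high else -VDC      # idealized leg output voltage
        di_dt = (v_out - R * i_prev) / L
        return i_prev + DT * di_dt

    # One 10 kHz PWM period (100 us) at 50 % duty, resolved with 1000 steps of 100 ns
    i = 0.0
    for n in range(1000):
        gate = n < 500
        i = step_current(i, gate)
    print(f"load current after one PWM period: {i:.2f} A")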

  6. Real-time brain computer interface using imaginary movements

    DEFF Research Database (Denmark)

    El-Madani, Ahmad; Sørensen, Helge Bjarup Dissing; Kjær, Troels W.

    2015-01-01

    Background: Brain Computer Interface (BCI) is the method of transforming mental thoughts and imagination into actions. A real-time BCI system can improve the quality of life of patients with severe neuromuscular disorders by enabling them to communicate with the outside world. In this paper...

  7. Influence of step complexity and presentation style on step performance of computerized emergency operating procedures

    Energy Technology Data Exchange (ETDEWEB)

    Xu Song [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China); Li Zhizhong [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)], E-mail: zzli@tsinghua.edu.cn; Song Fei; Luo Wei; Zhao Qianyi; Salvendy, Gavriel [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)

    2009-02-15

    With the development of information technology, computerized emergency operating procedures (EOPs) are taking the place of paper-based ones. However, ergonomics issues of computerized EOPs have not been studied adequately, since industrial practice is still quite limited. This study examined the influence of step complexity and presentation style of EOPs on step performance. A simulated computerized EOP system was developed in two presentation styles: Style A, a combination of one- and two-dimensional flowcharts; Style B, a combination of a two-dimensional flowchart and a success logic tree. Step complexity was quantified by a complexity measure model based on an entropy concept. Forty subjects participated in an experiment of EOP execution using the simulated system. The analysis of the experimental data indicates that step complexity and presentation style significantly influence step performance (both step error rate and operation time). Regression models were also developed. The regression results imply that the operation time of a step can be well predicted by step complexity, while step error rate can only be partly predicted by it. A questionnaire investigation further implies that step error rate was influenced not only by the operation task itself but also by other human factors. These findings may be useful for the design and assessment of computerized EOPs.
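
    The entropy-based complexity measure itself is not reproduced in the abstract; as a loose illustration of the general idea only (not the authors' model), the sketch below scores a procedure step by the Shannon entropy of the mix of hypothetical action types it contains, so that steps mixing many distinct element types score as more complex.

    import math
    from collections import Counter

    def shannon_entropy(symbols):
        """First-order Shannon entropy (bits) of a sequence of symbols."""
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # Toy "steps" described as sequences of hypothetical action types; a richer
    # mix of element types yields a higher entropy, i.e. a more complex step.
    simple_step  = ["check", "check", "check", "act"]
    complex_step = ["check", "compare", "branch", "act", "verify", "record"]

    print(shannon_entropy(simple_step), shannon_entropy(complex_step))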

  8. QUANTUM COMPUTING: Quantum Entangled Bits Step Closer to IT.

    Science.gov (United States)

    Zeilinger, A

    2000-07-21

    In contrast to today's computers, quantum computers and information technologies may in future be able to store and transmit information not only in the state "0" or "1," but also in superpositions of the two; information will then be stored and transmitted in entangled quantum states. Zeilinger discusses recent advances toward using this principle for quantum cryptography and highlights studies into the entanglement (or controlled superposition) of several photons, atoms, or ions.

  9. A survey of computational physics introductory computational science

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2008-01-01

    Computational physics is a rapidly growing subfield of computational science, in large part because computers can solve previously intractable problems or simulate natural processes that do not have analytic solutions. The next step beyond Landau's First Course in Scientific Computing and a follow-up to Landau and Páez's Computational Physics, this text presents a broad survey of key topics in computational physics for advanced undergraduates and beginning graduate students, including new discussions of visualization tools, wavelet analysis, molecular dynamics, and computational fluid dynamics

  10. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
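
    For readers unfamiliar with Batcher's approach, the sketch below generates the compare-exchange pairs of Batcher's odd-even mergesort for a power-of-two input length; the comparisons within each stage are mutually independent, which is what makes the network attractive for vector machines such as STAR even though it performs on the order of N(log N)-squared comparisons overall. This is a generic textbook version, not the STAR implementation.

    def oddeven_merge(lo, hi, r):
        """Yield compare-exchange pairs that merge two sorted halves of x[lo..hi]."""
        step = r * 2
        if step < hi - lo:
            yield from oddeven_merge(lo, hi, step)
            yield from oddeven_merge(lo + r, hi, step)
            yield from ((i, i + r) for i in range(lo + r, hi - r, step))
        else:
            yield (lo, lo + r)

    def oddeven_merge_sort(lo, hi):
        """Yield the sorting network for x[lo..hi]; the length must be a power of two."""
        if (hi - lo) >= 1:
            mid = lo + (hi - lo) // 2
            yield from oddeven_merge_sort(lo, mid)
            yield from oddeven_merge_sort(mid + 1, hi)
            yield from oddeven_merge(lo, hi, 1)

    data = [6, 3, 8, 1, 5, 2, 7, 4]
    for i, j in oddeven_merge_sort(0, len(data) - 1):   # pairs in a stage are independent
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    print(data)   # [1, 2, 3, 4, 5, 6, 7, 8]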

  11. Variation in computer time with geometry prescription in monte carlo code KENO-IV

    International Nuclear Information System (INIS)

    Gopalakrishnan, C.R.

    1988-01-01

    In most studies, the Monte Carlo criticality code KENO-IV has been compared with other Monte Carlo codes, but evaluation of its performance with different box descriptions has not been done so far. In Monte Carlo computations, any fractional savings of computing time is highly desirable. Variation in computation time with box description in KENO for two different fast reactor fuel subassemblies of FBTR and PFBR is studied. The k-eff of an infinite array of fuel subassemblies is calculated by modelling the subassemblies in two different ways: (i) multi-region, (ii) multi-box. In addition to these two cases, excess reactivity calculations of FBTR are also performed in two ways to study this effect in a complex geometry. It is observed that the k-eff values calculated by the multi-region and multi-box models agree very well. However, the increase in computation time from the multi-box to the multi-region model is considerable, while the difference in computer storage requirements for the two models is negligible. This variation in computing time arises from the way the neutron is tracked in the two cases. (author)

  12. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  13. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given that accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The NTP-A* algorithm employs A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can improve STP construction performance in large-scale road networks by a factor of 100 compared with existing algorithms.
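
    NTP-A* itself is not spelled out in the abstract; the sketch below shows only the plain A* building block on a small road-network representation, which the proposed algorithm augments with branch-and-bound pruning of inaccessible links. The node identifiers, coordinate dictionary and assumed maximum speed V_MAX are all hypothetical.

    import heapq, math

    def a_star(graph, coords, source, target):
        """Plain A* shortest-path search on a road network.
        graph:  {node: [(neighbour, travel_time), ...]}
        coords: {node: (x, y)} used for an admissible straight-line heuristic
                (divided by an assumed maximum speed so it underestimates travel time).
        Returns the least travel time from source to target, or None.
        """
        V_MAX = 30.0  # assumed maximum speed, coordinate units per time unit

        def h(n):
            (x1, y1), (x2, y2) = coords[n], coords[target]
            return math.hypot(x2 - x1, y2 - y1) / V_MAX

        best = {source: 0.0}
        frontier = [(h(source), source)]
        while frontier:
            _, node = heapq.heappop(frontier)
            if node == target:
                return best[node]
            for nxt, w in graph.get(node, []):
                g = best[node] + w
                if g < best.get(nxt, float("inf")):
                    best[nxt] = g
                    heapq.heappush(frontier, (g + h(nxt), nxt))
        return None

    roads  = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0)], "c": []}
    coords = {"a": (0, 0), "b": (30, 0), "c": (60, 0)}
    print(a_star(roads, coords, "a", "c"))   # 3.0, via b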

  14. Computational Procedures for a Class of GI/D/k Systems in Discrete Time

    Directory of Open Access Journals (Sweden)

    Md. Mostafizur Rahman

    2009-01-01

    A class of discrete-time GI/D/k systems is considered for which the interarrival times have finite support and customers are served in first-in first-out (FIFO) order. The system is formulated as a single-server queue with new general independent interarrival times and constant service duration by assuming cyclic assignment of customers to the identical servers. The queue length is then set up as a quasi-birth-death (QBD) type Markov chain. It is shown that this transformed GI/D/1 system has special structures which make the computation of the matrix R simple and efficient, thereby reducing the number of multiplications in each iteration significantly. As a result, the computation time is kept very low. Moreover, use of the resulting structural properties makes the computation of the queue length distribution of the transformed system efficient. The computation of the waiting time distribution is also shown to be simple by exploiting the special structures.
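
    The matrix R mentioned here is the standard object of matrix-geometric analysis; as a generic reference point (not the paper's structure-exploiting scheme), the sketch below computes R for a discrete-time QBD by the textbook successive-substitution iteration on R = A0 + R A1 + R^2 A2, where A0, A1 and A2 are the one-level-up, same-level and one-level-down transition blocks.

    import numpy as np

    def qbd_rate_matrix(A0, A1, A2, tol=1e-12, max_iter=100000):
        """Minimal nonnegative solution R of R = A0 + R A1 + R^2 A2 for a
        discrete-time QBD, obtained by successive substitution starting from 0.
        Generic textbook iteration; the paper exploits extra structure to make
        each iteration cheaper.
        """
        R = np.zeros_like(A0)
        for _ in range(max_iter):
            R_next = A0 + R @ A1 + R @ R @ A2
            if np.max(np.abs(R_next - R)) < tol:
                return R_next
            R = R_next
        raise RuntimeError("R iteration did not converge")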

  15. Teaching Computational Thinking: Deciding to Take Small Steps in a Curriculum

    Science.gov (United States)

    Madoff, R. D.; Putkonen, J.

    2016-12-01

    While computational thinking and reasoning are not necessarily the same as computer programming, programs such as MATLAB can provide the medium through which the logical and computational thinking at the foundation of science can be taught, learned, and experienced. And while math and computer anxiety are often discussed as critical obstacles to students' progress in their geoscience curriculum, it is here suggested that an unfamiliarity with the computational and logical reasoning is what poses a first stumbling block, in addition to the hurdle of expending the effort to learn how to translate a computational problem into the appropriate computer syntax in order to achieve the intended results. Because computational thinking is so vital for all fields, there is a need to initiate many and to build support in the curriculum for it. This presentation focuses on elements to bring into the teaching of computational thinking that are intended as additions to learning MATLAB programming as a basic tool. Such elements include: highlighting a key concept, discussing a basic geoscience problem where the concept would show up, having the student draw or outline a sketch of what they think an operation needs to do in order to perform a desired result, and then finding the relevant syntax to work with. This iterative pedagogy simulates what someone with more experience in programming does, so it discloses the thinking process in the black box of a result. Intended as only a very early stage introduction, advanced applications would need to be developed as students go through an academic program. The objective would be to expose and introduce computational thinking to majors and non-majors and to alleviate some of the math and computer anxiety so that students would choose to advance on with programming or modeling, whether it is built into a 4-year curriculum or not.

  16. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo; Abdelaziz, Ibrahim; Aldilaijan, Abdulla; Canini, Marco; Kalnis, Panos

    2017-01-01

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.
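
    As a toy illustration of the aggregation idea only (not the DAIET data plane), the sketch below plays the role of a switch that sums per-worker update vectors so that a single aggregate, rather than every individual update, travels upstream; the sizes and data are invented.

    import numpy as np

    def worker_update(seed, dim=8):
        rng = np.random.default_rng(seed)
        return rng.normal(size=dim)                  # e.g. a gradient fragment

    def switch_aggregate(updates):
        """Element-wise sum performed 'in the network' instead of at the end host."""
        return np.sum(updates, axis=0)

    updates = [worker_update(s) for s in range(16)]
    aggregate = switch_aggregate(updates)

    bytes_without = sum(u.nbytes for u in updates)   # every update crosses the link
    bytes_with = aggregate.nbytes                    # only the aggregate does
    print(f"data reduction: {1 - bytes_with / bytes_without:.1%}")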

  17. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo

    2017-11-27

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.

  18. Universal quantum computation by discontinuous quantum walk

    International Nuclear Information System (INIS)

    Underwood, Michael S.; Feder, David L.

    2010-01-01

    Quantum walks are the quantum-mechanical analog of random walks, in which a quantum "walker" evolves between initial and final states by traversing the edges of a graph, either in discrete steps from node to node or via continuous evolution under the Hamiltonian furnished by the adjacency matrix of the graph. We present a hybrid scheme for universal quantum computation in which a quantum walker takes discrete steps of continuous evolution. This "discontinuous" quantum walk employs perfect quantum-state transfer between two nodes of specific subgraphs chosen to implement a universal gate set, thereby ensuring unitary evolution without requiring the introduction of an ancillary coin space. The run time is linear in the number of simulated qubits and gates. The scheme allows multiple runs of the algorithm to be executed almost simultaneously by starting walkers one time step apart.

  19. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require reducing computational costs even though microprocessors become more powerful each day. Real Time Operating Systems for embedded systems usually have advanced features to administer the resources of the applications they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...
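
    The abstract does not give the reduced-cost method itself; for context, the sketch below implements the classic worst-case response time recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j for fixed-priority preemptive tasks, solved by fixed-point iteration. This is the standard computation such methods aim to speed up; the task parameters are illustrative.

    import math

    def worst_case_response_time(tasks, i):
        """Classic response-time analysis for fixed-priority preemptive scheduling.
        tasks: list of (C, T) tuples sorted by decreasing priority (index 0 highest).
        Returns the WCRT of task i, or None if it exceeds its period (unschedulable).
        """
        C_i, T_i = tasks[i]
        R = C_i
        while True:
            interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
            R_next = C_i + interference
            if R_next == R:
                return R
            if R_next > T_i:          # deadline (= period) missed
                return None
            R = R_next

    # Example task set: (C, T) pairs, highest priority first
    tasks = [(1, 4), (2, 6), (3, 12)]
    print([worst_case_response_time(tasks, i) for i in range(len(tasks))])  # [1, 3, 10]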

  20. Space-Time Transformation in Flux-form Semi-Lagrangian Schemes

    Directory of Open Access Journals (Sweden)

    Peter C. Chu; Chenwu Fan

    2010-01-01

    With a finite volume approach, a flux-form semi-Lagrangian (TFSL) scheme with space-time transformation was developed to provide a stable and accurate algorithm for solving the advection-diffusion equation. Different from existing flux-form semi-Lagrangian schemes, the temporal integration of the flux from the present to the next time step is transformed into a spatial integration of the flux at the side of a grid cell (space) for the present time step using the characteristic-line concept. The TFSL scheme not only keeps the good features of semi-Lagrangian schemes (no Courant number limitation), but also has higher accuracy (second order in both time and space). The capability of the TFSL scheme is demonstrated by the simulation of equatorial Rossby-soliton propagation. Computational stability and high accuracy make this scheme useful in ocean modeling, computational fluid dynamics, and numerical weather prediction.

  1. A GPU-accelerated semi-implicit fractional-step method for numerical solutions of incompressible Navier-Stokes equations

    Science.gov (United States)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2018-01-01

    Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solving the incompressible Navier-Stokes equations, which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA), which are critical to the bandwidth-bound nature of the present method, are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grids. An enhanced speedup of 48 times is reached for the same problem using a Tesla P100 GPU.
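
    The tridiagonal systems named as the bottleneck are usually solved line-by-line with the Thomas algorithm; the serial reference version is sketched below (in NumPy, not CUDA) to show why it resists naive parallelization: each forward-sweep entry depends on the previous one, so GPU implementations instead batch many independent grid lines or switch to cyclic-reduction-style solvers.

    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system with the Thomas algorithm.
        a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused),
        d: right-hand side. O(n) work, but inherently sequential along the line.
        """
        n = len(d)
        cp = np.empty(n); dp = np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                       # forward sweep (sequential)
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Quick check on a small diagonally dominant system
    n = 5
    a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
    x_true = np.arange(1.0, n + 1)
    d = b * x_true
    d[1:] += a[1:] * x_true[:-1]
    d[:-1] += c[:-1] * x_true[1:]
    print(np.allclose(thomas_solve(a, b, c, d), x_true))   # True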

  2. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real -time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  3. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. Here we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time step selection algorithms and their influence on the accuracy and efficiency of the various solution options

  4. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Associations of office workers' objectively assessed occupational sitting, standing and stepping time with musculoskeletal symptoms.

    Science.gov (United States)

    Coenen, Pieter; Healy, Genevieve N; Winkler, Elisabeth A H; Dunstan, David W; Owen, Neville; Moodie, Marj; LaMontagne, Anthony D; Eakin, Elizabeth A; O'Sullivan, Peter B; Straker, Leon M

    2018-04-22

    We examined the association of musculoskeletal symptoms (MSS) with workplace sitting, standing and stepping time, as well as sitting and standing time accumulation (i.e. usual bout duration of these activities), measured objectively with the activPAL3 monitor. Using baseline data from the Stand Up Victoria trial (216 office workers, 14 workplaces), cross-sectional associations of occupational activities with self-reported MSS (low-back, upper and lower extremity symptoms in the last three months) were examined using probit regression, correcting for clustering and adjusting for confounders. Sitting bout duration was significantly (p < 0.05) associated, non-linearly, with MSS, such that those in the middle tertile displayed the highest prevalence of upper extremity symptoms. Other associations were non-significant but sometimes involved large differences in symptom prevalence (e.g. 38%) by activity. Though causation is unclear, these non-linear associations suggest that sitting and its alternatives (i.e. standing and stepping) interact with MSS and this should be considered when designing safe work systems. Practitioner summary: We studied associations of objectively assessed occupational activities with musculoskeletal symptoms in office workers. Workers who accumulated longer sitting bouts reported fewer upper extremity symptoms. Total activity duration was not significantly associated with musculoskeletal symptoms. We underline the importance of considering total volumes and patterns of activity time in musculoskeletal research.

  6. A strategy for reducing turnaround time in design optimization using a distributed computer system

    Science.gov (United States)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  7. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  8. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.
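
    As a point of reference only, the following naive sketch computes the detour restricted to vertex pairs with Floyd-Warshall in O(n^3) time; the paper's contribution is precisely that the true maximum detour, taken over all points of the graph, can be found in subquadratic time.

    import math
    import numpy as np

    def max_vertex_detour(points, edges):
        """Naive baseline: maximum detour restricted to vertex pairs.
        points: list of (x, y); edges: list of (i, j) straight-line segments.
        Assumes a connected graph. O(n^3) work; for illustration only.
        """
        n = len(points)
        d = np.full((n, n), np.inf)
        np.fill_diagonal(d, 0.0)
        for i, j in edges:
            w = math.dist(points[i], points[j])
            d[i, j] = d[j, i] = min(d[i, j], w)
        for k in range(n):                          # Floyd-Warshall
            d = np.minimum(d, d[:, [k]] + d[[k], :])
        best = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                eu = math.dist(points[i], points[j])
                if eu > 0:
                    best = max(best, d[i, j] / eu)
        return best

    pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(max_vertex_detour(pts, [(0, 1), (1, 2), (2, 3)]))  # 3.0: path 0-1-2-3 vs segment 0-3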

  9. Current algorithms used in reactor safety codes and the impact of future computer development on these algorithms

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Woodruff, S.B.

    1985-01-01

    Computational methods and solution procedures used in the US Nuclear Regulatory Commission's reactor safety systems codes, Transient Reactor Analysis Code (TRAC) and Reactor Leak and Power Safety Excursion Code (RELAP), are reviewed. Methods used in TRAC-PF1/MOD1, including the stability-enhancing two-step (SETS) technique, which permits fast computations by allowing time steps larger than the material Courant stability limit, are described in detail, and the differences from RELAP5/MOD2 are noted. Developments in computing, including parallel and vector processing, and their applicability to nuclear reactor safety codes are described. These developments, coupled with appropriate numerical methods, make detailed faster-than-real-time reactor safety analysis a realistic near-term possibility

  10. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    International Nuclear Information System (INIS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G

    2005-01-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs

  11. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment.

    Science.gov (United States)

    Bunce, David; Haynes, Becky I; Lord, Stephen R; Gschwind, Yves J; Kochan, Nicole A; Reppermund, Simone; Brodaty, Henry; Sachdev, Perminder S; Delbaere, Kim

    2017-06-01

    Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI) are at high risk of falling, the association between IIV and prospective falls is unknown. We conducted a longitudinal cohort study in cognitively intact (n = 271) and MCI (n = 154) community-dwelling adults aged 70-90 years. IIV was assessed through a variety of measures including simple and choice hand reaction time and choice stepping reaction time tasks (CSRT), the latter administered as a single task and also with a secondary working memory task. Logistic regression did not show an association between IIV on the hand-held tasks and falls. Greater IIV in both CSRT tasks, however, did significantly increase the risk of future falls. This effect was specific to the MCI group, with a stronger effect in persons exhibiting gait, posture, or physiological impairment. The findings suggest that increased stepping IIV may indicate compromised neural circuitry involved in executive function, gait, and posture in persons with MCI increasing their risk of falling. IIV measures have potential to assess neurobiological disturbance underlying physical and cognitive dysfunction in old age, and aid fall risk assessment and routine care in community and health care settings. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Speed scaling for weighted flow time

    NARCIS (Netherlands)

    Bansal, N.; Pruhs, K.R.; Stein, C.

    2007-01-01

    In addition to the traditional goal of efficiently managing time and space, many computers now need to efficiently manage power usage. For example, Intel's SpeedStep and AMD's PowerNOW technologies allow the Windows XP operating system to dynamically change the speed of the processor to prolong

  13. A computational method for sharp interface advection

    Science.gov (United States)

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619
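
    As a minimal illustration of the flux-based VOF idea only (far simpler than isoAdvector's geometric reconstruction), the 1-D sketch below advances a volume-fraction field by summing the donor-cell fluid volume fluxed across each face during a time step; the field, velocity and step sizes are arbitrary.

    import numpy as np

    def vof_step(alpha, u, dx, dt):
        """Donor-cell (upwind) VOF update in 1-D with closed end faces."""
        n = len(alpha)
        flux = np.zeros(n + 1)                  # fluid volume per unit area per face
        for f in range(1, n):                   # interior faces only
            donor = f - 1 if u > 0 else f
            flux[f] = u * dt * alpha[donor]
        return alpha - (flux[1:] - flux[:-1]) / dx

    alpha = np.zeros(50); alpha[10:20] = 1.0    # a slug of "fluid 1"
    for _ in range(100):
        alpha = vof_step(alpha, u=0.5, dx=1.0, dt=0.4)   # Courant number 0.2
    print(round(alpha.sum(), 6))                # total fluid volume is conserved (10.0)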

  14. A computational method for sharp interface advection.

    Science.gov (United States)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.

  15. Memory controllers for real-time embedded systems predictable and composable real-time systems

    CERN Document Server

    Akesson, Benny

    2012-01-01

      Verification of real-time requirements in systems-on-chip becomes more complex as more applications are integrated. Predictable and composable systems can manage the increasing complexity using formal verification and simulation.  This book explains the concepts of predictability and composability and shows how to apply them to the design and analysis of a memory controller, which is a key component in any real-time system. This book is generally intended for readers interested in Systems-on-Chips with real-time applications.   It is especially well-suited for readers looking to use SDRAM memories in systems with hard or firm real-time requirements. There is a strong focus on real-time concepts, such as predictability and composability, as well as a brief discussion about memory controller architectures for high-performance computing. Readers will learn step-by-step how to go from an unpredictable SDRAM memory, offering highly variable bandwidth and latency, to a predictable and composable shared memory...

  16. Optimizing the number of steps in learning tasks for complex skills.

    Science.gov (United States)

    Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G

    2005-06-01

    Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.

  17. Applications of parallel computer architectures to the real-time simulation of nuclear power systems

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1988-01-01

    In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems and current research efforts toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers improved control and protection capabilities. Current research efforts are currently under way in this area

  18. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    Science.gov (United States)

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  19. Calculating Characteristics of the Screws with Constant And Variable Step

    Directory of Open Access Journals (Sweden)

    B. N. Zotov

    2015-01-01

    This work is devoted to creating a technique for calculating the power characteristics of screws with constant and variable step for centrifugal pumps. The distinctive feature of the technique is that the reverse currents, which are observed in screws working at low flow, are taken into account numerically. The paper presents a diagram of the stream in the screw with zero flow to the network (Q=0), and the static pressure of the screw in this mode is computed from the reverse current parameters. The maximum flow of the screw is determined from known formulas. When calculating the power characteristics and computing the overall efficiency of the screw, a volumetric efficiency of the screw is introduced for the first time. It is defined as the ratio between the flow into the network and the sum of the reverse current flows and the flow into the network. This approach allowed us to determine the efficiency of the screw over the entire range of flows. A comparison of the experimental characteristics of the constant-step screw with those calculated by the proposed technique shows good agreement. The technique is also used to calculate the characteristics of variable-step screws. The variable-step screw is considered as a screw consisting of two screws with a smooth transition of the blades from the inlet to the outlet. Screws in which the step at the inlet is smaller than that at the outlet, as well as screws in which the step at the inlet is larger than that at the outlet, were investigated. It is shown that the pressure of the screw at zero flow and the magnitude of the reverse currents depend only on the parameters of the input section of the screw, and that the maximum flow, if the step at the inlet is larger than the step at the outlet, is determined by the parameters of the output part of the screw. Otherwise, the maximum flow is determined somewhat differently. The paper compares experimental characteristics with characteristics calculated by the technique for variable step

  20. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic, and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. In this paper, we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time-step selection algorithms and their influence on the accuracy and efficiency of the various solution options

  1. Computation of transit times using the milestoning method with applications to polymer translocation

    Science.gov (United States)

    Hawk, Alexander T.; Konda, Sai Sriharsha M.; Makarov, Dmitrii E.

    2013-08-01

    Milestoning is an efficient approximation for computing long-time kinetics and thermodynamics of large molecular systems, which are inaccessible to brute-force molecular dynamics simulations. A common use of milestoning is to compute the mean first passage time (MFPT) for a conformational transition of interest. However, the MFPT is not always the experimentally observed timescale. In particular, the duration of the transition path, or the mean transit time, can be measured in single-molecule experiments, such as studies of polymers translocating through pores and fluorescence resonance energy transfer studies of protein folding. Here we show how to use milestoning to compute transit times and illustrate our approach by applying it to the translocation of a polymer through a narrow pore.

  2. A heterogeneous hierarchical architecture for real-time computing

    Energy Technology Data Exchange (ETDEWEB)

    Skroch, D.A.; Fornaro, R.J.

    1988-12-01

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.

  3. Modern EMC analysis I time-domain computational schemes

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of contemporary real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, the analysis covers the theory of the finite-difference time-domain, the transmission-line matrix/modeling, and the finite i

  4. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    Science.gov (United States)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
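
    As a rough sketch of the idea only (not the published MIDAS implementation, which also forms backward pairs and produces an uncertainty estimate), the code below computes a Theil-Sen-style median of one-year-apart slopes, trims slopes more than two scaled MADs from that median, and re-takes the median; the series is assumed time-sorted with times in years, and the synthetic data are invented.

    import numpy as np

    def midas_like_trend(t, x):
        """Median of ~1-year-apart slopes with a single outlier-trimming pass.
        t: times in years (sorted ascending), x: coordinates (e.g. mm).
        Returns a trend in x-units per year. Sketch only, not the MIDAS code.
        """
        t = np.asarray(t, float)
        x = np.asarray(x, float)
        slopes = []
        for i in range(len(t)):
            k = np.searchsorted(t, t[i] + 1.0)       # partner ~one year later
            if k < len(t) and (t[k] - t[i]) < 1.5:   # tolerate data gaps
                slopes.append((x[k] - x[i]) / (t[k] - t[i]))
        slopes = np.array(slopes)
        v = np.median(slopes)
        mad = 1.4826 * np.median(np.abs(slopes - v)) # robust scale estimate
        if mad == 0.0:
            return v
        return np.median(slopes[np.abs(slopes - v) < 2.0 * mad])

    # Synthetic daily series: 3 mm/yr trend, seasonal cycle, noise and a 10 mm step
    t = np.arange(0.0, 6.0, 1 / 365.25)
    rng = np.random.default_rng(0)
    x = 3.0 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 1.5, t.size)
    x[t > 3.2] += 10.0                               # undetected step discontinuity
    print(round(midas_like_trend(t, x), 2))          # roughly 3, despite the step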

  5. Efficient quantum algorithm for computing n-time correlation functions.

    Science.gov (United States)

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms within the framework of linear response theory.

  6. Impact of first-step potential and time on the vertical growth of ZnO nanorods on ITO substrate by two-step electrochemical deposition

    International Nuclear Information System (INIS)

    Kim, Tae Gyoum; Jang, Jin-Tak; Ryu, Hyukhyun; Lee, Won-Jae

    2013-01-01

    Highlights: •We grew vertical ZnO nanorods on ITO substrate using a two-step continuous potential process. •The nucleation for the ZnO nanorods growth was changed by first-step potential and duration. •The vertical ZnO nanorods were well grown when first-step potential was −1.2 V and 10 s. -- Abstract: In this study, we analyzed the growth of ZnO nanorods on an ITO (indium tin oxide) substrate by electrochemical deposition using a two-step, continuous potential process. We examined the effect of changing the first-step potential as well as the first-step duration on the morphological, structural and optical properties of ZnO nanorods, measured using field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and photoluminescence (PL), respectively. As a result, vertical ZnO nanorods were grown on the ITO substrate without the need for a template when the first-step potential was set to −1.2 V for a duration of 10 s, and the second-step potential was set to −0.7 V for a duration of 1190 s. The ZnO nanorods on this sample showed the highest XRD (0 0 2)/(1 0 0) peak intensity ratio and the highest PL near band edge emission to deep level emission peak intensity ratio (NBE/DLE). In this study, the nucleation for vertical ZnO nanorod growth on an ITO substrate was found to be affected by changes in the first-step potential and first-step duration.

  7. Future trends in power plant process computer techniques

    International Nuclear Information System (INIS)

    Dettloff, K.

    1975-01-01

    The development of new process computer concepts has advanced in great steps, in three areas: hardware, software, and the application concept. In hardware, new computers with new peripherals, e.g. colour layer equipment, have been developed. In software, a decisive step has been made in the area of 'automation software'. Through these components, a step forward has also been made in incorporating the process computer into the structure of the overall power plant control technology. (orig./LH) [de

  8. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that

  9. One-step versus two-step mechanism of Diels-Alder reaction of 1-chloro-1-nitroethene with cyclopentadiene and furan.

    Science.gov (United States)

    Jasiński, Radomir

    2017-08-01

    A DFT computational study shows that the Diels-Alder (DA) reactions of 1-chloro-1-nitroethene with cyclopentadiene and furan have a polar nature. However, their mechanisms are substantially different. In particular, 1-chloro-1-nitroethene reacts with cyclopentadiene according to a one-step mechanism. At the same time, the more favourable channel associated with the P-DA reaction between furan and 1-chloro-1-nitroethene is a domino process that comprises an initial hetero-Diels-Alder reaction yielding a [2+4] cycloadduct, which undergoes a subsequent [3,3] sigmatropic shift to yield the expected formal [4+2] cycloadduct. This is a consequence of the more polar nature of the reaction, due to the higher nucleophilicity of furan in comparison to cyclopentadiene. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. A Non-Linear Digital Computer Model Requiring Short Computation Time for Studies Concerning the Hydrodynamics of the BWR

    Energy Technology Data Exchange (ETDEWEB)

    Reisch, F; Vayssier, G

    1969-05-15

    This non-linear model serves as one of the blocks in a series of codes to study the transient behaviour of BWR or PWR type reactors. This program is intended to be the hydrodynamic part of the BWR core representation or the hydrodynamic part of the PWR heat exchanger secondary side representation. The equations have been prepared for the CSMP digital simulation language. By using the most suitable integration routine available, the ratio of simulation time to real time is about one on an IBM 360/75 digital computer. Use of the slightly different language DSL/40 on an IBM 7044 computer takes about four times longer. The code has been tested against the Eindhoven loop with satisfactory agreement.

  11. Numerical Simulation of Air Entrainment for Flat-Sloped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Bentalha Chakib

    2015-03-01

    A stepped spillway is a good hydraulic structure for energy dissipation because of its large surface roughness. The performance of the stepped spillway is enhanced by the presence of air, which can prevent or reduce cavitation damage. Chanson developed a method to determine the position of the start of air entrainment, called the inception point. Within this work the inception point is determined using Fluent computational fluid dynamics (CFD), where the volume of fluid (VOF) model is used as a tool to simulate air-water interaction on the free surface; the turbulence closure is provided by the standard k-ε turbulence model, and at the same time the one-sixth power-law distribution of the velocity profile is verified. Also, the pressure contours and velocity vectors at the bed surface are determined. The numerical results agree well with experimental results.

  12. Effect of increased exposure times on amount of residual monomer released from single-step self-etch adhesives.

    Science.gov (United States)

    Altunsoy, Mustafa; Botsali, Murat Selim; Tosun, Gonca; Yasar, Ahmet

    2015-10-16

    The aim of this study was to evaluate the effect of increased exposure times on the amount of residual Bis-GMA, TEGDMA, HEMA and UDMA released from single-step self-etch adhesive systems. Two adhesive systems were used. The adhesives were applied to a bovine dentin surface according to the manufacturer's instructions and were polymerized using an LED curing unit for 10, 20 and 40 seconds (n = 5). After polymerization, the specimens were stored in a 75% ethanol-water solution (6 mL). Residual monomers (Bis-GMA, TEGDMA, UDMA and HEMA) that were eluted from the adhesives (after 10 minutes, 1 hour, 1 day, 7 days and 30 days) were analyzed by high-performance liquid chromatography (HPLC). The data were analyzed using 1-way analysis of variance and Tukey HSD tests. Among the time periods, the highest amount of released residual monomers from adhesives was observed at the 10th minute. There were statistically significant differences regarding released Bis-GMA, UDMA, HEMA and TEGDMA between the adhesive systems (p<0.05). There were no significant differences among the 10, 20 and 40 second polymerization times with respect to their effect on residual monomer release from adhesives (p>0.05). Increasing the polymerization time did not have an effect on residual monomer release from single-step self-etch adhesives.

  13. Stepping stability: effects of sensory perturbation

    Directory of Open Access Journals (Sweden)

    Krebs David E

    2005-05-01

    Abstract Background Few tools exist for quantifying locomotor stability in balance-impaired populations. The objective of this study was to develop and evaluate a technique for quantifying stability of stepping in healthy people and people with peripheral (vestibular hypofunction, VH) and central (cerebellar pathology, CB) balance dysfunction by means of a sensory (auditory) perturbation test. Methods Balance-impaired and healthy subjects performed a repeated bench stepping task. The perturbation was applied by suddenly changing the cadence of the metronome (100 beat/min to 80 beat/min) at a predetermined time (but unpredictable by the subject) during the trial. Perturbation response was quantified by computing the Euclidean distance, expressed as a fractional error, between the anterior-posterior center of gravity attractor trajectory before and after the perturbation was applied. The error immediately after the perturbation (Emax), the error after recovery (Emin) and the recovery response (Edif) were documented for each participant, and groups were compared with ANOVA. Results Both balance-impaired groups exhibited significantly higher Emax (p = .019) and Emin (p = .028) fractional errors compared to the healthy (HE) subjects, but there were no significant differences between the CB and VH groups. Although response recovery was slower for the CB and VH groups compared to the HE group, the difference was not significant (p = .051). Conclusion The findings suggest that individuals with balance impairment have a reduced ability to stabilize locomotor patterns following perturbation, revealing the fragility of their impairment adaptations and compensations. These data suggest that auditory perturbations applied during a challenging stepping task may be useful for measuring rehabilitation outcomes.

  14. Diffraction model of a step-out transition

    Energy Technology Data Exchange (ETDEWEB)

    Chao, A.W.; Zimmermann, F.

    1996-06-01

    The diffraction model of a cavity, suggested by Lawson, Bane and Sands, is generalized to a step-out transition. Using this model, the high frequency impedance is calculated explicitly for the case in which the transition step is small compared with the beam pipe radius. In the diffraction model for a small step-out transition, the total energy is conserved, but, unlike the cavity case, the diffracted waves in the geometric shadow and the pipe region, in general, do not always carry equal energy. In the limit of small step sizes, the impedance derived from the diffraction model agrees with that found by Balakin, Novokhatsky and also Kheifets. This impedance can be used to compute the wake field of a round collimator whose half aperture is much larger than the bunch length, such as exists in the SLC final focus.

  15. Climate Data Provenance Tracking for Just-In-Time Computation

    Science.gov (United States)

    Fries, S.; Nadeau, D.; Doutriaux, C.; Williams, D. N.

    2016-12-01

    The "Climate Data Management System" (CDMS) was created in 1996 as part of the Climate Data Analysis Tools suite of software. It provides a simple interface into a wide variety of climate data formats, and creates NetCDF CF-Compliant files. It leverages the NumPy framework for high performance computation, and is an all-in-one IO and computation package. CDMS has been extended to track manipulations of data, and trace that data all the way to the original raw data. This extension tracks provenance about data, and enables just-in-time (JIT) computation. The provenance for each variable is packaged as part of the variable's metadata, and can be used to validate data processing and computations (by repeating the analysis on the original data). It also allows for an alternate solution for sharing analyzed data; if the bandwidth for a transfer is prohibitively expensive, the provenance serialization can be passed in a much more compact format and the analysis rerun on the input data. Data provenance tracking in CDMS enables far-reaching and impactful functionalities, permitting implementation of many analytical paradigms.

  16. Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization

    Science.gov (United States)

    Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton

    As the computational requirement of applications in computational science continues to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN due to communication overhead that can drastically slow down the computation. In this paper, we introduce an event-based simulator to investigate the performance of parallel algorithms executed over the WAN. The event-based simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Visualization of real-time applications requires a steady stream of processed data for visualization purposes. Hence, SIMPAR may prove to be a valuable tool to investigate the types of applications and computing resource requirements needed to provide an uninterrupted flow of processed data for real-time visualization purposes. The results obtained from the simulation show concurrence with the expected performance using the L-BSP model.

  17. Preimages for Step-Reduced SHA-2

    DEFF Research Database (Denmark)

    Aoki, Kazumaro; Guo, Jian; Matusiewicz, Krystian

    2009-01-01

    In this paper, we present preimage attacks on up to 43-step SHA-256 (around 67% of the total 64 steps) and 46-step SHA-512 (around 57.5% of the total 80 steps), which significantly increases the number of attacked steps compared to the best previously published preimage attack working for 24 steps. The time complexities are 2^251.9, 2^509 for finding pseudo-preimages and 2^254.9, 2^511.5 compression function operations for full preimages. The memory requirements are modest, around 2^6 words for 43-step SHA-256 and 46-step SHA-512. The pseudo-preimage attack also applies to 43-step SHA-224 and SHA-384...

  18. Modeling subsurface reactive flows using leadership-class computing

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Richard Tran [Computational Earth Sciences Group, Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6015 (United States); Hammond, Glenn E [Hydrology Group, Environmental Technology Division, Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Lichtner, Peter C [Hydrology, Geochemistry, and Geology Group, Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Sripathi, Vamsi [Department of Computer Science, North Carolina State University, Raleigh, NC 27695-8206 (United States); Mahinthakumar, G [Department of Civil, Construction, and Environmental Engineering, North Carolina State University, Raleigh, NC 27695-7908 (United States); Smith, Barry F, E-mail: rmills@ornl.go, E-mail: glenn.hammond@pnl.go, E-mail: lichtner@lanl.go, E-mail: vamsi_s@ncsu.ed, E-mail: gmkumar@ncsu.ed, E-mail: bsmith@mcs.anl.go [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439-4844 (United States)

    2009-07-01

    We describe our experiences running PFLOTRAN, a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media, on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  19. Modeling subsurface reactive flows using leadership-class computing

    International Nuclear Information System (INIS)

    Mills, Richard Tran; Hammond, Glenn E; Lichtner, Peter C; Sripathi, Vamsi; Mahinthakumar, G; Smith, Barry F

    2009-01-01

    We describe our experiences running PFLOTRAN, a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media, on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  20. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra

    Energy Technology Data Exchange (ETDEWEB)

    Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu [Department of Chemistry, University of Washington, Seattle, Washington 98195 (United States)

    2016-06-21

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital-based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.

  1. First Steps Toward a Computational Theory of Autism

    OpenAIRE

    Balkenius, Christian; Bjorne, Petra

    2004-01-01

    A computational model with three interacting components for context sensitive reinforcement learning, context processing and automation can autonomously learn a focus attention and a shift attention task. The performance of the model is similar to that of normal children, and when a single parameter is changed, the performance on the two tasks approaches that of autistic children.

  2. Finite difference time domain analysis of a chiro plasma

    International Nuclear Information System (INIS)

    Torres-Silva, H.; Obligado, A.; Reggiani, N.; Sakanaka, P.H.

    1995-01-01

    The finite difference time-domain (FDTD) method is one of the most widely used computational methods in electromagnetics. Using FDTD, Maxwell's equations are solved directly in the time domain via finite differences and time stepping. The basic approach is relatively easy to understand and is an alternative to the more usual frequency-domain approaches. (author). 5 refs
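
    For orientation only, the Python sketch below shows a minimal 1D FDTD (Yee-style leapfrog) update in free space; the chiro-plasma constitutive relations of the paper are not included, and the grid size, source and Courant number are arbitrary assumptions.

        import numpy as np

        # 1D free-space FDTD: E and H staggered in space and time (Yee scheme)
        nx, nt = 400, 800
        S = 0.5                        # Courant number c*dt/dx <= 1 for stability
        Ez = np.zeros(nx)
        Hy = np.zeros(nx - 1)

        for n in range(nt):
            # update H from the spatial difference of E (normalized units)
            Hy += S * (Ez[1:] - Ez[:-1])
            # update E from the spatial difference of H
            Ez[1:-1] += S * (Hy[1:] - Hy[:-1])
            # soft Gaussian source injected at the middle of the grid
            Ez[nx // 2] += np.exp(-((n - 40) / 12.0) ** 2)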

  3. Microsoft Office SharePoint Designer 2007 Step by Step

    CERN Document Server

    Coventry, Penelope

    2008-01-01

    The smart way to learn Office SharePoint Designer 2007-one step at a time! Work at your own pace through the easy numbered steps, practice files on CD, helpful hints, and troubleshooting tips to master the fundamentals of building customized SharePoint sites and applications. You'll learn how to work with Windows® SharePoint Services 3.0 and Office SharePoint Server 2007 to create Web pages complete with Cascading Style Sheets, Lists, Libraries, and customized Web parts. Then, make your site really work for you by adding data sources, including databases, XML data and Web services, and RSS fe

  4. [Computed tomography of the lungs. A step into the fourth dimension].

    Science.gov (United States)

    Dinkel, J; Hintze, C; Rochet, N; Thieke, C; Biederer, J

    2009-08-01

    To discuss the techniques for four-dimensional computed tomography of the lungs in tumour patients. The image acquisition in CT can be done using respiratory gating in two different ways: the helical or cine mode. In the helical mode, the couch moves continuously during image and respiratory signal acquisition. In the cine mode, the couch remains in the same position during at least one complete respiratory cycle and then moves to the next position. The 4D images are either acquired prospectively or reconstructed retrospectively with dedicated algorithms in a freely selectable respiratory phase. The time information required for motion depiction in 4D imaging can be obtained with tolerable motion artefacts. Partial-projection and stepladder artifacts occur predominantly close to the diaphragm, where the displacement is most prominent. Due to the long exposure times, radiation exposure is significantly higher compared to a simple breath-hold helical acquisition. Therefore, the use of 4D-CT is restricted to specific indications (i.e. radiotherapy planning). 4D-CT of the lung allows evaluation of the respiration-correlated displacement of lungs and tumours in space for radiotherapy planning.

  5. Considering dominance in reduced single-step genomic evaluations.

    Science.gov (United States)

    Ertl, J; Edel, C; Pimentel, E C G; Emmerling, R; Götz, K-U

    2018-06-01

    Single-step models including dominance can pose an enormous computational task and can even be prohibitive for practical application. In this study, we try to answer the question of whether a reduced single-step model is able to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. Genetic values and phenotypes were simulated (500 repetitions) for a small Fleckvieh pedigree consisting of 371 bulls (180 thereof genotyped) and 553 cows (40 thereof genotyped). This pedigree was virtually extended by 2,407 non-genotyped daughters. Genetic values were estimated with the single-step model and with different reduced single-step models. Including more relatives of genotyped cows in the reduced single-step model resulted in a better agreement of results with the single-step model. Accuracies of genetic values were largest with single-step and smallest with reduced single-step when only the genotyped cows were modelled. The results indicate that a reduced single-step model is suitable to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. © 2018 Blackwell Verlag GmbH.

  6. Spectrum of Slip Processes on the Subduction Interface in a Continuum Framework Resolved by Rate- and State-Dependent Friction and Adaptive Time Stepping

    Science.gov (United States)

    Herrendoerfer, R.; van Dinther, Y.; Gerya, T.

    2015-12-01

    To explore the relationships between subduction dynamics and the megathrust earthquake potential, we have recently developed a numerical model that bridges the gap between processes on geodynamic and earthquake cycle time scales. In a self-consistent, continuum-based framework including a visco-elasto-plastic constitutive relationship, cycles of megathrust earthquake-like ruptures were simulated through a purely slip rate-dependent friction, albeit with very low slip rates (van Dinther et al., JGR, 2013). In addition to much faster earthquakes, a range of aseismic slip processes operate at different time scales in nature. These aseismic processes likely accommodate a considerable amount of the plate convergence and are thus relevant in order to estimate the long-term seismic coupling and related hazard in subduction zones. To simulate and resolve this wide spectrum of slip processes, we innovatively implemented rate- and state-dependent friction (RSF) and adaptive time-stepping into our continuum framework. The RSF formulation, in contrast to our previous friction formulation, takes the dependency of frictional strength on a state variable into account. It thereby allows for continuous plastic yielding inside rate-weakening regions, which leads to aseismic slip. In contrast to the conventional RSF formulation, we relate slip velocities to strain rates and use an invariant formulation. Thus we do not require the a priori definition of infinitely thin, planar faults in a homogeneous elastic medium. With this new implementation of RSF, we succeed in producing consistent cycles of frictional instabilities. By changing the frictional parameters a and b, and the characteristic slip distance, we observe a transition from stable sliding to stick-slip behaviour. This transition is in general agreement with predictions from theoretical estimates of the nucleation size, thereby to first order validating our implementation. By incorporating adaptive time-stepping based on a
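
    As a pointwise illustration of the rate- and state-dependent friction law and the parameters a, b and the characteristic slip distance mentioned above (not the invariant, strain-rate-based continuum implementation of the study), the Python sketch below evaluates the standard Dieterich formulation and evolves the state variable with the aging law; all parameter values are placeholders.

        import numpy as np

        # rate-and-state friction: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
        # aging law for the state variable: d(theta)/dt = 1 - V*theta/Dc
        mu0, a, b = 0.6, 0.010, 0.015      # rate-weakening when b > a
        V0, Dc = 1e-6, 1e-3                # reference slip rate (m/s), char. slip (m)

        def friction(V, theta):
            return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

        V = 1e-6                           # imposed slip rate (m/s)
        theta = 10.0                       # initial state, far from steady state Dc/V
        dt, nsteps = 1.0, 100000
        for _ in range(nsteps):
            theta += dt * (1.0 - V * theta / Dc)   # explicit aging-law update
        # at steady state theta -> Dc/V, and here mu -> mu0 since V = V0
        mu_ss = friction(V, theta)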

  7. Effect of One-Step and Multi-Steps Polishing System on Enamel Roughness

    Directory of Open Access Journals (Sweden)

    Cynthia Sumali

    2013-07-01

    The final procedures of orthodontic treatment are bracket debonding and cleaning of the remaining adhesive. The multi-step polishing system is the most common method used. The disadvantage of that system is the long working time, because of the stages that must be performed. Therefore, dental material manufacturers have improved the system, reducing several stages to one stage only. This new system is known as the one-step polishing system. Objective: To compare the effect of one-step and multi-step polishing systems on enamel roughness after orthodontic bracket debonding. Methods: A randomized controlled trial was conducted that included twenty-eight maxillary premolars randomized into two polishing systems: one-step OptraPol (Ivoclar, Vivadent) and multi-step AstroPol (Ivoclar, Vivadent). After bracket debonding, the remaining adhesive in each group was cleaned with the respective polishing system for ninety seconds using a low-speed handpiece. The enamel roughness was measured with a profilometer, registering two roughness parameters (Ra, Rz). An independent t-test was used to analyze the mean enamel roughness in each group. Results: There was no significant difference in enamel roughness between the one-step and multi-step polishing systems (p>0.005). Conclusion: The one-step polishing system can produce enamel roughness similar to the multi-step polishing system after bracket debonding and adhesive cleaning. DOI: 10.14693/jdi.v19i3.136

  8. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our proposed method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.

  9. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    Science.gov (United States)

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
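
    The Python sketch below illustrates only the first classification principle described above (the channel with the largest short-window EMG energy indicates the contracting muscle); the window length, threshold and channel-to-gesture mapping are assumptions, and the spectral and cross-channel correlation checks of the full algorithm are omitted.

        import numpy as np

        def classify_window(emg, fs=1000, win_ms=250, threshold=1e-3):
            """emg: array of shape (3, n_samples), one row per electrode channel."""
            n = int(fs * win_ms / 1000)
            window = emg[:, -n:]                       # most recent window
            energy = np.sum(window.astype(float) ** 2, axis=1) / n
            if energy.max() < threshold:               # no deliberate contraction
                return "rest"
            channel = int(np.argmax(energy))           # dominant-energy channel
            # hypothetical channel ordering: frontalis, left temporalis, right temporalis
            return ["eyebrows", "left_jaw", "right_jaw"][channel]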

  10. One-step trinary signed-digit arithmetic using an efficient encoding scheme

    Science.gov (United States)

    Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.

    2000-11-01

    The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.
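
    As loose background only, the Python sketch below converts a decimal integer into a radix-3 signed-digit form restricted to the digits {-1, 0, 1}; it is not the 5-combination coding scheme of the paper, and the carry-free addition rules themselves are not reproduced.

        def to_signed_digits(n):
            """Decimal integer -> list of radix-3 signed digits (-1, 0, 1), least significant first."""
            digits = []
            while n != 0:
                r = n % 3                  # Python's % yields a non-negative remainder
                if r == 2:                 # digit 2 becomes -1 with a carry into the next place
                    digits.append(-1)
                    n = n // 3 + 1
                else:
                    digits.append(r)
                    n //= 3
            return digits or [0]

        # example: 25 -> [1, -1, 0, 1], since 1*1 - 1*3 + 0*9 + 1*27 = 25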

  11. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    W. J. Galyean; A. M. Whaley; D. L. Kelly; R. L. Boring

    2011-05-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  12. SPAR-H Step-by-Step Guidance

    International Nuclear Information System (INIS)

    Galyean, W.J.; Whaley, A.M.; Kelly, D.L.; Boring, R.L.

    2011-01-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  13. Real-Time Incompressible Fluid Simulation on the GPU

    Directory of Open Access Journals (Sweden)

    Xiao Nie

    2015-01-01

    We present a parallel framework for simulating incompressible fluids with predictive-corrective incompressible smoothed particle hydrodynamics (PCISPH) on the GPU in real time. To this end, we propose an efficient GPU streaming pipeline to map the entire computational task onto the GPU, fully exploiting the massive computational power of state-of-the-art GPUs. In PCISPH-based simulations, neighbor search is the major performance obstacle because this process is performed several times at each time step. To eliminate this bottleneck, an efficient parallel sorting method for this time-consuming step is introduced. Moreover, we discuss several optimization techniques including using fast on-chip shared memory to avoid global memory bandwidth limitations and thus further improve performance on modern GPU hardware. With our framework, the realism of real-time fluid simulation is significantly improved since our method enforces the incompressibility constraint, which is typically ignored for efficiency reasons in previous GPU-based SPH methods. The performance results illustrate that our approach can efficiently simulate realistic incompressible fluid in real time and results in a speed-up factor of up to 23 on a high-end NVIDIA GPU in comparison to a single-threaded CPU-based implementation.

  14. Computer-controlled attenuator.

    Science.gov (United States)

    Mitov, D; Grozev, Z

    1991-01-01

    Various possibilities for applying electronic computer-controlled attenuators to the automation of physiological experiments are considered. A detailed description is given of the design of a 4-channel computer-controlled attenuator, in which the output signal changes in linear steps in two of the channels and in logarithmic steps in the other two. This, together with additional programmable timers, allows a wide range of studies in different areas of physiology and psychophysics, including vision and hearing, to be automated.

  15. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley; Dana L. Kelly; Ronald L. Boring; William J. Galyean

    2012-06-01

    Step-by-step guidance was developed recently at Idaho National Laboratory for the US Nuclear Regulatory Commission on the use of the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method for quantifying Human Failure Events (HFEs). This work was done to address SPAR-H user needs, specifically requests for additional guidance on the proper application of various aspects of the methodology. This paper overviews the steps of the SPAR-H analysis process and highlights some of the most important insights gained during the development of the step-by-step directions. This supplemental guidance for analysts is applicable when plant-specific information is available, and goes beyond the general guidance provided in existing SPAR-H documentation. The steps highlighted in this paper are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff.
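
    As a rough illustration of Steps 2-3 above, the Python sketch below multiplies a nominal HEP by the composite PSF multiplier and, following the adjustment commonly applied in SPAR-H when three or more negative PSFs are present, rescales so the result stays below 1. The multipliers and nominal value shown are placeholders, not the NUREG/CR-6883 worksheet tables, and the dependence adjustment and minimum-value cutoff (Steps 4-5) are omitted.

        def psf_modified_hep(nominal_hep, psf_multipliers):
            """Step-3 style calculation: nominal HEP scaled by the composite PSF.

            psf_multipliers: PSF multipliers taken from the worksheets
            (values > 1 are negative influences, values < 1 positive).
            """
            composite = 1.0
            for m in psf_multipliers:
                composite *= m
            negative = sum(1 for m in psf_multipliers if m > 1.0)
            if negative >= 3:
                # adjustment factor keeping the modified HEP bounded below 1.0
                hep = (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
            else:
                hep = min(nominal_hep * composite, 1.0)
            return hep

        # illustrative only: a diagnosis nominal HEP of 1e-2 with three negative PSFs
        print(psf_modified_hep(1e-2, [2, 5, 10]))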

  16. Effect of the processing steps on compositions of table olive since harvesting time to pasteurization.

    Science.gov (United States)

    Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A

    2013-08-01

    Weight, oil, fatty acid, tocopherol, polyphenol, and sterol properties of 5 olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) during the crude, lye treatment, washing, fermentation, and pasteurization steps were studied. Results showed: oil percent was higher and lower in Ascolana (crude step) and in Fishomi (pasteurization step), respectively; during the processing steps, in all cultivars, oleic, palmitic, linoleic, and stearic acids were higher; the largest changes in saturated and unsaturated fatty acids occurred in the fermentation step; the highest and the lowest ratios of ω3/ω6 were in Ascolana (washing step) and in Zard (pasteurization step), respectively; the highest and the lowest tocopherol were in Amigdalolia and Fishomi, respectively, and the major damage occurred in the lye step; the highest and the lowest polyphenols were in Ascolana (crude step) and in Zard and Ascolana (pasteurization step), respectively; the major damage among cultivars occurred during the lye step, in which the polyphenol content was reduced to 1/10 of its initial value; sterols did not change during processing. A review of olive patents shows that many fruit constituents, such as oil quality and quantity and the fatty acid fraction, can be changed by altering the cultivar and the process.

  17. 3D elastic wave modeling using modified high‐order time stepping schemes with improved stability conditions

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.; Seif, Roustam

    2009-01-01

    We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
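
    Not the authors' modified scheme, but for context the Python sketch below shows the standard second-order (Taylor-expansion) time step for the 1D acoustic wave equation, which is the baseline that Lax-Wendroff-type schemes extend to higher order via modified temporal extrapolation coefficients; the grid, velocity and initial condition are arbitrary assumptions.

        import numpy as np

        # second-order-in-time step for u_tt = c^2 u_xx:
        #   u^{n+1} = 2 u^n - u^{n-1} + (c dt/dx)^2 (u_{i+1} - 2 u_i + u_{i-1})
        nx, nt = 500, 1000
        c, dx = 2000.0, 10.0
        dt = 0.4 * dx / c                      # satisfies the CFL stability condition
        r2 = (c * dt / dx) ** 2

        u_prev = np.zeros(nx)
        u_curr = np.zeros(nx)
        u_curr[nx // 2] = 1.0                  # initial impulse at the grid center

        for n in range(nt):
            lap = np.zeros(nx)
            lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
            u_next = 2.0 * u_curr - u_prev + r2 * lap
            u_prev, u_curr = u_curr, u_next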

  18. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator faults of hypersonic flight vehicles. Finally, simulations are conducted for both the modeling and the fault estimation; the validity and effectiveness of the method are verified by a series of comparisons of numerical simulation results.

  19. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-11-01

    The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, which is then used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles, and often poorly known model error statistics, leading to a crude approximation of the forecast covariance. This strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information in the estimation process in order to mitigate the sub-optimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of the EnKF-OSA, based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation behind this is to avoid the observation perturbations of the EnKF in order to improve the scheme's behavior when assimilating big data sets with small ensembles. The new SEIK-OSA scheme is implemented and its efficiency is demonstrated
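
    For reference, the Python sketch below implements one standard stochastic EnKF analysis step (perturbed observations); the SEIK-OSA variant described above replaces this with a deterministic update plus an extra one-step-ahead smoothing pass, which are not shown, and the matrix sizes and observation operator are illustrative assumptions.

        import numpy as np

        def enkf_analysis(Xf, y, H, R, rng=np.random.default_rng(0)):
            """One stochastic EnKF analysis step.

            Xf : forecast ensemble, shape (n_state, n_members)
            y  : observation vector, shape (n_obs,)
            H  : observation operator, shape (n_obs, n_state)
            R  : observation error covariance, shape (n_obs, n_obs)
            """
            n_state, n_mem = Xf.shape
            A = Xf - Xf.mean(axis=1, keepdims=True)        # ensemble anomalies
            Pf = A @ A.T / (n_mem - 1)                     # sample forecast covariance
            S = H @ Pf @ H.T + R
            K = Pf @ H.T @ np.linalg.solve(S, np.eye(len(y)))   # Kalman gain
            # perturbed observations (the step a deterministic filter avoids)
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_mem).T
            return Xf + K @ (Y - H @ Xf)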

  20. Mobile computing in the humanitarian assistance setting: an introduction and some first steps.

    Science.gov (United States)

    Selanikio, Joel D; Kemmer, Teresa M; Bovill, Maria; Geisler, Karen

    2002-04-01

    We developed a Palm operating system-based handheld computer system for administering nutrition questionnaires and used it to gather nutritional information among the Burmese refugees in the Mae La refugee camp on the Thai-Burma border. Our experience demonstrated that such technology can be easily adapted for such an austere setting and used to great advantage. Further, the technology showed tremendous potential to reduce both the time required and the errors commonly encountered when field staff collect information in the humanitarian setting. We also identified several areas needing further development.

  1. Fundamentals of computer architecture and design

    CERN Document Server

    Bindal, Ahmet

    2017-01-01

    This textbook provides semester-length coverage of computer architecture and design, providing a strong foundation for students to understand modern computer system architecture and to apply these insights and principles to future computer designs.  It is based on the author’s decades of industrial experience with computer architecture and design, as well as with teaching students focused on pursuing careers in computer engineering.  Unlike a number of existing textbooks for this course, this one focuses not only on CPU architecture, but also covers system buses, peripherals and memories in great detail. This book teaches every element in a computing system in two steps.  First, it introduces the functionality of each topic (and subtopics) and then goes into “from-scratch design” of a particular digital block from its architectural specifications using timing diagrams.  The author describes how the data-path of a certain digital block is generated using timing diagrams, a method which most textbo...

  2. All-optical reservoir computer based on saturation of absorption.

    Science.gov (United States)

    Dejonckheere, Antoine; Duport, François; Smerieri, Anteo; Fang, Li; Oudar, Jean-Louis; Haelterman, Marc; Massar, Serge

    2014-05-05

    Reservoir computing is a new bio-inspired computation paradigm. It exploits a dynamical system driven by a time-dependent input to carry out computation. For efficient information processing, only a few parameters of the reservoir need to be tuned, which makes it a promising framework for hardware implementation. Recently, electronic, opto-electronic and all-optical experimental reservoir computers were reported. In those implementations, the nonlinear response of the reservoir is provided by active devices such as optoelectronic modulators or optical amplifiers. By contrast, we propose here the first reservoir computer based on a fully passive nonlinearity, namely the saturable absorption of a semiconductor mirror. Our experimental setup constitutes an important step towards the development of ultrafast low-consumption analog computers.
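
    To make the paradigm concrete, the Python sketch below is a generic software echo-state-network version of reservoir computing (drive a fixed random dynamical system with the input, train only a linear readout); it stands in for, and is far simpler than, the optical saturable-absorber reservoir reported here, and the sizes, spectral radius, toy task and ridge parameter are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n_in, n_res, n_steps = 1, 200, 2000

        # fixed random reservoir, rescaled to spectral radius < 1 for stable dynamics
        W = rng.normal(size=(n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
        W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

        u = rng.uniform(-1, 1, size=(n_steps, n_in))     # input sequence
        target = np.roll(u[:, 0], 1)                     # toy task: recall the previous input

        X = np.zeros((n_steps, n_res))
        x = np.zeros(n_res)
        for t in range(n_steps):
            x = np.tanh(W @ x + W_in @ u[t])             # reservoir state update
            X[t] = x

        # only the linear readout is trained (ridge regression on reservoir states)
        lam = 1e-6
        W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ target)
        prediction = X @ W_out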

  3. Microeconomic theory and computation applying the maxima open-source computer algebra system

    CERN Document Server

    Hammock, Michael R

    2014-01-01

    This book provides a step-by-step tutorial for using Maxima, an open-source multi-platform computer algebra system, to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques.

  4. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

    The major challenge with the fractal image/video coding technique is that it requires a long encoding time. Therefore, reducing the encoding time remains an open research problem in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the process of encoding. The objective of the proposed work is to develop an approach for video coding using the modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used for computing motion vectors between two frames, i.e. the displacement of pixels, and WFA is used for the coding as it behaves like Fractal Coding (FC). WFA represents an image (frame) or motion-compensated prediction error based on the fractal idea that an image exhibits self-similarity. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentation and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on the combination of rectangular and hexagonal search patterns and is compared with the existing New Three-Step Search (NTSS), Three-Step Search (TSS), and Efficient Three-Step Search (ETSS) block matching estimation algorithms. The performance of the proposed MTSS block matching algorithm is evaluated on the basis of performance evaluation parameters, i.e. mean absolute difference (MAD) and the average number of search points required per frame. The mean absolute difference (MAD) distortion function is used as the block distortion measure (BDM). Finally, the developed approaches, namely MTSS and WFA, MTSS and FC, and Plane FC (applied to every frame), are compared with each other. The experiments are carried out on standard uncompressed video databases, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice etc. Developed
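
    For orientation, the Python sketch below shows the classical three-step search with the MAD block distortion measure, i.e. the baseline that the proposed MTSS modifies with combined rectangular and hexagonal patterns (the modified pattern itself is not reproduced here); the block size and initial step are assumptions.

        import numpy as np

        def mad(block_a, block_b):
            return np.mean(np.abs(block_a.astype(float) - block_b.astype(float)))

        def three_step_search(ref, cur, top, left, block=16, step=4):
            """Return the motion vector (dy, dx) of one block of the current frame."""
            target = cur[top:top + block, left:left + block]
            best = (0, 0)
            while step >= 1:
                candidates = [(best[0] + dy * step, best[1] + dx * step)
                              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
                costs = []
                for dy, dx in candidates:
                    y, x = top + dy, left + dx
                    if 0 <= y <= ref.shape[0] - block and 0 <= x <= ref.shape[1] - block:
                        costs.append((mad(target, ref[y:y + block, x:x + block]), (dy, dx)))
                best = min(costs)[1]      # keep the lowest-MAD candidate
                step //= 2                # halve the step: 4 -> 2 -> 1
            return best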

  5. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

    In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.

  6. Detection and Correction of Step Discontinuities in Kepler Flux Time Series

    Science.gov (United States)

    Kolodziejczak, J. J.; Morris, R. L.

    2011-01-01

    PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
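
    As a toy illustration of step-discontinuity detection in a flux time series, the Python sketch below scores each cadence by the difference of medians in windows before and after it; the actual PDC 8.0 SPSD algorithm is considerably more elaborate (it also models the partial exponential recovery), and the window length and threshold are assumptions.

        import numpy as np

        def detect_step(flux, window=20, threshold=5.0):
            """Return the index of the most step-like cadence, or None if below threshold."""
            flux = np.asarray(flux, dtype=float)
            scores = np.zeros(len(flux))
            # robust point-to-point noise estimate from first differences
            noise = 1.4826 * np.median(np.abs(np.diff(flux))) + 1e-12
            for i in range(window, len(flux) - window):
                before = np.median(flux[i - window:i])
                after = np.median(flux[i:i + window])
                scores[i] = abs(after - before) / noise
            i_best = int(np.argmax(scores))
            return i_best if scores[i_best] > threshold else None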

  7. On an adaptive time stepping strategy for solving nonlinear diffusion equations

    International Nuclear Information System (INIS)

    Chen, K.; Baines, M.J.; Sweby, P.K.

    1993-01-01

    A new time step selection procedure is proposed for solving nonlinear diffusion equations. It has been implemented in the ASWR finite element code of Lorenz and Svoboda [10] for 2D semiconductor process modelling diffusion equations. The strategy is based on equi-distributing the local truncation errors of the numerical scheme. The use of B-splines for interpolation (as well as for the trial space) results in a banded and diagonally dominant matrix. The approximate inverse of such a matrix can be provided to a high degree of accuracy by another banded matrix, which in turn can be used to work out the approximate finite difference scheme corresponding to the ASWR finite element method, and further to calculate estimates of the local truncation errors of the numerical scheme. Numerical experiments on six full simulation problems arising in semiconductor process modelling have been carried out. Results show that our proposed strategy is more efficient and better conserves the total mass. 18 refs., 6 figs., 2 tabs
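
    A generic Python sketch of the underlying idea, selecting the time step so that a local truncation error estimate is equidistributed against a tolerance, is given below; it is written for an abstract one-step integrator, not the B-spline/ASWR machinery of the paper, and the safety factor, error exponent and growth limits are illustrative assumptions.

        def adaptive_integrate(step, estimate_error, y0, t0, t_end, dt0, tol=1e-4):
            """step(y, t, dt) -> y_new; estimate_error(y, t, dt) -> local error estimate."""
            t, y, dt = t0, y0, dt0
            while t < t_end:
                dt = min(dt, t_end - t)
                err = estimate_error(y, t, dt)
                if err <= tol:                 # accept: error equidistributed at the tolerance
                    y = step(y, t, dt)
                    t += dt
                # rescale the step toward the target error (safety factor 0.9)
                dt *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-14)) ** 0.5))
            return y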

  8. A computer-based time study system for timber harvesting operations

    Science.gov (United States)

    Jingxin Wang; Joe McNeel; John Baumgras

    2003-01-01

    A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, a data transfer interface, and data storage...

  9. Progress in parallel implementation of the multilevel plane wave time domain algorithm

    KAUST Repository

    Liu, Yang; Bagci, Hakan; Michielssen, Eric

    2013-01-01

    The computational complexity and memory requirements of classical schemes for evaluating transient electromagnetic fields produced by N_s dipoles active for N_t time steps scale as O(N_t N_s^2) and O(N_s^2), respectively. The multilevel plane wave time

  10. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that is a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  11. The Next Step in Deployment of Computer Based Procedures For Field Workers: Insights And Results From Field Evaluations at Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron

    2015-02-01

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human operator interacts with the procedures. One way to achieve these improvements is through the use of computer-based procedures (CBPs). A CBP system offers a wide variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping, correct component verification, etc.), and dynamic step presentation. The latter means that the CBP system displays only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the operator down the path of relevant steps based on the current conditions. This feature will reduce the operator's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. The research team at the Idaho National Laboratory has developed a prototype CBP system for field workers, which has been evaluated from a human factors and usability perspective in four laboratory studies. Based on the results from each study, revisions were made to the CBP system. However, a crucial step to gain the end users' (e.g., auxiliary operators, maintenance technicians, etc.) acceptance is to put the system in their hands and let them use it as a part of their everyday work activities. In the spring of 2014 the first field evaluation of the INL CBP system was conducted at a nuclear power plant. Auxiliary operators conduct a functional test of one out of three backup air compressors each week. During the field evaluation activity, one auxiliary operator conducted the test with the paper-based procedure while a second auxiliary operator

  12. Research of scatter correction on industry computed tomography

    International Nuclear Information System (INIS)

    Sun Shaohua; Gao Wenhuan; Zhang Li; Chen Zhiqiang

    2002-01-01

    In the scanning process of industrial computed tomography, scatter blurs the reconstructed image. The grey values of pixels in the reconstructed image deviate from their true values, and this effect needs to be corrected. With the conventional deconvolution method, many iteration steps are needed and the computing time is unsatisfactory. The author discusses a method that combines the Ordered Subsets Convex algorithm with a scatter model to implement scatter correction; promising results are obtained in both speed and image quality.

  13. The Relaxation of Vicinal (001) with ZigZag [110] Steps

    Science.gov (United States)

    Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.

    2012-02-01

    This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.

  14. Classified one-step high-radix signed-digit arithmetic units

    Science.gov (United States)

    Cherri, Abdallah K.

    1998-08-01

    High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying the neighboring input digit pairs into various groups to reduce the computation rules. A new joint spatial encoding technique is developed to present both the operands and the computation rules. This technique increases the spatial bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.

  15. Partition-based discrete-time quantum walks

    Science.gov (United States)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

    We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.

  16. Three-step approach for prediction of limit cycle pressure oscillations in combustion chambers of gas turbines

    Science.gov (United States)

    Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio

    2017-11-01

    Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damages to combustion hardware. This requires prevention of combustion instabilities, which, in turn, requires reliable and fast predictive tools. This work presents a three-step method to find stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment are performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when applying high-amplitude velocity perturbations are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The proposed method permits not only the unsteady frequencies of acoustic oscillations to be computed, but the amplitudes of such oscillations as well. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how these oscillations are harmful to the combustor equipment. The proposed method has a low cost because it does not require any license for computational fluid dynamics software.

  17. VNAP2: a computer program for computation of two-dimensional, time-dependent, compressible, turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Cline, M.C.

    1981-08-01

    VNAP2 is a computer program for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow. VNAP2 solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single- or a dual-flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference-plane-characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.

  18. A Matter of Computer Time

    Science.gov (United States)

    Celano, Donna; Neuman, Susan B.

    2010-01-01

    Many low-income children do not have the opportunity to develop the computer skills necessary to succeed in our technological economy. Their only access to computers and the Internet--school, afterschool programs, and community organizations--is woefully inadequate. Educators must work to close this knowledge gap and to ensure that low-income…

  19. A note on computing average state occupation times

    Directory of Open Access Journals (Sweden)

    Jan Beyersmann

    2014-05-01

    Full Text Available Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
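
    As a rough illustration of the product-integration step described above, the sketch below builds the matrix of transition probabilities from cumulative-hazard increments for a simple illness-death model and then approximates the expected time spent in the "ill" state. The model, the hazard functions, and all numerical values are assumptions made purely for illustration.

```python
import numpy as np

# States of a toy illness-death model: 0 = healthy, 1 = ill, 2 = dead.
# The transition hazards below are made-up illustrative functions of time s.
def hazard_increments(s, h):
    a01 = (0.10 + 0.02 * s) * h   # healthy -> ill
    a02 = 0.05 * h                # healthy -> dead
    a12 = 0.20 * h                # ill -> dead
    return np.array([[-(a01 + a02), a01, a02],
                     [0.0, -a12, a12],
                     [0.0, 0.0, 0.0]])

# Product integration: P(0, t_max) is approximated by the product of
# (I + dA_k) over a fine grid; the average time spent in state 1 follows by
# integrating the state-occupation probability over time.
t_max, n_steps = 5.0, 1000
h = t_max / n_steps
P = np.eye(3)
ill_time = 0.0
for k in range(n_steps):
    P = P @ (np.eye(3) + hazard_increments(k * h, h))
    ill_time += P[0, 1] * h

print(P[0])       # transition probabilities from "healthy" at time t_max
print(ill_time)   # expected time spent "ill" up to t_max, starting healthy
```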

  20. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  1. Computational issues in the analysis of nonlinear two-phase flow dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Mauricio A. Pinheiro [Centro Tecnico Aeroespacial (CTA-IEAv), Sao Jose dos Campos, SP (Brazil). Inst. de Estudos Avancados. Div. de Energia Nuclear], e-mail: pinheiro@ieav.cta.br; Podowski, Michael Z. [Rensselaer Polytechnic Institute, New York, NY (United States)

    2001-07-01

    This paper is concerned with the issue of computer simulations of flow-induced instabilities in boiling channels and systems. A computational model is presented for the time-domain analysis of nonlinear oscillations in interconnected parallel boiling channels. The results of model testing and validation are shown. One of the main concerns here has been to show the importance of numerical testing in the selection of a proper numerical integration method and the associated nodalization and time step, as well as of demonstrating the convergence of the numerical solution prior to any analysis. (author)

  2. Real-time data-intensive computing

    Energy Technology Data Exchange (ETDEWEB)

    Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander; MacDowell, Alastair A.; Padmore, Howard A.; Shapiro, David; Tamura, Nobumichi [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Beattie, Keith; Krishnan, Harinarayan; Patton, Simon J.; Perciano, Talita; Stromsness, Rune; Tull, Craig E.; Ushizima, Daniela [Computational Research Division, Lawrence Berkeley National Laboratory Berkeley CA 94720 (United States); Correa, Joaquin; Deslippe, Jack R. [National Energy Research Scientific Computing Center, Berkeley, CA 94720 (United States); Dart, Eli; Tierney, Brian L. [Energy Sciences Network, Berkeley, CA 94720 (United States); Daurer, Benedikt J.; Maia, Filipe R. N. C. [Uppsala University, Uppsala (Sweden); and others

    2016-07-27

    Today users visit synchrotrons as sources of understanding and discovery—not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a “super-facility”, giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data—despite its high rate and volume—is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We will describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.

  3. GRAPHIC, time-sharing magnet design computer programs at Argonne

    International Nuclear Information System (INIS)

    Lari, R.J.

    1974-01-01

    This paper describes three magnet design computer programs in use at the Zero Gradient Synchrotron of Argonne National Laboratory. These programs are used in the time sharing mode in conjunction with a Tektronix model 4012 graphic display terminal. The first program is called TRIM, the second MAGNET, and the third GFUN. (U.S.)

  4. Computation of Asteroid Proper Elements on the Grid

    Science.gov (United States)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure for gridifying the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase in observational data expected from the next-generation all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose. The average time for updating the catalogs is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  5. Instant Google Compute Engine

    CERN Document Server

    Papaspyrou, Alexander

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. This book is a step-by-step guide to installing and using Google Compute Engine.""Instant Google Compute Engine"" is great for developers and operators who are new to Cloud computing, and who are looking to get a good grounding in using Infrastructure-as-a-Service as part of their daily work. It's assumed that you will have some experience with the Linux operating system as well as familiarity with the concept of virtualization technologies, suc

  6. Three-Step Predictor-Corrector of Exponential Fitting Method for Nonlinear Schroedinger Equations

    International Nuclear Information System (INIS)

    Tang Chen; Zhang Fang; Yan Haiqing; Luo Tao; Chen Zhanqing

    2005-01-01

    We develop the three-step explicit and implicit schemes of exponential fitting methods. We use the three-step explicit exponential fitting scheme to predict an approximation, then use the three-step implicit exponential fitting scheme to correct this prediction. This combination is called the three-step predictor-corrector of exponential fitting method. The three-step predictor-corrector of exponential fitting method is applied to numerically compute the coupled nonlinear Schroedinger equation and the nonlinear Schroedinger equation with varying coefficients. The numerical results show that the scheme is highly accurate.
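
    The record above describes a predict-then-correct structure built from three-step explicit and implicit schemes. The sketch below illustrates that structure using the classical three-step Adams-Bashforth predictor and Adams-Moulton corrector rather than the exponential-fitting coefficients of the paper; the toy complex-valued test problem and step size are assumptions for illustration only.

```python
import numpy as np

def pc3_step(f, t, y, h, f_hist):
    """One predictor-corrector step: 3-step Adams-Bashforth predictor followed
    by an Adams-Moulton corrector.  f_hist = [f_{n-2}, f_{n-1}, f_n]."""
    fm2, fm1, f0 = f_hist
    y_pred = y + h / 12.0 * (23.0 * f0 - 16.0 * fm1 + 5.0 * fm2)          # predict
    f_pred = f(t + h, y_pred)
    y_corr = y + h / 24.0 * (9.0 * f_pred + 19.0 * f0 - 5.0 * fm1 + fm2)  # correct
    return y_corr

# Toy problem standing in for the Schroedinger-type system: y' = i*y,
# exact solution exp(i*t).  Startup values come from the exact solution here.
f = lambda t, y: 1j * y
h, n_steps = 0.01, 1000
t_vals = np.arange(n_steps + 1) * h
y = np.exp(1j * t_vals[2])
f_hist = [f(t_vals[k], np.exp(1j * t_vals[k])) for k in range(3)]
for n in range(2, n_steps):
    y = pc3_step(f, t_vals[n], y, h, f_hist)
    f_hist = [f_hist[1], f_hist[2], f(t_vals[n + 1], y)]
print(abs(y - np.exp(1j * t_vals[n_steps])))  # global error at the final time
```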

  7. A theory of the stepped leader in lightning

    International Nuclear Information System (INIS)

    Lowke, J.J.

    1999-01-01

    There is no generally accepted explanation of the stepped leader behaviour in terms of basic physical processes. Existing theories generally involve significant gas heating within the stepped leader. In the present paper, the stepped nature of the leader is proposed to arise due to a combination of two physical phenomena. Electron transport is dominant over ion transport, during the luminous step stage, because electron mobilities are about 100 times larger than ion mobilities, and the streamer front velocity is determined by electron ionization effects. During the dark time between steps, there are only ions and charge transport is very much slower. The second effect leading to stepped behaviour arises because the electric field required for electric breakdown in air prior to a discharge is ∼30kV/cm, and is very much higher than the electric field of 5kV/cm that is required to sustain a glow discharge in air. During the luminous step stage, electrons tend to produce space charges to make a uniform field in the streamer of ∼5kV/cm. During the dark time between steps, there are no electrons but only ions. Time is required for ion drift to produce a space charge sheath of negative ions at the head of the streamer to produce a field of ∼30kV/cm sufficient for electron ionization to produce a new luminous step

  8. Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination

    Science.gov (United States)

    Abbas, Fayçal; Babahenini, Mohamed Chaouki

    2018-06-01

    Real-time global illumination of natural scenes such as forests is one of the most complex problems to solve because of the multiple inter-reflections between the light and the materials of the objects composing the scene. The major difficulty that arises is visibility computation. Visibility must be evaluated for the whole set of leaves visible from the center of a given leaf and, given the enormous number of leaves present in a tree, this computation, repeated for each leaf, severely reduces performance. We describe a new approach that approximates visibility queries and proceeds in two steps. The first step generates a point cloud representing the foliage; we assume that the point cloud is composed of two classes (visible, not visible) that are non-linearly separable. The second step classifies the point cloud by applying the Gaussian radial basis function, which measures the similarity, in terms of distance, between each leaf and a landmark leaf. This approximation of the visibility queries extracts the leaves used to calculate the amount of indirect illumination exchanged between neighboring leaves. Our approach handles the light exchanges in a forest scene efficiently, allows fast computation, and produces images of good visual quality, all while taking advantage of the immense computational power of the GPU.
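
    A minimal sketch of the kernel evaluation at the heart of the second step described above: a Gaussian radial basis function measures the distance-based similarity between each point of a foliage cloud and a landmark leaf, and a threshold keeps candidate "visible" leaves. The point positions, the kernel width gamma, and the 0.5 threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

def rbf_similarity(points, landmark, gamma):
    """Gaussian radial basis function: exp(-gamma * ||p - landmark||^2)."""
    d2 = np.sum((points - landmark) ** 2, axis=1)
    return np.exp(-gamma * d2)

# Hypothetical point cloud sampled over a tree's foliage (positions only).
rng = np.random.default_rng(0)
leaves = rng.uniform(-1.0, 1.0, size=(5000, 3))
landmark = leaves[0]                       # reference leaf for the current query
gamma = 2.0                                # kernel width, tuned per scene

sim = rbf_similarity(leaves, landmark, gamma)
visible = sim > 0.5                        # crude proxy for the "visible" class
neighbours = leaves[visible]
print(neighbours.shape[0], "leaves kept for indirect-illumination exchange")
```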

  9. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural network (ANN) and fuzzy logic approaches are proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both these approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro fuzzy inference system (ANFIS) to hydrologic time series modeling, and is illustrated by an application to model the river flow of Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most of the time series modeling techniques. The results showed that the ANFIS forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation etc. It was observed that the ANFIS model preserves the potential of the ANN approach fully, and eases the model building process.

  10. State-plane trajectories used to observe and control the behavior of a voltage step-up dc-to-dc converter

    Science.gov (United States)

    Burns, W. W., III; Wilson, T. G.

    1976-01-01

    State-plane analysis techniques are employed to study the voltage step-up energy-storage dc-to-dc converter. Within this framework, an example converter operating under the influence of a constant on-time controller and a constant-frequency controller is examined. Qualitative insight gained through this approach is used to develop a conceptual free-running control law for the voltage step-up converter which can achieve steady-state operation in one on/off cycle of control. Digital computer simulation data are presented to illustrate and verify the theoretical discussion.

  11. Effect of moisture and drying time on the bond strength of the one-step self-etching adhesive system

    Directory of Open Access Journals (Sweden)

    Yoon Lee

    2012-08-01

    Full Text Available Objectives To investigate the effect of dentin moisture degree and air-drying time on dentin-bond strength of two different one-step self-etching adhesive systems. Materials and Methods Twenty-four human third molars were used for microtensile bond strength testing of G-Bond and Clearfil S3 Bond. The dentin surface was either blot-dried or air-dried before applying these adhesive agents. After application of the adhesive agent, three different air drying times were evaluated: 1, 5, and 10 sec. Composite resin was built up to 4 mm thickness and light cured for 40 sec with 2 separate layers. Then the tooth was sectioned and trimmed to measure the microtensile bond strength using a universal testing machine. The measured bond strengths were analyzed with three-way ANOVA and regression analysis was done (p = 0.05). Results All three factors, materials, dentin wetness and air drying time, showed a significant effect on the microtensile bond strength. Clearfil S3 Bond, dry dentin surface and 10 sec air drying time showed higher bond strength. Conclusions Within the limitations of this experiment, air drying time after the application of the one-step self-etching adhesive agent was the most significant factor affecting the bond strength, followed by the material difference and dentin moisture before applying the adhesive agent.

  12. Numerical method for solving the three-dimensional time-dependent neutron diffusion equation

    International Nuclear Information System (INIS)

    Khaled, S.M.; Szatmary, Z.

    2005-01-01

    A numerical time-implicit method has been developed for solving the coupled three-dimensional time-dependent multi-group neutron diffusion and delayed neutron precursor equations. The numerical stability of the implicit computation scheme and the convergence of the iterative associated processes have been evaluated. The computational scheme requires the solution of large linear systems at each time step. For this purpose, the point over-relaxation Gauss-Seidel method was chosen. A new scheme was introduced instead of the usual source iteration scheme. (author)
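
    For readers unfamiliar with the inner solver named above, the following is a minimal sketch of point over-relaxation Gauss-Seidel (SOR) applied to a small diffusion-like linear system, the kind of solve required at every implicit time step. The test matrix, right-hand side, and relaxation factor are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Point successive over-relaxation (Gauss-Seidel with relaxation factor
    omega) for A x = b; this is the kind of inner solve needed at every
    implicit time step."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Small diffusion-like test system (1D Laplacian); the values are illustrative.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = sor_solve(A, b)
print(np.linalg.norm(A @ x - b))   # residual of the converged solution
```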

  13. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, surface sampling and sampling relaxation, which ensure a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme that integrates individual time stepping into the rigid-fluid coupling, which gains an obvious speedup compared to the previous method. The experimental results demonstrate the effectiveness of our approach.

  14. One False Step: "Detroit," "Step" and Movies of Rising and Falling

    Science.gov (United States)

    Beck, Bernard

    2018-01-01

    "Detroit" and "Step" are two recent movies in the context of urban riots in protest of police brutality. They refer to time periods separated by half a century, but there are common themes in the two that seem appropriate to both times. The movies are not primarily concerned with the riot events, but the riot is a major…

  15. Multigrid Reduction in Time for Nonlinear Parabolic Problems

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Univ. of Colorado, Boulder, CO (United States); O' Neill, B. [Univ. of Colorado, Boulder, CO (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-04

    The need for parallel-in-time is being driven by changes in computer architectures, where future speed-ups will be available through greater concurrency, but not faster clock speeds, which are stagnant. This leads to a bottleneck for sequential time marching schemes, because they lack parallelism in the time dimension. Multigrid Reduction in Time (MGRIT) is an iterative procedure that allows for temporal parallelism by utilizing multigrid reduction techniques and a multilevel hierarchy of coarse time grids. MGRIT has been shown to be effective for linear problems, with speedups of up to 50 times. The goal of this work is the efficient solution of nonlinear problems with MGRIT, where efficient is defined as achieving similar performance when compared to a corresponding linear problem. As our benchmark, we use the p-Laplacian, where p = 4 corresponds to a well-known nonlinear diffusion equation and p = 2 corresponds to our benchmark linear diffusion problem. When considering linear problems and implicit methods, the use of optimal spatial solvers such as spatial multigrid implies that the cost of one time step evaluation is fixed across temporal levels, which have a large variation in time step sizes. This is not the case for nonlinear problems, where the work required increases dramatically on coarser time grids, where relatively large time steps lead to worse conditioned nonlinear solves and increased nonlinear iteration counts per time step evaluation. This is the key difficulty explored by this paper. We show that by using a variety of strategies, most importantly, spatial coarsening and an alternate initial guess to the nonlinear time-step solver, we can reduce the work per time step evaluation over all temporal levels to a range similar to that of the corresponding linear problem. This allows for parallel scaling behavior comparable to the corresponding linear problem.

  16. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    Science.gov (United States)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method in order to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of a parallel Jacobi (PJ) method is examined relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
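
    The comparison described above can be reproduced in miniature: the sketch below times sequential Jacobi and sequential Gauss-Seidel iterations on a small 1D finite-difference Poisson system, used here only as a stand-in for the 3D FDM system of the paper. Grid size, tolerance, and right-hand side are assumptions for illustration.

```python
import time
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=20_000):
    """Sequential Jacobi: every unknown is updated from the previous iterate,
    which is what makes the method easy to parallelise."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def gauss_seidel(A, b, tol=1e-8, max_iter=20_000):
    """Sequential Gauss-Seidel: updated values are used immediately, which
    usually converges in fewer iterations but is inherently sequential."""
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k
    return x, max_iter

# 1D finite-difference Poisson problem as a small stand-in for the 3D system.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.full(n, 1e-2)
for solver in (jacobi, gauss_seidel):
    t0 = time.perf_counter()
    x, iters = solver(A, b)
    print(solver.__name__, iters, "iterations,", time.perf_counter() - t0, "s")
```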

  17. Fast Parallel Computation of Polynomials Using Few Processors

    DEFF Research Database (Denmark)

    Valiant, Leslie G.; Skyum, Sven; Berkowitz, S.

    1983-01-01

    It is shown that any multivariate polynomial of degree $d$ that can be computed sequentially in $C$ steps can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors....

  18. Microfluidic step-emulsification in axisymmetric geometry.

    Science.gov (United States)

    Chakraborty, I; Ricouvier, J; Yazhgur, P; Tabeling, P; Leshansky, A M

    2017-10-25

    Biphasic step-emulsification (Z. Li et al., Lab Chip, 2015, 15, 1023) is a promising microfluidic technique for high-throughput production of μm and sub-μm highly monodisperse droplets. The step-emulsifier consists of a shallow (Hele-Shaw) microchannel operating with two co-flowing immiscible liquids and an abrupt expansion (i.e., step) to a deep and wide reservoir. Under certain conditions the confined stream of the disperse phase, engulfed by the co-flowing continuous phase, breaks into small highly monodisperse droplets at the step. Theoretical investigation of the corresponding hydrodynamics is complicated due to the complex geometry of the planar device, calling for numerical approaches. However, direct numerical simulations of the three dimensional surface-tension-dominated biphasic flows in confined geometries are computationally expensive. In the present paper we study a model problem of axisymmetric step-emulsification. This setup consists of a stable core-annular biphasic flow in a cylindrical capillary tube connected co-axially to a reservoir tube of a larger diameter through a sudden expansion mimicking the edge of the planar step-emulsifier. We demonstrate that the axisymmetric setup exhibits similar regimes of droplet generation to the planar device. A detailed parametric study of the underlying hydrodynamics is feasible via inexpensive (two dimensional) simulations owing to the axial symmetry. The phase diagram quantifying the different regimes of droplet generation in terms of governing dimensionless parameters is presented. We show that in qualitative agreement with experiments in planar devices, the size of the droplets generated in the step-emulsification regime is independent of the capillary number and almost insensitive to the viscosity ratio. These findings confirm that the step-emulsification regime is solely controlled by surface tension. The numerical predictions are in excellent agreement with in-house experiments with the axisymmetric

  19. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Marlowe Thomas J.

    1998-01-01

    Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time and with non-polynomial reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.

  20. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
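
    A compact sketch of the Gram–Schmidt (QR) procedure the abstract refers to, applied to the Hénon map rather than a Lennard–Jones fluid so that it stays self-contained: tangent vectors are pushed through the Jacobian and re-orthonormalised by QR at every step, and the logarithms of the diagonal of R accumulate into the Lyapunov exponents. The map, its parameters, and the iteration count are illustrative choices, not part of the cited work.

```python
import numpy as np

def henon(state, a=1.4, b=0.3):
    x, y = state
    return np.array([1.0 - a * x * x + y, b * x])

def henon_jacobian(state, a=1.4, b=0.3):
    x, _ = state
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

def lyapunov_spectrum(n_steps=100_000):
    """Gram-Schmidt (QR) Lyapunov vectors for the Henon map: push a set of
    tangent vectors through the Jacobian, re-orthonormalise with QR, and
    accumulate the logs of the diagonal of R."""
    state = np.array([0.1, 0.1])
    Q = np.eye(2)
    log_sum = np.zeros(2)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(henon_jacobian(state) @ Q)
        log_sum += np.log(np.abs(np.diag(R)))
        state = henon(state)
    return log_sum / n_steps

print(lyapunov_spectrum())   # roughly (0.42, -1.62) for the standard parameters
```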

  1. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra

  2. Fast parallel computation of polynomials using few processors

    DEFF Research Database (Denmark)

    Valiant, Leslie; Skyum, Sven

    1981-01-01

    It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^O(1) processors....

  3. Step-and-Repeat Nanoimprint-, Photo- and Laser Lithography from One Customised CNC Machine.

    Science.gov (United States)

    Greer, Andrew Im; Della-Rosa, Benoit; Khokhar, Ali Z; Gadegaard, Nikolaj

    2016-12-01

    The conversion of a computer numerical control machine into a nanoimprint step-and-repeat tool with additional laser- and photolithography capacity is documented here. All three processes, each demonstrated on a variety of photoresists, are performed successfully and analysed so as to enable the reader to relate their known lithography process(es) to the findings. Using the converted tool, 1 cm² of nanopattern may be exposed in 6 s, over 3300 times faster than the electron beam equivalent. Nanoimprint tools are commercially available, but these can cost around 1000 times more than this customised computer numerical control (CNC) machine. The converted equipment facilitates rapid production and large area micro- and nanoscale research on small grants, ultimately enabling faster and more diverse growth in this field of science. In comparison to commercial tools, this converted CNC also boasts capacity to handle larger substrates, temperature control and active force control, up to ten times more curing dose and compactness. Actual devices are fabricated using the machine including an expanded nanotopographic array and microfluidic PDMS Y-channel mixers.

  4. The K-Step Spatial Sign Covariance Matrix

    NARCIS (Netherlands)

    Croux, C.; Dehon, C.; Yadine, A.

    2010-01-01

    The Sign Covariance Matrix is an orthogonal equivariant estimator of multivariate scale. It is often used as an easy-to-compute and highly robust estimator. In this paper we propose a k-step version of the Sign Covariance Matrix, which improves its efficiency while keeping the maximal breakdown

  5. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    Directory of Open Access Journals (Sweden)

    Artem Yankov

    2012-01-01

    Full Text Available For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
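
    A minimal sketch of the stochastic-sampling idea behind the XSUSA approach and the second step of the two-step method: sample perturbed input parameters from their covariance, rerun the model, and take statistics of the output. The toy k-eff model, the parameter names, and the covariance values are invented purely for illustration and have no connection to the benchmark.

```python
import numpy as np

def propagate_uncertainty(model, nominal, rel_cov, n_samples=500, seed=0):
    """Stochastic-sampling uncertainty propagation: perturb the input
    parameters according to their covariance, rerun the model, and take
    statistics of the output."""
    rng = np.random.default_rng(seed)
    cov = np.diag((rel_cov * nominal) ** 2)
    samples = rng.multivariate_normal(nominal, cov, size=n_samples)
    outputs = np.array([model(s) for s in samples])
    return outputs.mean(), outputs.std(ddof=1)

# Toy stand-in for a core simulator: k_eff as a simple function of two
# "cross sections" (values and sensitivities are made up for illustration).
def toy_keff(xs):
    nu_sigma_f, sigma_a = xs
    return nu_sigma_f / sigma_a

nominal = np.array([0.0065, 0.0060])
rel_cov = np.array([0.01, 0.02])          # 1% and 2% relative 1-sigma uncertainties
mean_k, sigma_k = propagate_uncertainty(toy_keff, nominal, rel_cov)
print(mean_k, sigma_k)
```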

  6. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementations of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis are challenging. This requires exploiting the data parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of these applications employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependency between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application where the distribution of work and memory access pattern at each time step is irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address

  7. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet to largely avoid redundant frequency information to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and step-length formulas on the multiscale FWI through several numerical tests. The investigation of up to eight versions of the nonlinear CG method with and without Gaussian white noise makes clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate results, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and the complexity of the model. When the initial velocity model deviates far from the real model or the
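
    For reference, the update parameters distinguishing the classical CG variants named above can be written down compactly; the sketch below collects them (FR, PRP, HS, DY, CD) and drives a bare-bones nonlinear CG loop on a quadratic test objective standing in for the misfit functional. The exact-step line search and the restart safeguard are simplifying assumptions; a real FWI code would use one of the step-length rules discussed in the record.

```python
import numpy as np

def cg_beta(g_new, g_old, d_old, variant="PRP"):
    """Update parameter beta for the nonlinear CG direction
    d_new = -g_new + beta * d_old, for several classical variants."""
    y = g_new - g_old
    if variant == "FR":      # Fletcher-Reeves
        return (g_new @ g_new) / (g_old @ g_old)
    if variant == "PRP":     # Polak-Ribiere-Polyak
        return (g_new @ y) / (g_old @ g_old)
    if variant == "HS":      # Hestenes-Stiefel
        return (g_new @ y) / (d_old @ y)
    if variant == "DY":      # Dai-Yuan
        return (g_new @ g_new) / (d_old @ y)
    if variant == "CD":      # Conjugate Descent
        return -(g_new @ g_new) / (d_old @ g_old)
    raise ValueError(variant)

def nonlinear_cg(grad, x0, line_search, variant="PRP", n_iter=50):
    """Bare-bones nonlinear CG loop; line_search stands in for the
    Direct/Search/Interp step-length rules compared in the record."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(n_iter):
        x = x + line_search(x, d, g) * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < 1e-12:
            break
        d = -g_new + max(cg_beta(g_new, g, d, variant), 0.0) * d   # with restart
        g = g_new
    return x

# Quadratic test objective f(x) = 0.5 x^T A x - b^T x with the exact step
# length along d (a stand-in for the misfit and its line search in FWI).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b
exact_step = lambda x, d, g: -(g @ d) / (d @ (A @ d))
print(nonlinear_cg(grad, np.zeros(2), exact_step, variant="HS"))   # -> [0, 2]
```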

  8. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
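
    A minimal sketch of the computation chain described above, under assumed ratings: a simple linear index-velocity rating gives the mean cross-sectional velocity, a stage-area rating gives the area of the standard section, and their product is the discharge. All coefficients and unit values below are placeholders, not data from any gaging station.

```python
import numpy as np

# Hypothetical index-velocity and stage-area ratings for a gaging station.
# In practice the coefficients come from regression of discharge measurements
# against the measured index velocity and stage.
def mean_velocity(v_index):
    """Simple linear index-velocity rating: V_mean = a + b * V_index."""
    a, b = 0.05, 0.92
    return a + b * v_index

def channel_area(stage):
    """Stage-area rating for the standard cross section (here a rectangular
    channel 20 m wide, as a placeholder)."""
    width = 20.0
    return width * stage

def discharge(stage, v_index):
    """Index velocity method: Q = A(stage) * V_mean(V_index)."""
    return channel_area(stage) * mean_velocity(v_index)

# 15-minute unit values of stage (m) and index velocity (m/s).
stage = np.array([1.20, 1.22, 1.25, 1.30])
v_index = np.array([0.45, 0.50, 0.58, 0.66])
print(discharge(stage, v_index))   # discharge in m^3/s for each time step
```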

  9. Stability analysis of the Backward Euler time discretization for the pin-resolved transport transient reactor calculation

    International Nuclear Information System (INIS)

    Zhu, Ang; Xu, Yunlin; Downar, Thomas

    2016-01-01

    Three-dimensional, full core transport modeling with pin-resolved detail for reactor dynamic simulation is important for some multi-physics reactor applications. However, it can be computationally intensive due to the difficulty in maintaining accuracy while minimizing the number of time steps. A recently proposed Transient Multi-Level (TML) methodology overcomes this difficulty by using multi-level transient solvers to capture the physical phenomena in different time domains and thus maximize numerical accuracy and computational efficiency. One major problem with the TML method is the negative flux/precursor number density generated using large time steps for the MOC solver, which is due to the Backward Euler discretization scheme. In this paper, the stability issue of the Backward Euler discretization is first investigated using the Point Kinetics Equations (PKEs), and the predicted maximum allowed time step for the SPERT test 60 case is shown to be less than 10 ms. To overcome this difficulty, linear and exponential transformations are investigated using the PKEs. The linear transformation is shown to increase the maximum time step by a factor of 2, and the exponential transformation is shown to increase the maximum time step by a factor of 5, as well as provide unconditional stability above a specified threshold. The two sets of transformations are then applied to the TML scheme in the MPACT code, and the numerical results show good agreement between the PKEs model and the MPACT whole-core transport solution for the standard, linearly transformed, and exponentially transformed maximum time steps in three different cases: a pin cell case, a 3D SPERT assembly case, and a row of assemblies (“striped assembly case”) from the SPERT model. Finally, the successful whole-transient execution of the striped assembly case shows the ability of the exponential transformation method to use 10 ms and 20 ms time steps, both of which failed with the standard method.
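
    The negative-flux mechanism and the exponential-transformation remedy can be seen on a scalar toy problem d(phi)/dt = omega*phi: Backward Euler's update factor 1/(1 - omega*dt) turns negative once omega*dt exceeds 1, whereas advancing the transformed variable psi = exp(-w*t)*phi stays positive when the frequency estimate w is close to the true growth rate. The sketch below only illustrates that idea and is not the MPACT implementation; omega, dt, and the frequency estimate are assumed values.

```python
import numpy as np

omega, phi0, dt, n_steps = 50.0, 1.0, 0.04, 10    # omega*dt = 2 > 1

# Standard Backward Euler for d(phi)/dt = omega*phi: the update factor
# 1/(1 - omega*dt) is negative once omega*dt > 1, so the "flux" flips sign.
phi_be = [phi0]
for _ in range(n_steps):
    phi_be.append(phi_be[-1] / (1.0 - omega * dt))
print(phi_be[:4])                                  # sign-alternating, nonphysical

# Exponentially transformed Backward Euler: write phi = exp(w*t) * psi with a
# frequency estimate w (in practice taken from earlier time steps; here we use
# the exact growth rate) and advance the slowly varying psi instead.
w = 50.0
psi = phi0
for _ in range(n_steps):
    psi /= (1.0 - (omega - w) * dt)
phi_exp = np.exp(w * dt * n_steps) * psi

print(phi_exp, phi0 * np.exp(omega * dt * n_steps))  # transformed vs exact
```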

  10. Computing Refined Buneman Trees in Cubic Time

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Östlin, A.

    2003-01-01

    Reconstructing the evolutionary tree for a set of n species based on pairwise distances between the species is a fundamental problem in bioinformatics. Neighbor joining is a popular distance based tree reconstruction method. It always proposes fully resolved binary trees despite missing evidence in the underlying distance data. Distance based methods based on the theory of Buneman trees and refined Buneman trees avoid this problem by only proposing evolutionary trees whose edges satisfy a number of constraints. These trees might not be fully resolved but there is strong combinatorial evidence for each proposed edge. The currently best algorithm for computing the refined Buneman tree from a given distance measure has a running time of O(n^5) and a space consumption of O(n^4). In this paper, we present an algorithm with running time O(n^3) and space consumption O(n^2). The improved complexity of our...

  11. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    Science.gov (United States)

    Lee, Woochan

    Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of existing most powerful computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution

  12. COMPUGIRLS: Stepping Stone to Future Computer-Based Technology Pathways

    Science.gov (United States)

    Lee, Jieun; Husman, Jenefer; Scott, Kimberly A.; Eggum-Wilkens, Natalie D.

    2015-01-01

    The COMPUGIRLS: Culturally relevant technology program for adolescent girls was developed to promote underrepresented girls' future possible selves and career pathways in computer-related technology fields. We hypothesized that the COMPUGIRLS would promote academic possible selves and self-regulation to achieve these possible selves. We compared…

  13. IMPLEMENTATION OF THE IMPROVED QUASI-STATIC METHOD IN RATTLESNAKE/MOOSE FOR TIME-DEPENDENT RADIATION TRANSPORT MODELLING

    Energy Technology Data Exchange (ETDEWEB)

    Zachary M. Prince; Jean C. Ragusa; Yaqi Wang

    2016-02-01

    Because of the recent interest in reactor transient modeling and the restart of the Transient Reactor Test (TREAT) Facility, there has been a need for more efficient, robust methods in computational frameworks. This is the impetus for implementing the Improved Quasi-Static method (IQS) in the RATTLESNAKE/MOOSE framework. IQS has been implemented with CFEM diffusion by factorizing the flux into a time-dependent amplitude and a spatially dependent, weakly time-dependent shape. The shape evaluation is very similar to a flux diffusion solve and is computed at large (macro) time steps, while the amplitude evaluation is a PRKE solve, whose parameters depend on the shape, computed at small (micro) time steps. IQS has been tested with a custom one-dimensional example and the TWIGL ramp benchmark. These examples prove it to be a viable and effective method for highly transient cases. More complex cases will be applied to further test the method and its implementation.
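
    To make the amplitude (PRKE) micro-step concrete, the sketch below advances the point reactor kinetics equations with a single delayed-neutron group using Backward Euler. In IQS this solve runs at small time steps between the macro-step shape updates, with its parameters recomputed from the shape; here the kinetics parameters and the step reactivity insertion are generic illustrative values, not those of any RATTLESNAKE/MOOSE model.

```python
import numpy as np

def prke_backward_euler(rho_of_t, dt, n_steps,
                        beta=0.0065, Lambda=5.0e-5, lam=0.08):
    """Backward-Euler micro-step solve of the point reactor kinetics equations
    (one delayed-neutron group).  In IQS this amplitude solve runs between the
    macro-step shape updates; the parameters here are generic placeholders."""
    p = 1.0
    c = beta * p / (Lambda * lam)          # equilibrium precursor concentration
    history = [p]
    for n in range(n_steps):
        rho = rho_of_t((n + 1) * dt)
        # Linear system from Backward Euler applied to dp/dt and dc/dt.
        A = np.array([[1.0 - dt * (rho - beta) / Lambda, -dt * lam],
                      [-dt * beta / Lambda, 1.0 + dt * lam]])
        p, c = np.linalg.solve(A, np.array([p, c]))
        history.append(p)
    return np.array(history)

# Step reactivity insertion of 0.1 beta at t = 0 (hypothetical transient).
amp = prke_backward_euler(lambda t: 0.1 * 0.0065, dt=1.0e-3, n_steps=1000)
print(amp[-1])   # amplitude after 1 s
```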

  14. Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, M.; Borm, P.E.M.; Quant, M.

    2014-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.

  15. Computational intelligence in time series forecasting theory and engineering applications

    CERN Document Server

    Palit, Ajoy K

    2005-01-01

    Foresight in an engineering enterprise can make the difference between success and failure, and can be vital to the effective control of industrial systems. Applying time series analysis in the on-line milieu of most industrial plants has been problematic owing to the time and computational effort required. The advent of soft computing tools offers a solution. The authors harness the power of intelligent technologies individually and in combination. Examples of the particular systems and processes susceptible to each technique are investigated, cultivating a comprehensive exposition of the improvements on offer in quality, model building and predictive control and the selection of appropriate tools from the plethora available. Application-oriented engineers in process control, manufacturing, production industry and research centres will find much to interest them in this book. It is suitable for industrial training purposes, as well as serving as valuable reference material for experimental researchers.

  16. Step-to-step variability in treadmill walking: influence of rhythmic auditory cueing.

    Directory of Open Access Journals (Sweden)

    Philippe Terrier

    Full Text Available While walking, human beings continuously adjust step length (SpL), step time (SpT), step speed (SpS = SpL/SpT) and step width (SpW) by integrating both feedforward and feedback mechanisms. These motor control processes result in correlations of gait parameters between consecutive strides (statistical persistence). Constraining gait with a speed cue (treadmill) and/or a rhythmic auditory cue (metronome) modifies the statistical persistence to anti-persistence. The objective was to analyze whether the combined effect of treadmill and rhythmic auditory cueing (RAC) modified not only statistical persistence, but also fluctuation magnitude (standard deviation, SD) and stationarity of SpL, SpT, SpS and SpW. Twenty healthy subjects performed 6 × 5 min walking tests at various imposed speeds on a treadmill instrumented with foot-pressure sensors. Freely chosen walking cadences were assessed during the first three trials, and then imposed accordingly in the last trials with a metronome. Fluctuation magnitude (SD) of SpT, SpL, SpS and SpW was assessed, as well as the NonStationarity Index (NSI), which estimates the dispersion of local means in the time series (SD of 20 local means over 10 steps). No effect of RAC on fluctuation magnitude (SD) was observed. SpW was not modified by RAC, which is likely evidence that lateral foot placement is regulated separately. Stationarity (NSI) was modified by RAC in the same manner as the persistence pattern: the treadmill induced low NSI in the time series of SpS, and high NSI in SpT and SpL. On the contrary, SpT, SpL and SpS exhibited low NSI under the RAC condition. We used a relatively short sample of consecutive strides (100) compared to the usual number of strides required to analyze fluctuation dynamics (200 to 1000 strides). Therefore, the responsiveness of the stationarity measure (NSI) to cued walking opens the perspective of performing short walking tests adapted to patients with a reduced gait perimeter.
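
    The NonStationarity Index defined above (the SD of 20 local means taken over 10 steps each) is straightforward to compute; the sketch below applies it, together with the ordinary SD, to a synthetic step-time series. The series and its drift are fabricated purely to illustrate the computation.

```python
import numpy as np

def nonstationarity_index(x, n_windows=20, window=10):
    """NonStationarity Index as described above: the SD of local means computed
    over consecutive windows (20 means of 10 steps each for a 200-step series)."""
    x = np.asarray(x[:n_windows * window], dtype=float)
    local_means = x.reshape(n_windows, window).mean(axis=1)
    return local_means.std(ddof=1)

# Synthetic step-time series (seconds): white fluctuation around 0.55 s plus a
# slow drift, just to illustrate the computation.
rng = np.random.default_rng(1)
n = 200
spt = 0.55 + 0.01 * rng.standard_normal(n) + 0.005 * np.linspace(0.0, 1.0, n)
print("SD  =", spt.std(ddof=1))
print("NSI =", nonstationarity_index(spt))
```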

  17. Theory and computation of disturbance invariant sets for discrete-time linear systems

    Directory of Open Access Journals (Sweden)

    Kolmanovsky Ilya

    1998-01-01

    Full Text Available This paper considers the characterization and computation of invariant sets for discrete-time, time-invariant, linear systems with disturbance inputs whose values are confined to a specified compact set but are otherwise unknown. The emphasis is on determining maximal disturbance-invariant sets X that belong to a specified subset Γ of the state space. Such d-invariant sets have important applications in control problems where there are pointwise-in-time state constraints of the form x(t) ∈ Γ. One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
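
    The iterative construction of a maximal disturbance-invariant set can be illustrated for a small discrete-time system with polyhedral state constraints and a box-bounded disturbance. The sketch below follows a standard recursion from this literature (state constraints are propagated through the dynamics and tightened by the worst-case disturbance, with redundancy of new constraints checked by linear programming); the system matrices, bounds, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def max_disturbance_invariant_set(A, E, H, h, w_bound, k_max=50, tol=1e-9):
    """Build {x : H A^k x <= h - tightening_k, k = 0..k*} until the next block
    of constraints is redundant (checked by LP). Disturbance is a box
    |w_i| <= w_bound[i]. Returns the stacked halfspace description (F, g)."""
    F_list, g_list = [H.copy()], [h.copy()]
    Ak = np.eye(A.shape[0])
    tightening = np.zeros_like(h)
    for k in range(1, k_max + 1):
        # worst-case disturbance contribution of step k-1: max_w H A^(k-1) E w
        tightening = tightening + np.abs(H @ Ak @ E) @ w_bound
        Ak = Ak @ A
        F_new, g_new = H @ Ak, h - tightening
        F, g = np.vstack(F_list), np.hstack(g_list)
        # redundancy check: is every new halfspace already implied by the set?
        redundant = True
        for f_i, g_i in zip(F_new, g_new):
            res = linprog(-f_i, A_ub=F, b_ub=g,
                          bounds=[(None, None)] * A.shape[0])
            if not res.success or -res.fun > g_i + tol:
                redundant = False
                break
        if redundant:
            return F, g
        F_list.append(F_new)
        g_list.append(g_new)
    raise RuntimeError("not finitely determined within k_max iterations")

# Illustrative 2-D example: stable A, box disturbance, box state constraints
A = np.array([[0.9, 0.2], [0.0, 0.8]])
E = np.eye(2)
H = np.vstack([np.eye(2), -np.eye(2)])   # |x1| <= 1, |x2| <= 1
h = np.ones(4)
F, g = max_disturbance_invariant_set(A, E, H, h, w_bound=np.array([0.05, 0.05]))
print(F.shape[0], "halfspaces describe the maximal d-invariant set")
```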

  18. Calibration and Evaluation of Different Estimation Models of Daily Solar Radiation in Seasonally and Annual Time Steps in Shiraz Region

    Directory of Open Access Journals (Sweden)

    Hamid Reza Fooladmand

    2017-06-01

    2006 to 2008 were used for calibrating fourteen models for estimating solar radiation in seasonal and annual time steps, and the measured data of the years 2009 and 2010 were used for evaluating the obtained results. The equations used in this study were divided into three groups: 1) equations based only on sunshine hours; 2) equations based only on air temperature; 3) equations based on sunshine hours and air temperature together. On the other hand, a statistical comparison must be done to select the best equation for estimating solar radiation in seasonal and annual time steps. For this purpose, in the validation stage the combination of statistical equations and linear correlation was used, and then the mean square deviation (MSD) was calculated to evaluate the different models for estimating solar radiation in the mentioned time steps. Results and Discussion: The mean values of the mean square deviation (MSD) of the fourteen models for estimating solar radiation were 24.16, 20.42, 4.08 and 16.19 for spring to winter, respectively, and 15.40 in the annual time step. Therefore, the results showed that the equations had high accuracy for autumn but low accuracy for the other seasons, so using the equations in the annual time step was more appropriate than using the equations in seasonal time steps. Also, the mean values of the mean square deviation (MSD) of the equations based only on sunshine hours, the equations based only on air temperature, and the equations based on the combination of sunshine hours and air temperature were 14.82, 17.40 and 14.88, respectively. Therefore, the results indicated that the models based only on air temperature were the worst for estimating solar radiation in the Shiraz region, and therefore using sunshine hours for estimating solar radiation is necessary. Conclusions: In this study for estimating solar radiation in seasonal and annual time steps in Shiraz region

  19. Materials Frontiers to Empower Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Antoinette Jane [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sarrao, John Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Richardson, Christopher [Laboratory for Physical Sciences, College Park, MD (United States)

    2015-06-11

    This is an exciting time at the nexus of quantum computing and materials research. The materials frontiers described in this report represent a significant advance in electronic materials and our understanding of the interactions between the local material and a manufactured quantum state. Simultaneously, directed efforts to solve materials issues related to quantum computing provide an opportunity to control and probe the fundamental arrangement of matter that will impact all electronic materials. An opportunity exists to extend our understanding of materials functionality from electronic-grade to quantum-grade by achieving a predictive understanding of noise and decoherence in qubits and their origins in materials defects and environmental coupling. Realizing this vision systematically and predictively will be transformative for quantum computing and will represent a qualitative step forward in materials prediction and control.

  20. Step out-step in sequencing games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2015-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,

  1. Assessment of radiopacity of restorative composite resins with various target distances and exposure times and a modified aluminum step wedge

    Energy Technology Data Exchange (ETDEWEB)

    Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)

    2012-09-15

    ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins according to various tube-target distances and exposure times. Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) operating at 70 kVp and 7 mA, along with a 12-step aluminum stepwedge (1 mm incremental steps), using a PSP digital sensor. Thereafter, the radiopacities measured with Digora for Windows software 2.5 were converted to absorbencies (i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray value). Furthermore, a linear regression model of aluminum thickness and absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly with various setups (p<0.001) and between the materials (p<0.001). The best predicted model was obtained for the 30 cm, 0.2 seconds setup (R2=0.999). Data from the reduced modified stepwedge were remarkably comparable with the 12-step stepwedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.
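
    The gray-value-to-absorbency conversion and the aluminum-equivalence regression described above can be sketched as follows; the base-10 logarithm, the clipping of saturated gray values, and the wedge gray values are assumptions for illustration, not data from the study.

```python
import numpy as np

def absorbency(gray):
    """A = -log10(1 - G/255), with gray values clipped just below 255 to
    avoid log(0) for saturated pixels (the clipping is an assumption)."""
    g = np.clip(np.asarray(gray, dtype=float), 0, 254.9)
    return -np.log10(1.0 - g / 255.0)

# Hypothetical mean gray values of the aluminum step wedge (1 mm increments)
al_thickness = np.arange(1, 13)   # mm, 12 steps
al_gray = np.array([60, 85, 105, 122, 137, 150, 161, 171, 180, 188, 195, 201])

# Linear regression: absorbency as a function of aluminum thickness
slope, intercept = np.polyfit(al_thickness, absorbency(al_gray), 1)

def aluminum_equivalent(gray):
    """Convert a material's mean gray value to equivalent aluminum thickness (mm)."""
    return (absorbency(gray) - intercept) / slope

print("Composite at gray 140 ~", round(aluminum_equivalent(140), 2), "mm Al")
```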

  2. Ultrasonic divergent-beam scanner for time-of-flight tomography with computer evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Glover, G H

    1978-03-02

    The rotatable ultrasonic divergent-beam scanner is designed for time-of-flight tomography with computer evaluation. With it, parameters that are important for the structure of soft tissues can be measured, e.g. time as a function of the velocity distribution along a certain path of flight (the method is analogous to transaxial X-ray tomography). Moreover, it permits quantitative measurement of two-dimensional velocity distributions and may therefore be applied to serial examinations for detecting cancer of the breast. As computers, digital memories as well as analog-digital hybrid systems are suitable.

  3. Computing moment to moment BOLD activation for real-time neurofeedback

    Science.gov (United States)

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
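
    The core idea of the method, re-estimating a general linear model as acquisitions arrive, predicting the expected intensity of the newest acquisition, and scaling the residual into a statistic, can be sketched as below. This is a simplified refit-at-every-acquisition version rather than the authors' incremental estimator, and the design matrix, noise model, and scaling are illustrative assumptions.

```python
import numpy as np

def realtime_activation(y_history, X_history, x_new, y_new):
    """Fit a GLM to all acquisitions seen so far, predict the new acquisition,
    and return the variance-scaled residual as a feedback statistic."""
    beta, *_ = np.linalg.lstsq(X_history, y_history, rcond=None)
    resid = y_history - X_history @ beta
    dof = max(len(y_history) - X_history.shape[1], 1)
    sigma2 = resid @ resid / dof                      # noise variance estimate
    y_expected = x_new @ beta
    # variance of the prediction at x_new plus observation noise (assumption)
    XtX_inv = np.linalg.pinv(X_history.T @ X_history)
    var_pred = sigma2 * (1.0 + x_new @ XtX_inv @ x_new)
    return (y_new - y_expected) / np.sqrt(var_pred)

# Toy usage: constant + linear-drift regressors as the nuisance design
rng = np.random.default_rng(1)
t = np.arange(200)
X = np.column_stack([np.ones_like(t, dtype=float), t / t.max()])
y = 100 + 2.0 * X[:, 1] + rng.normal(0, 1.0, size=t.size)
stat = realtime_activation(y[:150], X[:150], X[150], y[150] + 3.0)  # injected activation
print("feedback statistic:", round(float(stat), 2))
```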

  4. Evolution of robot-assisted orthotopic ileal neobladder formation: a step-by-step update to the University of Southern California (USC) technique.

    Science.gov (United States)

    Chopra, Sameer; de Castro Abreu, Andre Luis; Berger, Andre K; Sehgal, Shuchi; Gill, Inderbir; Aron, Monish; Desai, Mihir M

    2017-01-01

    To describe our step-by-step technique for robotic intracorporeal neobladder formation. The main surgical steps in forming the intracorporeal orthotopic ileal neobladder are: isolation of 65 cm of small bowel; small bowel anastomosis; bowel detubularisation; suturing of the posterior wall of the neobladder; neobladder-urethral anastomosis and cross folding of the pouch; and uretero-enteral anastomosis. Improvements have been made to these steps to enhance time efficiency without compromising neobladder configuration. Our technical improvements have reduced operative time from 450 to 360 min. We describe an updated step-by-step technique of robot-assisted intracorporeal orthotopic ileal neobladder formation. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.

  5. Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to demonstrate key elements of feasibility for a high speed automated time domain terahertz computed axial tomography (TD-THz CT) non destructive...

  6. Step Detection Robust against the Dynamics of Smartphones

    Science.gov (United States)

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
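
    A simplified peak-valley step counter in the spirit of the description above, using an adaptive magnitude threshold derived from the detected peak heights and a minimum temporal spacing between peaks; this is an illustrative sketch, not the authors' algorithm, and the threshold rules and synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_steps(acc_xyz, fs, min_step_time=0.3):
    """Count steps from 3-axis accelerometer data (rows = samples). A step is
    a peak in |acc| paired with a following valley; the magnitude threshold
    adapts to the mean/deviation of the detected peak heights (simplified)."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    mag = mag - mag.mean()                         # remove gravity offset (rough)
    distance = int(min_step_time * fs)             # temporal threshold
    peaks, props = find_peaks(mag, distance=distance, height=0.0)
    if len(peaks) == 0:
        return 0
    heights = props["peak_heights"]
    thr = heights.mean() - heights.std()           # adaptive magnitude threshold
    valid = peaks[heights >= thr]
    steps = 0
    for p, nxt in zip(valid, list(valid[1:]) + [len(mag)]):
        if nxt - p > 1 and mag[p:nxt].min() < 0:   # peak paired with a valley
            steps += 1
    return steps

# Synthetic walking-like signal at 50 Hz, ~2 steps per second for 10 s
fs, t = 50, np.arange(0, 10, 1 / 50)
acc = np.column_stack([np.zeros_like(t), np.zeros_like(t),
                       9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t)])
print("detected steps:", detect_steps(acc, fs))    # expect roughly 20
```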

  7. GRAPH-BASED POST INCIDENT INTERNAL AUDIT METHOD OF COMPUTER EQUIPMENT

    Directory of Open Access Journals (Sweden)

    I. S. Pantiukhin

    2016-05-01

    Full Text Available A graph-based post-incident internal audit method for computer equipment is proposed. The essence of the proposed solution consists in establishing relationships among hard disk dumps (images), RAM and network data. This method is intended to describe information security incident properties during the internal post-incident audit of computer equipment. Hard disk dumps are received and formed at the first step. This is followed by separation of these dumps into a set of components. The set of components includes a large set of attributes that forms the basis for the formation of the graph. The separated data are recorded into a non-relational database management system (NoSQL) that is adapted for graph storage, fast access and processing. A dump-linking method is applied at the final step. The presented method enables a human expert in information security or computer forensics to perform a more precise and informative internal audit of computer equipment. The proposed method allows reducing the time spent on internal audit of computer equipment while increasing the accuracy and informativeness of such an audit. The method has development potential and can be applied along with other components in the tasks of user identification and computer forensics.

  8. The Computer Revolution in Science: Steps towards the realization of computer-supported discovery environments

    NARCIS (Netherlands)

    de Jong, Hidde; Rip, Arie

    1997-01-01

    The tools that scientists use in their search processes together form so-called discovery environments. The promise of artificial intelligence and other branches of computer science is to radically transform conventional discovery environments by equipping scientists with a range of powerful

  9. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally bivariate blending method was......, on the other hand, lighter than the single-step method....

  10. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  11. Cartoon computation: quantum-like computing without quantum mechanics

    International Nuclear Information System (INIS)

    Aerts, Diederik; Czachor, Marek

    2007-01-01

    We present a computational framework based on geometric structures. No quantum mechanics is involved, and yet the algorithms perform tasks analogous to quantum computation. Tensor products and entangled states are not needed; they are replaced by sets of basic shapes. To test the formalism we solve in geometric terms the Deutsch-Jozsa problem, historically the first example that demonstrated the potential power of quantum computation. Each step of the algorithm has a clear geometric interpretation and allows for a cartoon representation. (fast track communication)

  12. Kinetics of protein–ligand unbinding: Predicting pathways, rates, and rate-limiting steps

    Science.gov (United States)

    Tiwary, Pratyush; Limongelli, Vittorio; Salvalaglio, Matteo; Parrinello, Michele

    2015-01-01

    The ability to predict the mechanisms and the associated rate constants of protein–ligand unbinding is of great practical importance in drug design. In this work we demonstrate how a recently introduced metadynamics-based approach allows exploration of the unbinding pathways, estimation of the rates, and determination of the rate-limiting steps in the paradigmatic case of the trypsin–benzamidine system. Protein, ligand, and solvent are described with full atomic resolution. Using metadynamics, multiple unbinding trajectories that start with the ligand in the crystallographic binding pose and end with the ligand in the fully solvated state are generated. The unbinding rate koff is computed from the mean residence time of the ligand. Using our previously computed binding affinity we also obtain the binding rate kon. Both rates are in agreement with reported experimental values. We uncover the complex pathways of unbinding trajectories and describe the critical rate-limiting steps with unprecedented detail. Our findings illuminate the role played by the coupling between subtle protein backbone fluctuations and the solvation by water molecules that enter the binding pocket and assist in the breaking of the shielded hydrogen bonds. We expect our approach to be useful in calculating rates for general protein–ligand systems and a valid support for drug design. PMID:25605901

  13. Detection of Tomato black ring virus by real-time one-step RT-PCR.

    Science.gov (United States)

    Harper, Scott J; Delmiglio, Catia; Ward, Lisa I; Clover, Gerard R G

    2011-01-01

    A TaqMan-based real-time one-step RT-PCR assay was developed for the rapid detection of Tomato black ring virus (TBRV), a significant plant pathogen which infects a wide range of economically important crops. Primers and a probe were designed against existing genomic sequences to amplify a 72 bp fragment from RNA-2. The assay amplified all isolates of TBRV tested, but no amplification was observed from the RNA of other nepovirus species or healthy host plants. The detection limit of the assay was estimated to be around nine copies of the TBRV target region in total RNA. A comparison with conventional RT-PCR and ELISA indicated that ELISA, the current standard test method, lacked specificity and reacted to all nepovirus species tested, while conventional RT-PCR was approximately ten-fold less sensitive than the real-time RT-PCR assay. Finally, the real-time RT-PCR assay was tested using five different RT-PCR reagent kits and was found to be robust and reliable, with no significant differences in sensitivity being found. The development of this rapid assay should aid in quarantine and post-border surveys for regulatory agencies. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Finite element time domain modeling of controlled-Source electromagnetic data with a hybrid boundary condition

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin

    2017-01-01

    method which is unconditionally stable. We solve the diffusion equation for the electric field with a total field formulation. The finite element system of equations is solved using a direct method. The solutions of the electric field at different times can be obtained using the effective time stepping method with trivial computation cost once the matrix is factorized. We try to keep the same time step size for a fixed number of steps using an adaptive time step doubling (ATSD) method. The finite element modeling domain is also truncated using a semi-adaptive method. We propose a new boundary condition based on approximating the total field on the modeling boundary using the primary field corresponding to a layered background model. We validate our algorithm using several synthetic model studies.
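
    The point about reusing a factorized system matrix while the time step stays constant, and doubling the step periodically (ATSD), can be illustrated on a toy 1-D diffusion problem with backward Euler time stepping; this is a sketch of the general idea under stated assumptions, not the authors' CSEM code.

```python
import numpy as np
from scipy.sparse import identity, diags
from scipy.sparse.linalg import splu

# Toy 1-D diffusion du/dt = d2u/dx2, backward Euler, homogeneous Dirichlet ends
n, dx = 200, 1.0 / 201
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
u = np.exp(-((np.linspace(dx, 1 - dx, n) - 0.5) ** 2) / 0.001)  # initial field

dt, steps_per_level, t = 1e-6, 8, 0.0
for level in range(6):
    # factorize (I - dt*L) once per step size and reuse it for all steps
    lu = splu((identity(n, format="csc") - dt * L.tocsc()).tocsc())
    for _ in range(steps_per_level):
        u = lu.solve(u)   # one implicit step at trivial cost after factorization
        t += dt
    dt *= 2.0             # adaptive time step doubling (ATSD): double, then refactorize
print("final time:", t, " max field:", float(u.max()))
```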

  15. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.

  16. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species.

    Science.gov (United States)

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A

    2017-01-01

    Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, and (b) optimization of the PCR against various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared, and a two-step real-time PCR assay was developed and validated in our setup following the recommendations of the European Aspergillus PCR Initiative. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second step (species-specific PCR) the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources including bacteria, viruses, yeasts, other Aspergillus spp., other fungal species and human DNA, with no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.

  17. Iteratively improving Hi-C experiments one step at a time.

    Science.gov (United States)

    Golloshi, Rosela; Sanders, Jacob T; McCord, Rachel Patton

    2018-04-30

    The 3D organization of eukaryotic chromosomes affects key processes such as gene expression, DNA replication, cell division, and response to DNA damage. The genome-wide chromosome conformation capture (Hi-C) approach can characterize the landscape of 3D genome organization by measuring interaction frequencies between all genomic regions. Hi-C protocol improvements and rapid advances in DNA sequencing power have made Hi-C useful to study diverse biological systems, not only to elucidate the role of 3D genome structure in proper cellular function, but also to characterize genomic rearrangements, assemble new genomes, and consider chromatin interactions as potential biomarkers for diseases. Yet, the Hi-C protocol is still complex and subject to variations at numerous steps that can affect the resulting data. Thus, there is still a need for better understanding and control of factors that contribute to Hi-C experiment success and data quality. Here, we evaluate recently proposed Hi-C protocol modifications as well as often overlooked variables in sample preparation and examine their effects on Hi-C data quality. We examine artifacts that can occur during Hi-C library preparation, including microhomology-based artificial template copying and chimera formation that can add noise to the downstream data. Exploring the mechanisms underlying Hi-C artifacts pinpoints steps that should be further optimized in the future. To improve the utility of Hi-C in characterizing the 3D genome of specialized populations of cells or small samples of primary tissue, we identify steps prone to DNA loss which should be considered to adapt Hi-C to lower cell numbers. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — In this Phase 2 project, we propose to develop, construct, and deliver to NASA a computed axial tomography time-domain terahertz (CT TD-THz) non destructive...

  19. Local time stepping with the discontinuous Galerkin method for wave propagation in 3D heterogeneous media

    NARCIS (Netherlands)

    Minisini, S.; Zhebel, E.; Kononov, A.; Mulder, W.A.

    2013-01-01

    Modeling and imaging techniques for geophysics are extremely demanding in terms of computational resources. Seismic data attempt to resolve smaller scales and deeper targets in increasingly more complex geologic settings. Finite elements enable accurate simulation of time-dependent wave propagation

  20. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed Tc) of a double-precision computation of a variable-parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
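
    One way to estimate a reliable computation time in the spirit of the abstract is to iterate the logistic map in double precision and in much higher precision, and record the first step at which the two trajectories disagree beyond a tolerance. The sketch below uses a fixed-parameter map; the map parameter, tolerance, and reference precision are illustrative assumptions.

```python
from mpmath import mp, mpf

def reliable_steps(x0=0.1, r=4.0, tol=1e-3, max_steps=200, digits=200):
    """First iteration index at which double precision diverges from a
    high-precision reference of the logistic map x <- r*x*(1-x)."""
    mp.dps = digits
    x_double, x_ref = float(x0), mpf(str(x0))
    for n in range(1, max_steps + 1):
        x_double = r * x_double * (1.0 - x_double)   # double precision
        x_ref = mpf(r) * x_ref * (1 - x_ref)         # high-precision reference
        if abs(x_double - float(x_ref)) > tol:
            return n
    return max_steps

print("reliable computation time (steps):", reliable_steps())
```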

  1. Comparison of single-step and two-step purified coagulants from Moringa oleifera seed for turbidity and DOC removal.

    Science.gov (United States)

    Sánchez-Martín, J; Ghebremichael, K; Beltrán-Heredia, J

    2010-08-01

    The coagulant proteins from Moringa oleifera purified with single-step and two-step ion-exchange processes were used for the coagulation of surface water from Meuse river in The Netherlands. The performances of the two purified coagulants and the crude extract were assessed in terms of turbidity and DOC removal. The results indicated that the optimum dosage of the single-step purified coagulant was more than two times higher compared to the two-step purified coagulant in terms of turbidity removal. And the residual DOC in the two-step purified coagulant was lower than in single-step purified coagulant or crude extract. (c) 2010 Elsevier Ltd. All rights reserved.

  2. Qualitative and quantitative assessment of step size adaptation rules

    DEFF Research Database (Denmark)

    Krause, Oswin; Glasmachers, Tobias; Igel, Christian

    2017-01-01

    We present a comparison of step size adaptation methods for evolution strategies, covering recent developments in the field. Following recent work by Hansen et al. we formulate a concise list of performance criteria: a) fast convergence of the mean, b) near-optimal fixed point of the normalized s...... that cumulative step size adaptation (CSA) and two-point adaptation (TPA) provide reliable estimates of the optimal step size. We further find that removing the evolution path of CSA still leads to a reliable algorithm without the computational requirements of CSA.
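
    A compact sketch of cumulative step size adaptation (CSA) inside an isotropic (mu/mu, lambda) evolution strategy on a simple test function; the learning rate, damping, and recombination weights follow commonly used defaults and are assumptions, not the settings compared in the paper.

```python
import numpy as np

def csa_es(f, x0, sigma0=1.0, iters=300, lam=12, seed=0):
    """Isotropic (mu/mu, lambda)-ES with cumulative step size adaptation."""
    rng = np.random.default_rng(seed)
    n, mu = len(x0), lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    mu_eff = 1.0 / np.sum(w ** 2)
    c_sigma = (mu_eff + 2) / (n + mu_eff + 5)                 # common default
    d_sigma = 1 + c_sigma                                      # simplified damping
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n**2))   # E||N(0,I)||
    x, sigma, p_sigma = np.array(x0, float), sigma0, np.zeros(n)
    for _ in range(iters):
        z = rng.standard_normal((lam, n))
        fitness = np.array([f(x + sigma * zi) for zi in z])
        idx = np.argsort(fitness)[:mu]
        z_mean = w @ z[idx]                                    # recombined step direction
        x = x + sigma * z_mean
        # cumulate the normalized step and adapt sigma (CSA update)
        p_sigma = (1 - c_sigma) * p_sigma \
            + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * z_mean
        sigma *= np.exp((c_sigma / d_sigma) * (np.linalg.norm(p_sigma) / chi_n - 1))
    return x, sigma

x_best, sigma_final = csa_es(lambda x: np.sum(x ** 2), x0=np.ones(10) * 3.0)
print("f(x_best) =", float(np.sum(x_best ** 2)), " final sigma =", sigma_final)
```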

  3. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Full Text Available Kai-Lung Hua,1 Che-Hao Hsu,1 Shintami Chusnul Hidayati,1 Wen-Huang Cheng,2 Yu-Jen Chen3 1Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 2Research Center for Information Technology Innovation, Academia Sinica, 3Department of Radiation Oncology, MacKay Memorial Hospital, Taipei, Taiwan Abstract: Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of an automatic exploitation feature and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network

  4. 78 FR 38949 - Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response

    Science.gov (United States)

    2013-06-28

    ... exposed to various forms of cyber attack. In some cases, attacks can be thwarted through the use of...-3383-01] Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response... systems will be successfully attacked. When a successful attack occurs, the job of a Computer Security...

  5. Detection of Listeria monocytogenes in ready-to-eat food by Step One real-time polymerase chain reaction.

    Science.gov (United States)

    Pochop, Jaroslav; Kačániová, Miroslava; Hleba, Lukáš; Lopasovský, L'ubomír; Bobková, Alica; Zeleňáková, Lucia; Stričík, Michal

    2012-01-01

    The aim of this study was to follow contamination of ready-to-eat food with Listeria monocytogenes by using the Step One real time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and MicroSEQ® Listeria monocytogenes Detection Kit for the real-time PCR performance. In 30 samples of ready-to-eat milk and meat products without incubation we detected strains of Listeria monocytogenes in five samples (swabs). Internal positive control (IPC) was positive in all samples. Our results indicated that the real-time PCR assay developed in this study could sensitively detect Listeria monocytogenes in ready-to-eat food without incubation.

  6. Performance evaluation of CT measurements made on step gauges using statistical methodologies

    DEFF Research Database (Denmark)

    Angel, J.; De Chiffre, L.; Kruth, J.P.

    2015-01-01

    In this paper, a study is presented in which statistical methodologies were applied to evaluate the measurement of step gauges on an X-ray computed tomography (CT) system. In particular, the effects of step gauge material density and orientation were investigated. The step gauges consist of uni- and bidirectional lengths. By confirming the repeatability of measurements made on the test system, the number of required scans in the design of experiment (DOE) was reduced. The statistical model was checked using model adequacy principles; model adequacy checking is an important step in validating

  7. CROSAT: A digital computer program for statistical-spectral analysis of two discrete time series

    International Nuclear Information System (INIS)

    Antonopoulos Domis, M.

    1978-03-01

    The program CROSAT computes directly from two discrete time series auto- and cross-spectra, transfer and coherence functions, using a Fast Fourier Transform subroutine. Statistical analysis of the time series is optional. While of general use the program is constructed to be immediately compatible with the ICL 4-70 and H316 computers at AEE Winfrith, and perhaps with minor modifications, with any other hardware system. (author)
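
    The quantities CROSAT computes (auto- and cross-spectra, transfer and coherence functions via the FFT) can be reproduced for two discrete time series with modern library routines; the sketch below only illustrates those quantities under assumed signal and windowing parameters and is unrelated to the original ICL 4-70/H316 implementation.

```python
import numpy as np
from scipy.signal import csd, welch, coherence

fs, n = 100.0, 4096                        # sampling rate (Hz) and record length
rng = np.random.default_rng(2)
x = rng.standard_normal(n)                                  # input series
y = np.convolve(x, np.exp(-np.arange(50) / 10.0), "same")   # output of a toy linear system
y += 0.1 * rng.standard_normal(n)

f, Pxx = welch(x, fs=fs, nperseg=512)      # auto-spectrum of x
f, Pyy = welch(y, fs=fs, nperseg=512)      # auto-spectrum of y
f, Pxy = csd(x, y, fs=fs, nperseg=512)     # cross-spectrum
H = Pxy / Pxx                              # transfer function estimate (H1 estimator)
f, Cxy = coherence(x, y, fs=fs, nperseg=512)

print("peak coherence:", float(Cxy.max()), "at", float(f[np.argmax(Cxy)]), "Hz")
```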

  8. The STEP standard as an approach for design and prototyping

    OpenAIRE

    Plantec , Alain; Ribaud , Vincent

    1998-01-01

    International audience; STEP is an ISO standard (ISO-10303) for the computer-interpretable representation and exchange of product data. Parts of STEP standardize conceptual structures and usage of information in generic or specific domains. The standardization process of these constructs is an evolutionary approach, which uses generated prototypes at different phases of the process. This paper presents a method for the building of prototype generators, inspired by this standardization proces...

  9. Structural comparison of anodic nanoporous-titania fabricated from single-step and three-step of anodization using two paralleled-electrodes anodizing cell

    Directory of Open Access Journals (Sweden)

    Mallika Thabuot

    2016-02-01

    Full Text Available Anodization of a Ti sheet in an ethylene glycol electrolyte containing 0.38 wt% NH4F with the addition of 1.79 wt% H2O at room temperature was studied. Applied potentials of 10-60 V and anodizing times of 1-3 h were used for single-step and three-step anodization within a two-parallel-electrode anodizing cell. The structural and textural properties were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). After annealing at 600°C in an air furnace for 3 h, the TiO2 nanotubes were transformed to a higher proportion of the anatase crystal phase. Crystallization of the anatase phase was also enhanced as the duration of the final anodization step increased. With single-step anodization, the pore texture of the oxide film started to appear at an applied potential of 30 V. A better-ordered arrangement of the TiO2-nanotube array with larger pore size was obtained as the applied potential increased. An applied potential of 60 V was selected for the three-step anodization with an anodizing time of 1-3 h per step. Results showed that smooth surface coverage with a higher density of porous TiO2 was achieved by prolonging the time of the first and second steps; however, tubes discontinuous in length were produced instead of long vertical tubes. The layer thickness of the anodic oxide film depended on the anodizing time of the last anodization step. A better-ordered arrangement of nanostructured TiO2 was produced using three-step anodization at 60 V with 3 h for each step.

  10. Continuous versus step-by-step scanning mode of a novel 3D scanner for CyberKnife measurements

    International Nuclear Information System (INIS)

    Al Kafi, M Abdullah; Mwidu, Umar; Moftah, Belal

    2015-01-01

    The purpose of the study is to investigate the continuous versus step-by-step scanning mode of a commercial circular 3D scanner for commissioning measurements of a robotic stereotactic radiosurgery system. The 3D scanner was used for profile measurements in step-by-step and continuous modes with the intent of comparing the two scanning modes for consistency. Profile measurements of in-plane, cross-plane, 15 degree, and 105 degree were performed for both fixed cones and Iris collimators at the depth of maximum dose and at 10 cm depth. For CyberKnife field size, penumbra, flatness and symmetry analysis, it was observed that the measurements with continuous mode, which can be up to 6 times faster than step-by-step mode, are comparable and produce scans nearly identical to step-by-step mode. When compared with centered step-by-step mode data, fully processed continuous mode data give rise to a maximum of 0.50% and 0.60% symmetry and flatness difference, respectively, for all the fixed cones and Iris collimators studied. - Highlights: • 3D scanner for CyberKnife beam data measurements. • Beam data analysis for continuous and step-by-step scan modes. • Faster continuous scanning data are comparable to step-by-step mode scan data.

  11. Compensatory stepping responses in individuals with stroke: a pilot study.

    Science.gov (United States)

    Lakhani, Bimal; Mansfield, Avril; Inness, Elizabeth L; McIlroy, William E

    2011-05-01

    Impaired postural control and a high incidence of falls are commonly observed following stroke. Compensatory stepping responses are critical to reactive balance control. We hypothesize that, following a stroke, individuals with unilateral limb dyscontrol will be faced with the unique challenge of controlling such rapid stepping reactions that may eventually be linked to the high rate of falling. The objectives of this exploratory pilot study were to investigate compensatory stepping in individuals poststroke with regard to: (1) choice of initial stepping limb (paretic or non-paretic); (2) step characteristics; and (3) differences in step characteristics when the initial step is taken with the paretic vs. the non-paretic limb. Four subjects following stroke (38-165 days post) and 11 healthy young adults were recruited. Anterior and posterior perturbations were delivered by using a weight drop system. Force plates recorded centre-of-pressure excursion prior to the onset of stepping and step timing. Of the four subjects, three only attempted to step with their non-paretic limb and one stepped with either limb. Time to foot-off was generally slow, whereas step onset time and swing time were comparable to healthy controls. Two of the four subjects executed multistep responses in every trial, and attempts to force stepping with the paretic limb were unsuccessful in three of the four subjects. Despite high clinical balance scores, these individuals with stroke demonstrated impaired compensatory stepping responses, suggesting that current clinical evaluations might not accurately reflect reactive balance control in this population.

  12. Recovery of forward stepping in spinal cord injured patients does not transfer to untrained backward stepping.

    Science.gov (United States)

    Grasso, Renato; Ivanenko, Yuri P; Zago, Myrka; Molinari, Marco; Scivoletto, Giorgio; Lacquaniti, Francesco

    2004-08-01

    Six spinal cord injured (SCI) patients were trained to step on a treadmill with body-weight support for 1.5-3 months. At the end of training, foot motion recovered the shape and the step-by-step reproducibility that characterize normal gait. They were then asked to step backward on the treadmill belt that moved in the opposite direction relative to standard forward training. In contrast to healthy subjects, who can immediately reverse the direction of walking by time-reversing the kinematic waveforms, patients were unable to step backward. Similarly patients were unable to perform another untrained locomotor task, namely stepping in place on the idle treadmill. Two patients who were trained to step backward for 2-3 weeks were able to develop control of foot motion appropriate for this task. The results show that locomotor improvement does not transfer to untrained tasks, thus supporting the idea of task-dependent plasticity in human locomotor networks.

  13. A Step Towards A Computing Grid For The LHC Experiments ATLAS Data Challenge 1

    CERN Document Server

    Sturrock, R; Epp, B; Ghete, V M; Kuhn, D; Mello, A G; Caron, B; Vetterli, M C; Karapetian, G V; Martens, K; Agarwal, A; Poffenberger, P R; McPherson, R A; Sobie, R J; Amstrong, S; Benekos, N C; Boisvert, V; Boonekamp, M; Brandt, S; Casado, M P; Elsing, M; Gianotti, F; Goossens, L; Grote, M; Hansen, J B; Mair, K; Nairz, A; Padilla, C; Poppleton, A; Poulard, G; Richter-Was, Elzbieta; Rosati, S; Schörner-Sadenius, T; Wengler, T; Xu, G F; Ping, J L; Chudoba, J; Kosina, J; Lokajícek, M; Svec, J; Tas, P; Hansen, J R; Lytken, E; Nielsen, J L; Wäänänen, A; Tapprogge, Stefan; Calvet, D; Albrand, S; Collot, J; Fulachier, J; Ledroit-Guillon, F; Ohlsson-Malek, F; Viret, S; Wielers, M; Bernardet, K; Corréard, S; Rozanov, A; De Vivie de Régie, J B; Arnault, C; Bourdarios, C; Hrivnác, J; Lechowski, M; Parrour, G; Perus, A; Rousseau, D; Schaffer, A; Unal, G; Derue, F; Chevalier, L; Hassani, S; Laporte, J F; Nicolaidou, R; Pomarède, D; Virchaux, M; Nesvadba, N; Baranov, S; Putzer, A; Khonich, A; Duckeck, G; Schieferdecker, P; Kiryunin, A E; Schieck, J; Lagouri, T; Duchovni, E; Levinson, L; Schrager, D; Negri, G; Bilokon, H; Spogli, L; Barberis, D; Parodi, F; Cataldi, G; Gorini, E; Primavera, M; Spagnolo, S; Cavalli, D; Heldmann, M; Lari, T; Perini, L; Rebatto, D; Resconi, S; Tatarelli, F; Vaccarossa, L; Biglietti, M; Carlino, G; Conventi, F; Doria, A; Merola, L; Polesello, G; Vercesi, V; De Salvo, A; Di Mattia, A; Luminari, L; Nisati, A; Reale, M; Testa, M; Farilla, A; Verducci, M; Cobal, M; Santi, L; Hasegawa, Y; Ishino, M; Mashimo, T; Matsumoto, H; Sakamoto, H; Tanaka, J; Ueda, I; Bentvelsen, Stanislaus Cornelius Maria; Fornaini, A; Gorfine, G; Groep, D; Templon, J; Köster, L J; Konstantinov, A; Myklebust, T; Ould-Saada, F; Bold, T; Kaczmarska, A; Malecki, P; Szymocha, T; Turala, M; Kulchitskii, Yu A; Khoreauli, G; Gromova, N; Tsulaia, V; Minaenko, A A; Rudenko, R; Slabospitskaya, E; Solodkov, A; Gavrilenko, I; Nikitine, N; Sivoklokov, S Yu; Toms, K; Zalite, A; Zalite, Yu; Kervesan, B; Bosman, M; González, S; Sánchez, J; Salt, J; Andersson, N; Nixon, L; Eerola, Paule Anna Mari; Kónya, B; Smirnova, O G; Sandgren, A; Ekelöf, T J C; Ellert, M; Gollub, N; Hellman, S; Lipniacka, A; Corso-Radu, A; Pérez-Réale, V; Lee, S C; CLin, S C; Ren, Z L; Teng, P K; Faulkner, P J W; O'Neale, S W; Watson, A; Brochu, F; Lester, C; Thompson, S; Kennedy, J; Bouhova-Thacker, E; Henderson, R; Jones, R; Kartvelishvili, V G; Smizanska, M; Washbrook, A J; Drohan, J; Konstantinidis, N P; Moyse, E; Salih, S; Loken, J; Baines, J T M; Candlin, D; Candlin, R; Clifft, R; Li, W; McCubbin, N A; George, S; Lowe, A; Buttar, C; Dawson, I; Moraes, A; Tovey, Daniel R; Gieraltowski, J; Malon, D; May, E; LeCompte, T J; Vaniachine, A; Adams, D L; Assamagan, Ketevi A; Baker, R; Deng, W; Fine, V; Fisyak, Yu; Gibbard, B; Ma, H; Nevski, P; Paige, F; Rajagopalan, S; Smith, J; Undrus, A; Wenaus, T; Yu, D; Calafiura, P; Canon, S; Costanzo, D; Hinchliffe, Ian; Lavrijsen, W; Leggett, C; Marino, M; Quarrie, D R; Sakrejda, I; Stravopoulos, G; Tull, C; Loch, P; Youssef, S; Shank, J T; Engh, D; Frank, E; Sen-Gupta, A; Gardner, R; Meritt, F; Smirnov, Y; Huth, J; Grundhoefer, L; Luehring, F C; Goldfarb, S; Severini, H; Skubic, P L; Gao, Y; Ryan, T; De, K; Sosebee, M; McGuigan, P; Ozturk, N

    2004-01-01

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002 whose goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made for the final offline computing environment. A major feature of the first Data Challenge (DC1) was the preparation and the deployment of the software required for the production of large event samples as a worldwide distributed activity. It should be noted that it was not an option to "run the complete production at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organising and carrying out this large-scale production at a significant number of sites around the world had therefore to be faced. However, the benefits of this are manifold: apart from realising the require...

  14. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, does not need manual intervention, and can be widely used in various computer vision systems.
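
    A minimal OpenCV calibration loop of the kind the paper describes, assuming a folder of checkerboard images; the board size, file pattern, and corner-refinement criteria are assumptions, and this is not the authors' VS2008 implementation.

```python
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners of the checkerboard (assumption)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

obj_points, img_points, shape = [], [], None
for path in glob.glob("calib_images/*.jpg"):     # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)
        shape = gray.shape[::-1]

# Intrinsic matrix, distortion coefficients, and per-view extrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, shape, None, None)
print("RMS reprojection error:", rms)
print("camera matrix:\n", K)
```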

  15. Computation of asteroid proper elements on the Grid

    Directory of Open Access Journals (Sweden)

    Novaković B.

    2009-01-01

    Full Text Available A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.

  17. A Compact Unconditionally Stable Method for Time-Domain Maxwell's Equations

    Directory of Open Access Journals (Sweden)

    Zhuo Su

    2013-01-01

    Full Text Available Higher order unconditionally stable methods are effective ways of simulating the field behavior of electromagnetic problems since they are free of the Courant-Friedrichs-Lewy (CFL) condition. The development of accurate schemes with less computational expenditure is desirable. A compact fourth-order split-step unconditionally-stable finite-difference time-domain method (C4OSS-FDTD) is proposed in this paper. This method is based on a four-step splitting form in time which is constructed by symmetric operator and uniform splitting. The introduction of a spatial compact operator can further improve its performance. Analyses of stability and numerical dispersion are carried out. Compared with its noncompact counterpart, the proposed method has reduced computational expenditure while keeping the same level of accuracy. Comparisons with other compact unconditionally-stable methods are provided. Numerical dispersion and anisotropy errors are shown to be lower than those of previous compact unconditionally-stable methods.

  18. Effects of walking speed on the step-by-step control of step width.

    Science.gov (United States)

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

    Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as R² magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.
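
    The stabilization measure described above (the proportion of step-width variance predicted from the mechanical state of the pelvis) can be sketched with an ordinary least-squares fit; the choice of predictors (mediolateral pelvis position and velocity) and the synthetic data are illustrative assumptions.

```python
import numpy as np

def step_width_r2(pelvis_pos, pelvis_vel, step_width):
    """R^2 of a multiple linear regression predicting step width from the
    mediolateral pelvis position and velocity at one instant of the step."""
    X = np.column_stack([np.ones_like(pelvis_pos), pelvis_pos, pelvis_vel])
    beta, *_ = np.linalg.lstsq(X, step_width, rcond=None)
    resid = step_width - X @ beta
    ss_res = resid @ resid
    ss_tot = np.sum((step_width - step_width.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: step width partly determined by pelvis state plus noise
rng = np.random.default_rng(3)
pos, vel = rng.standard_normal(300), rng.standard_normal(300)
width = 0.12 + 0.03 * pos + 0.02 * vel + 0.01 * rng.standard_normal(300)
print("R^2 =", round(step_width_r2(pos, vel, width), 3))
```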

  19. Comparison of vortex-element and finite-volume simulations of low Reynolds number flow over a confined backward-facing step

    International Nuclear Information System (INIS)

    Barber, R.W.; Fonty, A.

    2003-01-01

    This paper describes a novel vortex element method for simulating incompressible laminar flow over a two-dimensional backward-facing step. The model employs an operator-splitting technique to compute the evolution of the vorticity field downstream of abrupt changes in flow geometry. During the advective stage of the computation, a semi-Lagrangian scheme is used to update the positions of the vortex elements, whilst an analytical diffusion algorithm employing Oseen vortices is implemented during the diffusive time step. Redistributing the vorticity analytically instead of using the more traditional random-walk method enables the numerical model to simulate steady flows directly and avoids the need to filter the results to remove the oscillations created by the random-walk procedure. Model validation has been achieved by comparing the length of the recirculating eddy behind a confined backward-facing step against data from experimental and alternative numerical investigations. In addition, results from the vortex element method are compared against predictions obtained using the commercial finite-volume computational fluid dynamics code, CFD-ACE+. The results show that the vortex element scheme marginally overpredicts the length of the downstream recirculating eddy, implying that the method may be associated with an artificial reduction in the vorticity diffusion rate. Nevertheless the results demonstrate that the proposed vortex redistribution scheme provides a practical alternative to traditional random-walk discrete vortex algorithms. (author)

  20. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the