An explicit multi-time-stepping algorithm for aerodynamic flows
Niemann-Tuitman, B.E.; Veldman, A.E.P.
1997-01-01
An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at so-called synchronization levels. The algorithm is validated for
A local time stepping algorithm for GPU-accelerated 2D shallow water models
Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo
2018-01-01
In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît
2016-04-12
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
The G2 erosion model: An algorithm for month-time step assessments.
Karydas, Christos G; Panagos, Panos
2018-02-01
A detailed description of the G2 erosion model is presented, in order to support potential users. G2 is a complete, quantitative algorithm for mapping soil loss and sediment yield rates on month-time intervals. G2 has been designed to run in a GIS environment, taking input from geodatabases available from European or other international institutions. G2 adopts fundamental equations from the Revised Universal Soil Loss Equation (RUSLE) and the Erosion Potential Method (EPM), especially for rainfall erosivity, soil erodibility, and sediment delivery ratio. However, it has developed its own equations and matrices for the vegetation cover and management factor and the effect of landscape alterations on erosion. Provision of month-time step assessments is expected to improve understanding of erosion processes, especially in relation to land uses and climate change. In parallel, G2 has full potential to support decision-making with standardised maps on a regular basis. Geospatial layers of rainfall erosivity, soil erodibility, and terrain influence, recently developed by the Joint Research Centre (JRC) on a European or global scale, will further facilitate applications of G2.
Symplectic integrators with adaptive time steps
Richardson, A. S.; Finn, J. M.
2012-01-01
In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
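The second category's time transformation dt = Δ(q, p) dτ can be illustrated on the harmonic oscillator. The sketch below integrates the transformed (no longer Hamiltonian) equations with a plain explicit midpoint rule, so it is deliberately naive and does not reproduce the paper's structure-preserving methods; the step function Δ is an arbitrary illustrative choice:

```python
def reparametrized_oscillator(q, p, dtau, n_steps, eps=0.05):
    """Integrate dq/dt = p, dp/dt = -q after the time transformation
    dt = Delta(q, p) dtau. The transformed system dq/dtau = Delta*p,
    dp/dtau = -Delta*q is advanced with an explicit midpoint rule, and
    physical time t is accumulated alongside the state."""
    def delta(q_, p_):
        # illustrative step function: smaller physical steps when |p| is large
        return 1.0 / (1.0 + eps * p_ * p_)

    t = 0.0
    for _ in range(n_steps):
        d0 = delta(q, p)
        # explicit midpoint (RK2) on the tau-parametrized equations
        qm = q + 0.5 * dtau * d0 * p
        pm = p - 0.5 * dtau * d0 * q
        dm = delta(qm, pm)
        q, p, t = q + dtau * dm * pm, p - dtau * dm * qm, t + dtau * dm
    return q, p, t
```

Over moderate integration times the energy error stays at the O(dtau^2) level of the midpoint rule, but nothing here guarantees the long-time behavior that the paper's extended phase space and generating-function constructions are designed to preserve.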
High-resolution seismic wave propagation using local time stepping
Peter, Daniel
2017-03-13
High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture, e.g., steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
Multi-time-step domain coupling method with energy control
DEFF Research Database (Denmark)
Mahjoubi, N.; Krenk, Steen
2010-01-01
A multi-time-step integration method is proposed for solving structural dynamics problems on multiple domains. The method generalizes earlier state-space integration algorithms by introducing displacement constraints via Lagrange multipliers, representing the time-integrated constraint forces over...
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi
2017-08-01
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this first part of a two-part series, the properties of the AMP scheme are motivated and evaluated through the development and analysis of some model problems. The analysis shows when and why the traditional partitioned scheme becomes unstable due to either added-mass or added-damping effects. The analysis also identifies the proper form of the added-damping which depends on the discrete time-step and the grid-spacing normal to the rigid body. The results of the analysis are confirmed with numerical simulations that also demonstrate a second-order accurate implementation of the AMP scheme.
Newmark local time stepping on high-performance computing architectures
Rietmann, Max
2016-11-25
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
Nonlinear stability and time step selection for the MPM method
Berzins, Martin
2018-01-01
The Material Point Method (MPM) has been developed from the Particle in Cell (PIC) method over the last 25 years and has proved its worth in solving many challenging problems involving large deformations. Nevertheless there are many open questions regarding the theoretical properties of MPM. For example, while Fourier methods as applied to PIC may provide useful insight, the non-linear nature of MPM makes it necessary to use a full non-linear stability analysis to determine a stable time step. In order to begin to address this, the stability analysis of Spigler and Vianello is adapted to MPM and used to derive a stable time step bound for a model problem. This bound is contrasted against traditional speed-of-sound and CFL bounds and shown to be a realistic stability bound for a model problem.
A semi-Lagrangian method for DNS with large time-stepping
Xiu, Dongbin; Karniadakis, George
2000-11-01
An efficient time-step discretization based on semi-Lagrangian methods, often used in meteorology, is proposed for direct numerical simulations. It is unconditionally stable and retains high-order accuracy comparable to Eulerian schemes. The structure of the total error is analyzed in detail and shown to vary non-monotonically with the size of the time step. Numerical experiments for a variety of flows show that stable and accurate results are obtained with time steps more than fifty times the CFL-bound time step used in current semi-implicit DNS.
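A minimal one-dimensional sketch of the semi-Lagrangian idea — trace each grid point back along the characteristic and interpolate at the departure point — which is what permits time steps beyond the Eulerian CFL bound. Constant advection speed and linear interpolation are simplifying assumptions here, not the authors' high-order scheme:

```python
import math

def semi_lagrangian_step(u, a, dt, dx):
    """One semi-Lagrangian step for u_t + a*u_x = 0 on a periodic grid:
    trace each node back to its departure point x - a*dt and interpolate
    linearly there. Remains stable even when the CFL number a*dt/dx > 1."""
    n = len(u)
    shift = a * dt / dx                   # departure distance in cells
    out = [0.0] * n
    for i in range(n):
        x_dep = i - shift                 # departure point (grid units)
        j = math.floor(x_dep)
        w = x_dep - j                     # linear interpolation weight
        out[i] = (1.0 - w) * u[j % n] + w * u[(j + 1) % n]
    return out
```

At a CFL number of 2, where an explicit Eulerian scheme would be unstable, the step below simply shifts the profile two cells, exactly as the characteristics dictate.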
A stable partitioned FSI algorithm for incompressible flow and deforming beams
International Nuclear Information System (INIS)
Li, L.; Henshaw, W.D.; Banks, J.W.; Schwendeman, D.W.; Main, A.
2016-01-01
An added-mass partitioned (AMP) algorithm is described for solving fluid–structure interaction (FSI) problems coupling incompressible flows with thin elastic structures undergoing finite deformations. The new AMP scheme is fully second-order accurate and stable, without sub-time-step iterations, even for very light structures when added-mass effects are strong. The fluid, governed by the incompressible Navier–Stokes equations, is solved in velocity-pressure form using a fractional-step method; large deformations are treated with a mixed Eulerian-Lagrangian approach on deforming composite grids. The motion of the thin structure is governed by a generalized Euler–Bernoulli beam model, and these equations are solved in a Lagrangian frame using two approaches, one based on finite differences and the other on finite elements. The key AMP interface condition is a generalized Robin (mixed) condition on the fluid pressure. This condition, which is derived at a continuous level, has no adjustable parameters and is applied at the discrete level to couple the partitioned domain solvers. Special treatment of the AMP condition is required to couple the finite-element beam solver with the finite-difference-based fluid solver, and two coupling approaches are described. A normal-mode stability analysis is performed for a linearized model problem involving a beam separating two fluid domains, and it is shown that the AMP scheme is stable independent of the ratio of the mass of the fluid to that of the structure. A traditional partitioned (TP) scheme using a Dirichlet–Neumann coupling for the same model problem is shown to be unconditionally unstable if the added mass of the fluid is too large. A series of benchmark problems of increasing complexity are considered to illustrate the behavior of the AMP algorithm, and to compare the behavior with that of the TP scheme. The results of all these benchmark problems verify the stability and accuracy of the AMP scheme. Results for
A note on extending decision algorithms by stable predicates
Directory of Open Access Journals (Sweden)
Alfredo Ferro
1988-11-01
A general mechanism to extend decision algorithms to deal with additional predicates is described. The only condition imposed on the predicates is stability with respect to some transitive relations.
Time-step coupling for hybrid simulations of multiscale flows
Lockerby, Duncan A.; Duque-Daza, Carlos A.; Borg, Matthew K.; Reese, Jason M.
2013-03-01
A new method is presented for the exploitation of time-scale separation in hybrid continuum-molecular models of multiscale flows. Our method is a generalisation of existing approaches, and is evaluated in terms of computational efficiency and physical/numerical error. Comparison with existing schemes demonstrates comparable, or much improved, physical accuracy, at comparable, or far greater, efficiency (in terms of the number of time-step operations required to cover the same physical time). A leapfrog coupling is proposed between the 'macro' and 'micro' components of the hybrid model and demonstrates potential for improved numerical accuracy over a standard simultaneous approach. A general algorithm for a coupled time step is presented. Three test cases are considered where the degree of time-scale separation naturally varies during the course of the simulation. First, the step response of a second-order system composed of two linearly-coupled ODEs. Second, a micro-jet actuator combining a kinetic treatment in a small flow region where rarefaction is important with a simple ODE enforcing mass conservation in a much larger spatial region. Finally, the transient start-up flow of a journal bearing with a cylindrical rarefied gas layer. Our new time-stepping method consistently demonstrates as good as or better performance than existing schemes. This superior overall performance is due to an adaptability inherent in the method, which allows the most-desirable aspects of existing schemes to be applied only in the appropriate conditions.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
Särkimäki, K.; Hirvijoki, E.; Terävä, J.
2018-01-01
We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, the Dreicer generation of runaway electrons, and the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
Aggressive time step selection for the time asymptotic velocity diffusion problem
International Nuclear Information System (INIS)
Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.
1984-12-01
An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More important, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.
Guermond, J.-L.
2011-01-01
In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established.
Bezus, Evgeni A; Doskolovich, Leonid L
2012-11-01
In the present work, a stable algorithm for the calculation of the electromagnetic field distributions of the eigenmodes of one-dimensional diffraction gratings is presented. The proposed approach is based on the method for the computation of the propagation constants of Bloch waves of such structures previously presented by Cao et al.[J. Opt. Soc. Am. A 19, 335 (2002)] and uses a modified S-matrix algorithm to ensure numerical stability.
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy.
Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2008-01-07
Models involving stochastic differential equations (SDEs) play a prominent role in a wide range of applications where systems are not at the thermodynamic limit, for example, biological population dynamics. Therefore there is a need for numerical schemes that are capable of accurately and efficiently integrating systems of SDEs. In this work we introduce a variable size step algorithm and apply it to systems of stiff SDEs with multiple multiplicative noise. The algorithm is validated using a subclass of SDEs called chemical Langevin equations that appear in the description of dilute chemical kinetics models, with important applications mainly in biology. Three representative examples are used to test and report on the behavior of the proposed scheme. We demonstrate the advantages and disadvantages over fixed time step integration schemes of the proposed method, showing that the adaptive time step method is considerably more stable than fixed step methods with no excessive additional computational overhead.
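A minimal illustration of variable-step integration for an SDE: Euler–Maruyama with the step size shrunk where the drift is stiff. The adaptivity rule here is a generic one chosen for brevity, not the authors' scheme for chemical Langevin equations:

```python
import math
import random

def euler_maruyama_adaptive(drift, diffusion, y, t_end, eps=0.01, dt_max=0.1):
    """Euler-Maruyama integration of dy = drift(y) dt + diffusion(y) dW
    with a simple variable step: dt shrinks when |drift(y)| is large
    (stiff regions) and relaxes toward dt_max otherwise."""
    t = 0.0
    while t < t_end - 1e-12:
        a = drift(y)
        # keep the deterministic increment |a|*dt below eps
        dt = min(dt_max, t_end - t, eps / (abs(a) + 1e-12))
        dW = random.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        y = y + a * dt + diffusion(y) * dW
        t += dt
    return y
```

With the diffusion set to zero the scheme reduces to a variable-step Euler method, which gives a quick sanity check of the deterministic limit; with noise switched on, accuracy claims would require the kind of careful path-wise refinement the abstract alludes to.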
Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems
Majumdar, Alok K.; Ravindran, S. S.
2017-01-01
Fluid and thermal transients found in rocket propulsion systems, such as propellant feedline systems, are complex processes involving fast phases followed by slow phases. Therefore, their time-accurate computation requires a short time step initially, followed by much larger time steps. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback control based adaptive time stepping algorithm, and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chill down in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
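The feedback idea — monitor an error or change indicator and let it steer the next time step — is commonly realized as a proportional-integral (PI) controller on the step size. The sketch below is a generic controller of that family with illustrative gains and limits, not the specific algorithm of the paper:

```python
def pi_step_controller(err, dt, tol, kp=0.3, ki=0.4, err_prev=None):
    """Proportional-integral feedback on the time step: scale dt by
    powers of the tolerance-normalized error at the current and previous
    step, limiting the change to the range [0.2, 5.0] per step."""
    if err_prev is None:
        err_prev = err          # first step: no history, pure I-action
    fac = (tol / err) ** ki * (err_prev / err) ** kp
    fac = max(0.2, min(5.0, fac))
    return dt * fac
```

In a transient network simulation, err would be the monitored change in key variables over the last step: a large change shrinks dt (resolving a water hammer front), a small one lets dt grow again (slow chill-down phases).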
Energy Technology Data Exchange (ETDEWEB)
Silveira, L.M.; Kamon, M.; Elfadel, I.; White, J. [Massachusetts Inst. of Technology, Cambridge, MA (United States)
1996-12-31
Model order reduction based on Krylov subspace iterative methods has recently emerged as a major tool for compressing the number of states in linear models used for simulating very large physical systems (VLSI circuits, electromagnetic interactions). There are currently two main methods for accomplishing such a compression: one is based on the nonsymmetric look-ahead Lanczos algorithm, which gives a numerically stable procedure for finding Padé approximations, while the other is based on a less well characterized Arnoldi algorithm. In this paper, we show that for certain classes of generalized state-space systems, the reduced-order models produced by a coordinate-transformed Arnoldi algorithm inherit the stability of the original system. Complete proofs of our results will be given in the final paper.
Diffeomorphic image registration with automatic time-step adjustment
DEFF Research Database (Denmark)
Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst
2015-01-01
In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler's step required to r...... accuracy as a fixed time-step scheme however at a much less computational cost....
Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping
Bonito, Andrea
2014-10-31
In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.
Novel stable structure of Li3PS4 predicted by evolutionary algorithm under high-pressure
Directory of Open Access Journals (Sweden)
S. Iikubo
2018-01-01
By combining theoretical predictions and in-situ X-ray diffraction under high pressure, we found a novel stable crystal structure of Li3PS4 under high pressures. At ambient pressure, Li3PS4 shows successive structural transitions from γ-type to β-type and from β-type to α-type with increasing temperature, as is well established. In this study, an evolutionary algorithm successfully predicted the γ-type crystal structure at ambient pressure and further predicted a possible stable δ-type crystal structure under high pressure. The stability of the obtained structures is examined in terms of both static and dynamic stability by first-principles calculations. In situ X-ray diffraction using synchrotron radiation revealed that the high-pressure phase is the predicted δ-Li3PS4 phase.
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
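Hierarchical (block) time stepping typically assigns each particle a power-of-two subdivision of the maximum step, so whole groups of particles stay synchronized and realign at coarser levels. A minimal sketch of the level assignment (illustrative only; GOTHIC's actual scheduling and GPU optimizations are far more involved):

```python
def block_time_steps(dt_required, dt_max):
    """Assign each particle the hierarchical time step level k such that
    dt_max / 2**k is the largest power-of-two subdivision of dt_max not
    exceeding that particle's required step. Particles on level k then
    advance with dt_max / 2**k and realign with coarser levels every
    2**k sub-steps."""
    levels = []
    for dt in dt_required:
        k = 0
        while dt_max / 2**k > dt:
            k += 1
        levels.append(k)
    return levels
```

Because only the particles on fine levels are integrated during sub-steps, force evaluations concentrate where the dynamics demand them, which is the source of the speedup the abstract reports.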
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi
2017-08-01
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. The numerical scheme is verified on a number of difficult benchmark problems.
Solving the dirac equation with nonlocal potential by imaginary time step method
International Nuclear Information System (INIS)
Zhang Ying; Liang Haozhao; Meng Jie
2009-01-01
The imaginary time step (ITS) method is applied to solve the Dirac equation with a nonlocal potential in coordinate space by the ITS evolution of the corresponding Schroedinger-like equation for the upper component. It is demonstrated that the ITS evolution can be equivalently performed for the Schroedinger-like equation with or without localization. The latter algorithm is recommended in applications for reasons of simplicity and efficiency. The feasibility and reliability of this algorithm are also illustrated by taking the nucleus ^16O as an example, where the same results as the shooting method for the Dirac equation with localized effective potentials are obtained.
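The essence of imaginary time evolution is that propagating with (1 - Δt·H) and renormalizing damps excited components fastest, so the state relaxes to the lowest eigenstate. A toy matrix sketch of that mechanism (a small symmetric matrix standing in for the discretized Schroedinger-like operator, not the paper's Dirac problem):

```python
import math

def imaginary_time_step(H, psi, dt, n_iter):
    """Imaginary time evolution psi <- (1 - dt*H) psi with
    renormalization after every step. Components along eigenvectors with
    larger eigenvalues decay faster, so psi converges to the lowest
    eigenvector of H (given as a list of rows of a real symmetric
    matrix)."""
    n = len(psi)
    for _ in range(n_iter):
        h_psi = [sum(H[i][j] * psi[j] for j in range(n)) for i in range(n)]
        psi = [p - dt * hp for p, hp in zip(psi, h_psi)]
        norm = math.sqrt(sum(p * p for p in psi))
        psi = [p / norm for p in psi]       # keep the state normalized
    return psi
```

For the 2x2 matrix [[2, -1], [-1, 2]], whose lowest eigenvector is (1, 1)/sqrt(2), the iteration converges to that eigenvector from almost any starting state, provided dt is small enough that (1 - dt*λ) stays positive for all eigenvalues λ.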
Faenza, Y.; Oriolo, G.; Stauffer, G.
2011-01-01
We propose an algorithm for solving the maximum weighted stable set problem on claw-free graphs that runs in O(n^3)-time, drastically improving the previous best known complexity bound. This algorithm is based on a novel decomposition theorem for claw-free graphs, which is also introduced in the present paper. Despite being weaker than the well-known structure result for claw-free graphs given by Chudnovsky and Seymour, our decomposition theorem is, on the other hand, algorithmic, i.e. it is ...
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-04-01
An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
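The neighbor constraint can be sketched as a relaxation: cap each patch's time step at a fixed multiple of its neighbors' steps and iterate to a fixed point, so no patch steps so far ahead that a wave from a small-step region could cross it unseen. This is a 1-D chain of patches with an illustrative ratio, not the authors' code:

```python
def constrain_patch_steps(dt_local, max_ratio=2.0):
    """Given locally constrained time steps for a 1-D chain of patches,
    cap each patch's step at max_ratio times its neighbors' steps and
    sweep until no cap fires, yielding a graded time step distribution."""
    dt = list(dt_local)
    changed = True
    while changed:
        changed = False
        for i in range(len(dt)):
            for j in (i - 1, i + 1):
                if 0 <= j < len(dt) and dt[i] > max_ratio * dt[j]:
                    dt[i] = max_ratio * dt[j]
                    changed = True
    return dt
```

With max_ratio = 2 the result is the familiar 2:1 cascade of step sizes away from the most refined patch, which is one simple way to keep the manifestly non-local CFL condition satisfied across patch boundaries.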
Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation
Directory of Open Access Journals (Sweden)
Xiaokun Wang
2017-01-01
Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, that is, surface sampling and sampling relaxation, which ensures a uniform distribution of particles in fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to previous methods. The experimental results demonstrate the effectiveness of our approach.
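The paper's own surface sampling and relaxation procedures are not reproduced here, but as a minimal illustration of seeding roughly uniform boundary particles on a simple rigid body (a sphere), a Fibonacci lattice is a common choice:

```python
import math

def fibonacci_sphere(n, radius=1.0):
    """Return n roughly uniformly distributed points on a sphere surface."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle
    pts = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n          # y in (-1, 1)
        r = math.sqrt(1.0 - y * y)             # radius of the latitude circle
        theta = golden * i
        pts.append((radius * r * math.cos(theta),
                    radius * y,
                    radius * r * math.sin(theta)))
    return pts
```

For non-spherical bodies one would instead sample the mesh surface and then relax the particles, as the paper describes.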
Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid
2017-10-01
Identification of water extents in Landsat images is challenging due to surfaces with similar reflectance to water extents. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions in Iran. Training of the algorithms was performed using 40 water pixels and 40 nonwater pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm and also the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed different performances in various regions. Among the AI methods, PSO had the best performance, with an average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations to accurately and stably extract water extents in Landsat imagery.
Zhu, Ying; Herbert, John M.
2018-01-01
The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.
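As a toy sketch of such a self-consistent midpoint propagation (not the authors' implementation; the density-dependent "Fock build" below is an invented model nonlinearity standing in for the TDKS effective Hamiltonian):

```python
import numpy as np

def u_prop(H, dt):
    """Unitary exp(-i H dt) for a Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def fock(P, H0, g):
    """Toy density-dependent Hamiltonian standing in for a Fock build."""
    return H0 + g * np.diag(np.diag(P).real)

def sc_midpoint_step(P, H0, g, dt, tol=1e-12, max_iter=50):
    """Self-consistent modified-midpoint step: iterate the midpoint
    density until the propagator built from it stops changing."""
    P_mid = P.copy()                      # predictor: midpoint density ~ P(t)
    for _ in range(max_iter):
        U = u_prop(fock(P_mid, H0, g), dt)
        P_new = U @ P @ U.conj().T        # propagate with the midpoint H
        P_mid_next = 0.5 * (P + P_new)    # corrector: refined midpoint density
        if np.linalg.norm(P_mid_next - P_mid) < tol:
            break
        P_mid = P_mid_next
    return P_new
```

Because each iterate is a unitary conjugation, the trace and idempotency of the density matrix are preserved regardless of how many corrector passes are needed.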
The importance of time-stepping errors in ocean models
Williams, P. D.
2011-12-01
Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
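The RA and RAW filters differ only in how the filter displacement is distributed between time levels. A minimal sketch for the oscillation equation dx/dt = i*omega*x follows, with illustrative parameter values (alpha = 1 recovers the classical RA filter; alpha of roughly 0.53 is a typical RAW choice):

```python
import numpy as np

def leapfrog_filtered(omega, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog for dx/dt = i*omega*x with the RA/RAW filter applied."""
    x_prev = 1.0 + 0.0j
    x_curr = np.exp(1j * omega * dt)      # exact value used for the first step
    for _ in range(nsteps):
        x_next = x_prev + 2j * omega * dt * x_curr        # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)   # filter displacement
        x_curr += alpha * d               # applied to the middle time level
        x_next += (alpha - 1.0) * d       # RAW correction to the newest level
        x_prev, x_curr = x_curr, x_next
    return x_curr
```

With alpha = 1 the newest level is untouched and the physical mode is visibly damped over many steps; with alpha near 0.5 the three-level mean is conserved and the amplitude stays close to one.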
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Directory of Open Access Journals (Sweden)
Van Dyk David A
2000-03-01
Full Text Available Abstract This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression case.
Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET
Directory of Open Access Journals (Sweden)
B. Ghahraman
2016-02-01
Full Text Available Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management issues. Actual ET depends on an estimate of a water stress index and the average soil water in the crop root zone, and so depends on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by using different numerical methods. Overall, many factors influence actual evapotranspiration: crop potential evapotranspiration, available root zone water content, time step, crop sensitivity, and soil. In this paper different numerical methods are compared for different soil textures and different crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content equals evapotranspiration, ET. In this approach, however, deep percolation is generally ignored due to the deep water table and negligible unsaturated hydraulic conductivity below rooting depth. This differential equation may be solved analytically or numerically using different algorithms. We adopted several numerical methods, namely the explicit, implicit, and modified Euler methods, the midpoint method, and the third-order Heun method, to approximate the differential equation. Three general soil types (sand, silt, and clay) and three crop types (sensitive, moderate, and resistant) under the Nishaboor plain were used. The standard soil fraction depletion (corresponding to ETc = 5 mm d-1), pstd, below which the crop faces water stress, is adopted for crop sensitivity. Three values of pstd were considered in this study to cover the common crops in the area, including winter wheat and barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistant crops with pstd=0
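A minimal sketch of such a drydown comparison (not the paper's exact formulation): root-zone water S is depleted by actual ET = Ks(S)*ETc, with the stress factor Ks linear below the depletion threshold, comparing explicit Euler against the second-order Heun method:

```python
def ks(S, S_max, p):
    """Water stress factor: 1 above the depletion threshold, linear below."""
    thresh = (1.0 - p) * S_max
    return 1.0 if S >= thresh else S / thresh

def drydown(S0, S_max, p, etc, dt, days, method="euler"):
    """Integrate dS/dt = -Ks(S)*ETc over a rain-free period."""
    S = S0
    for _ in range(int(round(days / dt))):
        f1 = -ks(S, S_max, p) * etc
        if method == "euler":
            S += dt * f1
        else:                                   # Heun: two-stage, 2nd order
            f2 = -ks(S + dt * f1, S_max, p) * etc
            S += 0.5 * dt * (f1 + f2)
        S = max(S, 0.0)
    return S
```

With a daily time step, Heun tracks a fine-step reference much more closely than explicit Euler during the stressed (exponential-decay) phase of the drydown.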
Avoid the tsunami of the Dirac sea in the imaginary time step method
International Nuclear Information System (INIS)
Zhang, Ying; Liang, Haozhao; Meng, Jie
2010-01-01
The discrete single-particle spectra in both the Fermi and Dirac seas have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation after avoiding the "tsunami" of the Dirac sea, i.e. the diving of single-particle levels into the Dirac sea that occurs in the direct application of the ITS method to the Dirac equation. It is found that, by the transformation from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from positive to negative infinity, can be obtained separately by the ITS evolution in either the Fermi sea or the Dirac sea. Results identical to those of the conventional shooting method have been obtained via the ITS evolution for the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels doubts about the ITS method in relativistic systems. (author)
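A minimal one-dimensional illustration of the ITS idea (using a harmonic oscillator rather than the nuclear problem of the paper): repeatedly step the wave function in imaginary time and renormalize, so that it relaxes to the ground state, whose energy is 1/2 in natural units.

```python
import numpy as np

def its_ground_state(n=401, L=20.0, dtau=0.001, steps=5000):
    """Imaginary-time evolution for H = -0.5 d^2/dx^2 + 0.5 x^2."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    V = 0.5 * x**2                         # harmonic oscillator potential
    psi = np.exp(-x**2)                    # arbitrary initial guess
    psi /= np.sqrt(np.sum(psi**2) * dx)

    def H(p):
        lap = np.zeros_like(p)
        lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
        return -0.5 * lap + V * p

    for _ in range(steps):
        psi -= dtau * H(psi)               # one explicit imaginary-time step
        psi /= np.sqrt(np.sum(psi**2) * dx)  # renormalise
    return np.sum(psi * H(psi)) * dx       # energy expectation value
```

Note that dtau must respect the explicit stability limit set by the grid spacing, which is the same kind of restriction the super-time-stepping and local-time-stepping papers in this collection aim to relax.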
Kou, Jisheng
2017-09-30
Capillary pressure can significantly affect the phase properties and flow of liquid-gas fluids in porous media, and thus the phase equilibrium calculation incorporating capillary pressure is crucial to simulate such problems accurately. Recently, the phase equilibrium calculation at specified moles, volume and temperature (NVT-flash) has become an attractive issue. In this paper, capillarity is incorporated into the phase equilibrium calculation at specified moles, volume and temperature. A dynamical model for such a problem is developed for the first time by using the laws of thermodynamics and Onsager's reciprocal principle. This model consists of the evolutionary equations for moles and volume, and it can characterize the evolutionary process from a non-equilibrium state to an equilibrium state in the presence of the capillarity effect at specified moles, volume and temperature. The phase equilibrium equations are naturally derived. To simulate the proposed dynamical model efficiently, we adopt a convex-concave splitting of the total Helmholtz energy and propose a thermodynamically stable numerical algorithm, which is proved to preserve the second law of thermodynamics at the discrete level. Using thermodynamical relations, we derive a phase stability condition with the capillarity effect at specified moles, volume and temperature. Moreover, we propose a stable numerical algorithm for the phase stability testing, which can provide feasible initial conditions. The performance of the proposed methods in predicting phase properties under the capillarity effect is demonstrated on various cases of pure substance and mixture systems.
Energy Technology Data Exchange (ETDEWEB)
Wannert, Martin
2010-07-01
For optimum fuel cell operation, the power distribution must be as homogeneous as possible. It is therefore important for research, diagnosis and maintenance to be able to measure the power distribution inside a fuel cell. The book presents a non-invasive measuring method that reconstructs the internal power distribution from the magnetic field outside the fuel cell. Reconstruction algorithms are developed and tested for stability. The algorithms comprise a certain prior knowledge of the real power distribution; on the other hand, the magnetic field is split up numerically into different part-fields in order to reduce the complexity of the problem.
A stable and optimally convergent LaTIn-CutFEM algorithm for multiple unilateral contact problems
Claus, Susanne; Kerfriden, Pierre
2018-02-01
In this paper, we propose a novel unfitted finite element method for the simulation of multiple body contact. The computational mesh is generated independently of the geometry of the interacting solids, which can be arbitrarily complex. The key novelty of the approach is the combination of elements of the CutFEM technology, namely the enrichment of the solution field via the definition of overlapping fictitious domains with a dedicated penalty-type regularisation of discrete operators, and the LaTIn hybrid-mixed formulation of complex interface conditions. Furthermore, the novel P1-P1 discretisation scheme that we propose for the unfitted LaTIn solver is shown to be stable, robust and optimally convergent with mesh refinement. Finally, the paper introduces a high-performance 3D level-set/CutFEM framework for the versatile and robust solution of contact problems involving multiple bodies of complex geometries, with more than two bodies interacting at a single point.
International Nuclear Information System (INIS)
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-01
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes "s" explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
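The RKL1 recursion can be written down compactly. The sketch below follows the Legendre-polynomial recursion described above, with stage coefficients as commonly stated for RKL1 (a reconstruction for illustration; verify against the paper before production use):

```python
import numpy as np

def rkl1_step(u, L, dt, s):
    """One RKL1 super-step of size dt using s internal stages.
    Stable for dt up to (s**2 + s)/2 times the explicit Euler limit of L."""
    w1 = 2.0 / (s * s + s)
    y_prev = u
    y = u + w1 * dt * L(u)                      # stages Y_0 and Y_1
    for j in range(2, s + 1):                   # Legendre-type three-term recursion
        mu = (2.0 * j - 1.0) / j
        nu = (1.0 - j) / j
        y, y_prev = mu * y + nu * y_prev + mu * w1 * dt * L(y), y
    return y
```

For s = 1 this degenerates to forward Euler; for s = 5 one super-step can safely cover about fifteen explicit diffusion steps, and because each stage is a convex combination (mu + nu = 1 plus an operator term), the scheme conserves the discrete integral of u for a conservative L.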
Hoepfer, Matthias
co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto
2015-12-01
In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to
Indian Academy of Sciences (India)
have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.
On an efficient multiple time step Monte Carlo simulation of the SABR model
A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
textabstractIn this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.
On an efficient multiple time step Monte Carlo simulation of the SABR model
Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.
2017-01-01
In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
Khan, M O; Valen-Sendstad, K; Steinman, D A
2015-07-01
Recent high-resolution computational fluid dynamics studies have uncovered the presence of laminar flow instabilities and possible transitional or turbulent flow in some intracranial aneurysms. The purpose of this study was to elucidate requirements for computational fluid dynamics to detect these complex flows, and, in particular, to discriminate the impact of solver numerics versus mesh and time-step resolution. We focused on 3 MCA aneurysms, exemplifying highly unstable, mildly unstable, or stable flow phenotypes, respectively. For each, the number of mesh elements was varied by 320× and the number of time-steps by 25×. Computational fluid dynamics simulations were performed by using an optimized second-order, minimally dissipative solver, and a more typical first-order, stabilized solver. With the optimized solver and settings, qualitative differences in flow and wall shear stress patterns were negligible for models down to ∼800,000 tetrahedra and ∼5000 time-steps per cardiac cycle and could be solved within clinically acceptable timeframes. At the same model resolutions, however, the stabilized solver had poorer accuracy and completely suppressed flow instabilities for the 2 unstable flow cases. These findings were verified by using the popular commercial computational fluid dynamics solver, Fluent. Solver numerics must be considered at least as important as mesh and time-step resolution in determining the quality of aneurysm computational fluid dynamics simulations. Proper computational fluid dynamics verification studies, and not just superficial grid refinements, are therefore required to avoid overlooking potentially clinically and biologically relevant flow features. © 2015 by American Journal of Neuroradiology.
Indian Academy of Sciences (India)
algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language ... [Figure 2: symbols used in the flowchart language to represent Assignment (x := sin(theta)), Read (Read A, B, C), and Print (Print x, y, z).]
Indian Academy of Sciences (India)
In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...
National Aeronautics and Space Administration — The code in the stableGP package implements Gaussian process calculations using efficient and numerically stable algorithms. Description of the algorithms is in the...
Leliaert, J.; Mulkers, J.; De Clercq, J.; Coene, A.; Dvornik, M.; Van Waeyenberge, B.
2017-12-01
Thermal fluctuations play an increasingly important role in micromagnetic research relevant for various biomedical and other technological applications. Until now, it was deemed necessary to use a time stepping algorithm with a fixed time step in order to perform micromagnetic simulations at nonzero temperatures. However, Berkov and Gorn have shown in [D. Berkov and N. Gorn, J. Phys.: Condens. Matter, 14, L281, 2002] that the drift term which generally appears when solving stochastic differential equations can only influence the length of the magnetization. This quantity is however fixed in the case of the stochastic Landau-Lifshitz-Gilbert equation. In this paper, we exploit this fact to straightforwardly extend existing high-order solvers with an adaptive time stepping algorithm. We implemented the presented methods in the freely available GPU-accelerated micromagnetic software package MuMax3 and used it to extensively validate the presented methods. Next to the advantage of having control over the error tolerance, we report a twentyfold speedup without loss of accuracy when using the presented methods as compared to the hitherto best practice of using Heun's solver with a small fixed time step.
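A generic sketch of the adaptive mechanism (not MuMax3's actual implementation): take Heun's method and use the difference between the first-order Euler predictor and the second-order corrector as a local error estimate that grows or shrinks the step. The controller constants here are illustrative.

```python
import numpy as np

def adaptive_heun(f, y0, t_end, dt0=1e-3, tol=1e-6):
    """Integrate y' = f(t, y) with Heun's method and adaptive step size."""
    t, y, dt = 0.0, np.asarray(y0, float), dt0
    while t < t_end:
        dt = min(dt, t_end - t)                 # do not overshoot the end time
        k1 = f(t, y)
        y_euler = y + dt * k1                   # 1st-order predictor
        k2 = f(t + dt, y_euler)
        y_heun = y + 0.5 * dt * (k1 + k2)       # 2nd-order corrector
        err = np.max(np.abs(y_heun - y_euler))  # local error estimate
        if err <= tol:
            t, y = t + dt, y_heun               # accept the step
        # grow/shrink dt either way, with a safety factor and clamps
        dt *= 0.9 * min(2.0, max(0.1, np.sqrt(tol / (err + 1e-30))))
    return y
```

In the stochastic LLG setting, the same accept/reject logic is applied on top of a stochastic solver, with the magnetization renormalized after each step; that part is omitted here.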
Indian Academy of Sciences (India)
In the program shown in Figure 1, we have repeated the algorithm M times and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...
Indian Academy of Sciences (India)
algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...
Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs
Hadjimichael, Yiannis
2017-09-30
A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
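As a concrete example of an SSP time integrator (the classical third-order Shu-Osher method, not one of the new perturbed methods introduced in the thesis), the scheme is a convex combination of forward Euler steps, so any bound preserved by forward Euler, such as total variation, is preserved by the full step:

```python
import numpy as np

def ssprk3_step(u, L, dt):
    """Shu-Osher third-order SSP Runge-Kutta: a convex combination of
    forward Euler stages, hence strong stability preserving."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))
```

Paired with a first-order upwind discretization of linear advection at CFL below one, the total variation of a discontinuous profile does not increase over a step.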
Time-step selection considerations in the analysis of reactor transients with DIF3D-K
International Nuclear Information System (INIS)
Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.
1993-01-01
The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic, and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. In this paper, we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time-step selection algorithms and their influence on the accuracy and efficiency of the various solution options
Time-step selection considerations in the analysis of reactor transients with DIF3D-K
International Nuclear Information System (INIS)
Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.
1993-01-01
The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. Here we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time step selection algorithms and their influence on the accuracy and efficiency of the various solution options
Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K.
2018-01-01
Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information affect OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance. We further use animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.
Liu, Haifeng; Gong, Xianghui; Jing, Xiaohui; Ding, Xili; Yao, Yuan; Huang, Yan; Fan, Yubo
2017-11-01
Endothelial cells (ECs) are sensitive to changes in shear stress. The application of shear stress to ECs has been well documented to improve cell retention when placed into a haemodynamically active environment. However, the relationship between the time-step and amplification of shear stress on EC functions remains elusive. In the present study, human umbilical cord vein endothelial cells (HUVECs) were seeded on silk fibroin nanofibrous scaffolds and were preconditioned by shear stress at different time-steps and amplifications. It is shown that gradually increasing shear stress with appropriate time-steps and amplification could improve EC retention, yielding a complete endothelial-like monolayer both in vitro and in vivo. The mechanism of this improvement is mediated, at least in part, by an upregulation of integrin β1 and focal adhesion kinase (FAK) expression, which contributed to fibronectin (FN) assembly enhancement in ECs in response to the shear stress. A modest gradual increase in shear stress was essential to allow additional time for ECs to gradually acclimatize to the changing environment, with the goal of withstanding physiological levels of shear stress. This study recognized that the time-steps and amplifications of shear stress could regulate EC tolerance to shear stress and the anti-thrombogenicity function of engineered vascular grafts via an extracellular cell matrix-specific, mechanosensitive signalling pathway and might prevent thrombus formation in vivo. Copyright © 2016 John Wiley & Sons, Ltd.
Optimal order and time-step criterion for Aarseth-type N-body integrators
International Nuclear Information System (INIS)
Makino, Junichiro
1991-01-01
How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
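The standard Aarseth time-step criterion discussed above can be sketched as follows. This is a minimal illustration of the well-known formula built from the acceleration and its first three time derivatives; `eta` and the example derivative magnitudes are arbitrary choices for demonstration, not values from the paper.

```python
import numpy as np

def aarseth_time_step(a0, a1, a2, a3, eta=0.02):
    """Aarseth time-step criterion from the acceleration a0 and its first
    three time derivatives a1, a2, a3 (vectors). eta is a dimensionless
    accuracy parameter; 0.02 is a commonly used value."""
    norm = np.linalg.norm
    num = norm(a0) * norm(a2) + norm(a1) ** 2
    den = norm(a1) * norm(a3) + norm(a2) ** 2
    return np.sqrt(eta * num / den)

# Hypothetical derivative magnitudes, for illustration only:
dt = aarseth_time_step(np.array([1.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0]),
                       np.array([0.01, 0.0, 0.0]), np.array([0.001, 0.0, 0.0]))
```

In a Hermite scheme the higher derivatives come from interpolation of the force and its analytic first derivative, which is what allows the larger step quoted in the abstract.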
Directory of Open Access Journals (Sweden)
O.V. Kostyrka
2016-09-01
Full Text Available When organizing a covert communication channel, a number of requirements are imposed on the steganography algorithms used, chief among them: resistance to attacks against the embedded message, perceptual reliability of the formed steganography message, and significant throughput of the steganography communication channel. Aim: The aim of this research is to modify a steganography method developed by the author earlier so as to increase the throughput of the corresponding covert communication channel while preserving the method's resistance to attacks against the embedded message and the perceptual reliability of the created steganography message. Materials and Methods: Modifications are proposed of a steganography method, robust to attacks against the embedded message, that performs embedding and decoding of the transmitted (additional) information in the spatial domain of the image, allowing the throughput of the organized communication channel to be increased. Working in the spatial domain of the image avoids accumulation of additional computational error during embedding/decoding caused by "transitions" from the spatial domain of the image to a transform domain and back, which has a positive effect on decoding efficiency. The following are considered as attacks against the embedded message: imposing various kinds of noise on the steganography message, filtering, and lossy compression of the steganography message, where the JPEG and JPEG2000 formats with different quality coefficients are used for saving the steganography message. Results: It is shown that algorithmic implementations of the proposed modifications remain robust to perturbing influences, including considerable ones, ensure perceptual reliability of the created steganography message, and increase the throughput of the created steganography communication channel in comparison with the algorithm implementing
A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis
Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann
2017-04-01
The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by generating a set of synthetic, physically and spatially plausible flood events and subsequently estimating the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single-object basis (residential buildings) for certain return periods. For these impact analyses, official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for
Botts, Jonathan; Savioja, Lauri
2015-04-01
For time-domain modeling based on the acoustic wave equation, spectral methods have recently demonstrated promise. This letter presents an extension of a spectral domain decomposition approach, previously used to solve the lossless linear wave equation, which accommodates frequency-dependent atmospheric attenuation and assignment of arbitrary dispersion relations. Frequency-dependence is straightforward to assign when time-stepping is done in the spectral domain, so combined losses from molecular relaxation, thermal conductivity, and viscosity can be approximated with little extra computation or storage. A mode update free from numerical dispersion is derived, and the model is confirmed with a numerical experiment.
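The per-mode update described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the sound speed, linear dispersion relation and power-law attenuation below are all assumptions chosen only to show how frequency-dependent loss becomes a per-mode multiplicative factor when time stepping is done in the spectral domain.

```python
import numpy as np

def spectral_step(modes, omega, alpha, dt):
    """Advance each spatial mode one time step analytically. omega is the
    angular frequency per mode (from an assigned dispersion relation) and
    alpha the attenuation coefficient per mode. Because the update is
    exact per mode, it introduces no numerical dispersion, and losses
    cost only one extra factor per mode."""
    return modes * np.exp((1j * omega - alpha) * dt)

c = 343.0                          # assumed sound speed in m/s
k = np.linspace(1.0, 100.0, 100)   # spatial wavenumbers
omega = c * k                      # assumed linear dispersion relation
alpha = 1e-9 * omega ** 2          # assumed power-law attenuation in omega
u = np.ones_like(k, dtype=complex)
u = spectral_step(u, omega, alpha, dt=1e-4)
```

Combined relaxation, thermal-conduction and viscous losses would simply be summed into `alpha`, which is why the abstract notes the extra computation and storage are small.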
Fractional time stepping for unsteady engineering calculations on parallel computer systems
Molev, Sergey; Podaruev, Vladimir; Troshin, Alexey
2017-11-01
A tool for accelerating explicit schemes is described. Its essence is to reduce the number of arithmetic operations: cells of the mesh are partitioned into groups called levels, each level advances with its own time step, and the levels are coordinated with one another. The method can be useful for aerodynamic problems with a large spread of time scales. Causes of degraded accuracy in modelling unsteady processes are identified, and remedies are proposed. An example demonstrating the conditions under which these troubles arise, and their successful elimination, is presented. The limit of the achievable acceleration is stated. Means that favour efficient parallel computing with the method are discussed.
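The level-based stepping described above can be sketched as follows. This is a toy scalar version with a hypothetical `rhs` function; the flux coordination at level interfaces, which is the hard part in a real solver, is deliberately omitted.

```python
import numpy as np

def lts_advance(u, level, rhs, dt_base, n_levels):
    """One synchronization cycle of a local-time-stepping sketch.
    level[i] = L means cell i advances with dt = dt_base / 2**L, taking
    2**L substeps per cycle; all cells meet again after time dt_base
    (the synchronization level)."""
    for sub in range(2 ** (n_levels - 1)):
        for i in range(len(u)):
            stride = 2 ** (n_levels - 1 - level[i])
            if sub % stride == 0:          # this cell is due for an update
                u[i] += (dt_base / 2 ** level[i]) * rhs(u, i)
    return u

# Demo: with rhs == 1 every cell advances by exactly dt_base per cycle,
# regardless of its level.
u = lts_advance(np.zeros(4), np.array([0, 0, 1, 1]),
                lambda u, i: 1.0, 0.1, 2)
```

The savings come from the fact that coarse-level cells do exponentially fewer right-hand-side evaluations per cycle than fine-level cells.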
Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential
International Nuclear Information System (INIS)
Zhang Ying; Liang Haozhao; Meng Jie
2009-01-01
The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus 12C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by the ITS evolution for the corresponding Schroedinger-like equation without localization, which gives convergent results exactly the same as those obtained iteratively by the shooting method with localized effective potentials.
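The ITS idea is easiest to see for a Hamiltonian bounded from below, where no Dirac-sea problem arises. A minimal sketch for the 1D harmonic oscillator (an illustrative analogue, not the paper's Dirac solver) repeatedly applies psi <- psi - dtau*H*psi and renormalizes, which filters out all excited states:

```python
import numpy as np

def its_ground_state(n=200, L=20.0, dtau=1e-3, steps=5000):
    """Imaginary-time relaxation to the ground state of the 1D harmonic
    oscillator (hbar = m = omega = 1). The exact ground-state energy
    is 0.5; excited components decay like exp(-E_k * tau)."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    V = 0.5 * x ** 2                       # harmonic potential
    psi = np.exp(-((x - 1.0) ** 2))        # arbitrary normalizable start
    for _ in range(steps):
        lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx ** 2
        psi = psi - dtau * (-0.5 * lap + V * psi)   # psi <- (1 - dtau*H) psi
        psi /= np.sqrt(np.sum(psi ** 2) * dx)       # renormalize each step
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx ** 2
    return np.sum(psi * (-0.5 * lap + V * psi)) * dx

energy = its_ground_state()
```

For the Dirac equation the spectrum is unbounded from below, so this naive evolution collapses into the Dirac sea, which is exactly the failure mode the abstract describes and why the Schroedinger-like reformulation is needed.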
Real-Time Step Length Control Method for a Biped Robot
Aiko, Takahiro; Ohnishi, Kouhei
In this paper, a real-time step length control method for a biped robot is proposed. In a human environment, a biped robot must change its gait in real time, since it is required to walk according to the situation. With the proposed method, the center-of-gravity trajectory and the swing-leg trajectory are generated in real time, with the step length as the command value. For generating the center-of-gravity trajectory, we employ the Linear Inverted Pendulum Mode and additionally account for walking stability via the zero moment point (ZMP). To demonstrate the proposed method, simulations and experiments of biped walking are performed.
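The Linear Inverted Pendulum Mode that underlies this kind of gait generation has a closed-form centre-of-gravity trajectory, which is what makes real-time generation cheap. A minimal sketch of the generic LIPM formula (not the authors' controller), with the ZMP fixed at the origin:

```python
import math

def lipm_cog(x0, v0, z_c, t, g=9.81):
    """Closed-form CoG state of the Linear Inverted Pendulum Mode,
    x'' = (g / z_c) * x, with the ZMP at the origin. x0, v0 are the
    initial position and velocity, z_c the constant CoG height.
    Step-length control then amounts to choosing the next foot
    placement (i.e. shifting the origin) to realize the commanded
    step length."""
    Tc = math.sqrt(z_c / g)              # pendulum time constant
    x = x0 * math.cosh(t / Tc) + Tc * v0 * math.sinh(t / Tc)
    v = (x0 / Tc) * math.sinh(t / Tc) + v0 * math.cosh(t / Tc)
    return x, v
```

A useful property is that the "orbital energy" v**2/2 - (g / (2*z_c)) * x**2 is conserved along each single-support phase, so the state at touchdown can be predicted analytically.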
Variable time-stepping in the pathwise numerical solution of the chemical Langevin equation
Ilie, Silvana
2012-12-01
Stochastic modeling is essential for an accurate description of biochemical network dynamics at the level of a single cell. Biochemically reacting systems often evolve on multiple time-scales, and thus their stochastic mathematical models manifest stiffness. Stochastic models that are, in addition, stiff are computationally very challenging, hence the need for effective and accurate numerical methods to approximate their solutions. An important stochastic model of well-stirred biochemical systems is the chemical Langevin equation, a system of stochastic differential equations with multidimensional non-commutative noise. This model is valid in the regime of large molecular populations, far from the thermodynamic limit. In this paper, we propose a variable time-stepping strategy for the numerical solution of a general chemical Langevin equation, which applies for any level of randomness in the system. Our variable stepsize method allows arbitrary values of the time-step. Numerical results on several models arising in applications show significant improvements in accuracy and efficiency of the proposed adaptive scheme over existing methods, namely strategies based on halving/doubling of the stepsize and fixed step-size ones.
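The baseline halving/doubling strategy that the paper compares against can be sketched for a scalar SDE dx = f(x) dt + g(x) dW. This is illustrative only (the authors' method allows arbitrary step sizes, not just powers of two); the local error is estimated by comparing one full Euler-Maruyama step against two half steps driven by the same Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_em(f, g, x0, t_end, dt0=1e-2, tol=1e-3, dt_min=1e-6):
    """Halving/doubling adaptive Euler-Maruyama (scalar, illustrative)."""
    t, x, dt = 0.0, x0, dt0
    while t_end - t > 1e-12:
        dt = min(dt, t_end - t)
        dW1 = rng.normal(0.0, np.sqrt(dt / 2))
        dW2 = rng.normal(0.0, np.sqrt(dt / 2))
        x_full = x + f(x) * dt + g(x) * (dW1 + dW2)        # one full step
        x_half = x + f(x) * dt / 2 + g(x) * dW1            # two half steps,
        x_half = x_half + f(x_half) * dt / 2 + g(x_half) * dW2  # same noise
        if abs(x_full - x_half) < tol or dt <= dt_min:
            t, x = t + dt, x_half
            dt *= 2.0                                      # accept, try larger
        else:
            dt *= 0.5                                      # reject, halve
    return x

# Noise-free sanity check: dx = -x dt, so x(1) should be close to exp(-1).
x_end = adaptive_em(lambda x: -x, lambda x: 0.0, 1.0, 1.0)
```

Reusing the same Brownian increments in both trial solutions is essential: otherwise the "error estimate" would be dominated by the noise difference rather than the discretization error.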
Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca
2017-12-01
An important ingredient in numerical modelling of high-temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms will considerably reduce the simulation time step due to its dependence on the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
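The quadratic time-step restriction that motivates STS and RKL schemes can be demonstrated with a plain explicit (FTCS) diffusion update. This is a minimal illustration of the stability limit itself, not the RKL scheme; the grid and diffusivity are arbitrary.

```python
import numpy as np

def diffuse(u, D, dx, dt, steps):
    """Explicit FTCS update for u_t = D * u_xx with periodic boundaries.
    Von Neumann analysis gives the stability limit dt <= dx**2 / (2*D):
    halving dx quarters the allowable time step, which is exactly the
    scaling that super-time-stepping schemes are designed to relax."""
    r = D * dt / dx ** 2
    for _ in range(steps):
        u = u + r * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

D, dx = 1.0, 0.01
dt_max = dx ** 2 / (2 * D)                       # 5e-5 on this grid
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.sin(2 * np.pi * x) + 1e-6 * np.cos(np.pi * np.arange(100))
stable = diffuse(u0.copy(), D, dx, 0.9 * dt_max, 200)    # decays smoothly
unstable = diffuse(u0.copy(), D, dx, 1.2 * dt_max, 200)  # Nyquist mode grows
```

The tiny high-frequency perturbation added to the initial condition is the mode that first violates the stability bound; above `dt_max` it is amplified every step and quickly swamps the solution.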
Ni, Xiao-Ting; Wu, Xin
2014-10-01
The time-transformed leapfrog scheme of Mikkola & Aarseth was specifically designed for a second-order differential equation with two individually separable forms of positions and velocities. It can have good numerical accuracy for extremely close two-body encounters in gravitating few-body systems with large mass ratios, but the non-time-transformed one does not work well. Following this idea, we develop a new explicit symplectic integrator with an adaptive time step that can be applied to a time-dependent Hamiltonian. Our method relies on a time step function having two distinct but equivalent forms and on the inclusion of two pairs of new canonical conjugate variables in the extended phase space. In addition, the Hamiltonian must be modified to be a new time-transformed Hamiltonian with three integrable parts. When this method is applied to the elliptic restricted three-body problem, its numerical precision is explicitly higher by several orders of magnitude than the nonadaptive one's, and its numerical stability is also better. In particular, it can eliminate the overestimation of Lyapunov exponents and suppress the spurious rapid growth of fast Lyapunov indicators for high-eccentricity orbits of a massless third body. The present technique will be useful for conservative systems, including N-body problems in the Jacobian coordinates in the field of solar system dynamics, and nonconservative systems such as a time-dependent barred galaxy model in a rotating coordinate system.
National Research Council Canada - National Science Library
Munsky, Brian; Khammash, Mustafa
2006-01-01
At the mesoscopic scale, chemical processes have probability distributions that evolve according to an infinite set of linear ordinary differential equations known as the chemical master equation (CME...
2006-11-30
If the system begins in state k, the initial probability distribution for the CME is written π(0) = δ_ik, where δ_ik is the Kronecker delta. Suppose now that the initial distribution is given not by the Kronecker delta but by a vector with many non-zero elements.
Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface
International Nuclear Information System (INIS)
Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho
2005-01-01
While the SIMPLE algorithm is the most widely used for simulations of flow phenomena that take place in industrial equipment or manufacturing processes, it is less often adopted for simulations of free-surface flow. Though the SIMPLE algorithm is free from the limitation of time step, the free-surface behavior imposes a restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme must solve a system of simultaneous equations within its procedure. If the computation time of the SIMPLE algorithm can be reduced when it is applied to unsteady free-surface flow problems, the calculation can be carried out in a more stable way and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm is presented for free-surface flow. The broken water column problem is adopted for validation of the modified algorithm (MoSIMPLE) and for comparison with the conventional SIMPLE algorithm.
Hsu, Ming-Chen
2010-02-01
The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395. © 2009 Elsevier B.V. All rights reserved.
Electric and hybrid electric vehicle study utilizing a time-stepping simulation
Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.
1992-01-01
The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.
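A time-stepping vehicle energy simulation of this kind can be sketched as follows. All vehicle parameters below are illustrative placeholders, not values from the NASA study, and regenerative braking is omitted for brevity.

```python
def simulate_cycle(speeds_mps, dt=1.0, mass=1500.0, cd_a=0.6,
                   crr=0.009, rho=1.2, g=9.81, eta=0.85):
    """Minimal time-stepping energy model over a speed trace sampled
    every dt seconds (FUDS-style). Tractive force = aerodynamic drag +
    rolling resistance + inertia; battery energy accumulates only when
    tractive power is positive (no regeneration in this sketch)."""
    energy_j = 0.0
    for i in range(1, len(speeds_mps)):
        v = speeds_mps[i]
        a = (speeds_mps[i] - speeds_mps[i - 1]) / dt
        force = 0.5 * rho * cd_a * v ** 2 + crr * mass * g + mass * a
        power = force * v                 # instantaneous tractive power, W
        if power > 0:
            energy_j += power / eta * dt  # drivetrain efficiency eta
    return energy_j

# Demo: 10 s at a constant 10 m/s (inertia term is zero).
energy = simulate_cycle([10.0] * 10)
```

Stepping through the speed trace like this captures both the energy and the instantaneous power demands of the cycle, which is the point made in the abstract about addressing economy, range and performance simultaneously.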
A chaos detectable and time step-size adaptive numerical scheme for nonlinear dynamical systems
Chen, Yung-Wei; Liu, Chein-Shan; Chang, Jiang-Ren
2007-02-01
The first step in investigating the dynamics of a continuous-time system described by ordinary differential equations is to integrate them to obtain trajectories. In this paper, we convert the group-preserving scheme (GPS) developed by Liu [International Journal of Non-Linear Mechanics 36 (2001) 1047-1068] to a time step-size adaptive scheme, x_{k+1} = x_k + h f(x_k, t_k), where x ∈ R^n denotes the system variables we are concerned with, and f(x, t) ∈ R^n is a time-varying vector field. The scheme has a form similar to the Euler scheme, x_{k+1} = x_k + Δt f(x_k, t_k), but our step size h adapts automatically. Very interestingly, the ratio h/Δt, which we call the adaptive factor, can forecast the appearance of chaos if the considered dynamical system becomes chaotic. The numerical examples of the Duffing equation, the Lorenz equation and the Rössler equation, which may exhibit chaotic behaviors under certain parameter values, are used to demonstrate these phenomena. Two other non-chaotic examples are included to compare the performance of the GPS and the adaptive one.
Djaman, Koffi; Irmak, Suat; Sall, Mamadou; Sow, Abdoulaye; Kabenge, Isa
2017-10-01
The objective of this study was to quantify differences associated with using 24-h time step reference evapotranspiration (ETo), as compared with the sum of hourly ETo computations with the standardized ASCE Penman-Monteith (ASCE-PM) model, for semi-arid dry conditions at Fanaye and Ndiaye (Senegal) and semi-arid humid conditions at Sapu (The Gambia) and Kankan (Guinea). The results showed good agreement between the sum of hourly ETo and daily time step ETo at all four locations. The daily time step overestimated the daily ETo relative to the sum of hourly ETo by 1.3 to 8% over the whole study periods. However, the magnitude of the ETo values and the ratio of the two estimates depend on location and month. The sum of hourly ETo tends to be higher during winter at Fanaye and Sapu, while the daily ETo was higher from March to November at the same weather stations. At Ndiaye and Kankan, daily time step estimates of ETo were higher throughout the year. The simple linear regression slopes between the sum of 24-h ETo and the daily time step ETo at all weather stations varied from 1.02 to 1.08, with high coefficients of determination (R² ≥ 0.87). Application of the hourly ETo estimation method may help achieve accurate ETo estimation to meet irrigation requirements under precision agriculture.
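The slope and R² comparison between daily-step ETo and the sum of hourly ETo can be sketched with synthetic data. The numbers below are illustrative only (a hypothetical ~5% overestimate plus noise), not the study's measurements.

```python
import numpy as np

def regression_slope_r2(x, y):
    """Least-squares fit y = a*x + b and coefficient of determination,
    as used to compare 24-h time-step ETo against the sum of hourly
    ETo at each station."""
    a, b = np.polyfit(x, y, 1)
    y_hat = a * x + b
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, 1.0 - ss_res / ss_tot

# Synthetic illustration: daily-step ETo about 5% above the hourly sum.
rng = np.random.default_rng(1)
hourly_sum = rng.uniform(2.0, 8.0, 200)               # mm/day
daily = 1.05 * hourly_sum + rng.normal(0.0, 0.1, 200)
slope, r2 = regression_slope_r2(hourly_sum, daily)
```

A slope above 1 with high R², as in the study's 1.02-1.08 range, indicates a systematic (rather than random) overestimate by the daily time step.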
De Basabe, Jonás D.
2010-04-01
We investigate the stability of some high-order finite element methods, namely the spectral element method (SEM) and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic and elastic wave propagation, which have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular, the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.
Mulder, W.A.; Zhebel, E.; Minisini, S.
2013-01-01
We analyse the time-stepping stability for the 3-D acoustic wave equation, discretized on tetrahedral meshes. Two types of methods are considered: mass-lumped continuous finite elements and the symmetric interior-penalty discontinuous Galerkin method. Combining the spatial discretization with the
Chu, Chunlei
2009-01-01
We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
Keslerová, R.; Kozel, K.
2014-03-01
This work deals with the numerical solution of viscous and viscoelastic fluid flow. The governing system of equations is based on the balance laws for mass and momentum for incompressible laminar fluids. Different models for the stress tensor are considered: for viscous fluid flow the Newtonian model is used, while the Oldroyd-B model is used to describe the behaviour of a mixture of viscous and viscoelastic fluids. Numerical solution of the described models is based on a cell-centered finite volume method in conjunction with the artificial compressibility method. For time integration, an explicit multistage Runge-Kutta scheme is used. For unsteady computations, the dual-time stepping method is considered. Its principle is as follows: an artificial time is introduced, and the artificial compressibility method is applied in the artificial time.
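The dual-time stepping principle can be sketched as a generic inner pseudo-time loop. This is a schematic scalar version with a backward-Euler physical time derivative; in the actual solver the pseudo-time march is a multistage Runge-Kutta scheme and the artificial compressibility lives inside the residual.

```python
import numpy as np

def dual_time_step(u_n, residual, dtau=1e-3, dt=1e-2,
                   inner_max=500, tol=1e-10):
    """One physical time step of dual-time stepping (sketch). At each
    physical step, the unsteady residual residual(u) - (u - u_n)/dt is
    driven to zero by marching in an artificial (pseudo) time tau."""
    u = u_n.copy()
    for _ in range(inner_max):
        r = residual(u) - (u - u_n) / dt   # unsteady residual
        if np.max(np.abs(r)) < tol:
            break                           # inner loop converged
        u = u + dtau * r                    # pseudo-time march
    return u

# Demo: du/dt = -u, so one backward-Euler step gives u_n / (1 + dt).
u_new = dual_time_step(np.array([1.0]), lambda u: -u)
```

Because only the converged inner solution matters, the pseudo-time march can use aggressive convergence acceleration (local pseudo-time steps, multigrid) without affecting the physical time accuracy.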
Kou, Jisheng
2016-02-25
In this paper, we propose an energy-stable evolution method for the calculation of phase equilibria under given volume, temperature, and moles (VT-flash). An evolution model describing the dynamics of a two-phase fluid system is based on Fick's law of diffusion for multi-component fluids and the Peng-Robinson equation of state. The mobility is obtained from diffusion coefficients by relating the gradient of chemical potential to the gradient of molar density. The evolution equation for the moles of each component is derived using the discretization of diffusion equations, while the volume evolution equation is constructed based on the mechanical mechanism and the Peng-Robinson equation of state. It is proven that the proposed evolution system can well model the VT-flash problem and, moreover, possesses the property of total energy decay. By using the Euler time scheme to discretize this evolution system, we develop an energy-stable algorithm with an adaptive time-step choice strategy, which allows us to calculate a suitable time step size that guarantees the physical properties of moles and volumes, including positivity, maximum limits, and a correct definition of the Helmholtz free energy function. The proposed evolution method is also proven to be energy-stable under the proposed time step choice. Numerical examples are tested to demonstrate the efficiency and robustness of the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Finn, John M., E-mail: finn@lanl.gov [T-5, Applied Mathematics and Plasma Physics, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012
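The implicit midpoint (IM) scheme discussed above can be sketched with a fixed-point solve. This is a minimal 2D illustration on a circular field line (the paper's applications are 3D solenoidal fields with adaptive steps); the field and step size are arbitrary choices.

```python
import numpy as np

def implicit_midpoint(field, x, h, iters=50, tol=1e-13):
    """One implicit midpoint step x' = x + h * B((x + x') / 2), solved
    by fixed-point iteration from an explicit Euler predictor. IM is
    self-adjoint (hence second order) and reversible, properties the
    paper links to preservation of KAM tori."""
    x_new = x + h * field(x)                 # explicit predictor
    for _ in range(iters):
        x_next = x + h * field(0.5 * (x + x_new))
        if np.max(np.abs(x_next - x_new)) < tol:
            return x_next
        x_new = x_next
    return x_new

# Circular field line: B = (-y, x). For this linear rotation, implicit
# midpoint preserves x**2 + y**2 up to the fixed-point tolerance.
B = lambda p: np.array([-p[1], p[0]])
p = np.array([1.0, 0.0])
for _ in range(1000):
    p = implicit_midpoint(B, p, h=0.1)
```

The fixed-point iteration converges here because the contraction factor is of order h/2 times the field's Lipschitz constant; a Newton solve would be used for stiffer fields.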
Energy Technology Data Exchange (ETDEWEB)
Franz, A., LLNL
1998-02-17
The numerical simulation of chemically reacting flows is a topic that has attracted a great deal of research. At the heart of numerical reactive flow simulations are large sets of coupled, nonlinear partial differential equations (PDEs). Due to the stiffness that is usually present, explicit time differencing schemes are not used despite their inherent simplicity and efficiency on parallel and vector machines, since these schemes require prohibitively small numerical stepsizes. Implicit time differencing schemes, although possessing good stability characteristics, introduce a great deal of computational overhead to solve the simultaneous algebraic system at each timestep. This thesis examines an algorithm based on a preconditioned time differencing scheme. The algorithm is explicit and permits a large stable time step. An investigation of the algorithm's accuracy, stability and performance on a parallel architecture is presented.
Directory of Open Access Journals (Sweden)
Hamid Reza Fooladmand
2017-06-01
2006 to 2008 were used for calibrating fourteen models for estimating solar radiation at seasonal and annual time steps, and the measured data of years 2009 and 2010 were used for evaluating the obtained results. The equations used in this study were divided into three groups: (1) equations based only on sunshine hours; (2) equations based only on air temperature; (3) equations based on both sunshine hours and air temperature. A statistical comparison was then carried out to select the best equation for estimating solar radiation at seasonal and annual time steps. For this purpose, in the validation stage a combination of statistical equations and linear correlation was used, and the mean square deviation (MSD) was calculated to evaluate the different models at the mentioned time steps. Results and Discussion: The mean MSD values of the fourteen models for estimating solar radiation were 24.16, 20.42, 4.08 and 16.19 for spring through winter, respectively, and 15.40 at the annual time step. The results therefore showed that the equations had high accuracy for autumn but low accuracy for the other seasons, so using the equations at the annual time step was more appropriate than at the seasonal time steps. Also, the mean MSD values of the equations based only on sunshine hours, only on air temperature, and on the combination of sunshine hours and air temperature were 14.82, 17.40 and 14.88, respectively. The results thus indicated that the models based only on air temperature were the worst for estimating solar radiation in the Shiraz region, and that using sunshine hours for estimating solar radiation is therefore necessary. Conclusions: In this study, for estimating solar radiation at seasonal and annual time steps in the Shiraz region
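The mean square deviation (MSD) used to rank the models is simply the average squared difference between model estimates and measurements, in the squared units of the data. A minimal sketch:

```python
import numpy as np

def mean_square_deviation(estimated, measured):
    """MSD between model estimates and measurements; lower is better.
    This is the generic definition, assumed to match the study's usage."""
    e = np.asarray(estimated, dtype=float)
    m = np.asarray(measured, dtype=float)
    return float(np.mean((e - m) ** 2))
```

Averaging MSD values across models of one group (sunshine-based, temperature-based, combined) then gives the per-group figures quoted in the abstract.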
DEFF Research Database (Denmark)
Auken, Esben; Christiansen, Anders Vest; Kirkegaard, Casper
2015-01-01
We present an overview of a mature, robust and general algorithm providing a single framework for the inversion of most electromagnetic and electrical data types and instrument geometries. The implementation mainly uses a 1D earth formulation for electromagnetic and magnetic resonance sounding (MRS) responses, while the geoelectric responses are both 1D and 2D, and the sheet response models a 3D conductive sheet in a conductive host with an overburden of varying thickness and resistivity. In all cases, the focus is placed on delivering full system forward modelling across all supported … illustrating the versatility of the algorithm. The first example is a laterally constrained joint inversion (LCI) of surface time domain induced polarisation (TDIP) data and borehole TDIP data. The second example shows a spatially constrained inversion (SCI) of airborne transient electromagnetic (AEM) data …
PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation
Energy Technology Data Exchange (ETDEWEB)
Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-05-01
A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
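As a generic illustration of a convergence-based adaptive time step controller (a minimal sketch of the idea only, not the actual PHISICS/RELAP5-3D implementation): grow the step while the flux/power change stays within tolerance, shrink it otherwise, and clamp it to user-set bounds.

```python
def adapt_dt(dt, flux_change, tol, dt_min=1e-4, dt_max=10.0, grow=2.0, shrink=0.5):
    """Grow the time step while a flux/power convergence criterion is met;
    shrink it otherwise. The step is clamped to user-set bounds."""
    if flux_change < tol:
        return min(dt * grow, dt_max)
    return max(dt * shrink, dt_min)

# Hypothetical relative flux changes reported by the neutronics module.
dt = 0.1
for change in [1e-4, 1e-4, 5e-2, 1e-4]:
    dt = adapt_dt(dt, change, tol=1e-3)
print(dt)  # → 0.4 — the step has grown overall despite one shrink
```

The tolerance and growth/shrink factors here play the role of the user-choice settings the report warns about: too-loose tolerances trade accuracy for speed.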
Unterweger, K.
2015-01-01
© Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D (Full Shallow Water Overland Flows) is our software of choice here, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex "real-world" scenarios.
A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests
Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars
2015-09-01
The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability motivates the development of strategies that mitigate the stability problem by making larger (at least ∼10 kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian and Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate and stable for steep slopes, and also conclude that, for longer time steps, the optimal
International Nuclear Information System (INIS)
Evans, D.K.
1986-01-01
Seventy-five percent of the world's stable isotope supply comes from one producer, Oak Ridge National Laboratory (ORNL) in the US. Canadian concern is that foreign needs will be met only after domestic needs, thus creating a shortage of stable isotopes in Canada. This article describes the present situation in Canada (availability and cost) of stable isotopes, the isotope enrichment techniques, and related research programs at Chalk River Nuclear Laboratories (CRNL)
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
-line. The optimal avoidance trajectory is implemented as a receding-horizon model predictive control law. Therefore, at each time step, the optimal avoidance trajectory is found and the first time step of its acceleration is applied. At the next time step of the control computer, the problem is re-solved and the new first time step is again applied. This continual updating allows the RCA algorithm to adapt to a colliding spacecraft that is making erratic course changes.
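The receding-horizon update described above — apply only the first time step of each newly solved avoidance trajectory, then re-solve — can be sketched with a toy one-dimensional "planner" standing in for the RCA optimization; the dynamics and obstacle here are illustrative stand-ins.

```python
def plan_avoidance(pos, obstacle, horizon):
    """Toy 'optimal' plan: constant acceleration away from the obstacle."""
    away = -1.0 if pos < obstacle else 1.0
    return [away] * horizon

pos, vel, dt = 0.0, 0.0, 0.1
obstacle = 5.0          # hypothetical colliding-spacecraft position
for _ in range(3):      # three control steps
    accel = plan_avoidance(pos, obstacle, horizon=10)
    vel += accel[0] * dt   # apply only the first step of the plan...
    pos += vel * dt
    # ...then the problem is re-solved at the next control step, so the
    # plan can adapt if the obstacle makes erratic course changes.
```

The rest of each planned trajectory is discarded, which is what lets the controller react to an erratically manoeuvring spacecraft.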
Horton, Pascal; Obled, Charles; Jaboyedoff, Michel
2017-07-01
Analogue methods (AMs) predict local weather variables (predictands) such as precipitation by means of a statistical relationship with predictors at a synoptic scale. The analogy is generally assessed on gradients of geopotential heights first to sample days with a similar atmospheric circulation. Other predictors such as moisture variables can also be added in a successive level of analogy. The search for candidate situations similar to a given target day is usually undertaken by comparing the state of the atmosphere at fixed hours of the day for both the target day and the candidate analogues. This is a consequence of using standard daily precipitation time series, which are available over longer periods than sub-daily data. However, it is unlikely for the best analogy to occur at the exact same hour for the target and candidate situations. A better analogue situation may be found with a time shift of several hours since a better fit can occur at different times of the day. In order to assess the potential for finding better analogues at a different hour, a moving time window (MTW) has been introduced. The MTW resulted in a better analogy in terms of the atmospheric circulation and showed improved values of the analogy criterion on the entire distribution of the extracted analogue dates. The improvement was found to increase with the analogue rank due to an accumulation of better analogues in the selection. A seasonal effect has also been identified, with larger improvements shown in winter than in summer. This may be attributed to stronger diurnal cycles in summer that favour predictors taken at the same hour for the target and analogue days. The impact of the MTW on the precipitation prediction skill has been assessed by means of a sub-daily precipitation series transformed into moving 24 h totals at 12, 6, and 3 h time steps. The prediction skill was improved by the MTW, as was the reliability of the prediction. Moreover, the improvements were greater for days
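A minimal sketch of the moving-time-window idea, assuming a simple RMSE analogy criterion on a one-dimensional predictor series (the data are hypothetical): the candidate window is evaluated at several time shifts, and the best-fitting shift is kept rather than comparing at a fixed hour only.

```python
def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def best_shift(target, candidate, shifts, width):
    """Compare the target window against the candidate window at several
    time shifts (in samples) and return (best_shift, best_criterion)."""
    t0 = len(target) // 2 - width // 2
    scores = {}
    for s in shifts:
        scores[s] = rmse(target[t0:t0 + width], candidate[t0 + s:t0 + s + width])
    best = min(scores, key=scores.get)
    return best, scores[best]

# The candidate equals the target shifted by 2 samples, so the MTW
# should recover a best shift of 2 with a perfect (zero) criterion.
target = [0, 1, 4, 9, 16, 9, 4, 1, 0, 0, 0, 0]
candidate = [0, 0, 0, 1, 4, 9, 16, 9, 4, 1, 0, 0]
shift, score = best_shift(target, candidate, shifts=[-3, -2, -1, 0, 1, 2, 3], width=4)
```

A fixed-hour comparison corresponds to forcing `shifts=[0]`, which here would miss the better analogue.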
Menanteau, Laurent; Pantalé, Olivier; Caperaa, Serge
2005-01-01
This work concerns the development of a virtual prototyping tool for large-scale electro-thermo-mechanical simulation of power converters used in railway transport, including multi-domain and multiple-time-step aspects. For this purpose, the Domain Decomposition Method (DDM) is used to give, on one hand, the ability to treat large-scale problems and, on the other hand, for transient analysis, the ability to use different time steps in different parts of the numerical model. An Object-Oriented progra...
An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm
Chen, G.; Chacón, L.; Barnes, D. C.
2011-08-01
This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical
An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space
Kwan, Trevor Hocksun; Wu, Xiaofeng
2017-03-01
Maximum power point tracking (MPPT) techniques are popularly used to maximize the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends on both the panel temperature and the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control and neural networks. This paper presents an algorithm which improves the MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect oscillation by checking the sign (i.e., direction) of the duty cycle perturbation between the current and previous time steps. If the signs differ, an oscillation is present, and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady-state oscillations become negligibly small, which subsequently allows for a smooth steady-state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels that are typically encountered in outer space is conducted. Simulation and experimental results show that the proposed algorithm is fast and stable in comparison not only to the conventional fixed-step counterparts, but also to previous variable-step-size algorithms.
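The sign-check-and-scale rule can be sketched as follows; the P-V curve and constants are illustrative stand-ins, not the paper's converter model.

```python
def power(duty):
    # Hypothetical P-V curve with the maximum power point at duty = 0.6.
    return 1.0 - (duty - 0.6) ** 2

duty, step, prev_sign = 0.3, 0.1, 1
p_prev = power(duty)
for _ in range(30):
    duty += prev_sign * step
    p = power(duty)
    sign = prev_sign if p > p_prev else -prev_sign   # P&O direction rule
    if sign != prev_sign:   # perturbation reversed: oscillation detected,
        step *= 0.5         # so scale the duty-cycle step down
    prev_sign, p_prev = sign, p
```

After the loop, `duty` has settled near the MPP and the perturbation step has shrunk by several binary orders of magnitude, which is the mechanism that suppresses the steady-state oscillation of fixed-step P&O.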
Cox, Christopher; Liang, Chunlei; Plesniak, Michael
2015-11-01
This paper reports development of a high-order compact method for solving unsteady incompressible flow on unstructured grids with implicit time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ the classical artificial compressibility treatment, where dual time stepping is needed to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time-stepping scheme. Three-dimensional results computed on many processing elements will be presented. The high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. Financial support provided under the GW Presidential Merit Fellowship.
Computational plasticity algorithm for particle dynamics simulations
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2018-01-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
Wen, Baole; Chini, Gregory P.; Kerswell, Rich R.; Doering, Charles R.
2015-10-01
An alternative computational procedure for numerically solving a class of variational problems arising from rigorous upper-bound analysis of forced-dissipative infinite-dimensional nonlinear dynamical systems, including the Navier-Stokes and Oberbeck-Boussinesq equations, is analyzed and applied to Rayleigh-Bénard convection. A proof that the only steady state to which this numerical algorithm can converge is the required global optimum of the relevant variational problem is given for three canonical flow configurations. In contrast with most other numerical schemes for computing the optimal bounds on transported quantities (e.g., heat or momentum) within the "background field" variational framework, which employ variants of Newton's method and hence require very accurate initial iterates, the new computational method is easy to implement and, crucially, does not require numerical continuation. The algorithm is used to determine the optimal background-method bound on the heat transport enhancement factor, i.e., the Nusselt number (Nu), as a function of the Rayleigh number (Ra), Prandtl number (Pr), and domain aspect ratio L in two-dimensional Rayleigh-Bénard convection between stress-free isothermal boundaries (Rayleigh's original 1916 model of convection). The result of the computation is significant because analyses, laboratory experiments, and numerical simulations have suggested a range of exponents α and β in the presumed Nu ∼ Pr^α Ra^β scaling relation. The computations clearly show that for Ra ≤ 10^10 at fixed L = 2√2, Nu ≤ 0.106 Pr^0 Ra^{5/12}, which indicates that molecular transport cannot generally be neglected in the "ultimate" high-Ra regime.
International Nuclear Information System (INIS)
Samios, N.P.
1994-01-01
I have been asked to review the subject of stable particles, essentially the particles that eventually comprised the meson and baryon octets, with a few more additions - with an emphasis on the contributions made by experiments utilizing the bubble chamber technique. In this activity, much work had been done by the photographic emulsion technique and cloud chambers - exposed to cosmic rays as well as accelerator based beams. In fact, many if not most of the stable particles were found by these latter two techniques; however, the forte of the bubble chamber (coupled with the newer and more powerful accelerators) was to verify, and reinforce with large statistics, the existence of these states, to find some of the more difficult ones, mainly neutrals, and further to elucidate their properties, i.e., spin, parity, lifetimes, decay parameters, etc. (orig.)
Khuvis, Samuel; Gobbert, Matthias K; Peercy, Bradford E
2015-05-01
Physiologically realistic simulations of computational islets of beta cells require the long-time solution of several thousands of coupled ordinary differential equations (ODEs), resulting from the combination of several ODEs in each cell and realistic numbers of several hundreds of cells in an islet. For a reliable and accurate solution of complex nonlinear models up to the desired final times on the scale of several bursting periods, an ODE solver designed for stiff problems is ultimately a necessity, since other solvers may not be able to handle the problem or are exceedingly inefficient. But stiff solvers are potentially significantly harder to use, since their algorithms require at least an approximation of the Jacobian matrix. For sophisticated models with systems of several complex ODEs in each cell, it is practically unworkable to differentiate these intricate nonlinear systems analytically and to manually program the resulting Jacobian matrix in computer code. This paper demonstrates that automatic differentiation can be used to obtain code for the Jacobian directly from code for the ODE system, which allows a full accounting for the sophisticated model equations. This technique is also feasible in the source-code languages Fortran and C, and the conclusions apply to a wide range of systems of coupled, nonlinear reaction equations. However, when we combine an appropriately supplied Jacobian with slightly modified memory management in the ODE solver, simulations on the realistic scale of one thousand cells in the islet become possible that are several orders of magnitude faster than the original solver in the software Matlab, a language that is particularly user-friendly for programming complicated model equations. We use the efficient simulator to analyze electrical bursting and show non-monotonic average burst period between fast and slow cells for increasing coupling strengths. We also find that interestingly, the arrangement of the connected fast
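As a toy illustration of obtaining Jacobian code directly from ODE right-hand-side code — here via a hand-rolled forward-mode dual number rather than the AD tooling used in the paper, and with a made-up two-variable system in place of the beta-cell model:

```python
class Dual:
    """Minimal forward-mode AD number: value v plus derivative part d."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)
    __rmul__ = __mul__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v - o.v, self.d - o.d)

def rhs(y):
    # Toy coupled ODE right-hand side (a stand-in for the islet model).
    return [y[1], -4.0 * y[0] - 0.5 * y[1] * y[1]]

def jacobian(f, y):
    """Column j of J is df/dy_j, obtained by seeding the j-th dual part."""
    n = len(y)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        seeded = [Dual(y[k], 1.0 if k == j else 0.0) for k in range(n)]
        for i, fi in enumerate(f(seeded)):
            J[i][j] = fi.d
    return J

J = jacobian(rhs, [1.0, 2.0])
print(J)  # [[0.0, 1.0], [-4.0, -2.0]]
```

The point mirrors the paper's: the Jacobian is produced mechanically from the same code that defines the ODE system, with no hand differentiation.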
A parallel approach to the stable marriage problem
DEFF Research Database (Denmark)
Larsen, Jesper
1997-01-01
This paper describes two parallel algorithms for the stable marriage problem implemented on a MIMD parallel computer. The algorithms are tested against sequential algorithms on randomly generated and worst-case instances. The results clearly show that the combination of a very simple problem … and a commercial MIMD system results in parallel algorithms which are not competitive with sequential algorithms with respect to practical performance. 1 Introduction In 1962 the Stable Marriage Problem was…
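For reference, the classic sequential Gale-Shapley proposal algorithm that such parallel variants are compared against (a standard textbook sketch, not the paper's implementation):

```python
def gale_shapley(men_prefs, women_prefs):
    """Men propose in preference order; each woman holds her best proposal.
    Returns a man-optimal stable matching as {woman: man}."""
    free = list(men_prefs)                    # all men start unmatched
    next_choice = {m: 0 for m in men_prefs}   # index into each man's list
    rank = {w: {m: i for i, m in enumerate(p)}
            for w, p in women_prefs.items()}
    engaged = {}                              # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # displaced man becomes free
            engaged[w] = m
        else:
            free.append(m)                    # rejected; m tries his next choice
    return engaged

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
matching = gale_shapley(men, women)
print(matching)  # {'x': 'b', 'y': 'a'}
```

Each man proposes at most once per woman, so the sequential algorithm runs in time linear in the total length of the preference lists.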
Implementation of a Two-phase Three-field Model on an Unstructured-mesh Pressure-based Algorithm
International Nuclear Information System (INIS)
Kim, Jongtae; Park, Ik-Kyu; Cho, Hyoung Kyu; Yoon, Han-Young; Kim, Kyung Doo; Jeong, Jae Jun
2008-01-01
Over the past three decades, much research has been conducted in computational fluid dynamics (CFD). The main outputs of this research are widespread in numerics and physics, for example, the development of solution algorithms for the Navier-Stokes equations and of high-resolution numerical schemes, and the modeling of turbulence, combustion, and multi-phase flow. For multi-phase flow analysis, many CFD algorithms have been implemented. The density-based or pressure-based preconditioned coupled algorithm was used in the OVAP code. Theoretically, the fully coupled method is very stable and computationally efficient, but it is difficult to derive an upwind flux and to implement physical models for multi-phase flows. The Implicit Continuous-Eulerian (ICE) algorithm, which was used in the RELAP and MARS codes, has been implemented into the 3-D multi-phase CFD code. In this method, because the equations are coupled by a chain rule at each grid point, it is very stable for inter-phasic exchanges; but because of the decoupling between the computational nodes or cells, the computational time step is limited by convective and diffusive time scales. For many years, projection methods such as the fractional-step or SMAC method have been used for multi-phase flow analysis. They are very efficient for unsteady calculations owing to their non-iterative nature. On the other hand, because of the phasic decoupling, it is known that a phase change between liquid and vapor is difficult to resolve. The SIMPLE-based algorithms have been popularly used for general fluid and heat transfer problems such as internal and external turbulent flows and species transport problems with or without combustion. In contrast to the above-mentioned ICE and projection methods, SIMPLE is based on an iterative or multi-correction scheme in order to satisfy mass conservation and to couple the variables in the equations. The time accuracy of a solution can be obtained by a marching in time, in
2015-01-01
Stable beams: two simple words that carry so much meaning at CERN. When LHC page one switched from "squeeze" to "stable beams" at 10.40 a.m. on Wednesday, 3 June, it triggered scenes of jubilation in control rooms around the CERN sites, as the LHC experiments started to record physics data for the first time in 27 months. This is what CERN is here for, and it’s great to be back in business after such a long period of preparation for the next stage in the LHC adventure. I’ve said it before, but I’ll say it again. This was a great achievement, and testimony to the hard and dedicated work of so many people in the global CERN community. I could start to list the teams that have contributed, but that would be a mistake. Instead, I’d simply like to say that an achievement as impressive as running the LHC – a machine of superlatives in every respect – takes the combined effort and enthusiasm of everyone ...
MPI and GPU parallelization of novel SD algorithms
Indian Academy of Sciences (India)
with 3200 CG particles and with a time step of 10 fs, using Gromacs version 4.0.7. For Langevin dynamics we compared the 'new' impulsive algorithm with the 'old' algorithm available in Gromacs, which is based on integration of continuous friction [10], and with pure MD without and with a global thermostat. In the table below.
Faster and Simpler Approximation of Stable Matchings
Directory of Open Access Journals (Sweden)
Katarzyna Paluch
2014-04-01
We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previously best known algorithm, by McDermid, has the same approximation ratio but runs in O(n^{3/2} m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, our algorithm and its analysis are much simpler. We also give an extension of the algorithm for computing stable many-to-many matchings.
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...
An Empirical Study of Online Packet Scheduling Algorithms
Sakr, Nourhan; Stein, Cliff
2016-01-01
This work studies online scheduling algorithms for buffer management, develops new algorithms, and analyzes their performances. Packets arrive at a release time r, with a non-negative weight w and an integer deadline d. At each time step, at most one packet is scheduled. The modified greedy (MG) algorithm is 1.618-competitive for the objective of maximizing the sum of weights of packets sent, assuming agreeable deadlines. We analyze the empirical behavior of MG in a situation with arbitrary d...
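As a baseline sketch of this buffer-management model (a plain weight-greedy rule, not the MG algorithm analyzed in the paper): at each integer time step, send the maximum-weight packet whose release time has passed and whose deadline has not. The packet data are hypothetical.

```python
def greedy_schedule(packets, horizon):
    """packets: list of (release, weight, deadline) tuples; at most one
    packet is sent per time step. This plain greedy sends the heaviest
    packet that is currently live (released and not yet expired)."""
    pending = list(packets)
    sent, total = [], 0.0
    for t in range(horizon):
        live = [p for p in pending if p[0] <= t <= p[2]]
        if live:
            best = max(live, key=lambda p: p[1])   # heaviest live packet
            pending.remove(best)
            sent.append(best)
            total += best[1]
    return sent, total

# (release r, weight w, deadline d) — hypothetical arrivals
pkts = [(0, 5.0, 1), (0, 3.0, 3), (1, 4.0, 1), (2, 2.0, 2)]
sent, total = greedy_schedule(pkts, horizon=4)
print(total)  # 12.0 — the (2, 2.0, 2) packet expires unsent
```

MG refines this by balancing packet weight against deadline urgency, which is what yields its 1.618-competitive guarantee under agreeable deadlines.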
Energy Technology Data Exchange (ETDEWEB)
Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim
2017-02-03
A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can distinguish rounding-level solution changes from the impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive, since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
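The pass/fail logic can be illustrated schematically: a candidate run fails when its deviation from the reference exceeds the spread produced by the reference model's own time step sensitivity. All numbers below are hypothetical, and a simple RMS difference stands in for the paper's statistical machinery.

```python
def rms(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def step_convergence_test(reference, half_step, candidate):
    """Fail when the candidate deviates from the reference by more than the
    model's known time step sensitivity (reference vs. halved-step run)."""
    sensitivity = rms(reference, half_step)
    return "PASS" if rms(reference, candidate) <= sensitivity else "FAIL"

ref  = [288.1, 288.3, 288.6]   # hypothetical global-mean temperature series
half = [288.2, 288.4, 288.6]   # same code, halved time step
good = [288.1, 288.4, 288.6]   # e.g. a rounding-level code change
bad  = [288.9, 289.3, 289.8]   # e.g. a parameter perturbation

print(step_convergence_test(ref, half, good),
      step_convergence_test(ref, half, bad))  # PASS FAIL
```

Because each short run is independent, all ensemble members can execute in parallel, which is the fast-turnaround property noted above.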
Energy Technology Data Exchange (ETDEWEB)
Mather, Barry
2017-08-24
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase, and tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors in metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses realized in the distribution circuit during a 1-year period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
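A generic sketch of the variable-time-step idea (not the paper's solver): march through quiescent periods with coarse steps and fall back to fine steps when the driving signal changes quickly. The thresholds and data are illustrative.

```python
def variable_step_indices(series, threshold, coarse=60):
    """Return the indices of `series` (e.g. per-second irradiance) that a
    variable-step solver would actually evaluate: jump `coarse` samples at a
    time while the signal stays within `threshold`, else advance by 1."""
    idx, i = [0], 0
    while i < len(series) - 1:
        j = min(i + coarse, len(series) - 1)
        if abs(series[j] - series[i]) <= threshold:
            i = j              # quiet period: take one coarse step
        else:
            i += 1             # rapid change: fall back to fine steps
        idx.append(i)
    return idx

flat = [100.0] * 300           # a cloudless, steady-irradiance interval
evaluated = variable_step_indices(flat, threshold=5.0)
print(len(evaluated), "of", len(flat))  # 6 of 300 power-flow solutions
```

Skipping power-flow solutions during quiet intervals is what produces the large runtime reductions, while the fine fallback preserves metrics such as regulator tap counts.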
Stable Treemaps via Local Moves.
Sondag, Max; Speckmann, Bettina; Verbeek, Kevin
2018-01-01
Treemaps are a popular tool to visualize hierarchical data: items are represented by nested rectangles and the area of each rectangle corresponds to the data being visualized for this item. The visual quality of a treemap is commonly measured via the aspect ratio of the rectangles. If the data changes, then a second important quality criterion is the stability of the treemap: how much does the treemap change as the data changes. We present a novel stable treemapping algorithm that has very high visual quality. Whereas existing treemapping algorithms generally recompute the treemap every time the input changes, our algorithm changes the layout of the treemap using only local modifications. This approach not only gives us direct control over stability, but it also allows us to use a larger set of possible layouts, thus provably resulting in treemaps of higher visual quality compared to existing algorithms. We further prove that we can reach all possible treemap layouts using only our local modifications. Furthermore, we introduce a new measure for stability that better captures the relative positions of rectangles. We finally show via experiments on real-world data that our algorithm outperforms existing treemapping algorithms in practice in visual quality and/or stability. Our algorithm scores high on stability regardless of whether we use an existing stability measure or our new measure.
Toward Practical Secure Stable Matching
Directory of Open Access Journals (Sweden)
Riazi M. Sadegh
2017-01-01
The Stable Matching (SM) algorithm has been deployed in many real-world scenarios including the National Residency Matching Program (NRMP) and financial applications such as matching of suppliers and consumers in capital markets. Since these applications typically involve highly sensitive information such as the underlying preference lists, their current implementations rely on trusted third parties. This paper introduces the first provably secure and scalable implementation of SM based on Yao's garbled circuit protocol and Oblivious RAM (ORAM). Our scheme can securely compute a stable match for 8k pairs four orders of magnitude faster than the previously best known method. We achieve this by introducing a compact and efficient sub-linear size circuit. We further decrease the computation cost by three orders of magnitude by proposing a novel technique to avoid unnecessary iterations in the SM algorithm. We evaluate our implementation for several problem sizes and plan to publish it as open-source.
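For reference, the underlying (non-secure) stable matching computation is the classic Gale-Shapley proposal algorithm; secure constructions like the one above evaluate this kind of logic inside garbled circuits. A minimal plaintext sketch, assuming complete preference lists:

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley: men propose, women tentatively accept.
    Preferences map each person to an ordered list of the other side;
    lists are assumed complete."""
    free = list(men_prefs)                      # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}     # next woman to propose to
    engaged = {}                                # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)}  # lower rank = preferred
            for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])             # displaced partner is free again
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
match = gale_shapley(men, women)   # -> {"a": "y", "b": "x"}
```

The data-dependent proposal sequence is exactly what makes an oblivious (ORAM-based) implementation non-trivial: the memory access pattern itself would otherwise leak preference information.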
A computational fluid dynamics algorithm on a massively parallel computer
International Nuclear Information System (INIS)
Jespersen, D.C.; Levit, C.
1989-01-01
The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. The authors investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. The authors find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as conventional supercomputers.
National Oceanic and Atmospheric Administration, Department of Commerce — Tissue samples (skin, bone, blood, muscle) are analyzed for stable carbon, stable nitrogen, and stable sulfur isotopes. Many samples are used in their entirety for...
Fast autodidactic adaptive equalization algorithms
Hilal, Katia
Autodidactic equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic gradient Bussgang-type algorithm, is given to derive two low-cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-controlled algorithm. Using a normalization procedure, and block normalization, the performances are improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms, which thus inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-controlled algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. The simulation of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for the processing compared with the initial algorithms. The improvement in the residual error was much smaller. These performances bring autodidactic equalization close to practical use in mobile radio systems.
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two time step method uses either an initial time averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two time step method is used, as opposed to the one step time averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first time averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to
Stochastic disturbance rejection in model predictive control by randomized algorithms
Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep
2001-01-01
In this paper we consider model predictive control with stochastic disturbances and input constraints. We present an algorithm which can solve this problem approximately but with arbitrarily high accuracy. The optimization at each time step is a closed loop optimization and therefore takes into
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set to be greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential.
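The step-selection idea, picking a time step so that a second-order Taylor estimate of the change in membrane potential stays near a tolerance, can be sketched as below. The formula, tolerance, and bounds here are illustrative assumptions, not the paper's published scheme:

```python
import math

def quadratic_step(v1, v2, tol=0.1, dt_min=1e-3, dt_max=1.0):
    """Choose dt so the second-order Taylor estimate of the change,
    |v1|*dt + 0.5*|v2|*dt**2, is about `tol`, where v1 and v2 are the
    first and second derivatives of the membrane potential."""
    a, b = 0.5 * abs(v2), abs(v1)
    if a < 1e-12:                       # nearly linear: first-order fallback
        dt = dt_max if b < 1e-12 else tol / b
    else:                               # positive root of a*dt^2 + b*dt - tol = 0
        dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
    return min(max(dt, dt_min), dt_max)
```

A steep upstroke (large `v1`, `v2`) drives `dt` down toward `dt_min`, while a flat plateau lets it grow to `dt_max`, which is exactly the fine-near-the-peak, coarse-in-the-smooth-region behavior described above.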
Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.
2018-02-01
The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
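A much-simplified stand-in can illustrate the idea of an ensemble consistency test: compare a test run's variables against the ensemble spread at a fixed early time step. The real CESM-ECT uses a PCA-based statistical test over many variables; the variable names, values, and threshold below are invented for illustration only.

```python
def consistency_zscores(ensemble, test_run):
    """Toy consistency check: z-score of each test-run variable against
    the ensemble mean and sample standard deviation at one time step."""
    zs = {}
    for var, values in ensemble.items():
        n = len(values)
        mean = sum(values) / n
        var_ = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
        std = var_ ** 0.5
        zs[var] = abs(test_run[var] - mean) / std if std > 0 else float("inf")
    return zs

# Hypothetical 4-member ensemble of two CAM-like variables at step 9.
ens = {"T": [288.1, 288.3, 288.2, 288.4], "Q": [1.0, 1.1, 0.9, 1.0]}
z = consistency_zscores(ens, {"T": 288.25, "Q": 1.05})
flagged = [v for v, s in z.items() if s > 3.0]   # empty -> consistent
```

The point of the ultra-fast variant is that the ensemble spread is already statistically usable after only nine steps, so such a comparison becomes very cheap.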
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
On Stable Marriages and Greedy Matchings
Energy Technology Data Exchange (ETDEWEB)
Manne, Fredrik; Naim, Md; Lerring, Hakon; Halappanavar, Mahantesh
2016-12-11
Research on stable marriage problems has a long and mathematically rigorous history, while that of exploiting greedy matchings in combinatorial scientific computing is a younger and less developed research field. In this paper we consider the relationships between these two areas. In particular we show that several problems related to computing greedy matchings can be formulated as stable marriage problems and as a consequence several recently proposed algorithms for computing greedy matchings are in fact special cases of well known algorithms for the stable marriage problem. However, in terms of implementations and practical scalable solutions on modern hardware, the greedy matching community has made considerable progress. We show that due to the strong relationship between these two fields many of these results are also applicable for solving stable marriage problems.
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Stable convergence and stable limit theorems
Häusler, Erich
2015-01-01
The authors present a concise but complete exposition of the mathematical theory of stable convergence and give various applications in different areas of probability theory and mathematical statistics to illustrate the usefulness of this concept. Stable convergence holds in many limit theorems of probability theory and statistics – such as the classical central limit theorem – which are usually formulated in terms of convergence in distribution. Originated by Alfred Rényi, the notion of stable convergence is stronger than the classical weak convergence of probability measures. A variety of methods is described which can be used to establish this stronger stable convergence in many limit theorems which were originally formulated only in terms of weak convergence. Naturally, these stronger limit theorems have new and stronger consequences which should not be missed by neglecting the notion of stable convergence. The presentation will be accessible to researchers and advanced students at the master's level...
Performance Analyses of IDEAL Algorithm on Highly Skewed Grid System
Directory of Open Access Journals (Sweden)
Dongliang Sun
2014-03-01
IDEAL is an efficient segregated algorithm for fluid flow and heat transfer problems. This algorithm has now been extended to 3D nonorthogonal curvilinear coordinates. Highly skewed grids in nonorthogonal curvilinear coordinates can decrease the convergence rate and degrade the calculation stability. In this study, the feasibility of the IDEAL algorithm on a highly skewed grid system is analyzed by investigating the lid-driven flow in an inclined cavity. It can be concluded that the IDEAL algorithm is more robust and more efficient than the traditional SIMPLER algorithm, especially for highly skewed and fine grid systems. For example, at θ = 5° and grid number = 70 × 70 × 70, the convergence rate of the IDEAL algorithm is 6.3 times faster than that of the SIMPLER algorithm, and the IDEAL algorithm converges at almost any time step multiple.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Marek, C. John; Molnar, Melissa
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two time step kinetic scheme. The first time averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (greater than 1 x 10(exp -20) moles per cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T(sub 4)). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T(sub 4)) as a function of overall fuel/air ratio, pressure and initial temperature (T(sub 3)). High values of the regression coefficient R squared are obtained.
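The two-regime switch described above can be sketched generically. The stand-in correlations `tau_avg` and `tau_inst` below are hypothetical placeholders for the paper's fitted expressions; only the switching structure reflects the abstract:

```python
def chem_time(fuel_conc, water_conc, T, P, tau_avg, tau_inst, switch=1e-20):
    """Two-time-step lookup: below the water-concentration switch (moles/cc)
    use the time-averaged correlation (step one); above it use the
    instantaneous correlation (step two)."""
    if water_conc < switch:
        return tau_avg(T, P)                          # initial, low-water regime
    return tau_inst(fuel_conc, water_conc, T, P)      # instantaneous regime

# Hypothetical constant correlations, purely for demonstration.
tau_avg = lambda T, P: 1.0e-4
tau_inst = lambda f, w, T, P: 5.0e-5

t1 = chem_time(1e-6, 1e-25, 2000.0, 10.0, tau_avg, tau_inst)  # step one branch
t2 = chem_time(1e-6, 1e-18, 2000.0, 10.0, tau_avg, tau_inst)  # step two branch
```

In a combustor code, the returned kinetic time would then be compared against the local turbulent mixing time to decide which process limits the reaction rate.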
Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models
Vignal, Philippe
2016-02-11
Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation, a model put forward to study microstructure evolution. The algorithm developed is conservative, energy stable, and second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy stable, and are second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
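The flavor of an unconditionally energy-stable time integrator can be illustrated on the simplest possible case: an Allen-Cahn-type scalar ODE with Eyre-style convex splitting (convex part of the free energy implicit, concave part explicit). This toy example is a sketch of the general technique, not the thesis's phase-field-crystal discretization:

```python
def free_energy(u):
    # Double-well free energy F(u) = u^4/4 - u^2/2
    return 0.25 * u**4 - 0.5 * u**2

def convex_split_step(u, dt, iters=20):
    """One convex-splitting step for u' = -(u**3 - u): the convex term u**3
    is implicit, the concave term -u explicit, so the discrete free energy
    never increases regardless of dt. The implicit cubic
    u_new + dt*u_new**3 = u + dt*u is solved by Newton iteration."""
    rhs = u + dt * u
    x = u
    for _ in range(iters):
        f = x + dt * x**3 - rhs
        x -= f / (1.0 + 3.0 * dt * x**2)
    return x

u, dt = 2.0, 10.0                  # a deliberately huge time step
u_new = convex_split_step(u, dt)
assert free_energy(u_new) <= free_energy(u)   # energy is still monotone
```

An explicit step of the same size would blow up; the splitting buys stability at the cost of one nonlinear solve per step, which is the trade-off such schemes make at PDE scale as well.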
Molnar, Melissa; Marek, C. John
2004-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes that are being developed at Glenn. The two time step method uses either an initial time averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two step method is used, as opposed to a one step time averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first time averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3
A parallel adaptive finite difference algorithm for petroleum reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Hoang, Hai Minh
2005-07-01
Adaptive finite difference methods for problems arising in the simulation of flow in porous media are considered. Such methods have proven useful for overcoming limitations of computational resources and improving the resolution of numerical solutions to a wide range of problems. Local refinement of the computational mesh, where needed to improve the accuracy of solutions, yields better resolution and represents more efficient use of computational resources than is possible with traditional fixed-grid approaches. In this thesis, we propose a parallel adaptive cell-centered finite difference (PAFD) method for black-oil reservoir simulation models. This is an extension of the adaptive mesh refinement (AMR) methodology first developed by Berger and Oliger (1984) for hyperbolic problems. Our algorithm is fully adaptive in time and space through the use of subcycling, in which finer grids are advanced at smaller time steps than the coarser ones. When coarse and fine grids reach the same advanced time level, they are synchronized to ensure that the global solution is conservative and satisfies the divergence constraint across all levels of refinement. The material in this thesis is subdivided into three overall parts. First we explain the methodology and intricacies of the adaptive finite difference scheme. Then we extend a cell-centered finite difference approximation to a multilevel hierarchy of refined grids, and finally we employ the algorithm on a parallel computer. The results in this work show that the approach presented is robust and stable, demonstrating increased solution accuracy due to local refinement and reduced computing resource consumption. (Author)
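The subcycling idea, finer grids advancing at smaller time steps and meeting the coarse grid at a common time level for synchronization, can be sketched in its simplest form. The forward-Euler update rule and refinement ratio below are illustrative assumptions, not the thesis's black-oil discretization:

```python
def subcycled_advance(coarse, fine, flux, dt, ratio=2):
    """One coarse step with subcycling: the fine region takes `ratio`
    steps of size dt/ratio, after which both levels sit at the same time
    and can be synchronized (flux-corrected, averaged, etc.).
    `flux` is a stand-in update rule u -> du/dt; forward Euler is used."""
    coarse = [u + dt * flux(u) for u in coarse]           # one coarse step
    for _ in range(ratio):                                # `ratio` fine steps
        fine = [u + (dt / ratio) * flux(u) for u in fine]
    return coarse, fine

# Usage on simple linear decay u' = -u:
decay = lambda u: -u
c, f = subcycled_advance([1.0], [1.0], decay, dt=0.1)
# coarse: 1 - 0.1 = 0.9; fine: (1 - 0.05)**2 = 0.9025 after two half-steps
```

In a real AMR code the synchronization step after the loop is where conservation is enforced, by replacing the coarse fluxes at refinement boundaries with the accumulated fine-grid fluxes.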
Unconditionally stable perfectly matched layer boundary conditions
De Raedt, H.; Michielsen, K.
2007-01-01
A brief review is given of a systematic, product-formula based approach to construct unconditionally stable algorithms for solving the time-dependent Maxwell equations. The fundamental difficulties that arise when we want to incorporate uniaxial perfectly matched layer boundary conditions into this
An Unconditionally Stable Method for Solving the Acoustic Wave Equation
Directory of Open Access Journals (Sweden)
Zhi-Kai Fu
2015-01-01
An unconditionally stable method for solving the time-domain acoustic wave equation using Associated Hermite orthogonal functions is proposed. The second-order time derivatives in the acoustic wave equation are expanded in these orthogonal basis functions. By applying a Galerkin temporal testing procedure, the time variable can be eliminated from the calculations. The restriction of the Courant-Friedrichs-Lewy (CFL) condition in selecting the time step for analyzing thin layers can be avoided. Numerical results show the accuracy and the efficiency of the proposed method.
Okumura, Hisashi; Itoh, Satoru G; Okamoto, Yuko
2007-02-28
The authors propose explicit symplectic integrators of molecular dynamics (MD) algorithms for rigid-body molecules in the canonical and isobaric-isothermal ensembles. They also present a symplectic algorithm in the constant normal pressure and lateral surface area ensemble and one combined with the Parrinello-Rahman algorithm. Employing the symplectic integrators for MD algorithms, there is a conserved quantity which is close to the Hamiltonian. Therefore, they can perform an MD simulation more stably than with conventional nonsymplectic algorithms. They applied this algorithm to a TIP3P pure water system at 300 K and compared the time evolution of the Hamiltonian with those obtained by the nonsymplectic algorithms. They found that the Hamiltonian was conserved well by the symplectic algorithm even for a time step of 4 fs. This time step is longer than the typical values of 0.5-2 fs used by conventional nonsymplectic algorithms.
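The practical benefit of symplectic integration, near-conservation of the Hamiltonian over long runs, is easy to demonstrate with the standard velocity-Verlet scheme on a harmonic oscillator. This is a generic textbook example, not the authors' rigid-body integrators:

```python
def velocity_verlet(q, p, force, dt, steps):
    """Symplectic velocity-Verlet integrator (unit mass). Being symplectic,
    it exactly conserves a 'shadow' Hamiltonian close to the true one, so
    the energy error stays bounded instead of drifting."""
    f = force(q)
    for _ in range(steps):
        p += 0.5 * dt * f      # half kick
        q += dt * p            # drift
        f = force(q)
        p += 0.5 * dt * f      # half kick
    return q, p

# Harmonic oscillator H = p^2/2 + q^2/2, initial energy 0.5.
q, p = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.1, steps=1000)
energy = 0.5 * p**2 + 0.5 * q**2   # stays close to 0.5 after 1000 steps
```

A non-symplectic scheme of the same order (e.g. forward Euler) would show a systematic energy drift over the same run, which is the qualitative difference the abstract reports for the 4 fs time step.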
Local Search Approaches in Stable Matching Problems
Directory of Open Access Journals (Sweden)
Toby Walsh
2013-10-01
The stable marriage (SM) problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or, more generally, to any two-sided market. In the classical formulation, n men and n women express their preferences (via a strict total order) over the members of the other sex. Solving an SM problem means finding a stable marriage, where stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. We consider both the classical stable marriage problem and one of its useful variations, denoted SMTI (Stable Marriage with Ties and Incomplete lists), where the men and women express their preferences in the form of an incomplete preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these preference lists, and we try to find a stable matching that marries as many people as possible. Whilst the SM problem is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both problems via a local search approach, which exploits properties of the problems to reduce the size of the neighborhood and to make local moves efficiently. We empirically evaluate our algorithm for SM problems by measuring its runtime behavior and its ability to sample the lattice of all possible stable marriages. We evaluate our algorithm for SMTI problems in terms of both its runtime behavior and its ability to find a maximum cardinality stable marriage. Experimental results suggest that for SM problems, the number of steps of our algorithm grows only as O(n log n), and that it samples very well the set of all stable marriages. It is thus a fair and efficient approach to generate stable marriages. Furthermore, our approach for SMTI problems is able to solve large problems, quickly returning stable matchings of large and often optimal size, despite the
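Stability itself is straightforward to check: a matching is stable exactly when it admits no blocking pair, and counting blocking pairs is a natural quantity for a local search to drive to zero. A small generic sketch (not the paper's neighborhood operators), assuming a perfect matching and complete preference lists:

```python
def blocking_pairs(match, men_prefs, women_prefs):
    """Return the blocking pairs of a matching: pairs (m, w) not matched
    to each other in which both strictly prefer each other to their
    current partners. The matching is stable iff the result is empty.
    `match` maps each man to his wife; lists are assumed complete."""
    husband = {w: m for m, w in match.items()}
    def prefers(prefs, a, new, cur):
        return prefs[a].index(new) < prefs[a].index(cur)
    pairs = []
    for m in men_prefs:
        for w in women_prefs:
            if match[m] != w and prefers(men_prefs, m, w, match[m]) \
                    and prefers(women_prefs, w, m, husband[w]):
                pairs.append((m, w))
    return pairs

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
blocking_pairs({"a": "y", "b": "x"}, men, women)   # -> [] (stable)
```

This brute-force check is O(n^2); the neighborhood-reduction techniques mentioned in the abstract exist precisely because recomputing such quantities naively at every move would dominate the search time.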
Stable isotopes labelled compounds
International Nuclear Information System (INIS)
1982-09-01
The catalogue on stable isotopes labelled compounds offers deuterium, nitrogen-15, and multiply labelled compounds. It includes: (1) conditions of sale and delivery, (2) the application of stable isotopes, (3) technical information, (4) product specifications, and (5) the complete delivery programme
Steeneveld, G.J.
2012-01-01
Understanding and prediction of the stable atmospheric boundary layer is a challenging task. Many physical processes are relevant in the stable boundary layer, i.e. turbulence, radiation, land surface coupling, orographic turbulent and gravity wave drag, and land surface heterogeneity. The
Indian Academy of Sciences (India)
IAS Admin
After Maynard-Smith and Price [1] mathematically derived why a given behaviour or strategy was adopted by a certain proportion of the population at a given time, it was shown that a strategy which is currently stable in a population need not be stable in evolutionary time (across generations). Additionally it was sug-.
Normal modified stable processes
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
This paper discusses two classes of distributions, and stochastic processes derived from them: modified stable (MS) laws and normal modified stable (NMS) laws. This extends corresponding results for the generalised inverse Gaussian (GIG) and generalised hyperbolic (GH) or normal generalised inverse Gaussian (NGIG) laws. The wider framework thus established provides, in particular, for added flexibility in the modelling of the dynamics of financial time series, of importance especially as regards OU based stochastic volatility models for equities. In the special case of the tempered stable OU process...
Applications of stable isotopes
International Nuclear Information System (INIS)
Letolle, R.; Mariotti, A.; Bariac, T.
1991-06-01
This report reviews the historical background and the properties of stable isotopes, the methods used for their measurement (mass spectrometry and others), the present techniques for isotope enrichment and separation, and finally the various present and foreseeable applications (in nuclear energy, physical and chemical research, and materials industry and research; tracing in industrial, medical and agronomical tests; the use of natural isotope variations for environmental studies, agronomy, and natural resource appraisal: water, minerals, energy). Some new possibilities in the use of stable isotopes are offered. A last chapter gives the present state and forecast development of stable isotope uses in France and Europe.
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristics and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9 …
National Research Council Canada - National Science Library
Adler, Robert
1997-01-01
We describe how to take a stable, ARMA, time series through the various stages of model identification, parameter estimation, and diagnostic checking, and accompany the discussion with a goodly number...
A stable higher order space time Galerkin marching-on-in-time scheme
Pray, Andrew J.
2013-07-01
We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order basis functions in time to improve the accuracy of the solver. The method is validated by showing convergence in temporal basis function order, time step size, and geometric discretization order. © 2013 IEEE.
A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer
Jespersen, Dennis C.; Levit, Creon
1989-01-01
The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.
Lorentz covariant canonical symplectic algorithms for dynamics of charged particles
Wang, Yulei; Liu, Jian; Qin, Hong
2016-12-01
In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.
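The long-term stability that symplectic integration provides can be illustrated on a toy problem (a plain harmonic oscillator, not the relativistic charged-particle LCCSA of the paper): the semi-implicit (symplectic) Euler update keeps the energy error bounded for arbitrarily long runs, rather than drifting as explicit Euler does.

```python
def semi_implicit_euler(q, p, dt, steps, omega=1.0):
    """Symplectic (semi-implicit) Euler for H = p^2/2 + omega^2 q^2/2:
    kick the momentum first, then drift the position with the *new* momentum.
    This ordering is what preserves the discrete symplectic structure."""
    for _ in range(steps):
        p -= omega**2 * q * dt  # kick
        q += p * dt             # drift with updated p
    return q, p

def energy(q, p, omega=1.0):
    return 0.5 * p**2 + 0.5 * omega**2 * q**2
```

Starting from (q, p) = (1, 0), the exact energy is 0.5; after ten thousand symplectic steps the computed energy still oscillates within O(dt) of that value instead of growing.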
DEFF Research Database (Denmark)
Markham, Annette
… layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also …
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
Energy Technology Data Exchange (ETDEWEB)
Grefenstette, J.J.
1994-12-31
Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.
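The evolutionary loop described above can be sketched in a few lines (a hypothetical OneMax demo, not Grefenstette's system; all parameter values are illustrative): a population of bit strings evolves under tournament selection, one-point crossover, and bit-flip mutation.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_mut=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. Maximizes `fitness` over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b  # controlled competition
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                  # one-point crossover
            nxt.append([b ^ 1 if rng.random() < p_mut else b for b in child])
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is the number of 1-bits, so `sum` itself is the objective
best = genetic_algorithm(sum)
```

On OneMax the population reliably converges to a string of (nearly) all ones, illustrating how selection plus controlled variation drives the search.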
Adaptive Maneuvering Target Tracking Algorithm
Directory of Open Access Journals (Sweden)
Chunling Wu
2014-07-01
Based on the current statistical model, a new adaptive maneuvering target tracking algorithm, CS-MSTF, is presented. The new algorithm keeps the merits of high tracking precision that the current statistical (CS) model and strong tracking filter (STF) have in tracking maneuvering targets, with the following modifications. First, STF has the defect that it achieves excellent performance in the maneuvering segment at the cost of precision in the non-maneuvering segment, so the new algorithm modifies the prediction error covariance matrix and the fading factor to improve the tracking precision in both the maneuvering and non-maneuvering segments. Second, the estimation error covariance matrix is calculated using the Joseph form, which is numerically more stable and robust. Monte Carlo simulations show that the CS-MSTF algorithm achieves better performance than CS-STF and estimates efficiently.
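The Joseph form mentioned above is a standard way to keep a Kalman-type covariance update symmetric and positive semidefinite under roundoff. A minimal sketch in generic Kalman notation (not the CS-MSTF filter itself):

```python
import numpy as np

def joseph_update(P, K, H, R):
    """Joseph-form measurement update of the estimation error covariance:
        P+ = (I - K H) P (I - K H)^T + K R K^T.
    Unlike the short form (I - K H) P, it stays symmetric positive
    semidefinite even with roundoff or a suboptimal gain K."""
    I = np.eye(P.shape[0])
    A = I - K @ H
    return A @ P @ A.T + K @ R @ K.T
```

For the optimal Kalman gain the Joseph form agrees with the short form exactly; its advantage is that small numerical errors cannot make the result lose symmetry or positive definiteness.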
Temperature and Humidity Control in Livestock Stables
DEFF Research Database (Denmark)
Hansen, Michael; Andersen, Palle; Nielsen, Kirsten M.
2010-01-01
The paper describes temperature and humidity control of a livestock stable. It is important to have a correct air flow pattern in the livestock stable in order to achieve proper temperature and humidity control as well as to avoid draught. In the investigated livestock stable the air flow is controlled using wall-mounted ventilation flaps. In the paper an algorithm for air flow control is presented, meeting the needs for temperature and humidity while taking the air flow pattern into consideration. To obtain simple and realisable controllers a model-based control design method is applied. In the design, dynamic models for temperature and humidity are very important elements, and effort is put into deriving and testing the models. It turns out that non-linearities are dominating in both models, making feedback linearization the natural design method. The air controller as well as the temperature …
Massively Parallel and Scalable Implicit Time Integration Algorithms for Structural Dynamics
Farhat, Charbel
1997-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because of the following additional facts: (a) explicit schemes are easier to parallelize than implicit ones, and (b) explicit schemes induce short range interprocessor communications that are relatively inexpensive, while the factorization methods used in most implicit schemes induce long range interprocessor communications that often ruin the sought-after speed-up. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet be offset by the speed of the currently available parallel hardware. Therefore, it is essential to develop efficient alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating the low-frequency dynamics of aerospace structures.
Calcium stable isotope geochemistry
Energy Technology Data Exchange (ETDEWEB)
Gausonne, Nikolaus [Muenster Univ. (Germany). Inst. fuer Mineralogie; Schmitt, Anne-Desiree [Strasbourg Univ. (France). LHyGeS/EOST; Heuser, Alexander [Bonn Univ. (Germany). Steinmann-Inst. fuer Geologie, Mineralogie und Palaeontologie; Wombacher, Frank [Koeln Univ. (Germany). Inst. fuer Geologie und Mineralogie; Dietzel, Martin [Technische Univ. Graz (Austria). Inst. fuer Angewandte Geowissenschaften; Tipper, Edward [Cambridge Univ. (United Kingdom). Dept. of Earth Sciences; Schiller, Martin [Copenhagen Univ. (Denmark). Natural History Museum of Denmark
2016-08-01
This book provides an overview of the fundamentals and reference values for Ca stable isotope research, as well as current analytical methodologies including detailed instructions for sample preparation and isotope analysis. As such, it introduces readers to the different fields of application, including low-temperature mineral precipitation and biomineralisation, Earth surface processes and global cycling, high-temperature processes and cosmochemistry, and lastly human studies and biomedical applications. The current state of the art in these major areas is discussed, and open questions and possible future directions are identified. In terms of its depth and coverage, the current work extends and complements the previous reviews of Ca stable isotope geochemistry, addressing the needs of graduate students and advanced researchers who want to familiarize themselves with Ca stable isotope research.
A Symbolic Algorithm for the Analysis of Robust Timed Automata
Kordy, P.T.; Langerak, Romanus; Mauw, Sjouke; Polderman, Jan W.; Jones, C.; Pihlajasaari, P.; Sun, J.W.
2014-01-01
We propose an algorithm for the analysis of robustness of timed automata, that is, the correctness of a model in the presence of small drifts of the clocks. The algorithm is an extension of the region based algorithm of Puri and uses the idea of stable zones as introduced by Daws and Kordy.
Indian Academy of Sciences (India)
2016-08-26
Aug 26, 2016 ... Resonance – Journal of Science Education, Volume 21, Issue 9. Evolutionary Stable Strategy: Application of Nash Equilibrium in Biology. Using some examples of classical games, we show how evolutionary game theory can help understand behavioural decisions of animals.
Kearney, M. Kate
2013-01-01
The concordance genus of a knot is the least genus of any knot in its concordance class. Although difficult to compute, it is a useful invariant that highlights the distinction between the three-genus and four-genus. In this paper we define and discuss the stable concordance genus of a knot, which describes the behavior of the concordance genus under connected sum.
Manifolds admitting stable forms
Czech Academy of Sciences Publication Activity Database
Le, Hong-Van; Panák, Martin; Vanžura, Jiří
2008-01-01
Roč. 49, č. 1 (2008), s. 101-11 ISSN 0010-2628 R&D Projects: GA ČR(CZ) GP201/05/P088 Institutional research plan: CEZ:AV0Z10190503 Keywords : stable forms * automorphism groups Subject RIV: BA - General Mathematics
International Nuclear Information System (INIS)
Ishida, T.
1992-01-01
The research has been in four general areas: (1) correlation of isotope effects with molecular forces and molecular structures, (2) correlation of zero-point energy and its isotope effects with molecular structure and molecular forces, (3) vapor pressure isotope effects, and (4) fractionation of stable isotopes. 73 refs, 38 figs, 29 tabs
Interactive Stable Ray Tracing
DEFF Research Database (Denmark)
Dal Corso, Alessandro; Salvi, Marco; Kolb, Craig
2017-01-01
Interactive ray tracing applications running on commodity hardware can suffer from objectionable temporal artifacts due to a low sample count. We introduce stable ray tracing, a technique that improves temporal stability without the over-blurring and ghosting artifacts typical of temporal post-pr...
Directory of Open Access Journals (Sweden)
Behnaz Tolue
2018-07-01
In this paper we introduce the stable subgroup graph associated to the group $G$. It is a graph with vertex set all subgroups of $G$, in which two distinct subgroups $H_1$ and $H_2$ are adjacent if $St_{G}(H_1\cap H_2)$ …
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi…
Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James
2014-01-01
A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ET(sub a)) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ET(sub a) over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ET(sub c)). This comment reformulates those ET(sub a) equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ET(sub a) on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, a consideration of the numerical error in the time-cumulative value of ET(sub a) is discussed besides the existing consideration of that error over individual time steps as done in the previous study. This cumulative ET(sub a) is more relevant to the final crop yield.
International Nuclear Information System (INIS)
Tibari, Elghali; Taous, Fouad; Marah, Hamid
2014-01-01
This report presents results of stable isotope analyses carried out at the CNESTEN DASTE in Rabat (Morocco) on behalf of Senegal, covering 127 samples. Oxygen-18 and deuterium analyses of water were performed by infrared laser spectroscopy using an LGR DLT-100 with autosampler. The results are expressed as δ values (‰) relative to V-SMOW, to ±0.3‰ for oxygen-18 and ±1‰ for deuterium.
Manipulation and gender neutrality in stable marriage procedures
Pini, Maria; Rossi, Francesca; Venable, Brent; Walsh, Toby
2009-01-01
The stable marriage problem is a well-known problem of matching men to women so that no man and woman who are not married to each other both prefer each other to their current partners. Such a problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals to matching students to schools. A well-known algorithm to solve this problem is the Gale-Shapley algorithm, which runs in polynomial time. It has been proven that stable marriage procedures can always be manipulated. Whilst …
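The Gale-Shapley algorithm referenced above is short enough to sketch directly (man-proposing variant with complete strict lists; variable names are illustrative):

```python
def gale_shapley(men_pref, women_pref):
    """Man-proposing Gale-Shapley: returns a stable matching as a dict
    woman -> man. Terminates after O(n^2) proposals for n men and n women."""
    next_choice = {m: 0 for m in men_pref}   # index of each man's next proposal
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_pref.items()}
    engaged = {}               # woman -> man
    free = list(men_pref)      # stack of currently unengaged men
    while free:
        m = free.pop()
        w = men_pref[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free.append(m)           # w rejects m; he will propose further down his list
    return engaged
```

The man-proposing variant returns the man-optimal stable matching, which is one reason the procedure can be manipulated by misreporting preferences on the proposing side's behalf.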
Forensic Stable Isotope Biogeochemistry
Cerling, Thure E.; Barnette, Janet E.; Bowen, Gabriel J.; Chesson, Lesley A.; Ehleringer, James R.; Remien, Christopher H.; Shea, Patrick; Tipple, Brett J.; West, Jason B.
2016-06-01
Stable isotopes are being used for forensic science studies, with applications to both natural and manufactured products. In this review we discuss how scientific evidence can be used in the legal context and where the scientific progress of hypothesis revisions can be in tension with the legal expectations of widely used methods for measurements. Although this review is written in the context of US law, many of the considerations of scientific reproducibility and acceptance of relevant scientific data span other legal systems that might apply different legal principles and therefore reach different conclusions. Stable isotopes are used in legal situations for comparing samples for authenticity or evidentiary considerations, in understanding trade patterns of illegal materials, and in understanding the origins of unknown decedents. Isotope evidence is particularly useful when considered in the broad framework of physiochemical processes and in recognizing regional to global patterns found in many materials, including foods and food products, drugs, and humans. Stable isotopes considered in the larger spatial context add an important dimension to forensic science.
A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials
Energy Technology Data Exchange (ETDEWEB)
Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A
2008-12-04
We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
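The implicit solve at the heart of such a scheme can be illustrated on a scalar problem. A sketch (backward Euler with a plain Newton iteration, standing in for the BDF/Newton-Krylov machinery of the paper; not the authors' code):

```python
def backward_euler_step(f, dfdy, y_n, dt, tol=1e-12, max_iter=50):
    """One backward-Euler step for y' = f(y): solve the nonlinear equation
    g(y) = y - y_n - dt*f(y) = 0 by Newton's method. This is the scalar
    analogue of the Newton solve applied to the semidiscretized system at
    each time step (where the linear solve would use a preconditioned
    Krylov method instead of scalar division)."""
    y = y_n  # initial Newton guess: the previous time level
    for _ in range(max_iter):
        g = y - y_n - dt * f(y)
        gp = 1.0 - dt * dfdy(y)   # Jacobian of g
        step = g / gp
        y -= step
        if abs(step) < tol:
            break
    return y
```

On the stiff linear test problem y' = -1000 y the implicit step is stable even with a time step far beyond the explicit stability limit, and Newton converges in a single iteration.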
Li, Wei; Saleeb, Atef F.
1995-01-01
This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state of the art in metal viscoplasticity, are considered in the applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable to both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both the GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of …
On The Roman Domination Stable Graphs
Directory of Open Access Journals (Sweden)
Hajian Majid
2017-11-01
A Roman dominating function (RDF) on a graph G = (V, E) is a function f : V → {0, 1, 2} satisfying the condition that every vertex u for which f(u) = 0 is adjacent to at least one vertex v for which f(v) = 2. The weight of an RDF f is the value f(V(G)) = Σ_{u∈V(G)} f(u). The Roman domination number of a graph G, denoted by γ_R(G), is the minimum weight of a Roman dominating function on G. A graph G is Roman domination stable if the Roman domination number of G remains unchanged under removal of any vertex. In this paper we present upper bounds for the Roman domination number in the class of Roman domination stable graphs, improving bounds posed in [V. Samodivkin, Roman domination in graphs: the class R_UVR, Discrete Math. Algorithms Appl. 8 (2016) 1650049].
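The defining condition of an RDF is straightforward to verify programmatically. A small sketch (illustrative helper functions, not from the paper):

```python
def is_roman_dominating(adj, f):
    """Check that f: V -> {0,1,2} is a Roman dominating function on the
    graph given by adjacency lists `adj`: every vertex u with f(u) = 0
    must have at least one neighbour v with f(v) = 2."""
    return all(any(f[v] == 2 for v in adj[u])
               for u in adj if f[u] == 0)

def roman_weight(f):
    """Weight of an RDF: the sum of f over all vertices."""
    return sum(f.values())
```

On the path on three vertices, placing a 2 on the middle vertex and 0 elsewhere gives a valid RDF of weight 2, which is in fact γ_R for that graph.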
Energy Technology Data Exchange (ETDEWEB)
Bornholdt, S. [Heidelberg Univ., (Germany). Inst., fuer Theoretische Physik; Graudenz, D. [Lawrence Berkeley Lab., CA (United States)
1993-07-01
A learning algorithm based on genetic algorithms for asymmetric neural networks with an arbitrary structure is presented. It is suited for the learning of temporal patterns and leads to stable neural networks with feedback.
Marginally Stable Nuclear Burning
Strohmayer, Tod E.; Altamirano, D.
2012-01-01
Thermonuclear X-ray bursts result from unstable nuclear burning of the material accreted on neutron stars in some low-mass X-ray binaries (LMXBs). Theory predicts that close to the boundary of stability oscillatory burning can occur. This marginally stable regime has so far been identified in only a small number of sources. We present Rossi X-ray Timing Explorer (RXTE) observations of the bursting, high-inclination LMXB 4U 1323-619 that reveal for the first time in this source the signature of marginally stable burning. The source was observed during two successive RXTE orbits for approximately 5 ksec beginning at 10:14:01 UTC on March 28, 2011. Significant mHz quasi-periodic oscillations (QPOs) at a frequency of 8.1 mHz are detected for approximately 1600 s from the beginning of the observation until the occurrence of a thermonuclear X-ray burst at 10:42:22 UTC. The mHz oscillations are not detected following the X-ray burst. The average fractional rms amplitude of the mHz QPOs is 6.4% (3-20 keV), and the amplitude increases to about 8% below 10 keV. This phenomenology is strikingly similar to that seen in the LMXB 4U 1636-53. Indeed, the frequency of the mHz QPOs in 4U 1323-619 prior to the X-ray burst is very similar to the transition frequency between mHz QPOs and bursts found in 4U 1636-53 by Altamirano et al. (2008). These results strongly suggest that the observed QPOs in 4U 1323-619 are, like those in 4U 1636-53, due to marginally stable nuclear burning. We also explore the dependence of the energy spectrum on the oscillation phase, and we place the present observations within the context of the spectral evolution of the accretion-powered flux from the source.
International Nuclear Information System (INIS)
Ise, Takeharu
1976-12-01
Review studies have been made of algorithms of numerical analysis and of benchmark tests on point kinetics and quasistatic approximate kinetics computer codes, in order to perform benchmark tests on space-dependent neutron kinetics codes efficiently. Point kinetics methods have now been improved so that they can be directly applied to factorization procedures. Methods based on Padé rational functions give numerically stable solutions, and methods based on matrix splitting are of interest because they are applicable to direct integration methods. The improved quasistatic (IQ) approximation is the best and most practical method; it is numerically shown that the IQ method has high stability and precision, with a computation time about one tenth of that of the direct method. The IQ method is applicable to thermal reactors as well as fast reactors, and is especially suited to fast reactors, for which many time steps are necessary. Two-dimensional diffusion kinetics codes are the most practicable, though a three-dimensional diffusion kinetics code and a two-dimensional transport kinetics code also exist. In developing a space-dependent kinetics code, in any case, it is desirable to improve the method so as to achieve a high computing speed for solving static diffusion and transport equations. (auth.)
Quantum Dynamical Entropies and Gács Algorithmic Entropy
Directory of Open Access Journals (Sweden)
Fabio Benatti
2012-07-01
Several quantum dynamical entropies have been proposed that extend the classical Kolmogorov–Sinai (KS) dynamical entropy. The same scenario appears in relation to the extension of algorithmic complexity theory to the quantum realm. A theorem of Brudno establishes that the complexity per unit time step along typical trajectories of a classical ergodic system equals the KS entropy. In the following, we establish a similar relation between the Connes–Narnhofer–Thirring quantum dynamical entropy for the shift on quantum spin chains and the Gács algorithmic entropy. We further provide, for the same system, a weaker linkage between the latter algorithmic complexity and a different quantum dynamical entropy proposed by Alicki and Fannes.
A theoretical derivation of the condensed history algorithm
International Nuclear Information System (INIS)
Larsen, E.W.
1992-01-01
Although the Condensed History Algorithm is a successful and widely used Monte Carlo method for solving electron transport problems, it has been derived only by an ad hoc process based on physical reasoning. In this paper we show that the Condensed History Algorithm can be justified as a Monte Carlo simulation of an operator-split procedure in which the streaming, angular scattering, and slowing-down operators are separated within each time step. Different versions of the operator-split procedure lead to O(Δs) and O(Δs²) versions of the method, where Δs is the path-length step. Our derivation also indicates that higher-order versions of the Condensed History Algorithm may be developed. (Author)
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
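The GPI structure, interleaving partial policy evaluation with greedy policy improvement, can be sketched on a toy deterministic MDP (an illustrative example, not the approximate nonlinear control setting of the paper):

```python
def generalized_policy_iteration(P, R, gamma=0.9, eval_sweeps=3, iters=50):
    """GPI on a finite deterministic MDP: alternate a few sweeps of policy
    evaluation with greedy improvement. The `eval_sweeps` knob moves the
    scheme between value-iteration-like (few sweeps) and full policy
    iteration (many sweeps), which is the sense in which GPI is a general
    structure covering many dynamic programming algorithms.
    P[s][a] -> next state, R[s][a] -> immediate reward."""
    n_states = len(P)
    V = [0.0] * n_states
    policy = [0] * n_states
    for _ in range(iters):
        for _ in range(eval_sweeps):          # partial policy evaluation
            V = [R[s][policy[s]] + gamma * V[P[s][policy[s]]]
                 for s in range(n_states)]
        policy = [max(range(len(P[s])),        # greedy improvement
                      key=lambda a: R[s][a] + gamma * V[P[s][a]])
                  for s in range(n_states)]
    return V, policy
```

On a two-state example where cycling between the states earns rewards 1 and 2, the iterates converge to the cycling policy and to the value V(0) = (1 + 2γ)/(1 - γ²) of that cycle.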
Directory of Open Access Journals (Sweden)
Haisheng Song
2013-01-01
The back-propagation neural network (BPNN) algorithm can be used for supervised classification in remote sensing image processing. But its defects are obvious: it falls into local minima easily, converges slowly, and the number of intermediate hidden-layer nodes is difficult to determine. The genetic algorithm (GA) has the advantages of global optimization and of not easily falling into local minima, but it has the disadvantage of poor local search capability. This paper uses a GA to generate the initial structure of the BPNN; a stable, efficient, and fast BP classification network is then obtained through fine adjustments with an improved BP algorithm. Finally, we use the hybrid algorithm to classify remote sensing images and compare it with the improved BP algorithm and the traditional maximum likelihood classification (MLC) algorithm. Experimental results show that the hybrid algorithm outperforms both the improved BP algorithm and the MLC algorithm.
Adaptive numerical algorithms in space weather modeling
Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-02-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit
Adaptive numerical algorithms in space weather modeling
International Nuclear Information System (INIS)
Tóth, Gábor; Holst, Bart van der; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-01-01
Space weather describes the various processes in the Sun–Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit
Adaptive Numerical Algorithms in Space Weather Modeling
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical
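The "local time steps" option named in these abstracts can be sketched in a few lines. The toy below (assumed model: independent decay cells integrated with explicit Euler; not the BATS-R-US implementation) lets each cell subcycle with its own stable step until all cells meet at the synchronization time:

```python
import numpy as np

def local_time_stepping(u, rates, dt_global, cfl=0.9):
    """Advance each 'cell' of a decay system du/dt = -rate*u to the same
    synchronization time, letting stiff cells subcycle with smaller steps.
    Hypothetical toy model, not an actual MHD solver."""
    u = np.asarray(u, dtype=float).copy()
    for i, rate in enumerate(rates):
        dt_stable = cfl * 2.0 / rate          # explicit-Euler stability limit
        n_sub = max(1, int(np.ceil(dt_global / dt_stable)))
        dt_local = dt_global / n_sub          # integer number of substeps
        for _ in range(n_sub):                # subcycle up to the sync level
            u[i] += -rate * u[i] * dt_local
    return u
```

A single global step of 0.1 would be unstable for a cell with rate 100 (amplification factor 1 − 100·0.1 = −9), while the subcycled update stays bounded; the mild cell still takes just one step.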
Canonical, stable, general mapping using context schemes.
Novak, Adam M; Rosen, Yohei; Haussler, David; Paten, Benedict
2015-11-15
Sequence mapping is the cornerstone of modern genomics. However, most existing sequence mapping algorithms are insufficiently general. We introduce context schemes: a method that allows the unambiguous recognition of a reference base in a query sequence by testing the query for substrings from an algorithmically defined set. Context schemes only map when there is a unique best mapping, and define this criterion uniformly for all reference bases. Mappings under context schemes can also be made stable, so that extension of the query string (e.g. by increasing read length) will not alter the mapping of previously mapped positions. Context schemes are general in several senses. They natively support the detection of arbitrary complex, novel rearrangements relative to the reference. They can scale over orders of magnitude in query sequence length. Finally, they are trivially extensible to more complex reference structures, such as graphs, that incorporate additional variation. We demonstrate empirically the existence of high-performance context schemes, and present efficient context scheme mapping algorithms. The software test framework created for this study is available from https://registry.hub.docker.com/u/adamnovak/sequence-graphs/. anovak@soe.ucsc.edu Supplementary data are available at Bioinformatics online.
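A drastically simplified illustration of mapping only at unambiguous contexts follows; the fixed k-mer rule and exact matching are assumptions made for illustration, not the paper's actual context schemes:

```python
def context_map(reference, query, k=4):
    """Map query positions to reference positions only when the k-mer
    starting there occurs exactly once in each sequence, so every mapping
    is a unique best mapping (toy stand-in for context schemes)."""
    def unique_kmers(s):
        seen = {}
        for i in range(len(s) - k + 1):
            seen.setdefault(s[i:i + k], []).append(i)
        return {kmer: pos[0] for kmer, pos in seen.items() if len(pos) == 1}

    ref_unique = unique_kmers(reference)
    qry_unique = unique_kmers(query)
    # Map only when the context is unambiguous on both sides.
    return {qi: ref_unique[kmer] for kmer, qi in qry_unique.items()
            if kmer in ref_unique}
```

For example, `context_map("ACGTACGGT", "TTACGG")` maps the query positions whose 4-mers (`TACG`, `ACGG`) occur uniquely in both strings, and refuses to map anything ambiguous.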
International Nuclear Information System (INIS)
Ganjaei, A. A.; Nourazar, S. S.
2009-01-01
A new algorithm, the modified direct simulation Monte-Carlo (MDSMC) method, for the simulation of the Couette-Taylor gas flow problem is developed. The Taylor series expansion is used to obtain the modified equation of the first-order time discretization of the collision equation, and the new algorithm, MDSMC, is implemented to simulate the collision equation in the Boltzmann equation. The new algorithm contains an extra term that takes into account the effect of second-order collisions; this term enhances the appearance of the first Taylor instabilities of the vortex streamlines. The MDSMC algorithm also carries a second-order term in the time step in the probabilistic coefficients, which yields higher accuracy than the previous DSMC algorithm. The first Taylor instabilities of the vortex streamlines using the MDSMC algorithm at different ratios of ω/ν (experimental data of Taylor) appeared at a smaller time step than with the DSMC algorithm. The torque developed on the stationary cylinder computed with the MDSMC algorithm shows better agreement with the experimental data of Kuhlthau than that computed with the DSMC algorithm.
Improvement of mixed time implicit-explicit algorithms for thermal analysis of structures
Liu, W. K.; Zhang, Y. F.
1983-01-01
Computer implementation aspects and numerical evaluation of the recently introduced mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for a linear quadrilateral element is given herein for the methods introduced by Liu and co-workers. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
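The stability limit these mixed implicit-explicit methods partition around can be illustrated with a generic estimate of the critical time step for explicit integration of 1D heat conduction; this is the textbook bound dt_crit = 2/λ_max, not Liu's element-level formula:

```python
import numpy as np

def critical_time_step(n, dx, alpha):
    """Explicit-Euler critical time step for 1D heat conduction with n
    interior nodes, spacing dx and diffusivity alpha, from the largest
    eigenvalue of the discrete Dirichlet Laplacian."""
    j = np.arange(1, n + 1)
    # Known eigenvalues: (4*alpha/dx^2) * sin^2(j*pi / (2*(n+1)))
    lam = 4.0 * alpha / dx**2 * np.sin(j * np.pi / (2 * (n + 1)))**2
    return 2.0 / lam.max()
```

Elements whose local limit exceeds the chosen step can be integrated explicitly; stiffer (finer) regions would be handed to the implicit partition.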
Dynamical attraction to stable processes
Fisher, Albert M.; Talet, Marina
2012-01-01
We apply dynamical ideas within probability theory, proving an almost-sure invariance principle in log density for stable processes. The familiar scaling property (self-similarity) of the stable process has a stronger expression, that the scaling flow on Skorokhod path space is a Bernoulli flow. We prove that typical paths of a random walk with i.i.d. increments in the domain of attraction of a stable law can be paired with paths of a stable process so that, after applying a non-random regula...
Photovoltaic Cells Mppt Algorithm and Design of Controller Monitoring System
Meng, X. Z.; Feng, H. B.
2017-10-01
This paper combined the advantages of several maximum power point tracking (MPPT) algorithms and put forward an algorithm with higher speed and higher precision; based on this algorithm, a maximum power point tracking controller was designed with an ARM processor. The controller, communication technology and PC software formed a control system. Results of the simulation and experiment showed that the maximum power tracking process was effective and the system was stable.
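As an illustration of the kind of MPPT logic being combined, here is a minimal perturb-and-observe loop against a made-up PV power curve; the paper's actual hybrid algorithm and ARM hardware are not reproduced:

```python
def perturb_and_observe(power_at, v0=10.0, dv=0.1, steps=200):
    """Classic perturb-and-observe MPPT loop: nudge the operating voltage,
    observe the power, and reverse direction when power drops."""
    v, direction = v0, +1.0
    p_prev = power_at(v)
    for _ in range(steps):
        v += direction * dv           # perturb the operating voltage
        p = power_at(v)
        if p < p_prev:                # observe: power dropped -> reverse
            direction = -direction
        p_prev = p
    return v

# Toy PV power curve with a single maximum at v = 17 V (an assumption).
pv_curve = lambda v: max(0.0, 100.0 - (v - 17.0) ** 2)
```

The tracker climbs to the knee of the curve and then oscillates within one perturbation step of the maximum, which is the well-known accuracy/speed trade-off the paper's combined algorithm targets.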
Skiena, Steven S
2008-01-01
Explaining how to design algorithms and how to analyze their efficacy and efficiency, this book covers combinatorial algorithms technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations and a bibliography.
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
[...] of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops [...]. Examining how algorithms make people feel, then, seems crucial if we want to understand their social power.
Investigation of a breathing surrogate prediction algorithm for prospective pulmonary gating
International Nuclear Information System (INIS)
White, Benjamin M.; Low, Daniel A.; Zhao Tianyu; Wuenschel, Sara; Lu, Wei; Lamb, James M.; Mutic, Sasa; Bradley, Jeffrey D.; El Naqa, Issam
2011-01-01
Purpose: A major challenge of four dimensional computed tomography (4DCT) in treatment planning and delivery has been the lack of respiration amplitude and phase reproducibility during image acquisition. The implementation of a prospective gating algorithm would ensure that images would be acquired only during user-specified breathing phases. This study describes the development and testing of an autoregressive moving average (ARMA) model for human respiratory phase prediction under quiet respiration conditions. Methods: A total of 47 4DCT patient datasets and synchronized respiration records were utilized in this study. Three datasets were used in model development and were removed from further evaluation of the ARMA model. The remaining 44 patient datasets were evaluated with the ARMA model for prediction time steps from 50 to 1000 ms in increments of 50 and 100 ms. Thirty-five of these datasets were further used to provide a comparison between the proposed ARMA model and a commercial algorithm with a prediction time step of 240 ms. Results: The optimal number of parameters for the ARMA model was based on three datasets reserved for model development. Prediction error was found to increase as the prediction time step increased. The minimum prediction time step required for prospective gating was selected to be half of the gantry rotation period. The maximum prediction time step with a conservative 95% confidence criterion was found to be 0.3 s. The ARMA model predicted peak inhalation and peak exhalation phases significantly better than the commercial algorithm. Furthermore, the commercial algorithm had numerous instances of missed breath cycles and falsely predicted breath cycles, while the proposed model did not have these errors. Conclusions: An ARMA model has been successfully applied to predict human respiratory phase occurrence. For a typical CT scanner gantry rotation period of 0.4 s (0.2 s prediction time step), the absolute error was relatively small, 0
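The prediction step can be illustrated with a least-squares autoregressive one-step predictor on a synthetic periodic trace; this is an AR-only sketch, not the paper's full ARMA model fitted to patient data:

```python
import numpy as np

def fit_ar_predict(signal, order=4):
    """Least-squares AR(p) one-step-ahead predictor: fit coefficients on
    the history, then predict the next sample from the last p samples.
    The order is an arbitrary illustrative choice."""
    s = np.asarray(signal, dtype=float)
    # Each regression row holds the p samples preceding the target sample.
    rows = [s[i - order:i][::-1] for i in range(order, len(s))]
    X, y = np.array(rows), s[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(coeffs @ s[-1:-order - 1:-1])  # predict the next sample
```

On a noiseless sinusoidal "breathing trace" the fitted model reproduces the next sample essentially exactly; real respiration records would of course carry noise and drift.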
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Stable computation of generalized singular values
Energy Technology Data Exchange (ETDEWEB)
Drmac, Z.; Jessup, E.R. [Univ. of Colorado, Boulder, CO (United States)
1996-12-31
We study floating-point computation of the generalized singular value decomposition (GSVD) of a general matrix pair (A, B), where A and B are real matrices with the same numbers of columns. The GSVD is a powerful analytical and computational tool. For instance, the GSVD is an implicit way to solve the generalized symmetric eigenvalue problem Kx = λMx, where K = AᵀA and M = BᵀB. Our goal is to develop stable numerical algorithms for the GSVD that are capable of computing the singular value approximations with the high relative accuracy that the perturbation theory says is possible. We assume that the singular values are well-determined by the data, i.e., that small relative perturbations δA and δB (pointwise rounding errors, for example) cause in each singular value σ of (A, B) only a small relative perturbation |δσ|/σ.
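The link between the GSVD and the generalized symmetric eigenproblem can be checked numerically. The sketch below takes the "squared" route through K = AᵀA and M = BᵀB; this is precisely the formulation whose loss of relative accuracy motivates the authors' work, so it is shown only to make the connection concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
B = rng.standard_normal((5, 3))   # same number of columns as A

# The implicit eigenproblem the abstract mentions: K x = lambda M x.
K, M = A.T @ A, B.T @ B
# Solve it via M^{-1} K (assumes M is nonsingular, true here a.s.).
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(M, K))

# Each eigenpair satisfies K x = lambda M x; sqrt(lambda) gives the
# generalized singular value ratios of the pair (A, B).
x, lam = eigvecs[:, 0], eigvals[0]
```

A stable GSVD algorithm would work on A and B directly instead of forming K and M, avoiding the squaring of the condition number.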
Flux-corrected transport principles, algorithms, and applications
Kuzmin, Dmitri; Turek, Stefan
2005-01-01
Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...
Classification of posture maintenance data with fuzzy clustering algorithms
Bezdek, James C.
1992-01-01
Sensory inputs from the visual, vestibular, and proprioreceptive systems are integrated by the central nervous system to maintain postural equilibrium. Sustained exposure to microgravity causes neurosensory adaptation during spaceflight, which results in decreased postural stability until readaptation occurs upon return to the terrestrial environment. Data which simulate sensory inputs under various sensory organization test (SOT) conditions were collected in conjunction with Johnson Space Center postural control studies using a tilt-translation device (TTD). The University of West Florida applied the fuzzy c-means (FCM) clustering algorithms to this data with a view towards identifying various states and stages of subjects experiencing such changes. Feature analysis, time step analysis, pooling data, response of the subjects, and the algorithms used are discussed.
Chen, Qiang; Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei
2017-11-01
An infinite dimensional canonical symplectic structure and structure-preserving geometric algorithms are developed for the photon-matter interactions described by the Schrödinger-Maxwell equations. The algorithms preserve the symplectic structure of the system and the unitary nature of the wavefunctions, and bound the energy error of the simulation for all time-steps. This new numerical capability enables us to carry out first-principle based simulation study of important photon-matter interactions, such as the high harmonic generation and stabilization of ionization, with long-term accuracy and fidelity.
Directory of Open Access Journals (Sweden)
Brajesh Kumar Singh
2018-03-01
In this paper, a new approach, the "modified extended cubic B-spline differential quadrature (mECDQ) method," has been developed for the numerical computation of the two-dimensional hyperbolic telegraph equation. The mECDQ method is a DQM based on modified extended cubic B-spline functions as new basis functions. The mECDQ method reduces the hyperbolic telegraph equation to an amenable system of ordinary differential equations (ODEs) in time. The resulting system of ODEs has been solved by adopting an optimal five-stage fourth-order strong stability preserving Runge-Kutta (SSP-RK54) scheme. The stability of the method is also studied by computing the eigenvalues of the coefficient matrices; it is shown that the mECDQ method produces stable solutions for the telegraph equation. The accuracy of the method is illustrated by computing the errors between analytical and numerical solutions, measured in terms of the L2, L∞ and average error norms for each problem. A comparison of mECDQ solutions with the results of other numerical methods has been carried out for various space sizes and time step sizes, showing that the mECDQ solutions converge very fast in comparison with the various existing schemes. Keywords: Differential quadrature method, Hyperbolic telegraph equation, Modified extended cubic B-splines, mECDQ method, Thomas algorithm
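The Thomas algorithm named in the keywords is the standard O(n) solver for the tridiagonal systems such spline discretizations produce. A plain sketch (no pivoting, so a diagonally dominant system is assumed):

```python
def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: sub-diagonal a
    (a[0] unused), diagonal b, super-diagonal c (c[-1] unused),
    right-hand side d.  Returns the solution vector."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the classic (-1, 2, -1) stencil with right-hand side [1, 0, 0, 1] the exact solution is all ones, which makes a convenient sanity check.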
Algorithm For Hypersonic Flow In Chemical Equilibrium
Palmer, Grant
1989-01-01
Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.
Fireworks algorithm for mean-VaR/CVaR models
Zhang, Tingting; Liu, Zhifeng
2017-10-01
Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that applying it in this field is feasible and promising.
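A bare-bones sketch of the fireworks metaheuristic on a toy objective follows; the population sizes, shrink factor and sphere function are illustrative choices, not the paper's mean-VaR/CVaR setup:

```python
import random

def fireworks_minimize(f, dim=2, n_fireworks=5, n_sparks=10,
                       amplitude=1.0, iters=100, seed=42):
    """Bare-bones fireworks algorithm: each firework 'explodes' into
    sparks scattered around it, and the best points survive to the
    next generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_fireworks)]
    for _ in range(iters):
        sparks = list(pop)
        for fw in pop:                        # explosion around each firework
            for _ in range(n_sparks):
                sparks.append([x + rng.uniform(-amplitude, amplitude) for x in fw])
        sparks.sort(key=f)                    # selection: keep the best points
        pop = sparks[:n_fireworks]
        amplitude *= 0.95                     # shrink explosions over time
    return pop[0]

sphere = lambda x: sum(xi * xi for xi in x)
```

Because the parents are kept in the candidate pool, the best objective value never worsens, which contributes to the stability the paper reports.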
Annerel, Sebastiaan; Degroote, Joris; Claessens, Tom; Vierendeels, Jan
2010-06-01
We present a newly developed Fluid-Structure Interaction coupling algorithm to simulate Bileaflet Mechanical Heart Valves dynamics in a partitioned way. The coupling iterations between the flow solver and the leaflet motion solver are accelerated by using the Jacobian with the derivatives of the pressure and viscous moments acting on the leaflets with respect to the leaflet acceleration. This Jacobian is used in the leaflet motion solver when new positions of the leaflets are computed during the coupling iterations. The Jacobian is numerically derived from the flow solver by applying leaflet perturbations. Instead of calculating this Jacobian every time step, the Jacobian is extrapolated from previous time steps and a recalculation of the Jacobian is only done when needed. The efficiency of our new algorithm is subsequently compared to existing algorithms which use fixed relaxation and dynamic Aitken Δ² relaxation in the coupling iterations when the new positions of the leaflets are computed. Results show that dynamic Aitken Δ² relaxation outperforms fixed relaxation. Moreover, during the opening phase of the valve, our new algorithm needs fewer subiterations per time step to achieve convergence than the method with Aitken Δ² relaxation. Thus, our newly developed FSI coupling scheme outperforms the existing coupling schemes.
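The dynamic Aitken relaxation used as the baseline coupling scheme can be shown on a scalar fixed-point problem; the FSI case applies the same relaxation update to the leaflet accelerations exchanged between solvers:

```python
def aitken_fixed_point(g, x0, tol=1e-12, max_iter=100):
    """Fixed-point iteration x = g(x) with dynamic Aitken relaxation:
    the relaxation factor omega is updated each sweep from the last two
    residuals (scalar sketch of the coupling-iteration baseline)."""
    x, omega = x0, 0.5                        # initial relaxation factor
    r_prev = None
    for _ in range(max_iter):
        r = g(x) - x                          # current coupling residual
        if abs(r) < tol:
            break
        if r_prev is not None:                # Aitken update of omega
            omega = -omega * r_prev / (r - r_prev)
        x, r_prev = x + omega * r, r
    return x
```

On the Babylonian square-root map g(x) = (x + 2/x)/2 the accelerated iteration converges to √2 in a handful of sweeps, illustrating why adaptive relaxation beats a fixed factor.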
Amezcua, Javier
in the mean value of the function. Using statistical significance tests both at the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found the accuracy of the medium-term forecasts is increased by using the RAW filter.
Pharmaceuticals labelled with stable isotopes
International Nuclear Information System (INIS)
Krumbiegel, P.
1986-11-01
The relatively new field of pharmaceuticals labelled with stable isotopes is reviewed. Scientific, juridical, and ethical questions are discussed concerning the application of these pharmaceuticals in human medicine. ¹³C, ¹⁵N, and ²H are the stable isotopes mainly utilized in metabolic function tests. Methodical contributions are given to the application of ²H, ¹³C, and ¹⁵N pharmaceuticals, showing new aspects and different states of development in the field under discussion. (author)
Stable isotope research pool inventory
International Nuclear Information System (INIS)
1984-03-01
This report contains a listing of electromagnetically separated stable isotopes which are available at the Oak Ridge National Laboratory for distribution for nondestructive research use on a loan basis. This inventory includes all samples of stable isotopes in the Research Materials Collection and does not designate whether a sample is out on loan or is in reprocessing. For some of the high abundance naturally occurring isotopes, larger amounts can be made available; for example, Ca-40 and Fe-56.
Directory of Open Access Journals (Sweden)
Jingjing Ma
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
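The snapshot-quality objective, modularity, is easy to state in code. The sketch below computes Newman's Q for a given partition; the temporal objective, normalized mutual information, is omitted:

```python
def modularity(edges, communities):
    """Newman modularity Q of a partition of an undirected graph:
    fraction of edges inside each community minus the fraction expected
    from node degrees alone."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    label = {node: c for c, nodes in enumerate(communities) for node in nodes}
    q = 0.0
    for c, nodes in enumerate(communities):
        # Fraction of edges with both endpoints inside community c ...
        e_c = sum(1 for u, v in edges if label[u] == c and label[v] == c) / m
        # ... minus the expected fraction from the degree sequence.
        d_c = sum(degree[n] for n in nodes) / (2 * m)
        q += e_c - d_c ** 2
    return q
```

Two disconnected triangles partitioned into their natural communities give Q = 0.5, a handy closed-form check.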
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
Ranking stability and super-stable nodes in complex networks.
Ghoshal, Gourab; Barabási, Albert-László
2011-07-19
Pagerank, a network-based diffusion algorithm, has emerged as the leading method to rank web content, ecological species and even scientists. Despite its wide use, it remains unknown how the structure of the network on which it operates affects its performance. Here we show that for random networks the ranking provided by pagerank is sensitive to perturbations in the network topology, making it unreliable for incomplete or noisy systems. In contrast, in scale-free networks we predict analytically the emergence of super-stable nodes whose ranking is exceptionally stable to perturbations. We calculate the dependence of the number of super-stable nodes on network characteristics and demonstrate their presence in real networks, in agreement with the analytical predictions. These results not only deepen our understanding of the interplay between network topology and dynamical processes but also have implications in all areas where ranking has a role, from science to marketing.
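A minimal power-iteration PageRank is useful for experimenting with the ranking-stability questions raised above; dangling-node handling and convergence tolerances are deliberately omitted in this sketch:

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank over a directed edge list.
    Assumes every node has at least one outgoing link."""
    nodes = sorted({n for edge in links for n in edge})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [v for u, v in links if u == n] for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for u in nodes:                       # diffuse rank along out-links
            for v in out[u]:
                new[v] += damping * rank[u] / len(out[u])
        rank = new
    return rank
```

Perturbation experiments like those in the paper amount to deleting or rewiring a few edges in `links` and comparing the induced orderings.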
An empirical study on SAJQ (Sorting Algorithm for Join Queries
Directory of Open Access Journals (Sweden)
Hassan I. Mathkour
2010-06-01
Most queries applied to database management systems (DBMS) depend heavily on the performance of the sorting algorithm used. In addition to efficiency, stability of the sorting algorithm is a major feature needed in performing DBMS queries. In this paper, we study a new Sorting Algorithm for Join Queries (SAJQ) that has both advantages of being efficient and stable. The proposed algorithm takes advantage of the m-way-merge algorithm to enhance its time complexity. SAJQ performs the sorting operation in a time complexity of O(n log m), where n is the length of the input array and m is the number of sub-arrays used in sorting. An unsorted input array of length n is arranged into m sorted sub-arrays. The m-way-merge algorithm merges the sorted m sub-arrays into the final sorted output array. The proposed algorithm keeps the stability of the keys intact. An analytical proof has been conducted to show that, in the worst case, the proposed algorithm has a complexity of O(n log m). Also, a set of experiments has been performed to investigate the performance of the proposed algorithm. The experimental results show that the proposed algorithm outperforms other stable sorting algorithms designed for join-based queries.
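The m-way merge at the core of SAJQ can be sketched with a heap whose ties are broken by run order, which is what preserves stability for equal keys (an illustrative reconstruction, not the authors' code):

```python
import heapq

def m_way_merge(runs, key=lambda r: r):
    """Stable m-way merge of pre-sorted runs.  Heap entries are
    (key, run index, position, item), so equal keys pop in the order of
    their original runs -- the stability join queries rely on."""
    heap = [(key(run[0]), i, 0, run[0]) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        k, i, j, item = heapq.heappop(heap)
        out.append(item)
        if j + 1 < len(runs[i]):              # refill from the same run
            nxt = runs[i][j + 1]
            heapq.heappush(heap, (key(nxt), i, j + 1, nxt))
    return out
```

With m runs this merges n items in O(n log m), matching the complexity quoted in the abstract.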
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computer, in which the whole sequence to be sorted can fit in the
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
Digital Arithmetic: Division Algorithms
DEFF Research Database (Denmark)
Montuschi, Paolo; Nannarelli, Alberto
2017-01-01
…implement it in hardware without compromising overall computation performance. This entry explains the basic algorithms, suitable for hardware and software, for implementing division in computer systems. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.g., Newton–Raphson) algorithms. The first class, the digit-recurrence type, is particularly suitable for hardware implementation as it requires modest resources and provides good performance on contemporary technology. The second class, the multiplicative type, requires…
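A radix-2 digit-recurrence scheme can be illustrated with restoring division, the simplest member of the digit-recurrence class (a textbook sketch, not taken from this entry; the 8-bit width is an illustrative assumption):

```python
def restoring_divide(dividend, divisor, bits=8):
    """Restoring (radix-2 digit-recurrence) division for unsigned
    integers: one quotient bit is produced per iteration."""
    assert 0 <= dividend < (1 << bits) and divisor > 0
    remainder, quotient = 0, 0
    for i in range(bits - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:      # trial subtraction succeeds
            remainder -= divisor
            quotient = (quotient << 1) | 1
        else:                         # "restore": keep remainder, emit 0
            quotient <<= 1
    return quotient, remainder

print(restoring_divide(100, 7))  # → (14, 2)
```

Each iteration shifts, compares, and conditionally subtracts, which is why this class maps so cheaply onto hardware; multiplicative methods instead refine an approximate reciprocal with multiplications.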
Active Fault Tolerant Control of Livestock Stable Ventilation System
DEFF Research Database (Denmark)
Gholami, Mehdi
2011-01-01
…degraded performance even in the faulty case. In this thesis, we have designed such controllers for climate control systems for livestock buildings in three steps: deriving a model for the climate control system of a pig stable; designing an active fault diagnosis (AFD) algorithm for different kinds… The parameters of the hybrid model are estimated by a recursive estimation algorithm, the Extended Kalman Filter (EKF), using experimental data provided by an equipped laboratory. Two methods for active fault diagnosis are proposed. The AFD methods excite the system by injecting a so-called excitation input. In both methods, the input is designed off-line based on a sensitivity analysis in order to improve the precision of estimation of the parameters associated with faults. Two different algorithms, the EKF and a new adaptive filter, are used to estimate the parameters of the system. The fault is detected and isolated…
Stable Boundary Layer Education (STABLE) Final Campaign Summary
Energy Technology Data Exchange (ETDEWEB)
Turner, David D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-03-01
The properties of, and the processes that occur in, the nocturnal stable boundary layer are not well understood, making it difficult to represent adequately in numerical models. The nocturnal boundary layer often is characterized by a temperature inversion and, in the Southern Great Plains region, a low-level jet. To advance our understanding of the nocturnal stable boundary layer, high temporal and vertical resolution data on the temperature and wind properties are needed, along with both large-eddy simulation and cloud-resolving modeling.
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. It can also be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm, while having reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, at the cost of slower convergence. Computer simulations confirm the mathematical analysis presented.
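The update rule can be sketched as follows (an interpretation of the clipped-input idea; the step size, threshold, and 2-tap test system are illustrative assumptions, not values from the paper):

```python
import random

def clip3(x, t):
    """Three-level quantizer: -1, 0, or +1 depending on threshold t."""
    return 0.0 if abs(x) <= t else (1.0 if x > 0 else -1.0)

def mclms_step(w, x, d, mu, t):
    """One weight update of a modified-clipped-LMS-style filter: the
    input vector (not the error) is clipped in the update term, so the
    multiply in the update reduces to a sign/zero decision."""
    y = sum(wi * xi for wi, xi in zip(w, x))  # filter output
    e = d - y                                 # estimation error
    w = [wi + mu * e * clip3(xi, t) for wi, xi in zip(w, x)]
    return w, e

# Identify an unknown 2-tap system h with the clipped update.
rng = random.Random(0)
h = [0.5, -0.3]
w = [0.0, 0.0]
for _ in range(5000):
    x = [rng.gauss(0, 1), rng.gauss(0, 1)]
    d = sum(hi * xi for hi, xi in zip(h, x))
    w, _ = mclms_step(w, x, d, mu=0.01, t=0.1)
print([round(wi, 2) for wi in w])
```

Because the clipped input contributes only a sign (or zero), each tap update needs one multiplication instead of two, which is the source of the reduced complexity the abstract mentions.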
Optimization Algorithms for Calculation of the Joint Design Point in Parallel Systems
DEFF Research Database (Denmark)
Enevoldsen, I.; Sørensen, John Dalsgaard
1992-01-01
In large structures it is often necessary to estimate the reliability of the system by use of parallel systems. Optimality-criteria-based algorithms for calculation of the joint design point in a parallel system are described, and efficient active set strategies are developed. Three possible algorithms are tested in two examples against well-known general non-linear optimization algorithms. One of the suggested algorithms in particular appears to be stable and fast.
Radiation-stable polyolefin compositions
International Nuclear Information System (INIS)
Rekers, J.W.
1986-01-01
This invention relates to compositions of olefinic polymers suitable for high energy radiation treatment. In particular, the invention relates to olefinic polymer compositions that are stable to sterilizing dosages of high energy radiation such as a gamma radiation. Stabilizers are described that include benzhydrol and benzhydrol derivatives; these stabilizers may be used alone or in combination with secondary antioxidants or synergists
Monitoring of stable glaucoma patients
K.M. Holtzer-Goor (Kim); N.S. Klazinga (Niek); M.A. Koopmanschap (Marc); H.G. Lemij (Hans); T. Plochg; E. van Sprundel (Esther)
2010-01-01
A high workload for ophthalmologists and long waiting lists for patients challenge the organization of ophthalmic care. Tasks that require less specialized skills, like the monitoring of stable (well-controlled) glaucoma patients, could be transferred from ophthalmologists to other…
Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformati…
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting, where the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
Ship Detection in SAR Image Based on the Alpha-stable Distribution.
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-08-22
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm for spaceborne synthetic aperture radar (SAR) images based on the Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe the statistical characteristics of SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images in which several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable-distribution-based algorithm, and known ship location data from the time of the RADARSAT-1 SAR image acquisition is used to validate the ship detection results. Validation shows improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution.
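The local two-parameter CFAR test applied around each candidate pixel can be sketched in one dimension as follows (a toy using the Gaussian background model, i.e., the baseline the paper improves on by substituting Alpha-stable statistics; the window sizes and threshold are illustrative assumptions):

```python
import statistics

def two_param_cfar(signal, guard=2, train=8, t=4.0):
    """Two-parameter CFAR sketch: a pixel is a detection if it exceeds
    the local background mean by more than t local standard deviations,
    where mean and std are estimated from training cells outside a
    guard window around the pixel under test."""
    hits = []
    for i in range(len(signal)):
        left = signal[max(0, i - guard - train): max(0, i - guard)]
        right = signal[i + guard + 1: i + guard + 1 + train]
        bg = left + right
        if len(bg) < 2:
            continue
        mu = statistics.fmean(bg)
        sigma = statistics.stdev(bg) or 1e-12
        if (signal[i] - mu) / sigma > t:
            hits.append(i)
    return hits

clutter = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 0.9, 1.0, 1.2,
           9.0,                                  # bright ship-like pixel
           1.1, 0.9, 1.0, 1.2, 1.0, 0.9, 1.1, 1.0, 0.8]
print(two_param_cfar(clutter))  # → [10]
```

When the clutter is spiky rather than Gaussian, the estimated mean and standard deviation mischaracterize the tail, which is precisely the failure mode motivating the Alpha-stable model in the paper.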
Molecular dynamics algorithms for path integrals at constant pressure
Martyna, Glenn J.; Hughes, Adam; Tuckerman, Mark E.
1999-02-01
Extended system path integral molecular dynamics algorithms have been developed that can generate efficiently averages in the quantum mechanical canonical ensemble [M. E. Tuckerman, B. J. Berne, G. J. Martyna, and M. L. Klein, J. Chem. Phys. 99, 2796 (1993)]. Here, the corresponding extended system path integral molecular dynamics algorithms appropriate to the quantum mechanical isothermal-isobaric ensembles with isotropic-only and full system cell fluctuations are presented. The former ensemble is employed to study fluid systems which do not support shear modes while the latter is employed to study solid systems. The algorithms are constructed by deriving appropriate dynamical equations of motions and developing reversible multiple time step algorithms to integrate the equations numerically. Effective parallelization schemes for distributed memory computers are presented. The new numerical methods are tested on model (a particle in a periodic potential) and realistic (liquid and solid para-hydrogen and liquid butane) systems. In addition, the methodology is extended to treat the path integral centroid dynamics scheme, [J. Cao and G. A. Voth, J. Chem. Phys. 99, 10070 (1993)], a novel method which is capable of generating semiclassical approximations to quantum mechanical time correlation functions.
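The reversible multiple time step integration these algorithms build on can be sketched with a minimal r-RESPA-style step (a generic textbook scheme, not the paper's extended-system equations; the force split and parameters below are illustrative):

```python
def respa_step(q, p, fast_f, slow_f, dt, n_inner, m=1.0):
    """Reversible multiple-time-step step: the cheap 'fast' force is
    integrated with a small inner step between two half-kicks of the
    expensive 'slow' force."""
    p += 0.5 * dt * slow_f(q)         # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):          # inner velocity-Verlet loop
        p += 0.5 * h * fast_f(q)
        q += h * p / m
        p += 0.5 * h * fast_f(q)
    p += 0.5 * dt * slow_f(q)         # outer half-kick (slow force)
    return q, p

# Toy split: stiff spring (fast) plus weak spring (slow).
k_fast, k_slow = 100.0, 1.0
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = respa_step(q, p, lambda x: -k_fast * x, lambda x: -k_slow * x,
                      dt=0.05, n_inner=10)
energy = 0.5 * p * p + 0.5 * (k_fast + k_slow) * q * q
print(round(energy, 2))  # stays near the initial value 50.5
```

Because the composition is symmetric and each sub-step is symplectic, the total energy stays bounded over long runs even though the outer step is far too large for the fast motion alone.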
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo…
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its
An algorithm for symplectic implicit Taylor-map tracking
International Nuclear Information System (INIS)
Yan, Y.; Channell, P.; Syphers, M.
1992-10-01
An algorithm has been developed for converting an 'order-by-order symplectic' Taylor map that is truncated to an arbitrary order (and is thus not exactly symplectic) into a Courant-Snyder matrix and a symplectic implicit Taylor map for symplectic tracking. This algorithm is implemented using differential algebras, and it is numerically stable and fast. Thus, lifetime charged-particle tracking for large hadron colliders, such as the Superconducting Super Collider, is now made possible.
A new parallelization algorithm of ocean model with explicit scheme
Fu, X. D.
2017-08-01
This paper focuses on the parallelization of an ocean model with an explicit scheme, one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of the explicit scheme is that the calculation is simple: the value at a given grid point depends only on values at the previous time step, which means that one does not need to solve sparse linear equations when solving the governing equations of the ocean model. Exploiting this property, this paper designs a parallel algorithm named halo cells update, which requires only tiny modifications of the original ocean model and leaves its space step and time step unchanged; the model is parallelized by designing a transmission module between sub-domains. The paper takes GRGO (Global Reduced Gravity Ocean model) as an example to implement the parallelization with halo update. The results demonstrate that higher speedups can be achieved at different problem sizes.
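The halo-cells idea can be demonstrated with a serial emulation (a 1D three-point diffusion stencil standing in for the ocean model's explicit update; the stencil, coefficient, and two-sub-domain split are illustrative assumptions, not GRGO's actual equations):

```python
def step(u, c=0.25):
    """One explicit update of a 3-point diffusion stencil: each value
    depends only on the previous time level (end values held fixed)."""
    return [u[0]] + [u[i] + c * (u[i - 1] - 2 * u[i] + u[i + 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

def step_with_halos(u, parts, c=0.25):
    """Same update on `parts` sub-domains with one halo cell per side,
    filled from the neighbouring sub-domain before each step (the role
    of the transmission module between sub-domains)."""
    size = len(u) // parts
    subs = [u[k * size:(k + 1) * size] for k in range(parts)]
    out = []
    for k, s in enumerate(subs):
        left = subs[k - 1][-1] if k > 0 else s[0]           # halo exchange
        right = subs[k + 1][0] if k < parts - 1 else s[-1]
        ext = [left] + s + [right]
        new = [ext[i] + c * (ext[i - 1] - 2 * ext[i] + ext[i + 1])
               for i in range(1, len(ext) - 1)]
        if k == 0:
            new[0] = s[0]             # physical boundaries stay fixed
        if k == parts - 1:
            new[-1] = s[-1]
        out.extend(new)
    return out

u = [0.0] * 4 + [10.0] * 4
print(step(u) == step_with_halos(u, 2))  # → True
```

Because every cell needs only previous-time-level neighbours, exchanging one halo cell per boundary before each step makes the decomposed update bit-for-bit identical to the global one; in a real GPU or MPI implementation each sub-domain would advance independently after the exchange.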
Discontinuous Galerkin algorithms for fully kinetic plasmas
Juno, J.; Hakim, A.; TenBarge, J.; Shi, E.; Dorland, W.
2018-01-01
We present a new algorithm for the discretization of the non-relativistic Vlasov-Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge-Kutta method. Since the Vlasov equation in the Vlasov-Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.
Network-Oblivious Algorithms
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivi...
Graph Colouring Algorithms
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
Indian Academy of Sciences (India)
Computing connectivities between all pairs of vertices: good algorithms with respect to both space and time exist to compute the exact solution. Computing all-pairs distances: good algorithms with respect to both space and time exist, but only approximate solutions can be found. Optimal bipartite matchings: an optimal matching need not always exist.
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction, mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the 'gradient of…
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Algorithms – Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Algorithms – Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8, August 1997, pp. 6–17. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/08/0006-0017
Introduction to Algorithms -14 ...
Indian Academy of Sciences (India)
As elaborated in the earlier articles, algorithms must be written in an unambiguous formal way. Algorithms intended for automatic execution by computers are called programs, and the formal notations used to write programs are called programming languages. The concept of a programming language has been around…
Taniman, R.O.; Sikkes, B.; van Bochove, A.C.; de Boer, Pieter-Tjerk
In this paper, an adaptive subcarrier assignment method based on a stable matching algorithm is considered. A performance comparison with other assignment methods (Hungarian-algorithm-based, contiguous, and interleaved) shows that our assignment method, while of relatively low complexity, resulted in a…
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet much of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites' recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions' regulation of algorithms and algorithms' regulation of our society.
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per time step keeps the local error within the required tolerance.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of leaders in social groups is used as an inspiration for the evolutionary technique, which is designed with a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions along with the energies and geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over its classical counterpart.
Towards stable acceleration in LINACS
Dubrovskiy, A D
2014-01-01
Ultra-stable and -reproducible high-energy particle beams with short bunches are needed in novel linear accelerators and, in particular, in the Compact Linear Collider CLIC. A passive beam phase stabilization system based on a bunch compression with a negative transfer matrix element R56 and acceleration at a positive off-crest phase is proposed. The motivation and expected advantages of the proposed scheme are outlined.
Modelling of dynamically stable AR-601M robot locomotion in Simulink
Directory of Open Access Journals (Sweden)
Khusainov Ramil
2016-01-01
Humanoid robots will gradually play an important role in our daily lives. Currently, research on anthropomorphic robots and biped locomotion is one of the most important problems in the field of mobile robotics, and the development of reliable control algorithms for them is a challenging task. In this research, two algorithms for stable walking of the Russian anthropomorphic robot AR-601M with 41 Degrees of Freedom (DoF) are investigated. To achieve human-like dynamically stable locomotion, the 6 DoF in each robot leg are controlled with the Virtual Height Inverted Pendulum and Preview control methods.
Trilateral market coupling. Algorithm appendix
International Nuclear Information System (INIS)
2006-03-01
…each local market. The Market Coupling algorithm provides as an output for each market: the set of accepted Block Orders; the Net Position for each Settlement Period of the following day; and the price (MCP) for each Settlement Period of the following day. The results of the Market Coupling algorithm are consistent with a number of 'High Level Properties', which can be divided into two subsets: Market Coupling High Level Properties (constraints that the Market Results fulfill for each Settlement Period) and Exchanges High Level Properties (constraints that the Market Results must fulfill for each Settlement Period, reflecting the requirements of individual participants trading on the exchanges). Using the ATCs and NECs, the Market Coupling algorithm can determine for each Settlement Period the Price and Net Position of each market. A NEC is built for a given set of accepted Block Orders (Winning Subset). When a set of NECs is used to determine the prices and Net Positions of each market, the set of prices returned may well not be compatible with the assumed Winning Subset; the Winning Subset then needs to be updated and the calculations run again with the derived new NEC, and this procedure is repeated until a stable solution is found. As a consequence, the Market Coupling algorithm involves iterations between two modules: the Coordination Module, which is in charge of the centralized computations, and the Block Selector of each exchange, which performs the decentralized computations. The iterative nature of the algorithm derives from the treatment of Block Orders. The data flows and calculations of the iterative algorithm are described in the rest of the document.
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
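The viscous-friction-like alignment term the paper highlights can be isolated in a minimal sketch (a 1D toy with fixed positions; the radius, gain, and time step are illustrative assumptions, not the paper's model):

```python
def align_step(pos, vel, r=1.0, c_frict=0.5, dt=0.1):
    """One application of a viscous-friction-like alignment term: each
    agent's velocity relaxes toward the mean velocity of neighbours
    within radius r."""
    new = []
    for i, (p, v) in enumerate(zip(pos, vel)):
        nbrs = [vel[j] for j, q in enumerate(pos) if j != i and abs(q - p) <= r]
        if nbrs:
            v = v + c_frict * (sum(nbrs) / len(nbrs) - v) * dt
        new.append(v)
    return new

pos = [0.0, 0.5, 1.0]
vel = [1.0, -1.0, 0.5]
for _ in range(300):
    vel = align_step(pos, vel)
print([round(v, 3) for v in vel])  # → [0.167, 0.167, 0.167]
```

Repeated application damps velocity differences between neighbours until the group moves at a common velocity, which is the stabilizing effect the authors exploit against noise and delay in the real quadcopter system.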
An efficient algorithm for corona simulation with complex chemical models
Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto
2017-05-01
The simulation of cold plasma discharges is a leading field of applied sciences with many applications ranging from pollutant control to surface treatment. Many of these applications call for the development of novel numerical techniques to implement fully three-dimensional corona solvers that can utilize complex and physically detailed chemical databases. This is a challenging task since it multiplies the difficulties inherent to a three-dimensional approach by the complexity of databases comprising tens of chemical species and hundreds of reactions. In this paper a novel approach, capable of reducing significantly the computational burden, is developed. The proposed method is based on a proper time stepping algorithm capable of decomposing the original problem into simpler ones: each of them has then been tackled with either finite element, finite volume or ordinary differential equations solvers. This last solver deals with the chemical model and its efficient implementation is one of the main contributions of this work.
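The decomposition idea can be illustrated with Strang operator splitting (an assumption about the flavour of time stepping meant; in the actual solver each sub-problem is handled by a finite element, finite volume, or ODE solver rather than an exact exponential, and the operators do not commute):

```python
import math

def strang_step(u, a, b, dt):
    """Second-order Strang splitting for the scalar toy du/dt = a*u + b*u:
    half a step of sub-problem A, a full step of B, then half of A again.
    Each sub-step is solved exactly here; in a PDE solver each would be
    a separate, simpler solve (transport vs. chemistry)."""
    u *= math.exp(a * dt / 2)   # half step of sub-problem A
    u *= math.exp(b * dt)       # full step of sub-problem B
    return u * math.exp(a * dt / 2)

u, T, n = 1.0, 1.0, 100
for _ in range(n):
    u = strang_step(u, a=-3.0, b=1.0, dt=T / n)
print(round(u, 6))  # exact solution is exp(-2) ≈ 0.135335
```

Splitting lets the stiff chemical source term be advanced by a dedicated ODE integrator while the spatial operators are handled separately, which is the efficiency lever the paper's time stepping exploits.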
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], and introductions to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
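The standard firefly algorithm that MoFA modifies can be sketched as follows (a generic textbook formulation, not the paper's MoFA variant; all parameter values, the search domain, and the alpha-decay refinement are illustrative assumptions):

```python
import math
import random

def firefly_minimize(f, dim=2, n=20, iters=80, beta0=1.0, gamma=0.1,
                     alpha=0.2, seed=42):
    """Standard firefly algorithm sketch: dimmer fireflies move toward
    brighter ones with an attractiveness that decays with distance,
    plus a small random walk."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    light = [f(x) for x in xs]              # lower f means brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:     # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
                    light[i] = f(xs[i])
        alpha *= 0.95                       # decay randomization (a common refinement)
    k = min(range(n), key=lambda i: light[i])
    return xs[k], light[k]

def sphere(x):
    return sum(v * v for v in x)

x_best, f_best = firefly_minimize(sphere)
print(round(f_best, 4))
```

The brightest firefly never moves, so the population's best value can only improve; decaying the randomization term over iterations tightens the final cluster around the optimum.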
Lewis, Dustin A.; Blum, Gabriella; Modirzadeh, Naz K.
2016-01-01
In this briefing report, we introduce a new concept — war algorithms — that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed co...
Françoise Benz
2004-01-01
ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require getting an optimum or near-to-optimum solution. Optimization can be used to solve a lot of different problems such as network design, sets and partitions, storage and retrieval or scheduling. On the other hand, in nature, there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years several attempts have been made to develop optimization algorithms, which simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on natural annealing processes or Evolutionary Computation, based on biological evolution processes. Geneti...
Stable Hemiaminals: 2-Aminopyrimidine Derivatives
Directory of Open Access Journals (Sweden)
Anna Kwiecień
2015-08-01
Stable hemiaminals can be obtained in a one-pot reaction between 2-aminopyrimidine and nitrobenzaldehyde derivatives. Ten new hemiaminals have been obtained, six of them in the crystal state. The molecular stability of these intermediates results from the presence of both electron-withdrawing nitro groups as substituents on the phenyl ring and the pyrimidine ring, so no further stabilisation by intramolecular interaction is required. Hemiaminal molecules possess a tetrahedral carbon atom constituting a stereogenic centre. As a result of crystallisation in centrosymmetric space groups, both enantiomers are present in the crystal structure.
Organic synthesis with stable isotopes
International Nuclear Information System (INIS)
Daub, G.H.; Kerr, V.N.; Williams, D.L.; Whaley, T.W.
1978-01-01
Some general considerations concerning organic synthesis with stable isotopes are presented. Illustrative examples are described and discussed. The examples include DL-2-amino-3-methyl-¹³C-butanoic-3,4-¹³C₂ acid (DL-valine-¹³C₃); methyl oleate-1-¹³C; thymine-2,6-¹³C₂; 2-aminoethanesulfonic-¹³C acid (taurine-¹³C); D-glucose-6-¹³C; DL-2-amino-3-methylpentanoic-3,4-¹³C₂ acid (DL-isoleucine-¹³C₂); benzidine-¹⁵N₂; and 4-ethylsulfonyl-1-naphthalene-sulfonamide-¹⁵N.
Stable agents for imaging investigations
International Nuclear Information System (INIS)
Tofe, A.J.
1976-01-01
This invention concerns highly stable compounds useful in preparing technetium-99m based scintiscanning exploration agents. The compounds of this invention include a pertechnetate reducing agent or a solution of oxidized pertechnetate and an efficient proportion, sufficient to stabilize the compounds in the presence of oxygen and of radiolysis products, of ascorbic acid or a pharmaceutically acceptable salt or ester of this acid. The invention also concerns a perfected process for preparing a technetium-based exploration agent, consisting of codissolving the ascorbic acid, or a pharmaceutically acceptable salt or ester of such an acid, and a pertechnetate reducing agent in a solution of oxidized pertechnetate.
Stable locality sensitive discriminant analysis for image recognition.
Gao, Quanxue; Liu, Jingjing; Cui, Kai; Zhang, Hailin; Wang, Xiaogang
2014-06-01
Locality Sensitive Discriminant Analysis (LSDA) is one of the prevalent discriminant approaches based on manifold learning for dimensionality reduction. However, LSDA ignores the intra-class variation that characterizes the diversity of data, resulting in an unstable representation of the intra-class geometrical structure and limiting the algorithm's performance. In this paper, a novel approach is proposed, namely stable locality sensitive discriminant analysis (SLSDA), for dimensionality reduction. SLSDA constructs an adjacency graph to model the diversity of data and then integrates it into the objective function of LSDA. Experimental results on five databases show the effectiveness of the proposed approach.
On a numerical algorithm for uncertain system
African Journals Online (AJOL)
A numerical method for computing stable control signals for system with bounded input disturbance is developed. The algorithm is an elaboration of the gradient technique and variable metric method for computing control variables in linear and non-linear optimization problems. This method is developed for an integral ...
Lakshminarasimhulu, Pasupulati; Madura, Jeffry D.
2002-04-01
A domain decomposition algorithm for molecular dynamics simulation of atomic and molecular systems with arbitrary shape and non-periodic boundary conditions is described. The molecular dynamics program uses the cell multipole method for efficient calculation of long-range electrostatic interactions and a multiple time step method to allow larger time steps. The system is enclosed in a cube, and the cube is divided into a hierarchy of cells. The deepest-level cells are assigned to processors such that each processor has contiguous cells, and static load balancing is achieved by redistributing the cells so that each processor has approximately the same number of atoms. The resulting domains have irregular shape and may have more than 26 neighbors. Atoms constituting bond angles and torsion angles may straddle more than two processors. An efficient strategy is devised for initial assignment and subsequent reassignment of such multiple-atom potentials to processors. At each step, computation is overlapped with communication, greatly reducing the effect of communication overhead on parallel performance. The algorithm is tested on a spherical cluster of water molecules, a hexasaccharide, and an enzyme, both solvated by a spherical cluster of water molecules. In each case a spherical boundary containing oxygen atoms with only repulsive interactions is used to prevent evaporation of water molecules. The algorithm shows excellent parallel efficiency even for a small number of cells/atoms per processor.
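The static load-balancing step described above, redistributing contiguous cells so that each processor holds approximately the same number of atoms, can be sketched as a greedy one-dimensional partition. This is only a toy illustration with invented cell counts, not the authors' implementation:

```python
def balance_cells(atom_counts, n_procs):
    """Split per-cell atom counts into n_procs contiguous chunks whose
    atom totals are as even as a single greedy sweep allows."""
    total = sum(atom_counts)
    target = total / n_procs
    domains, current, acc = [], [], 0
    for i, count in enumerate(atom_counts):
        current.append(i)
        acc += count
        # close this domain once it reaches its fair share, but keep
        # enough cells in reserve for the remaining processors
        remaining_cells = len(atom_counts) - i - 1
        if (acc >= target and len(domains) < n_procs - 1
                and remaining_cells >= n_procs - 1 - len(domains)):
            domains.append(current)
            current, acc = [], 0
    domains.append(current)
    return domains

# Example: 10 cells with varying atom counts distributed over 3 processors
parts = balance_cells([40, 10, 25, 5, 30, 20, 15, 35, 10, 10], 3)
# → [[0, 1, 2], [3, 4, 5, 6], [7, 8, 9]]
```

A single greedy sweep like this keeps domains contiguous, which is the property the paper's cell assignment relies on; production balancers would also weigh communication surface, not just atom counts.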
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu
2017-02-01
The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.
Static Analysis Numerical Algorithms
2016-04-01
STATIC ANALYSIS OF NUMERICAL ALGORITHMS. Final technical report by Kestrel Technology, LLC, April 2016; contract number FA8750-14-C-...; dates covered Nov 2013 - Nov 2015. Approved for public release; distribution... The effort worked with Honeywell Aerospace Advanced Technology to combine model-based development of complex avionics control software with static analysis of the...
Improved Chaff Solution Algorithm
2009-03-01
As part of a technology demonstration program (TDP) on the integration of sensors and shipboard weapon systems (SISWS), an algorithm was developed to automatically determine...
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc...
Image Segmentation Algorithms Overview
Yuheng, Song; Hao, Yan
2017-01-01
The technology of image segmentation is widely used in medical image processing, face recognition, pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNN, etc. This paper analyzes and summarizes these algorithms of image segmentation and compares the advantages and disadvantages of different algorithms. Finally, we make a predi...
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Parallel algorithms and architecture for computation of manipulator forward dynamics
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), the O(n exp 2), and the O(n exp 3) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class NC and that the time and processor bounds are of O((log n) exp 2) and O(n exp 4), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n exp 3) serial algorithms. Parallel computation of the O(n exp 3) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
With the development of social services and rising living standards, there is an urgent need for positioning technology able to adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and security of other items. Using RFID technology for localization is a new direction pursued by various research institutions and scholars, since RFID positioning systems offer stability, small error, and low cost; their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network-based location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, deficiencies in the algorithms are pointed out, requirements for follow-up study are put forward, and a vision of better future RFID positioning technology is given.
Stable cosmology in chameleon bigravity
De Felice, Antonio; Mukohyama, Shinji; Oliosi, Michele; Watanabe, Yota
2018-02-01
The recently proposed chameleonic extension of bigravity theory, by including a scalar field dependence in the graviton potential, avoids several fine-tunings found to be necessary in usual massive bigravity. In particular it ensures that the Higuchi bound is satisfied at all scales, that no Vainshtein mechanism is needed to satisfy Solar System experiments, and that the strong coupling scale is always above the scale of cosmological interest all the way up to the early Universe. This paper extends the previous work by presenting a stable example of cosmology in the chameleon bigravity model. We find a set of initial conditions and parameters such that the derived stability conditions on general flat Friedmann background are satisfied at all times. The evolution goes through radiation-dominated, matter-dominated, and de Sitter eras. We argue that the parameter space allowing for such a stable evolution may be large enough to encompass an observationally viable evolution. We also argue that our model satisfies all known constraints due to gravitational wave observations so far and thus can be considered as a unique testing ground of gravitational wave phenomenologies in bimetric theories of gravity.
A blind matching algorithm for cognitive radio networks
Hamza, Doha R.
2016-08-15
We consider a cognitive radio network where secondary users (SUs) are allowed access time to the spectrum belonging to the primary users (PUs) provided that they relay primary messages. PUs and SUs negotiate over allocations of the secondary power that will be used to relay PU data. We formulate the problem as a generalized assignment market to find an epsilon pairwise stable matching. We propose a distributed blind matching algorithm (BLMA) to produce the pairwise-stable matching plus the associated power allocations. We stipulate a limited information exchange in the network so that agents only calculate their own utilities but no information is available about the utilities of any other users in the network. We establish convergence to epsilon pairwise stable matchings in finite time. Finally we show that our algorithm exhibits a limited degradation in PU utility when compared with the Pareto optimal results attained using perfect information assumptions. © 2016 IEEE.
Stable isotope mass spectrometry in petroleum exploration
International Nuclear Information System (INIS)
Mathur, Manju
1997-01-01
The stable isotope mass spectrometry plays an important role to evaluate the stable isotopic composition of hydrocarbons. The isotopic ratios of certain elements in petroleum samples reflect certain characteristics which are useful for petroleum exploration
Improvement of Parallel Algorithm for MATRA Code
International Nuclear Information System (INIS)
Kim, Seong-Jin; Seo, Kyong-Won; Kwon, Hyouk; Hwang, Dae-Hyun
2014-01-01
A feasibility study to parallelize the MATRA code was conducted in KAERI early this year. As a result, a parallel algorithm for the MATRA code has been developed to decrease the considerable computing time required to solve a big-size problem, such as a whole-core pin-by-pin problem of a general PWR reactor, and to improve the overall performance of multi-physics coupling calculations. It was shown that the performance of the MATRA code was greatly improved by implementing the parallel algorithm using MPI communication. For 1/8-core and whole-core problems for the SMART reactor, the speedup was evaluated as about 10 when 25 processors were used. However, it was also shown that the performance deteriorated as the axial node number increased. In this paper, the procedure of communication between processors is optimized to improve the previous parallel algorithm. To address the performance deterioration of the parallelized MATRA code, a new communication algorithm between processors is presented. It was shown that the speedup was improved and stable regardless of the axial node number.
Stable rotating dipole solitons in nonlocal media
DEFF Research Database (Denmark)
Lopez-Aguayo, Servando; Skupin, Stefan; Desyatnikov, Anton S.
2006-01-01
We present the first example of stable rotating two-soliton bound states in nonlinear optical media with nonlocal response. We show that, in contrast to media with local response, nonlocality opens possibilities to generate stable azimuthons.
Directory of Open Access Journals (Sweden)
Juan Carlos Figueroa García
2011-12-01
The presented approach uses an iterative algorithm which finds stable solutions to problems with fuzzy parameters in both sides of an FLP problem. The algorithm is based on the soft constraints method proposed by Zimmermann, combined with an iterative procedure that yields a single optimal solution.
Uses of stable isotopes in fish ecology
Analyses of fish tissues (other than otoliths) for stable isotope ratios can provide substantial information on fish ecology, including physiological ecology. Stable isotopes of nitrogen and carbon frequently are used to determine the mix of diet sources for consumers. Stable i...
Periodicity of the stable isotopes
Boeyens, J C A
2003-01-01
It is demonstrated that all stable (non-radioactive) isotopes are formally interrelated as the products of systematically adding alpha particles to four elementary units. The region of stability against radioactive decay is shown to obey a general trend based on number theory and contains the periodic law of the elements as a special case. This general law restricts the number of what may be considered as natural elements to 100 and is based on a proton:neutron ratio that matches the golden ratio, characteristic of biological and crystal growth structures. Different forms of the periodic table inferred at other proton:neutron ratios indicate that the electronic configuration of atoms is variable and may be a function of environmental pressure. Cosmic consequences of this postulate are examined. (author)
Stable States of Biological Organisms
Yukalov, V. I.; Sornette, D.; Yukalova, E. P.; Henry, J.-Y.; Cobb, J. P.
2009-04-01
A novel model of biological organisms is advanced, treating an organism as a self-consistent system subject to a pathogen flux. The principal novelty of the model is that it describes not some parts, but a biological organism as a whole. The organism is modeled by a five-dimensional dynamical system. The organism homeostasis is described by the evolution equations for five interacting components: healthy cells, ill cells, innate immune cells, specific immune cells, and pathogens. The stability analysis demonstrates that, in a wide domain of the parameter space, the system exhibits robust structural stability. There always exist four stable stationary solutions characterizing four qualitatively differing states of the organism: alive state, boundary state, critical state, and dead state.
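A minimal sketch of integrating such a five-component system is shown below; the rate equations and all coefficients are invented for illustration only and are not those of the paper. The five state variables follow the abstract: healthy cells H, ill cells I, innate immune cells N, specific immune cells S, and pathogens P.

```python
def step(state, dt=0.01):
    """One forward-Euler step of a toy five-component homeostasis model.
    All rate constants are illustrative placeholders."""
    H, I, N, S, P = state
    dH = 0.5 * H * (1 - H) - 0.3 * H * P     # healthy cells: logistic growth, lost to infection
    dI = 0.3 * H * P - 0.4 * I * (N + S)     # ill cells: created by infection, cleared by immunity
    dN = 0.2 * P - 0.1 * N                   # innate immunity tracks pathogen load
    dS = 0.3 * I - 0.1 * S                   # specific immunity responds to ill cells
    dP = 0.6 * P - 0.5 * P * (N + S)         # pathogens grow, killed by immune cells
    return [H + dt * dH, I + dt * dI, N + dt * dN, S + dt * dS, P + dt * dP]

state = [0.9, 0.0, 0.1, 0.0, 0.2]  # near-healthy initial condition with a small pathogen flux
for _ in range(5000):
    state = step(state)
# the trajectory stays bounded; the paper's analysis concerns which of the
# four stationary states (alive, boundary, critical, dead) such dynamics reach
```

The negative feedback (immune cells grow with pathogen load and suppress it) is what allows stationary solutions; the paper's stability analysis works out which ones are robust.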
Theory of stable allocations II
Directory of Open Access Journals (Sweden)
Pantelić Svetlana
2015-01-01
The Swedish Royal Academy awarded the 2012 Nobel Prize in Economics to Lloyd Shapley and Alvin Roth for the theory of stable allocations and the practice of market design. These two American researchers worked independently from each other, combining basic theory and empirical investigations. Through their experiments and practical design they generated a flourishing field of research and improved the performance of many markets. Shapley provided the fundamental theoretical contribution to this field of research, whereas Roth, a professor at Harvard University in Boston, developed and upgraded these theoretical investigations by applying them to the American market for medical doctors. Namely, their research helps explain the market processes at work, for instance, when doctors are assigned to hospitals, students to schools, and human organs for transplant to recipients.
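The deferred-acceptance procedure of Gale and Shapley, the core of the stable-allocation theory the prize recognized, can be sketched in a few lines; the doctor and hospital preference lists here are invented for illustration:

```python
def gale_shapley(doctor_prefs, hospital_prefs):
    """Doctor-proposing deferred acceptance; returns a stable matching
    as a dict hospital -> doctor (one position per hospital)."""
    free = list(doctor_prefs)                  # doctors not yet matched
    next_choice = {d: 0 for d in doctor_prefs} # next hospital each doctor will try
    match = {}                                 # hospital -> doctor
    rank = {h: {d: i for i, d in enumerate(p)} for h, p in hospital_prefs.items()}
    while free:
        d = free.pop()
        h = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        if h not in match:
            match[h] = d                       # hospital takes its first applicant
        elif rank[h][d] < rank[h][match[h]]:
            free.append(match[h])              # hospital upgrades; old doctor is free again
            match[h] = d
        else:
            free.append(d)                     # rejected; d proposes to the next hospital
    return match

doctors = {"ann": ["mercy", "city"], "ben": ["city", "mercy"]}
hospitals = {"mercy": ["ben", "ann"], "city": ["ben", "ann"]}
matching = gale_shapley(doctors, hospitals)
# → {'city': 'ben', 'mercy': 'ann'}
```

No doctor-hospital pair would jointly prefer each other to their assignments, which is exactly the stability notion the laureates' theory formalizes.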
Stable massive particles at colliders
Energy Technology Data Exchange (ETDEWEB)
Fairbairn, M.; /Stockholm U.; Kraan, A.C.; /Pennsylvania U.; Milstead, D.A.; /Stockholm U.; Sjostrand, T.; /Lund U.; Skands, P.; /Fermilab; Sloan, T.; /Lancaster U.
2006-11-01
We review the theoretical motivations and experimental status of searches for stable massive particles (SMPs) which could be sufficiently long-lived as to be directly detected at collider experiments. The discovery of such particles would address a number of important questions in modern physics including the origin and composition of dark matter in the universe and the unification of the fundamental forces. This review describes the techniques used in SMP-searches at collider experiments and the limits so far obtained on the production of SMPs which possess various colour, electric and magnetic charge quantum numbers. We also describe theoretical scenarios which predict SMPs, the phenomenology needed to model their production at colliders and interactions with matter. In addition, the interplay between collider searches and open questions in cosmology such as dark matter composition are addressed.
A Parallel Butterfly Algorithm
Poulson, Jack
2014-02-04
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Although the concept of algorithms was established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity, or the space of algorithmic agency. This is the space or the medium – following Luhmann's form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled "extimate" writing process, human initiative and algorithmic speculation can no longer be clearly divided. An observation is attempted of the defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.
Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort
Markina, Margarita; Buzdalov, Maxim
2017-01-01
Many production-grade algorithms benefit from combining an asymptotically efficient algorithm for solving big problem instances, by splitting them into smaller ones, with an asymptotically inefficient algorithm with a very small implementation constant for solving small subproblems. A well-known example is stable sorting, where mergesort is often combined with insertion sort to achieve a constant but noticeable speed-up. We apply this idea to non-dominated sorting. Namely, we combine the divide-and-conquer algorithm with Best Order Sort...
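The stable-sorting example mentioned above is easy to make concrete: a mergesort that hands small subarrays to insertion sort keeps stability while gaining a constant-factor speed-up. A minimal sketch follows; the cutoff of 16 is a typical but arbitrary choice:

```python
CUTOFF = 16  # below this size, insertion sort's low overhead wins

def insertion_sort(a, key):
    """In-place stable insertion sort on a list copy."""
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0 and key(a[j]) > key(x):   # strict '>' preserves stability
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_sort(a, key=lambda x: x):
    """Mergesort that delegates small subproblems to insertion sort."""
    if len(a) <= CUTOFF:
        return insertion_sort(list(a), key)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid], key), hybrid_sort(a[mid:], key)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(right[j]) < key(left[i]):       # on ties, take left first: stable merge
            out.append(right[j]); j += 1
        else:
            out.append(left[i]); i += 1
    return out + left[i:] + right[j:]

data = [(3, "a"), (1, "b"), (3, "c"), (2, "d")] * 10
result = hybrid_sort(data, key=lambda t: t[0])  # equal keys keep their input order
```

The same pattern appears in production sorts such as Timsort, which runs insertion sort on short "minruns" before merging; the paper transfers the idea to non-dominated sorting.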
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. "Handbook of Memetic Algorithms" organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Kiyohara, Shin; Mizoguchi, Teruyasu
2018-03-01
Grain boundary segregation of dopants plays a crucial role in materials properties. To investigate the dopant segregation behavior at the grain boundary, an enormous number of combinations has to be considered when multiple dopants segregate at complex grain boundary structures. Here, two data mining techniques, random-forests regression and the genetic algorithm, were applied to determine stable segregation sites at grain boundaries efficiently. Using the random-forests method, a predictive model was constructed from 2% of the segregation configurations, and it has been shown that this model could determine the stable segregation configurations. Furthermore, the genetic algorithm also successfully determined the most stable segregation configuration with great efficiency. We demonstrate that these approaches are quite effective for investigating dopant segregation behavior at grain boundaries.
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity mention in a text and linking it to an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph and machine learning approaches is proposed, following the stated assumptions about the interrelations of named entities within a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Because of processing-power limitations, solving this task directly is infeasible, so a modification is proposed. An independent solution based on machine learning algorithms cannot be built due to the small volume of training datasets relevant to the NEL task; however, machine learning can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in the same context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which makes further work in this direction promising. The main directions of development are proposed in order to increase the accuracy and performance of the system.
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced. The algorithm essentially provides an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weights)
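The expansion/inflation alternation at the core of the MCL process can be sketched in a few lines; this is a minimal illustration, not van Dongen's full implementation, and the parameter values and the attractor-based cluster extraction are simplified assumptions:

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iterations=50):
    """Markov Cluster sketch: alternate expansion (matrix power) with
    inflation (elementwise power plus column renormalization)."""
    M = adjacency.astype(float) + np.eye(len(adjacency))  # add self-loops
    M /= M.sum(axis=0, keepdims=True)                     # column-stochastic
    for _ in range(iterations):
        M = np.linalg.matrix_power(M, expansion)          # expansion step
        M = M ** inflation                                # inflation step
        M /= M.sum(axis=0, keepdims=True)
    # nodes sharing the same dominant row ("attractor") form one cluster
    clusters = {}
    for node in range(len(M)):
        attractor = int(np.argmax(M[:, node]))
        clusters.setdefault(attractor, []).append(node)
    return list(clusters.values())
```

Inflation strengthens strong transition probabilities and weakens weak ones, so flow stays trapped inside densely connected regions, which is what makes the iteration converge toward a clustering.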
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation; the topic deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
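One of the parallel patterns named above, the inclusive prefix scan, can be sketched in the Hillis-Steele form; the parallel steps are simulated here with sequential sweeps, so the code shows the data dependencies rather than actual concurrency:

```python
def inclusive_scan(values, op=lambda a, b: a + b):
    """Hillis-Steele inclusive prefix scan: O(log n) doubling steps.
    Each step would run in parallel on real hardware; here every element
    reads from a snapshot of the previous generation to mimic that."""
    result = list(values)
    offset = 1
    while offset < len(result):
        prev = result[:]  # snapshot: all "parallel" reads see the old values
        for i in range(offset, len(result)):
            result[i] = op(prev[i - offset], prev[i])
        offset *= 2       # double the reach each step
    return result
```

The same skeleton computes any associative reduction prefix (sums, products, maxima) by swapping the operator, which is why scans recur across so many parallel algorithms.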
Wireless communications algorithmic techniques
Vitetta, Giorgio; Colavolpe, Giulio; Pancaldi, Fabrizio; Martin, Philippa A
2013-01-01
This book introduces the theoretical elements at the basis of various classes of algorithms commonly employed in the physical layer (and, in part, in MAC layer) of wireless communications systems. It focuses on single user systems, so ignoring multiple access techniques. Moreover, emphasis is put on single-input single-output (SISO) systems, although some relevant topics about multiple-input multiple-output (MIMO) systems are also illustrated.Comprehensive wireless specific guide to algorithmic techniquesProvides a detailed analysis of channel equalization and channel coding for wi
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
To develop a universal gamut mapping algorithm
International Nuclear Information System (INIS)
Morovic, J.
1998-10-01
When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. on a monitor or in print) and these media have different colour ranges (gamuts), it is necessary to have a method for mapping between them. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered to be in some sense universal. In terms of preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at by way of an evolutionary gamut mapping development strategy for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated whereby an important characteristic of this evaluation was that it also considered the performance of algorithms for individual colour regions within the test images used. New algorithms were then developed on their basis, subsequently evaluated and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors - in particular gamut difference. In addition to looking at accuracy, the pleasantness of reproductions obtained
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
Ball, Stanley
1986-01-01
Presents a developmental taxonomy which promotes sequencing activities to enhance the potential of matching these activities with learner needs and readiness, suggesting that the order commonly found in the classroom needs to be inverted. The proposed taxonomy (story, skill, and algorithm) involves problem-solving emphasis in the classroom. (JN)
Ferguson, David L.; Henderson, Peter B.
1987-01-01
Designed initially for use in college computer science courses, the model and computer-aided instructional environment (CAIE) described helps students develop algorithmic problem solving skills. Cognitive skills required are discussed, and implications for developing computer-based design environments in other disciplines are suggested by…
Improved Approximation Algorithm for
Byrka, Jaroslaw; Li, S.; Rybicki, Bartosz
2014-01-01
We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that for any constant k, in polynomial time, delivers solutions of
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exists an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum-energy states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
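The replica-exchange idea reviewed above can be sketched on a one-dimensional toy energy surface; the proposal width, swap schedule, and temperatures below are illustrative assumptions, not the authors' protocol:

```python
import math, random

def replica_exchange(energy, temps, steps=3000, seed=1):
    """Replica-exchange Metropolis MC sketch: one walker per temperature,
    with periodic swap attempts between neighbouring temperatures."""
    rng = random.Random(seed)
    xs = [0.0] * len(temps)
    for step in range(steps):
        # ordinary Metropolis move for each replica at its own temperature
        for k, T in enumerate(temps):
            trial = xs[k] + rng.uniform(-0.5, 0.5)
            d_e = energy(trial) - energy(xs[k])
            if rng.random() < math.exp(min(0.0, -d_e / T)):
                xs[k] = trial
        # every 10 sweeps, attempt a swap between a random neighbour pair,
        # accepted with min(1, exp[(1/Tk - 1/Tk+1)(Ek - Ek+1)])
        if step % 10 == 0:
            k = rng.randrange(len(temps) - 1)
            delta = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (
                energy(xs[k]) - energy(xs[k + 1]))
            if rng.random() < math.exp(min(0.0, delta)):
                xs[k], xs[k + 1] = xs[k + 1], xs[k]
    return xs
```

The high-temperature replicas cross energy barriers easily and, through the swaps, hand those barrier crossings down to the low-temperature replicas, which is the mechanism that defeats trapping in local minima.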
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Algorithmic information theory
Grünwald, P.D.; Vitányi, P.M.B.; Adriaans, P.; van Benthem, J.
2008-01-01
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are
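Kolmogorov complexity itself is uncomputable, but the length of a compressed encoding gives a crude, computable upper bound; this is a standard illustration of the idea, not an example from the chapter itself:

```python
import zlib

def compression_length(data: bytes) -> int:
    """Length of a zlib-compressed encoding of the data: an upper bound
    (up to an additive constant for the decompressor) on the length of
    the shortest program that outputs the data."""
    return len(zlib.compress(data, level=9))
```

A highly regular string compresses to a short description while a random-looking one does not, mirroring the quantitative notion of 'information' the chapter develops.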
Algorithmic information theory
Grünwald, P.D.; Vitányi, P.M.B.
2008-01-01
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are
Indian Academy of Sciences (India)
Introduction to Algorithms: Turtle Graphics. R K Shyamasundar. Series Article. Resonance – Journal of Science Education, Volume 1, Issue 9. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object-oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...
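As a minimal example of a regularization algorithm of the kind analyzed in this thesis, Tikhonov regularization for a linear inverse problem fits in a few lines; this sketch is generic and not taken from the MOORe Tools toolbox:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: minimize
    ||A x - b||^2 + lam * ||x||^2 via the regularized normal equations
    (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

The regularization parameter `lam` trades data fit against solution size; choosing it is exactly the "parameter choice method" problem the abstract mentions, and exploiting structure in `A` (e.g. sparsity or Kronecker form) is what the modular decomposition automates.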
Algorithms for SCC Decomposition
J. Barnat; J. Chaloupka (Jakub); J.C. van de Pol (Jaco)
2008-01-01
We study and improve the OBF technique [Barnat, J. and P. Moravec, Parallel algorithms for finding SCCs in implicitly given graphs, in: Proceedings of the 5th International Workshop on Parallel and Distributed Methods in Verification (PDMC 2006), LNCS (2007)], which was used in
Naive Bayes-Guided Bat Algorithm for Feature Selection
Taha, Ahmed Majid; Mustapha, Aida; Chen, Soong-Der
2013-01-01
When the amount of data and information is said to double every 20 months or so, feature selection becomes highly important and beneficial. Further improvements in feature selection will positively affect a wide array of applications in fields such as pattern recognition, machine learning, or signal processing. In this work, a bio-inspired method, the Bat Algorithm hybridized with a Naive Bayes classifier (BANB), is presented. The performance of the proposed feature selection algorithm was investigated using twelve benchmark datasets from different domains and was compared to three other well-known feature selection algorithms. Discussion focused on four perspectives: number of features, classification accuracy, stability, and feature generalization. The results showed that BANB significantly outperformed the other algorithms in selecting a lower number of features, hence removing irrelevant, redundant, or noisy features while maintaining classification accuracy. BANB also proved more stable than the other methods and is capable of producing more general feature subsets. PMID:24396295
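A heavily simplified sketch of a binary bat search wrapped around a user-supplied fitness function is shown below; the sigmoid transfer function and greedy acceptance rule are assumptions, the loudness/pulse-rate dynamics of the full Bat Algorithm are omitted, and the Naive Bayes fitness used by BANB is replaced by whatever score the caller supplies:

```python
import math, random

def binary_bat_select(num_features, fitness, pop=12, iters=60, seed=3):
    """Simplified binary bat search: velocities are pulled toward the
    best-known subset and mapped to bit flips via a sigmoid transfer."""
    rng = random.Random(seed)
    bats = [[False] * num_features]  # seed with the empty subset as a baseline
    bats += [[rng.random() < 0.5 for _ in range(num_features)]
             for _ in range(pop - 1)]
    vel = [[0.0] * num_features for _ in range(pop)]
    best = max(bats, key=fitness)
    for _ in range(iters):
        for i, bat in enumerate(bats):
            freq = rng.random()                      # random frequency in [0, 1]
            candidate = bat[:]
            for j in range(num_features):
                vel[i][j] += (bat[j] - best[j]) * freq
                if rng.random() < 1.0 / (1.0 + math.exp(-vel[i][j])):
                    candidate[j] = not candidate[j]  # sigmoid-driven bit flip
            if fitness(candidate) >= fitness(bat):   # greedy acceptance
                bats[i] = candidate
        best = max(bats + [best], key=fitness)
    return best
```

In a BANB-style wrapper the fitness would be cross-validated Naive Bayes accuracy, possibly penalized by subset size; the toy fitness in the test below rewards two "informative" features and penalizes the rest.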
A particle tracking velocimetry algorithm based on the Voronoi diagram
Zhang, Yang; Wang, Yuan; Yang, Bin; He, Wenbo
2015-07-01
Particle tracking velocimetry (PTV) algorithms have great applications in tracking discrete particles across successive images. In managing complex flows, classic PTV algorithms typically follow delicate concepts that may carry a higher risk of disturbance from the parameter settings. To avoid such a 'closure problem', a PTV algorithm based on the Voronoi diagram (VD-PTV) is developed. This algorithm has a simple structure, as it is designed to have only one controlling parameter. The VD-PTV is tested using two types of synthetic flows. The results show that the VD-PTV exhibits stable performance with a good accuracy level, independent of parameter pre-setting. Moreover, the VD-PTV demonstrates satisfactory computing speed.
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
Advanced thermally stable jet fuels
Energy Technology Data Exchange (ETDEWEB)
Schobert, H.H.
1999-01-31
The Pennsylvania State University program in advanced thermally stable coal-based jet fuels has five broad objectives: (1) Development of mechanisms of degradation and solids formation; (2) Quantitative measurement of growth of sub-micrometer and micrometer-sized particles suspended in fuels during thermal stressing; (3) Characterization of carbonaceous deposits by various instrumental and microscopic methods; (4) Elucidation of the role of additives in retarding the formation of carbonaceous solids; (5) Assessment of the potential of production of high yields of cycloalkanes by direct liquefaction of coal. Future high-Mach aircraft will place severe thermal demands on jet fuels, requiring the development of novel, hybrid fuel mixtures capable of withstanding temperatures in the range of 400--500 C. In the new aircraft, jet fuel will serve as both an energy source and a heat sink for cooling the airframe, engine, and system components. The ultimate development of such advanced fuels requires a thorough understanding of the thermal decomposition behavior of jet fuels under supercritical conditions. Considering that jet fuels consist of hundreds of compounds, this task must begin with a study of the thermal degradation behavior of select model compounds under supercritical conditions. The research performed by The Pennsylvania State University was focused on five major tasks that reflect the objectives stated above: Task 1: Investigation of the Quantitative Degradation of Fuels; Task 2: Investigation of Incipient Deposition; Task 3: Characterization of Solid Gums, Sediments, and Carbonaceous Deposits; Task 4: Coal-Based Fuel Stabilization Studies; and Task 5: Exploratory Studies on the Direct Conversion of Coal to High Quality Jet Fuels. The major findings of each of these tasks are presented in this executive summary. A description of the sub-tasks performed under each of these tasks and the findings of those studies are provided in the remainder of this volume
Population Games, Stable Games, and Passivity
Directory of Open Access Journals (Sweden)
Michael J. Fox
2013-10-01
Full Text Available The class of “stable games”, introduced by Hofbauer and Sandholm in 2009, has the attractive property of admitting global convergence to equilibria under many evolutionary dynamics. We show that stable games can be identified as a special case of the feedback-system-theoretic notion of a “passive” dynamical system. Motivated by this observation, we develop a notion of passivity for evolutionary dynamics that complements the definition of the class of stable games. Since interconnections of passive dynamical systems exhibit stable behavior, we can make conclusions about passive evolutionary dynamics coupled with stable games. We show how established evolutionary dynamics qualify as passive dynamical systems. Moreover, we exploit the flexibility of the definition of passive dynamical systems to analyze generalizations of stable games and evolutionary dynamics that include forecasting heuristics as well as certain games with memory.
A MEDLINE categorization algorithm
Directory of Open Access Journals (Sweden)
Gehanno Jean-Francois
2006-02-01
Full Text Available Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources
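The metaterm-inference step described above can be sketched as a lookup-and-count; the MeSH-to-metaterm links below are invented for illustration (the real MCA links are hand-selected by medical librarians):

```python
from collections import Counter

# Hypothetical MeSH term -> metaterm links, standing in for the
# librarian-curated link table the MCA actually uses.
LINKS = {
    "Neoplasms": ["oncology"],
    "Radiotherapy": ["oncology", "radiology"],
    "Medical Informatics": ["medical informatics"],
    "Information Storage and Retrieval": ["information science",
                                          "medical informatics"],
}

def categorize(articles):
    """Rank metaterms (specialties) inferred from each article's MeSH
    indexing, by decreasing number of articles they apply to."""
    counts = Counter()
    for mesh_terms in articles:
        inferred = set()
        for term in mesh_terms:
            inferred.update(LINKS.get(term, []))
        counts.update(inferred)  # count each specialty once per article
    return [specialty for specialty, _ in counts.most_common()]
```

Running this over a MEDLINE file's worth of indexing records produces exactly the kind of ranked specialty list the abstract reports (e.g. information science, organization and administration, medical informatics).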
A MEDLINE categorization algorithm
Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit
2006-01-01
Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
Frequency-Dependent FDTD Algorithm Using Newmark’s Method
Directory of Open Access Journals (Sweden)
Bing Wei
2014-01-01
Full Text Available According to the characteristics of the frequency-domain polarizability of three common models of dispersive media, the relation between the polarization vector and the electric field intensity is converted into a second-order time-domain differential equation for the polarization vector by means of the frequency-to-time-domain conversion. Newmark's β-γ difference method is employed to solve this equation. The recursion from electric field intensity to polarizability is derived, and the recursion from electric flux to electric field intensity is obtained from the constitutive relation. The FDTD time-domain iterative computation of the electric and magnetic field components in the dispersive medium is then completed. By analyzing the stability of the above differential equation when solved with the central difference method, it is proved that this method has advantages in the selection of the time step. Theoretical analyses and numerical results demonstrate that this is a general algorithm with higher accuracy and stability than the algorithms based on the central difference method.
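The Newmark time-stepping scheme used here can be illustrated on the undamped oscillator m·x'' + k·x = 0; this sketch shows the integrator itself, not the paper's FDTD dispersive-media recursion:

```python
def newmark(m, k, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark time stepping for m*x'' + k*x = 0. With beta = 1/4 and
    gamma = 1/2 (average acceleration) the scheme is unconditionally
    stable, which is the time-step advantage the abstract refers to."""
    x, v = x0, v0
    a = -k * x / m                  # initial acceleration from the ODE
    xs = [x]
    for _ in range(steps):
        # displacement predictor from quantities known at time n
        pred = x + dt * v + dt * dt * (0.5 - beta) * a
        # enforce m*a_new = -k*x_new with x_new = pred + beta*dt^2*a_new
        a_new = -k * pred / (m + beta * dt * dt * k)
        x = pred + beta * dt * dt * a_new
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        a = a_new
        xs.append(x)
    return xs
```

Unlike a central-difference (leapfrog) update, which is only conditionally stable, the average-acceleration variant stays bounded for any time step, at the cost of solving a (here scalar) implicit equation per step.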
Development and assessment of a daily time-step continuous ...
African Journals Online (AJOL)
Commonly used event-based approaches to design flood estimation have several limitations, which include the estimation of antecedent soil moisture conditions and the assumption that the exceedance probability of the design flood is the ...
Exponential Time Differencing With Runge-Kutta Time Stepping for ...
African Journals Online (AJOL)
Nafiisah
stepping (ETDRK) for convectively dominated financial problems. For European ... We consider a financial market with a single asset with price S which follows the ...
Showering habits: time, steps, and products used after brain injury.
Reistetter, Timothy A; Chang, Pei-Fen J; Abreu, Beatriz C
2009-01-01
This pilot study describes the showering habits of people with brain injury (BI) compared with those of people without BI (WBI). The showering habits of 10 people with BI and 10 people WBI were measured and compared. A videotaped session recorded and documented the shower routine. The BI group spent longer time showering, used more steps, and used fewer products than the WBI group. A moderately significant relationship was found between time and age (r = .46, p = .041). Similarly, we found significant correlations between number of steps and number of products used (r = .64, p = .002) and between the number of products used and education (r = .47, p = .044). Results suggest that people with BI have showering habits that differ from those WBI. Correlations, regardless of group, showed that older people showered longer, and people with more education used more showering products.
Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong
2015-09-01
A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients undergoing heart valve replacement and heart valvuloplasty during the initial and stable phases of anticoagulation treatment. 10 pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. Predicted dose was compared to therapeutic dose using the percentage of predicted doses that fall within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose was 3.05 ± 1.23 mg/day for initial treatment and 3.45 ± 1.18 mg/day for stable treatment. The percentages of predicted doses within 20% of the therapeutic dose were 44.0 ± 8.8% and 44.6 ± 9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85 ± 0.18 mg/day and 0.93 ± 0.19 mg/day, respectively. All algorithms had better performance in the ideal-dose group than in the low-dose and high-dose groups; the only exception was the Wadelius et al. algorithm, which performed better in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm performed better in both the initial and stable phases of treatment. The algorithms had relatively higher accuracy in the >50 years age group of patients in the stable phase. Copyright © 2015 Elsevier Ltd. All rights reserved.
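The two evaluation metrics used in this study, percentage within 20% and MAE, are straightforward to compute; a sketch under the definitions given in the abstract:

```python
def percent_within_20(predicted, actual):
    """Share (in %) of patients whose predicted dose lies within
    plus/minus 20% of the actual therapeutic dose."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= 0.2 * a)
    return 100.0 * hits / len(actual)

def mean_absolute_error(predicted, actual):
    """Mean absolute deviation between predicted and actual doses."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
```

Both metrics matter because a low MAE can hide a dosing model that is systematically off for high- or low-dose patients, which is why the study also stratifies performance by dose group.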
Genetic Algorithms and Local Search
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
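The GA-plus-local-search hybrid described above can be sketched as a memetic loop: standard genetic operators followed by greedy hill climbing on each offspring. All parameter choices here are illustrative, and the test uses a toy one-max fitness rather than the geometric model matching application:

```python
import random

def hybrid_ga(fitness, n_bits, pop_size=20, generations=40, seed=7):
    """Hybrid (memetic) GA sketch: binary tournament selection, one-point
    crossover, bit-flip mutation, then greedy hill climbing on each child."""
    rng = random.Random(seed)

    def hill_climb(bits):
        current, improved = fitness(bits), True
        while improved:
            improved = False
            for j in range(n_bits):
                bits[j] = not bits[j]       # try flipping bit j
                f = fitness(bits)
                if f > current:
                    current, improved = f, True
                else:
                    bits[j] = not bits[j]   # undo a non-improving flip
        return bits

    def tournament():
        a, b = rng.sample(pop, 2)           # binary tournament selection
        return max(a, b, key=fitness)

    pop = [[rng.random() < 0.5 for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        cut = rng.randrange(1, n_bits)
        child = tournament()[:cut] + tournament()[cut:]  # one-point crossover
        j = rng.randrange(n_bits)
        child[j] = not child[j]                          # bit-flip mutation
        child = hill_climb(child)                        # local search step
        pop.remove(min(pop, key=fitness))                # steady-state replacement
        pop.append(child)
    return max(pop, key=fitness)
```

The division of labour is the one the presentation describes: the genetic operators explore between basins of attraction while the local search finishes the job inside each basin.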
Genetic Algorithm for Optimization: Preprocessor and Algorithm
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting to solve the problem straightway through a GA without using knowledge of the character of the system, we do a consciously better job of producing a solution by using the information generated in this very first step of the preprocessor. We therefore advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
Algorithms for Global Positioning
DEFF Research Database (Denmark)
Borre, Kai; Strang, Gilbert
The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology, and replaces the authors' previous work, Linear Algebra, Geodesy, and GPS (1997). An initial discussion of the basic concepts, characteristics and technical aspects of different satellite systems is followed by the necessary mathematical content, which is presented in a detailed and self-contained fashion. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers...
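In simplified form, the positioning algorithms at the heart of GPS reduce to nonlinear least squares on measured ranges. Below is a Gauss-Newton sketch in Python (the book's own codes are MATLAB) that deliberately omits the receiver clock bias a real GPS solution must also estimate:

```python
import numpy as np

def trilaterate(sats, ranges, guess, iterations=10):
    """Gauss-Newton range-based positioning: linearize
    rho_i = ||x - s_i|| around the current estimate and solve the
    resulting least-squares update. Clock bias is omitted for brevity."""
    sats = np.asarray(sats, float)
    ranges = np.asarray(ranges, float)
    x = np.asarray(guess, float)
    for _ in range(iterations):
        diffs = x - sats                       # (n, 3) receiver-to-satellite
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        H = diffs / dists[:, None]             # Jacobian: unit line-of-sight rows
        dx, *_ = np.linalg.lstsq(H, ranges - dists, rcond=None)
        x = x + dx                             # Gauss-Newton update
    return x
```

With the clock bias included, `H` gains a column of ones and a fourth satellite becomes strictly necessary; the geometry of `H` is also what drives the familiar dilution-of-precision measures.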
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking; in particular, the configuration service provides the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with a Data Access Library (DAL) allowing C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. The algorithms are available in the C++ programming language and have been partially reimplemented in the Java programming language. The goal of the projec...
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Fatigue Evaluation Algorithms: Review
DEFF Research Database (Denmark)
Passipoularidis, Vaggelis; Brøndsted, Povl
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck...... series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor...... blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects...
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
Likelihood Inflating Sampling Algorithm
Entezari, Reihaneh; Craiu, Radu V.; Rosenthal, Jeffrey S.
2016-01-01
Markov Chain Monte Carlo (MCMC) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. In this paper, we introduce a new communication-free parallel method, the Likelihood Inflating Sampling Algorithm (LISA), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running MCMC methods independently ...
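The idea described above, splitting the data into subsets and inflating each subset's likelihood so that every worker still targets (approximately) the full-data posterior, can be sketched on a toy normal-mean model. The flat prior, proposal width, interleaved split, and all constants below are illustrative assumptions, not the authors' implementation:

```python
import math, random

def inflated_chain(subset, k, n_iter=5000, seed=1):
    """Metropolis sampler for a normal-mean model (sigma = 1, flat prior)
    where the subset log-likelihood is multiplied by k (the 'inflation'),
    so a single worker approximates the full-data posterior."""
    rng = random.Random(seed)

    def log_post(mu):
        return k * sum(-0.5 * (x - mu) ** 2 for x in subset)

    mu, lp = 0.0, log_post(0.0)
    samples = []
    for _ in range(n_iter):
        prop = mu + rng.gauss(0, 0.5)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples

rng = random.Random(0)
data = [rng.gauss(3.0, 1.0) for _ in range(200)]    # true mean = 3
k = 4
subsets = [data[i::k] for i in range(k)]            # split into k subsets
draws = inflated_chain(subsets[0], k)               # one worker's chain
est = sum(draws[1000:]) / len(draws[1000:])         # posterior-mean estimate
```

Each of the `k` workers would run such a chain independently; the communication-free combination of the workers' draws is the subject of the paper itself.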
Constrained Minimization Algorithms
Lantéri, H.; Theys, C.; Richard, C.
2013-03-01
In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically, we deal with transformations described by a linear model linking the unknown signal to a noise-free version of the data; the measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of the inverse problems are briefly mentioned and illustrated with a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the model (here linear) of such data. Section 4 deals with likelihood maximization and its links with divergence minimization. The physical constraints on the solution are indicated and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the lower bound of the solution is introduced first; the positivity constraint is a particular case of such a constraint. We show how to obtain, in a strict sense, the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6 we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.
ALGORITHM OF OBJECT RECOGNITION
Directory of Open Access Journals (Sweden)
Loktev Alexey Alexeevich
2012-10-01
Full Text Available The second important problem to be resolved by the algorithm and its software, which performs the automatic design of a complex closed-circuit television system, is object recognition, applied to the images transmitted by the video camera. Since the imaging of almost any object depends on many factors, including its orientation with respect to the camera, lighting conditions, parameters of the registering system, and static and dynamic parameters of the object itself, it is quite difficult to formalize the image and represent it in the form of a certain mathematical model. Therefore, methods of computer-aided visualization depend substantially on the problems to be solved and can rarely be generalized. The majority of these methods are non-linear; therefore, there is a need to increase the computing power and the complexity of the algorithms to be able to process the image. This paper covers the research of visual object recognition and the implementation of the algorithm in the form of a software application that operates in real-time mode
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NEUTRON ALGORITHM VERIFICATION TESTING
Energy Technology Data Exchange (ETDEWEB)
COWGILL,M.; MOSBY,W.; ARGONNE NATIONAL LABORATORY-WEST
2000-07-19
Active well coincidence counter assays have been performed on uranium metal highly enriched in {sup 235}U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the {sup 235}U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the {sup 235}U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.
Stubbs, Allston Julius; Atilla, Halis Atil
2016-01-01
Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. As a deep joint, and compared to the shoulder and knee joints, localization of hip symptoms is difficult. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic and non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain may be various. An algorithmic approach to hip restoration from diagnosis to rehabilitation is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734
An efficient algorithm for function optimization: modified stem cells algorithm
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give near-optimal solutions to linear and non-linear problems for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells, and it avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
Directory of Open Access Journals (Sweden)
Mingwei Leng
2013-01-01
Full Text Available The accuracy of most existing semisupervised clustering algorithms that rely on a small labeled dataset is low when dealing with multidensity and imbalanced datasets, and labeling data is quite expensive and time consuming in many real-world applications. This paper focuses on active data selection and semisupervised clustering in multidensity and imbalanced datasets and proposes an active semisupervised clustering algorithm. The proposed algorithm uses an active mechanism for data selection to minimize the amount of labeled data, and it utilizes multiple thresholds to expand labeled datasets on multidensity and imbalanced datasets. Three standard datasets and one synthetic dataset are used to demonstrate the proposed algorithm, and the experimental results show that the proposed semisupervised clustering algorithm has a higher accuracy and a more stable performance in comparison to other clustering and semisupervised clustering algorithms, especially when the datasets are multidensity and imbalanced.
Construction of energy-stable Galerkin reduced order models.
Energy Technology Data Exchange (ETDEWEB)
Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan; van Bloemen Waanders, Bart Gustaaf
2013-05-01
This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. The performance of ROMs constructed
Iterative Algorithms for Nonexpansive Mappings
Directory of Open Access Journals (Sweden)
Yao Yonghong
2008-01-01
Full Text Available Abstract We suggest and analyze two new iterative algorithms for a nonexpansive mapping in Banach spaces. We prove that the proposed iterative algorithms converge strongly to some fixed point of the mapping.
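A standard concrete instance of such fixed-point iterations is the Krasnoselskii-Mann scheme, x_{n+1} = (1 - alpha) x_n + alpha T(x_n). The sketch below applies it to T(x) = cos(x), a convenient scalar example with a unique fixed point near 0.739; it is an illustration of the iteration family, not the specific algorithms of the paper:

```python
import math

def mann_iterate(T, x0, alpha=0.5, n_iter=200):
    """Krasnoselskii-Mann iteration x_{n+1} = (1 - alpha) x_n + alpha T(x_n),
    a classical scheme for approximating a fixed point of a nonexpansive
    mapping T."""
    x = x0
    for _ in range(n_iter):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# cos is a contraction on [0, 1], so the iteration converges to the
# unique solution of cos(x) = x (the Dottie number, ~0.739085).
fp = mann_iterate(math.cos, 1.0)
```

The averaging parameter alpha trades per-step progress against stability; alpha = 0.5 here is an arbitrary choice.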
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
Parallel Architectures and Bioinspired Algorithms
Pérez, José; Lanchares, Juan
2012-01-01
This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value both to specialists in bioinspired algorithms and in parallel and distributed computing, and to computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.
Gas phase thermal diffusion of stable isotopes
International Nuclear Information System (INIS)
Eck, C.F.
1979-01-01
The separation of stable isotopes at Mound Facility is reviewed from a historical perspective. The historical development of thermal diffusion from a laboratory process to a separation facility that handles all the noble gases is described. In addition, elementary thermal diffusion theory and elementary cascade theory are presented along with a brief review of the uses of stable isotopes
physico-chemical and stable isotopes
African Journals Online (AJOL)
This paper details the mineralogical, chemical and stable isotope abundances of calcrete in the Letlhakeng fossil valley. The stable isotope abundances (O and C) of calcretes yielded some values which were tested against the nature of the calcretes – pedogenic or groundwater type. The Kgalagadi (Kalahari) is a vast ...
Modelling stable atmospheric boundary layers over snow
Sterk, H.A.M.
2015-01-01
Thesis entitled: Modelling Stable Atmospheric Boundary Layers over Snow H.A.M. Sterk Wageningen, 29th of April, 2015 Summary The emphasis of this thesis is on the understanding and forecasting of the Stable Boundary Layer (SBL) over snow-covered surfaces. SBLs typically form at night and in polar
Stable isotopes and biomarkers in microbial ecology
Boschker, H.T.S.; Middelburg, J.J.
2002-01-01
The use of biomarkers in combination with stable isotope analysis is a new approach in microbial ecology and a number of papers on a variety of subjects have appeared. We will first discuss the techniques for analysing stable isotopes in biomarkers, primarily gas chromatography-combustion-isotope
Stable Agrobacterium -mediated transformation of the halophytic ...
African Journals Online (AJOL)
Stable Agrobacterium-mediated transformation of the halophytic Leymus chinensis (Trin.) Yan-Lin Sun, Soon-Kwan Hong. Abstract. In this study, an efficient procedure for stable Agrobacterium-mediated transformation of Leymus chinensis (Trin.) was established. Agrobacterium tumefaciens strain EHA105, harboring a ...
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s
A generalized LSTM-like training algorithm for second-order recurrent neural networks.
Monner, Derek; Reggia, James A
2012-01-01
The long short-term memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM's original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the generalized long short-term memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
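The trilateration equations mentioned above can be illustrated with a simplified 2D least-squares solve. The abstract's algorithm is closed-form and the real GPS problem includes a receiver clock-bias unknown; the Gauss-Newton sketch below omits both and is only a stand-in for the geometry:

```python
import math

def trilaterate(anchors, dists, guess=(0.0, 0.0), n_iter=50):
    """Gauss-Newton least squares for 2D trilateration: find (x, y)
    minimizing sum_i (||p - a_i|| - d_i)^2. The 2x2 normal equations
    are solved explicitly via Cramer's rule."""
    x, y = guess
    for _ in range(n_iter):
        a11 = a12 = a22 = g1 = g2 = 0.0
        for (ax, ay), d in zip(anchors, dists):
            rx, ry = x - ax, y - ay
            rho = math.hypot(rx, ry)
            jx, jy = rx / rho, ry / rho       # gradient of the range
            res = rho - d                     # range residual
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            g1 += jx * res; g2 += jy * res
        det = a11 * a22 - a12 * a12
        x += -(a22 * g1 - a12 * g2) / det     # Newton step, component x
        y += -(a11 * g2 - a12 * g1) / det     # Newton step, component y
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (3.0, 4.0)
dists = [math.dist(true, a) for a in anchors]
pos = trilaterate(anchors, dists, guess=(1.0, 1.0))
```

With three non-collinear anchors and exact ranges the residual is zero at the true position, so the iteration converges to it.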
Recent results on howard's algorithm
DEFF Research Database (Denmark)
Miltersen, P.B.
2012-01-01
Howard’s algorithm is a fifty-year old generally applicable algorithm for sequential decision making in face of uncertainty. It is routinely used in practice in numerous application areas that are so important that they usually go by their acronyms, e.g., OR, AI, and CAV. While Howard’s algorithm...
Multisensor estimation: New distributed algorithms
Directory of Open Access Journals (Sweden)
Plataniotis K. N.
1997-01-01
Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.
Aguinaga, Iker; Fierz, Basil; Spillmann, Jonas; Harders, Matthias
2010-12-01
The behavior, performance, and run-time of mechanical simulations in interactive virtual surgery depend heavily on the type of numerical differential equation solver used to integrate in time the dynamic equations obtained from simulation methods, such as the Finite Element Method. Explicit solvers are fast but only conditionally stable. The condition number of the stiffness matrix limits the highest possible time step. This limit is related to the geometrical properties of the underlying mesh, such as element shape and size. In fact, it can be governed by a small set of ill-shaped elements. For many applications this issue can be solved a priori by a careful meshing. However, when meshes are cut during interactive surgery simulation, it is difficult and computationally expensive to control the quality of the resulting elements. As an alternative, we propose to modify the elemental stiffness matrices directly in order to ensure stability. In this context, we first investigate the behavior of the eigenmodes of the elemental stiffness matrix in a Finite Element Method. We then propose a simple filter to reduce high model frequencies and thus allow larger time steps, while maintaining the general mechanical behavior. Copyright © 2010 Elsevier Ltd. All rights reserved.
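The eigenmode filtering described above can be sketched as an eigenvalue clamp on a symmetric elemental stiffness matrix. The clamp threshold and the toy matrix below are illustrative assumptions, not the authors' filter:

```python
import numpy as np

def filter_stiffness(K, lam_max):
    """Clamp the eigenvalues of a symmetric stiffness matrix K to lam_max.
    High eigenvalues correspond to stiff, high-frequency modes that limit
    the stable explicit time step; capping them raises the allowable step
    while leaving the low-frequency mechanical behavior unchanged."""
    w, V = np.linalg.eigh(K)               # spectral decomposition K = V diag(w) V^T
    w_clamped = np.minimum(w, lam_max)     # reduce only the high frequencies
    return (V * w_clamped) @ V.T           # reassemble the filtered matrix

# A toy symmetric matrix with one very stiff (ill-shaped-element) mode.
K = np.diag([1.0, 2.0, 1e6])
K_f = filter_stiffness(K, 100.0)
```

Since the stable explicit time step scales like the inverse square root of the largest eigenvalue, clamping 1e6 down to 100 would allow a time step roughly 100 times larger in this toy case.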
Structure of acid-stable carmine.
Sugimoto, Naoki; Kawasaki, Yoko; Sato, Kyoko; Aoki, Hiromitsu; Ichi, Takahito; Koda, Takatoshi; Yamazaki, Takeshi; Maitani, Tamio
2002-02-01
Acid-stable carmine has recently been distributed in the U.S. market because of its good acid stability, but it is not permitted in Japan. We analyzed and determined the structure of the major pigment in acid-stable carmine, in order to establish an analytical method for it. Carminic acid was transformed into a different type of pigment, named acid-stable carmine, through amination when heated in ammonia solution. The features of the structure were clarified using a model compound, purpurin, in which the orientation of hydroxyl groups on the A ring of the anthraquinone skeleton is the same as that of carminic acid. By spectroscopic means and the synthesis of acid-stable carmine and purpurin derivatives, the structure of the major pigment in acid-stable carmine was established as 4-aminocarminic acid, a novel compound.
Stable Fly, Stomoxys calcitrans (L.), Dispersal and Governing Factors
Directory of Open Access Journals (Sweden)
Allan T. Showler
2015-01-01
Full Text Available Although the movement of stable fly, Stomoxys calcitrans (L., has been studied, its extent and significance has been uncertain. On a local scale (13 km is mainly wind-driven by weather fronts that carry stable flies from inland farm areas for up to 225 km to beaches of northwestern Florida and Lake Superior. Stable flies can reproduce for a short time each year in washed-up sea grass, but the beaches are not conducive to establishment. Such movement is passive and does not appear to be advantageous to stable fly's survival. On a regional scale, stable flies exhibit little genetic differentiation, and on the global scale, while there might be more than one “lineage”, the species is nevertheless considered to be panmictic. Population expansion across much of the globe likely occurred from the late Pleistocene to the early Holocene in association with the spread of domesticated nomad livestock and particularly with more sedentary, penned livestock.
Genetic algorithms for optimal design and control of adaptive structures
Ribeiro, R; Dias-Rodrigues, J; Vaz, M
2000-01-01
Future High Energy Physics experiments require the use of light and stable structures to support their most precise radiation detection elements. These large structures must be light, highly stable, stiff and radiation tolerant in an environment where external vibrations, high radiation levels, material aging, temperature and humidity gradients are not negligible. Unforeseen factors and the unknown result of the coupling of environmental conditions, together with external vibrations, may affect the position stability of the detectors and their support structures, compromising their physics performance. Careful optimization of static and dynamic behavior must be an essential part of the engineering design. Genetic Algorithms (GA) belong to the group of probabilistic algorithms, combining elements of direct and stochastic search. They are more robust than existing directed search methods with the advantage of maintaining a population of potential solutions. There is a class of optimization problems for which Ge...
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest EAs, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins (1989). Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
Ahmadi, Amir; Stone, Gregg W; Leipsic, Jonathon; Shaw, Leslee J; Villines, Todd C; Kern, Morton J; Hecht, Harvey; Erlinge, David; Ben-Yehuda, Ori; Maehara, Akiko; Arbustini, Eloisa; Serruys, Patrick; Garcia-Garcia, Hector M; Narula, Jagat
2016-07-08
Risk stratification in patients with stable ischemic heart disease is essential to guide treatment decisions. In this regard, whether coronary anatomy, physiology, or plaque morphology is the best determinant of prognosis (and a driver of effective therapeutic risk reduction) remains one of the greatest ongoing debates in cardiology. In the present report, we review the evidence for each of these characteristics and explore potential algorithms that may enable a practical diagnostic and therapeutic strategy for the management of patients with stable ischemic heart disease. © 2016 American Heart Association, Inc.
Canonical algorithms for numerical integration of charged particle motion equations
Efimov, I. N.; Morozov, E. A.; Morozova, A. R.
2017-02-01
A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on the canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting error accumulation. The integration algorithms contain a minimum possible amount of arithmetics and can be used to design accelerators and devices of electron and ion optics.
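The abstract does not spell out the canonical-transformation scheme itself, but the stability property it describes can be illustrated with the well-known Boris rotation, a volume-preserving update for charged-particle motion in a magnetic field. This is a sketch under that substitution, not the authors' algorithm; the field, charge-to-mass ratio, and step size are illustrative values.

```python
import numpy as np

def boris_push(x, v, q_over_m, B, dt, steps):
    """Advance a charged particle in a static magnetic field with the
    Boris rotation, a volume-preserving update that avoids the energy
    drift of naive Euler integration."""
    x, v = x.astype(float).copy(), v.astype(float).copy()
    for _ in range(steps):
        t = 0.5 * q_over_m * dt * B          # half-step rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v + np.cross(v, t)
        v = v + np.cross(v_prime, s)         # pure rotation: |v| is preserved
        x = x + v * dt
    return x, v

# A pure magnetic field does no work, so kinetic energy must stay constant.
x0 = np.array([1.0, 0.0, 0.0])
v0 = np.array([0.0, 1.0, 0.1])
B = np.array([0.0, 0.0, 1.0])
xf, vf = boris_push(x0, v0, 1.0, B, dt=0.05, steps=4000)
print(abs(np.linalg.norm(vf) - np.linalg.norm(v0)))  # close to machine precision
```

The speed is conserved to rounding error over thousands of steps, which is the "stable against counting error accumulation" behavior the abstract attributes to structure-preserving integration.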
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
… diet system however triggers not only the classic discussion of the reach-distinctiveness balance for PSM, but also shows that 'diversity' is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
Filtering every global constraint of a CSP to arc consistency at every search step can be costly, and solvers often compromise on either the level of consistency or the frequency at which arc consistency is enforced. In this paper we propose two randomized filtering schemes for dense instances of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed…
An energy-stable generalized- α method for the Swift–Hohenberg equation
Sarmiento, Adel
2017-11-16
We propose a second-order accurate energy-stable time-integration method that controls the evolution of numerical instabilities introducing numerical dissipation in the highest-resolved frequencies. Our algorithm further extends the generalized-α method and provides control over dissipation via the spectral radius. We derive the first and second laws of thermodynamics for the Swift–Hohenberg equation and provide a detailed proof of the unconditional energy stability of our algorithm. Finally, we present numerical results to verify the energy stability and its second-order accuracy in time.
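The generalized-α family the abstract extends can be sketched on the scalar model problem y' = λy. The parameter formulas below are the standard choices for first-order systems parameterized by the spectral radius ρ∞; the convergence check is illustrative and not taken from the paper.

```python
import numpy as np

def generalized_alpha(lam, y0, dt, t_end, rho_inf=0.5):
    """Generalized-alpha time integration for y' = lam*y, with numerical
    dissipation of the highest frequencies controlled by the spectral
    radius rho_inf (standard first-order-system parameterization)."""
    am = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)
    af = 1.0 / (1.0 + rho_inf)
    gamma = 0.5 + am - af                    # second-order accuracy condition
    y, dy = y0, lam * y0
    for _ in range(int(round(t_end / dt))):
        # solve the alpha-level residual (1-am)*dy + am*dy1 = lam*y_{n+af}
        rhs = lam * (y + af * dt * (1.0 - gamma) * dy) - (1.0 - am) * dy
        dy1 = rhs / (am - lam * af * gamma * dt)
        y = y + dt * ((1.0 - gamma) * dy + gamma * dy1)
        dy = dy1
    return y

# second-order accuracy check against the exact solution exp(lam*t)
exact = np.exp(-1.0)
e1 = abs(generalized_alpha(-1.0, 1.0, 0.05, 1.0) - exact)
e2 = abs(generalized_alpha(-1.0, 1.0, 0.025, 1.0) - exact)
print(e1 / e2)  # roughly 4: halving dt quarters the error
```

With ρ∞ < 1 the scheme damps the highest-resolved frequencies while retaining second-order accuracy, which is the dissipation-control mechanism the abstract describes.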
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory.
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
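For contrast with the alpha-shape method, the naive dilation-then-erosion approach the paper improves upon can be sketched for a profile. Note the paper's filters roll a ball; this sketch uses a flat structuring element only to illustrate the O(n·w) baseline and the extensive, idempotent behavior of closing.

```python
import numpy as np

def dilate(profile, half_width):
    """Moving maximum: dilation by a flat structuring element."""
    n = len(profile)
    return np.array([profile[max(0, i - half_width):min(n, i + half_width + 1)].max()
                     for i in range(n)])

def erode(profile, half_width):
    """Moving minimum: erosion by a flat structuring element."""
    n = len(profile)
    return np.array([profile[max(0, i - half_width):min(n, i + half_width + 1)].min()
                     for i in range(n)])

def closing(profile, half_width):
    """Naive O(n*w) morphological closing: dilation followed by erosion."""
    return erode(dilate(profile, half_width), half_width)

profile = np.array([0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 2.0, 0.0])
env = closing(profile, 2)   # upper envelope bridging narrow valleys
```

The closing never falls below the data and applying it twice changes nothing, two properties any correct fast implementation (alpha shape included) must reproduce.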
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods to make files more secure. One of these methods is cryptography, which secures a file by transforming it into hidden code covering the original content; anyone who does not hold the key cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
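The TEA block cipher itself is standard and can be sketched directly; the key and plaintext below are arbitrary demonstration values, and the LUC key-encryption stage of the hybrid scheme is omitted.

```python
def tea_encrypt_block(v0, v1, key, rounds=32):
    """Encrypt one 64-bit block (two 32-bit halves) with the Tiny
    Encryption Algorithm (TEA)."""
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    k0, k1, k2, k3 = key
    s = 0
    for _ in range(rounds):
        s = (s + delta) & mask
        v0 = (v0 + ((((v1 << 4) + k0) & mask) ^ ((v1 + s) & mask) ^ (((v1 >> 5) + k1) & mask))) & mask
        v1 = (v1 + ((((v0 << 4) + k2) & mask) ^ ((v0 + s) & mask) ^ (((v0 >> 5) + k3) & mask))) & mask
    return v0, v1

def tea_decrypt_block(v0, v1, key, rounds=32):
    """Invert tea_encrypt_block by running the rounds backwards."""
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    k0, k1, k2, k3 = key
    s = (delta * rounds) & mask
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) + k2) & mask) ^ ((v0 + s) & mask) ^ (((v0 >> 5) + k3) & mask))) & mask
        v0 = (v0 - ((((v1 << 4) + k0) & mask) ^ ((v1 + s) & mask) ^ (((v1 >> 5) + k1) & mask))) & mask
        s = (s - delta) & mask
    return v0, v1

key = (0x12345678, 0x9ABCDEF0, 0x0F1E2D3C, 0x4B5A6978)  # arbitrary 128-bit demo key
cipher = tea_encrypt_block(0xDEADBEEF, 0x01234567, key)
plain = tea_decrypt_block(*cipher, key)
```

Because TEA operates on fixed 64-bit blocks, padding a plaintext to the next block boundary is what produces the block-granular ciphertext growth the abstract reports.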
Energy Technology Data Exchange (ETDEWEB)
Fijany, A. [Jet Propulsion Lab., Pasadena, CA (United States); Coley, T.R. [Virtual Chemistry, Inc., San Diego, CA (United States); Cagin, T.; Goddard, W.A. III [California Institute of Technology, Pasadena, CA (United States)
1997-12-31
Successful molecular dynamics (MD) simulation of large systems (> million atoms) for long times (> nanoseconds) requires the integration of constrained equations of motion (CEOM). Constraints are used to eliminate high frequency degrees of freedom (DOF) and to allow the use of rigid bodies. Solving the CEOM allows for larger integration time-steps and helps focus the simulation on the important collective dynamics of chemical, biological, and materials systems. We explore advances in multibody dynamics which have resulted in O(N) algorithms for propagating the CEOM. However, because of their strictly sequential nature, the computational time required by these algorithms does not scale down with increased numbers of processors. We then present the new constraint force algorithm for solving the CEOM and show that this algorithm is fully parallelizable, leading to a computational cost of O(N/P + log P) for N DOF on P processors.
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov's framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon's framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
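The compression-based approximation described above can be sketched with an off-the-shelf compressor standing in for an ideal one. The normalization below is illustrative and may differ from the paper's exact definition; zlib and the sample strings are assumptions for demonstration.

```python
import zlib

def C(data):
    """Compressed size: a crude computable upper bound on Kolmogorov
    complexity."""
    return len(zlib.compress(data, 9))

def relative_complexity(x, y):
    """Approximate divergence of x from y: the extra compressed bits
    needed to describe x once y is available, normalized by C(x).
    (Illustrative analogue of the paper's relative complexity.)"""
    return (C(y + x) - C(y)) / max(C(x), 1)

text1 = b"the quick brown fox jumps over the lazy dog " * 50
text2 = b"colorless green ideas sleep furiously tonight " * 50
print(relative_complexity(text1, text1), relative_complexity(text1, text2))
```

A string described in terms of itself costs almost nothing extra, while an unrelated string gives little help, so the divergence is markedly smaller in the first case, mirroring how the paper applies the measure to authorship attribution and image classification.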
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events in ply level. Residual strength is incorporated as fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
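The strict-priority selection rule can be sketched as a single greedy pass over goals in priority order. The goal names and resource model below are hypothetical, and the real system additionally handles temporal constraints and incremental re-planning.

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection: walk goals from highest priority
    down, admitting each goal whose resource needs still fit. A goal is
    never pre-empted by a lower-priority one."""
    selected, remaining = [], dict(capacity)
    for goal in sorted(goals, key=lambda g: g["priority"], reverse=True):
        if all(remaining.get(r, 0) >= need for r, need in goal["needs"].items()):
            for r, need in goal["needs"].items():
                remaining[r] -= need
            selected.append(goal["name"])
    return selected

goals = [
    {"name": "downlink", "priority": 3, "needs": {"power": 5, "time": 2}},
    {"name": "image-A",  "priority": 2, "needs": {"power": 4, "time": 3}},
    {"name": "image-B",  "priority": 1, "needs": {"power": 4, "time": 3}},
]
print(select_goals(goals, {"power": 10, "time": 5}))  # ['downlink', 'image-A']
```

Because the pass is cheap, it can be re-run "just in time" whenever a goal is added, removed, or updated, which is the property the abstract emphasizes.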
Stable Organic Neutral Diradical via Reversible Coordination.
Lu, Zhenpin; Quanz, Henrik; Burghaus, Olaf; Hofmann, Jonas; Logemann, Christian; Beeck, Sebastian; Schreiner, Peter R; Wegner, Hermann A
2017-12-27
We report the formation of a stable neutral diboron diradical simply by coordination of an aromatic dinitrogen compound to an ortho-phenyldiborane. This process is reversible upon addition of pyridine. The diradical species is stable above 200 °C. Computations are consistent with an open-shell triplet diradical with a very small open-shell singlet-triplet energy gap that is indicative of the electronic disjointness of the two radical sites. This opens a new way of generating stable radicals with fascinating electronic properties useful for a large variety of applications.
An algorithm for engineering regime shifts in one-dimensional dynamical systems
Tan, James P. L.
2018-01-01
Regime shifts are discontinuous transitions between stable attractors hosting a system. They can occur as a result of a loss of stability in an attractor as a bifurcation is approached. In this work, we consider one-dimensional dynamical systems where attractors are stable equilibrium points. Relying on critical slowing down signals related to the stability of an equilibrium point, we present an algorithm for engineering regime shifts such that a system may escape an undesirable attractor into a desirable one. We test the algorithm on synthetic data from a one-dimensional dynamical system with a multitude of stable equilibrium points and also on a model of the population dynamics of spruce budworms in a forest. The algorithm and other ideas discussed here contribute to an important part of the literature on exercising greater control over the sometimes unpredictable nature of nonlinear systems.
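The critical-slowing-down signal the algorithm relies on can be demonstrated on a simulated one-dimensional system: as the restoring force toward a stable equilibrium weakens (i.e., a bifurcation is approached), the lag-1 autocorrelation of fluctuations rises. The dynamics and parameters below are illustrative, not the paper's test systems.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a standard critical-slowing-down indicator."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def simulate(theta, n=20000, dt=0.01, seed=0):
    """Euler-Maruyama for dx = -theta*x dt + dW near a stable equilibrium;
    smaller theta means a weaker restoring force, i.e. closer to the
    bifurcation where the equilibrium loses stability."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.normal(0.0, np.sqrt(dt), n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - theta * x[i] * dt + noise[i]
    return x

far = lag1_autocorr(simulate(theta=20.0))   # strong restoring force
near = lag1_autocorr(simulate(theta=0.2))   # weak force: near bifurcation
print(far, near)  # autocorrelation rises as the bifurcation is approached
```

Monitoring this rise is what lets the algorithm judge how close an equilibrium is to losing stability before engineering the escape.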
Applications of algorithmic differentiation to phase retrieval algorithms.
Jurling, Alden S; Fienup, James R
2014-07-01
In this paper, we generalize the techniques of reverse-mode algorithmic differentiation to include elementary operations on multidimensional arrays of complex numbers. We explore the application of algorithmic differentiation to phase retrieval error metrics and show that reverse-mode algorithmic differentiation provides a framework for straightforward calculation of gradients of complicated error metrics without resorting to finite differences or laborious symbolic differentiation.
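A minimal reverse-mode differentiator over real scalars shows the bookkeeping involved; the paper's contribution is extending exactly this pattern to elementary operations on complex multidimensional arrays.

```python
class Var:
    """Minimal reverse-mode AD node supporting + and * on real scalars."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # reverse topological sweep so every node's gradient is complete
        # before it is propagated to its parents
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += local * node.grad

x, y = Var(3.0), Var(4.0)
f = x * x + x * y     # f = x^2 + x*y, so df/dx = 2x + y, df/dy = x
f.backward()
print(x.grad, y.grad)  # 10.0 3.0
```

One backward sweep yields all partial derivatives at roughly the cost of one forward evaluation, which is why the technique suits the complicated error metrics of phase retrieval.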
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase significantly with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
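The artificial baselines mentioned, uninformed DFS versus informed A*, can be compared on a toy grid maze. The maze below is illustrative, and path length stands in for the paper's efficiency measures.

```python
import heapq

def neighbors(cell, maze):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(maze) and 0 <= nc < len(maze[0]) and maze[nr][nc] == 0:
            yield nr, nc

def unwind(came, goal):
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = came[cell]
    return path[::-1]

def dfs(maze, start, goal):
    """Uninformed depth-first search: returns a (not necessarily shortest) path."""
    stack, came = [start], {start: None}
    while stack:
        cell = stack.pop()
        if cell == goal:
            break
        for nxt in neighbors(cell, maze):
            if nxt not in came:
                came[nxt] = cell
                stack.append(nxt)
    return unwind(came, goal)

def astar(maze, start, goal):
    """A* with the admissible Manhattan heuristic: returns a shortest path."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_, came, g = [(h(start), start)], {start: None}, {start: 0}
    while open_:
        _, cell = heapq.heappop(open_)
        if cell == goal:
            break
        for nxt in neighbors(cell, maze):
            if nxt not in g or g[cell] + 1 < g[nxt]:
                g[nxt] = g[cell] + 1
                came[nxt] = cell
                heapq.heappush(open_, (g[nxt] + h(nxt), nxt))
    return unwind(came, goal)

maze = [[0, 0, 0, 0],   # 0 = open corridor, 1 = wall
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
p_dfs = dfs(maze, (0, 0), (3, 3))
p_astar = astar(maze, (0, 0), (3, 3))
```

A* is guaranteed to return a shortest path, while DFS merely returns some path, which is the uninformed-versus-informed gap the study measures against the fungal algorithm.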
Stable Isotope Group 1983 progress report
International Nuclear Information System (INIS)
Stewart, M.K.
1984-06-01
The work of the Stable Isotope Group of the Institute of Nuclear Sciences in the fields of isotope geology, isotope hydrology, geochronology, isotope biology and related fields, and mass spectrometer instrumentation, during 1983, is described
Stable Isotope Group 1982 progress report
International Nuclear Information System (INIS)
Stewart, M.K.
1983-06-01
The work of the Stable Isotope Group of the Institute of Nuclear Sciences during 1982, in the fields of isotope geology, isotope hydrology, geochronology, isotope biology and mass spectrometer instrumentation, is described
Bartolome Island, Galapagos Stable Oxygen Calibration Data
National Oceanic and Atmospheric Administration, Department of Commerce — Galapagos Coral Stable Oxygen Calibration Data. Sites: Bartolome Island: 0 deg, 17'S, 90 deg 33' W. Champion Island: 1 deg, 15'S, 90 deg, 05' W. Urvina Bay (Isabela...
Allan Hills Stable Water Isotopes, Version 1
National Aeronautics and Space Administration — This data set includes stable water isotope values at 10 m resolution along an approximately 5 km transect through the main icefield of the Allan Hills Blue Ice...
Applications of stable isotopes in clinical pharmacology
Schellekens, Reinout C A; Stellaard, Frans; Woerdenbag, Herman J; Frijlink, Henderik W; Kosterink, Jos G W
2011-01-01
This review aims to present an overview of the application of stable isotope technology in clinical pharmacology. Three main categories of stable isotope technology can be distinguished in clinical pharmacology. Firstly, it is applied in the assessment of drug pharmacology to determine the pharmacokinetic profile or mode of action of a drug substance. Secondly, stable isotopes may be used for the assessment of drug products or drug delivery systems by determination of parameters such as the bioavailability or the release profile. Thirdly, patients may be assessed in relation to patient-specific drug treatment; this concept is often called personalized medicine. In this article, the application of stable isotope technology in the aforementioned three areas is reviewed, with emphasis on developments over the past 25 years. The applications are illustrated with examples from clinical studies in humans. PMID:21801197
Tannaka duality and stable infinity-categories
Iwanari, Isamu
2014-01-01
We introduce the notion of fine tannakian infinity-categories and prove Tannaka duality results for symmetric monoidal stable infinity-categories over a field of characteristic zero. We also discuss several examples.
Reactive gas control of non-stable plasma conditions
International Nuclear Information System (INIS)
Bellido-Gonzalez, V.; Daniel, B.; Counsell, J.; Monaghan, D.
2006-01-01
Most industrial plasma processes are dependent upon the control of plasma properties for repeatable and reliable production. The speed of production and the range of properties achieved depend on the degree of control. Process control involves all aspects of the vacuum equipment, substrate preparation, plasma source condition, power supplies, process drift, valves (inputs/outputs), signal and data processing, and the user's understanding and ability. In many cases, processes which involve the manufacturing of interesting coating structures require precise control of the process in a reactive environment [S.J. Nadel, P. Greene, 'High rate sputtering technology for throughput and quality', International Glass Review, Issue 3, 2001, p. 45]. Commonly in these circumstances the plasma is not stable even if all the inputs and outputs of the system were to remain constant. The ideal situation is to move a process from set-point A to B in zero time and maintain the monitored signal with a fluctuation equal to zero. In a 'real' process that is not possible, but improvements in the time response and energy delivery can be achieved with an appropriate algorithm structure. In this paper an advanced multichannel reactive plasma gas control system is presented. The new controller offers high-speed gas control combined with a very flexible control structure. The controller uses plasma emission monitoring, target voltage, or any other process sensor signal as the input to a high-speed control algorithm for gas input. The control algorithm and parameters can be tuned to different process requirements in order to optimize response times.
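The paper does not disclose its control algorithm, but the set-point tracking problem it describes can be sketched with a textbook PI loop on a first-order plant. The gains and the plant model are assumptions made only for illustration, not the controller in the paper.

```python
def pi_step_response(setpoint, kp=2.0, ki=1.5, dt=0.01, t_end=10.0):
    """Discrete PI loop driving a first-order plant dy/dt = -y + u toward
    a new set-point. The plant is a stand-in for the gas/plasma response;
    the integral term removes the steady-state offset."""
    y, integral, history = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral      # PI control law
        y += dt * (-y + u)                  # assumed first-order plant
        history.append(y)
    return history

resp = pi_step_response(1.0)   # move the process from set-point 0 to 1
```

Tuning kp and ki trades response time against fluctuation around the set-point, the same trade-off the paper's tunable algorithm structure addresses for reactive gas flow.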
Algorithms and their others: Algorithmic culture in context
Directory of Open Access Journals (Sweden)
Paul Dourish
2016-08-01
Algorithms, once obscure objects of technical art, have lately been subject to considerable popular and scholarly scrutiny. What does it mean to adopt the algorithm as an object of analytic attention? What is in view, and out of view, when we focus on the algorithm? Using Niklaus Wirth's 1975 formulation that “algorithms + data structures = programs” as a launching-off point, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture.
The Reactivity of Stable Metallacyclobutenes and Vinylcarbenes
Holland, Ryan Lynn
2016-01-01
Chapter 1. Historical Development of Stable Metallacyclobutenes Fred Tebbe and co-workers synthesized the first stable metallacyclobutene complexes in the late 1970s by treatment of an intermediate titanium methylene species – later popularized as the “Tebbe reagent” – with acetylenes. Robert Grubbs at Caltech further studied this system, using it to detail a degenerate metathesis reaction and to isolate a metallacyclobutane complex – which was implicated in the emerging field of alkene meta...
Stable atomic hydrogen: Polarized atomic beam source
International Nuclear Information System (INIS)
Niinikoski, T.O.; Penttilae, S.; Rieubland, J.M.; Rijllart, A.
1984-01-01
We have carried out experiments with stable atomic hydrogen with a view to possible applications in polarized targets or polarized atomic beam sources. Recent results from the stabilization apparatus are described. The first stable atomic hydrogen beam source based on the microwave extraction method (which is being tested) is presented. The effect of the stabilized hydrogen gas density on the properties of the source is discussed. (orig.)
Fighting Censorship with Algorithms
Mahdian, Mohammad
In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.
Algorithmic Reflections on Choreography
Directory of Open Access Journals (Sweden)
Pablo Ventura
2016-11-01
In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
BACKGROUND: Crowding in the emergency department (ED) is a well-known problem resulting in an increased risk of adverse outcomes. Effective triage might counteract this problem by identifying the sickest patients and ensuring early treatment. In the last two decades, systematic triage has become the standard in EDs worldwide. However, triage models are also time consuming, supported by limited evidence, and could potentially be of more harm than benefit. The aim of this study is to develop a quicker triage model using data from a large cohort of unselected ED patients and evaluate if this new model is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
Nidheesh, N; Abdul Nazeer, K A; Ameer, P M
2017-12-01
Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select data points which belong to dense regions and which are adequately separated in feature space as the initial centroids. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm which is being used for cancer data classification, based on the performances on a set of datasets comprising ten cancer gene expression datasets. The proposed algorithm has shown better overall performance than the others. There is a pressing need in the Biomedical domain for simple, easy-to-use and more accurate Machine Learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
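The key idea, choosing dense and well-separated points as the initial centroids, can be sketched as follows. The radius and separation parameters are illustrative tuning knobs, not the paper's exact procedure, and the two-blob data set is synthetic.

```python
import numpy as np

def density_based_centroids(X, k, radius, min_sep):
    """Deterministically pick k initial centroids: points in dense regions
    that are adequately separated in feature space (the paper's key idea,
    sketched with assumed radius/min_sep parameters)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = (d < radius).sum(axis=1)      # neighbors within radius
    centroids = []
    for i in np.argsort(-density):          # densest points first
        if all(np.linalg.norm(X[i] - c) >= min_sep for c in centroids):
            centroids.append(X[i])
            if len(centroids) == k:
                break
    return np.array(centroids)

rng = np.random.default_rng(0)
blob_a = rng.normal([0.0, 0.0], 0.3, (50, 2))
blob_b = rng.normal([5.0, 5.0], 0.3, (50, 2))
X = np.vstack([blob_a, blob_b])
c = density_based_centroids(X, k=2, radius=1.0, min_sep=2.0)
```

Because the selection involves no random draws, repeated runs of K-Means seeded this way give identical results, which is the stability property the abstract claims for cancer subtype prediction.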
An efficient algorithm for incompressible N-phase flows
Dong, S.
2014-11-01
We present an efficient algorithm within the phase field framework for simulating the motion of a mixture of N (N ⩾ 2) immiscible incompressible fluids, with possibly very different physical properties such as densities, viscosities, and pairwise surface tensions. The algorithm employs a physical formulation for the N-phase system that honors the conservations of mass and momentum and the second law of thermodynamics. We present a method for uniquely determining the mixing energy density coefficients involved in the N-phase model based on the pairwise surface tensions among the N fluids. Our numerical algorithm has several attractive properties that make it computationally very efficient: (i) it has completely de-coupled the computations for different flow variables, and has also completely de-coupled the computations for the (N - 1) phase field functions; (ii) the algorithm only requires the solution of linear algebraic systems after discretization, and no nonlinear algebraic solve is needed; (iii) for each flow variable the linear algebraic system involves only constant and time-independent coefficient matrices, which can be pre-computed during pre-processing, despite the variable density and variable viscosity of the N-phase mixture; (iv) within a time step the semi-discretized system involves only individual de-coupled Helmholtz-type (including Poisson) equations, despite the strongly-coupled phase-field system of fourth spatial order at the continuum level; (v) the algorithm is suitable for large density contrasts and large viscosity contrasts among the N fluids. Extensive numerical experiments have been presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts. In particular, we compare our simulations with the de Gennes theory, and demonstrate that our method produces physically accurate results for multiple fluid phases. We also demonstrate the significant and sometimes dramatic effects of the
A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series
Energy Technology Data Exchange (ETDEWEB)
Chandola, Varun [ORNL; Vatsavai, Raju [ORNL
2011-01-01
Online time series change detection is a critical component of many monitoring systems, such as space and airborne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in the state of Iowa, USA. Our algorithm is able to detect different types of changes in a NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
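The control-chart idea can be sketched with a per-phase running mean standing in for the Gaussian-process predictor. This is a deliberate simplification of the paper's method; the threshold, warm-up, and noise floor are assumed values, and the sinusoidal series is synthetic.

```python
import numpy as np

def detect_change(series, period, threshold=6.0, warmup=3, noise_floor=1e-2):
    """Flag the first observation whose standardized residual against a
    per-phase running-mean prediction leaves the control band. The
    per-phase mean is a simplified stand-in for the paper's Gaussian
    process predictor; noise_floor is an assumed known noise variance."""
    sums, sqs, counts = np.zeros(period), np.zeros(period), np.zeros(period)
    for t, x in enumerate(series):
        p = t % period
        if counts[p] >= warmup:
            mean = sums[p] / counts[p]
            var = max(sqs[p] / counts[p] - mean ** 2, 0.0)
            z = (x - mean) / np.sqrt(var + noise_floor)
            if abs(z) > threshold:
                return t                    # first out-of-control observation
        sums[p] += x
        sqs[p] += x * x
        counts[p] += 1
    return None

rng = np.random.default_rng(1)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 20) + 0.1 * rng.normal(size=400)
series[300:] += 2.0                         # abrupt level shift at t = 300
change_at = detect_change(series, period=20)
print(change_at)  # 300
```

Accounting for the periodic component before thresholding is what lets the detector ignore the seasonal swings and fire only on the genuine shift, the core requirement the paper addresses.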
An overview of smart grid routing algorithms
Wang, Junsheng; OU, Qinghai; Shen, Haijuan
2017-08-01
This paper summarizes typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two classes of routing algorithms, including clustering routing algorithms, are analyzed, and the advantages, disadvantages, and applicability of typical algorithms in each class are discussed.
Genetic Algorithms in Noisy Environments
THEN, T. W.; CHONG, EDWIN K. P.
1993-01-01
Genetic Algorithms (GA) have been widely used in the areas of searching, function optimization, and machine learning. In many of these applications, the effect of noise is a critical factor in the performance of the genetic algorithms. While it has been shown in previous studies that genetic algorithms are still able to perform effectively in the presence of noise, the problem of locating the global optimal solution at the end of the search has never been effectively addressed. Furthermore,...
Mao-Gilles Stabilization Algorithm
Jérôme Gilles
2013-01-01
Originally, the Mao-Gilles stabilization algorithm was designed to compensate for the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The newly proposed algorithm is data-driven and self-adaptive: it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...
Fuzzy HRRN CPU Scheduling Algorithm
Bashir Alam; R. Biswas; M. Alam
2011-01-01
There are several scheduling algorithms such as FCFS, SRTN, RR, and priority scheduling. The scheduling decisions of these algorithms are based on parameters which are assumed to be crisp. However, in many circumstances these parameters are vague. The vagueness of these parameters suggests that the scheduler should use a fuzzy technique in scheduling jobs. In this paper we propose a novel CPU scheduling algorithm, Fuzzy HRRN, that incorporates fuzziness into basic HRRN using the fuzzy inference system (FIS) technique.
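For reference, the crisp (non-fuzzy) HRRN rule that the paper builds on selects, at each decision point, the ready job with the highest response ratio (waiting time + burst time) / burst time. A minimal sketch, with invented job data:

```python
def hrrn_schedule(jobs):
    """Classic (crisp) HRRN scheduling.

    jobs: list of (name, arrival_time, burst_time) tuples.
    At each decision point, run the ready job with the highest
    response ratio (waiting + burst) / burst to completion."""
    remaining = sorted(jobs, key=lambda j: j[1])   # order by arrival time
    clock, order = 0, []
    while remaining:
        # Jobs that have already arrived; if none, jump to the next arrival.
        ready = [j for j in remaining if j[1] <= clock] or [remaining[0]]
        best = max(ready, key=lambda j: (clock - j[1] + j[2]) / j[2])
        clock = max(clock, best[1]) + best[2]      # run job to completion
        order.append(best[0])
        remaining.remove(best)
    return order

print(hrrn_schedule([("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]))
```

The fuzzy variant proposed in the paper would replace the crisp ratio comparison with an FIS-based ranking of vague parameters; that part is not sketched here.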
Interior tomography: theory, algorithms and applications
Yu, Hengyong; Ye, Yangbo; Wang, Ge
2008-08-01
The conventional wisdom states that the interior problem (reconstruction of an interior region from projection data along lines only through that region) is NOT uniquely solvable. While this remains correct, our recent theoretical and numerical results demonstrated that this interior problem CAN be solved in a theoretically exact and numerically stable fashion if a sub-region within the interior region is known. In contrast to the well-established lambda tomography, the studies on this type of exact interior reconstruction are referred to as "interior tomography". In this paper, we will overview the development of interior tomography, involving theory, algorithms and applications. The essence of interior tomography is to find the unique solution from highly truncated projection data via analytic continuation. Such an extension can be done either in the filtered backprojection or backprojection filtration formats. The key issue for the exact interior reconstruction is how to invert the truncated Hilbert transform. We have developed a projection onto convex sets (POCS) algorithm and a singular value decomposition (SVD) method and produced excellent results in numerical simulations and practical applications.
The multi-niche crowding genetic algorithm: Analysis and applications
Energy Technology Data Exchange (ETDEWEB)
Cedeno, Walter [Univ. of California, Davis, CA (United States)
1995-09-01
The ability of organisms to evolve and adapt to the environment has provided mother nature with a rich and diverse set of species. Only organisms well adapted to their environment can survive from one generation to the next, passing on the traits that made them successful to their offspring. Competition for resources and the ever-changing environment drives some species to extinction and at the same time others evolve to maintain the delicate balance in nature. In this dissertation we present the multi-niche crowding genetic algorithm, a computational metaphor for the survival of species in ecological niches in the face of competition. The multi-niche crowding genetic algorithm maintains stable subpopulations of solutions in multiple niches in multimodal landscapes. The algorithm introduces the concept of crowding selection to promote mating among members with similar traits while allowing many members of the population to participate in mating. The algorithm uses a worst-among-most-similar replacement policy to promote competition among members with similar traits while allowing competition among members of different niches as well. We present empirical and theoretical results for the success of the multi-niche crowding genetic algorithm for multimodal function optimization. The properties of the algorithm using different parameters are examined. We test the performance of the algorithm on problems of DNA Mapping, Aquifer Management, and the File Design Problem. These applications combine the use of heuristics and special operators to solve problems in the areas of combinatorial optimization, grouping, and multi-objective optimization. We conclude by presenting the advantages and disadvantages of the algorithm and describing avenues for future investigation to answer other questions raised by this study.
Subbarao, Italo; Johnson, Christopher; Bond, William F; Schwid, Howard A; Wasser, Thomas E; Deye, Greg A; Burkhart, Keith K
2005-01-01
This study intended to create symptom-based triage algorithms for the initial encounter with terror-attack victims. The goals of the triage algorithms include: (1) early recognition; (2) avoiding contamination; (3) early use of antidotes; (4) appropriate handling of unstable, contaminated victims; and (5) provision of force protection. The algorithms also address industrial accidents and emerging infections, which have similar clinical presentations and risks for contamination as weapons of mass destruction (WMD). The algorithms were developed using references from military and civilian sources. They were tested and adjusted using a series of theoretical patients from a CD-ROM chemical, biological, radiological/nuclear, and explosive victim simulator. Then, the algorithms were placed into a card format and sent to experts in relevant fields for academic review. Six inter-connected algorithms were created, described, and presented in figure form. The "Attack" algorithm, for example, begins by differentiating between overt and covert attack victims (a covert attack is defined by epidemiological criteria adapted from the Centers for Disease Control and Prevention (CDC) recommendations). The "Attack" algorithm then categorizes patients either as stable or unstable. Unstable patients flow to the "Dirty Resuscitation" algorithm, whereas stable patients flow to the "Chemical Agent" and "Biological Agent" algorithms. The two remaining algorithms, the "Suicide Bomb/Blast/Explosion" and the "Radiation Dispersal Device" algorithms, are inter-connected through the overt pathway in the "Attack" algorithm. A civilian, symptom-based, algorithmic approach to the initial encounter with victims of terrorist attacks, industrial accidents, or emerging infections was created. Future studies will address the usability of the algorithms with theoretical cases and their utility in prospective, announced and unannounced, field drills. Additionally, future studies will assess the
Machine Learning an algorithmic perspective
Marsland, Stephen
2009-01-01
Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory Backed up by Practical Examples: The book covers neural networks, graphical models, reinforcement le
Algorithmic complexity of quantum capacity
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
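The distance-to-average-point measure mentioned in the abstract can be sketched as follows (a hedged illustration: the normalisation by the search-space diagonal and the switching thresholds follow common descriptions of the DGEA, and the example populations are invented):

```python
import numpy as np

def distance_to_average_point(pop, bounds):
    """DGEA-style diversity: mean Euclidean distance of individuals to the
    population centroid, normalised by the search-space diagonal |L|."""
    pop = np.asarray(pop, dtype=float)
    diag = np.linalg.norm(bounds[:, 1] - bounds[:, 0])   # length of the diagonal
    centroid = pop.mean(axis=0)
    return np.mean(np.linalg.norm(pop - centroid, axis=1)) / diag

# Two toy populations in the box [-5, 5] x [-5, 5]
bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
spread = [[-4, -4], [4, 4], [-4, 4], [4, -4]]            # well spread out
tight = [[1.0, 1.0], [1.1, 1.0], [1.0, 1.1], [1.1, 1.1]]  # nearly converged
print(distance_to_average_point(spread, bounds))
print(distance_to_average_point(tight, bounds))
# In the DGEA, the search would switch to exploration (mutation) when this
# value drops below a low threshold and back to exploitation above a high one.
```

The measure is cheap to compute each generation, which is what makes it practical as a phase-switching signal.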
An algorithm for constructing Lyapunov functions
Directory of Open Access Journals (Sweden)
Sigurdur Freyr Hafstein
2007-08-01
In this monograph we develop an algorithm for constructing Lyapunov functions for arbitrary switched dynamical systems $\dot{\mathbf{x}} = \mathbf{f}_\sigma(t,\mathbf{x})$, possessing a uniformly asymptotically stable equilibrium. Let $\dot{\mathbf{x}}=\mathbf{f}_p(t,\mathbf{x})$, $p\in\mathcal{P}$, be the collection of the ODEs, to which the switched system corresponds. The number of the vector fields $\mathbf{f}_p$ on the right-hand side of the differential equation is assumed to be finite and we assume that their components $f_{p,i}$ are $\mathcal{C}^2$ functions and that we can give some bounds, not necessarily close, on their second-order partial derivatives. The inputs of the algorithm are solely a finite number of the function values of the vector fields $\mathbf{f}_p$ and these bounds. The domain of the Lyapunov function constructed by the algorithm is only limited by the size of the equilibrium's region of attraction. Note that the concept of a Lyapunov function for the arbitrary switched system $\dot{\mathbf{x}} = \mathbf{f}_\sigma(t,\mathbf{x})$ is equivalent to the concept of a common Lyapunov function for the systems $\dot{\mathbf{x}}=\mathbf{f}_p(t,\mathbf{x})$, $p\in\mathcal{P}$, and that if $\mathcal{P}$ contains exactly one element, then the switched system is just a usual ODE $\dot{\mathbf{x}}=\mathbf{f}(t,\mathbf{x})$. We give numerous examples of Lyapunov functions constructed by our method at the end of this monograph.
The operational methane retrieval algorithm for TROPOMI
Directory of Open Access Journals (Sweden)
H. Hu
2016-11-01
This work presents the operational methane retrieval algorithm for the Sentinel 5 Precursor (S5P) satellite and its performance tested on realistic ensembles of simulated measurements. The target product is the column-averaged dry air volume mixing ratio of methane (XCH4), which will be retrieved simultaneously with scattering properties of the atmosphere. The algorithm attempts to fit spectra observed by the shortwave and near-infrared channels of the TROPOspheric Monitoring Instrument (TROPOMI) spectrometer aboard S5P. The sensitivity of the retrieval performance to atmospheric scattering properties, atmospheric input data and instrument calibration errors is evaluated. In addition, we investigate the effect of inhomogeneous slit illumination on the instrument spectral response function. Finally, we discuss the cloud filters to be used operationally and as backup. We show that the required accuracy and precision of < 1 % for the XCH4 product are met for clear-sky measurements over land surfaces and after appropriate filtering of difficult scenes. The algorithm is very stable, having a convergence rate of 99 %. The forward model error is less than 1 % for about 95 % of the valid retrievals. Model errors in the input profile of water do not influence the retrieval outcome noticeably. The methane product is expected to meet the requirements if errors in input profiles of pressure and temperature remain below 0.3 % and 2 K, respectively. We further find that, of all instrument calibration errors investigated here, our retrievals are the most sensitive to an error in the instrument spectral response function of the shortwave infrared channel.
Measuring the self-similarity exponent in Lévy stable processes of financial time series
Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.
2013-11-01
Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551], to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially for short time series. The authors checked that GM algorithms perform well when working with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid to explore long memory in (fractional) Lévy stable motions. In this paper, we prove empirically by Monte Carlo simulation that GM algorithms are able to calculate the self-similarity index in Lévy stable motions accurately and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially for short time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both the GM2 and GHE algorithms are the most accurate to study financial series. In addition, we provide empirical evidence, based on the accuracy of GM algorithms in estimating the self-similarity index in Lévy motions, that the evolution of the stocks of some international market indices, such as U.S. Small Cap and Nasdaq100, cannot be modeled by means of a
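For illustration, a generalized-Hurst-style estimator (in the spirit of the GHE approach compared in the paper, not the GM geometric algorithms themselves) fits the scaling exponent of the mean absolute increment against the lag; a minimal sketch with synthetic data:

```python
import numpy as np

def self_similarity_exponent(x, taus=range(1, 11)):
    """GHE-style estimate of the self-similarity exponent H:
    fit the slope of log E|X(t+tau) - X(t)| versus log tau.
    Illustrative only; not the GM algorithms from the paper."""
    taus = np.array(list(taus))
    moments = [np.mean(np.abs(x[tau:] - x[:-tau])) for tau in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(moments), 1)
    return slope

# Ordinary Brownian motion (cumulative sum of Gaussian increments) has H = 0.5
rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(20000))
print(self_similarity_exponent(bm))
```

For a long Brownian-motion sample the fitted slope lands close to the theoretical value 0.5; short series are exactly where such moment-based estimators degrade, which is the regime the paper targets.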
User Matching with Relation to the Stable Marriage Problem in Cognitive Radio Networks
Hamza, Doha R.
2017-03-20
We consider a network comprised of multiple primary users (PUs) and multiple secondary users (SUs), where the SUs seek access to a set of orthogonal channels each occupied by one PU. Only one SU is allowed to coexist with a given PU. We propose a distributed matching algorithm to pair the network users, where a Stackelberg game model is assumed for the interaction between the paired PU and SU. The selected secondary is given access in exchange for monetary compensation to the primary. The PU optimizes the interference price it charges to a given SU and the power allocation to maintain communication. The SU optimizes its power demand so as to maximize its utility. Our algorithm provides a unique stable matching. Numerical results indicate the advantage of the proposed algorithm over other reference schemes.
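The stable-marriage machinery referenced in the title can be illustrated with the classic centralized Gale-Shapley deferred-acceptance algorithm (the paper's algorithm is distributed and price-based; this sketch, with invented preference lists, only shows how a stable matching arises from proposals):

```python
def gale_shapley(su_prefs, pu_prefs):
    """Deferred acceptance: SUs propose to PUs in preference order; each PU
    keeps its best proposer so far. Yields the SU-optimal stable matching."""
    free = list(su_prefs)                       # currently unmatched SUs
    next_choice = {su: 0 for su in su_prefs}    # index of next PU to propose to
    engaged = {}                                # pu -> su
    while free:
        su = free.pop(0)
        pu = su_prefs[su][next_choice[su]]
        next_choice[su] += 1
        if pu not in engaged:
            engaged[pu] = su
        else:
            prefs = pu_prefs[pu]
            if prefs.index(su) < prefs.index(engaged[pu]):
                free.append(engaged[pu])        # pu prefers the new proposer
                engaged[pu] = su
            else:
                free.append(su)                 # proposal rejected
    return {su: pu for pu, su in engaged.items()}

# Two secondary users competing for two primary-user channels (toy data)
su_prefs = {"s1": ["p1", "p2"], "s2": ["p1", "p2"]}
pu_prefs = {"p1": ["s2", "s1"], "p2": ["s1", "s2"]}
print(gale_shapley(su_prefs, pu_prefs))
```

In the resulting matching no SU-PU pair would both prefer each other over their assigned partners, which is the stability property the paper's distributed algorithm also guarantees.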
He, Hongsen; Lu, Jing; Chen, Jingdong; Qiu, Xiaojun; Benesty, Jacob
2014-08-01
Blind multichannel identification is generally sensitive to background noise. Although there have been some efforts in the literature devoted to improving the robustness of blind multichannel identification with respect to noise, most of those works assume that the noise is Gaussian distributed, which is often not valid in real room acoustic environments. This paper deals with the more practical scenario where the noise is not Gaussian. To improve the robustness of blind multichannel identification to non-Gaussian noise, a robust normalized multichannel frequency-domain least-mean M-estimate algorithm is developed. Unlike the traditional approaches that use the squared error as the cost function, the proposed algorithm uses an M-estimator to form the cost function, which is shown to be immune to non-Gaussian noise with a symmetric α-stable distribution. Experiments based on the identification of a single-input/multiple-output acoustic system demonstrate the robustness of the proposed algorithm.
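The M-estimator idea, replacing the squared-error cost so that large impulsive errors have bounded influence on the adaptation, can be sketched with a simplified time-domain, single-channel LMS variant (the paper's algorithm is normalized, multichannel and frequency-domain; all signals and parameter values here are invented):

```python
import numpy as np

def huber_weight(e, delta=1.0):
    """Huber M-estimator influence function: linear for small errors,
    clipped for outliers, so impulsive noise cannot dominate the update."""
    return e if abs(e) <= delta else delta * np.sign(e)

def robust_lms(x, d, taps=4, mu=0.05, delta=1.0):
    """System identification with LMS where the raw error in the weight
    update is replaced by its Huber-clipped value."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(d)):
        u = x[n - taps + 1:n + 1][::-1]      # most recent input samples first
        e = d[n] - w @ u
        w += mu * huber_weight(e, delta) * u
    return w

rng = np.random.default_rng(3)
x = rng.standard_normal(20000)
h = np.array([0.5, -0.3, 0.2, 0.1])          # unknown system to identify
d = np.convolve(x, h)[:20000]
# impulsive (heavy-tailed) noise: rare large spikes
spikes = (rng.random(20000) < 0.01) * rng.standard_normal(20000) * 50
w = robust_lms(x, d + spikes)
print(w)
```

With a plain squared-error LMS the 50x spikes would repeatedly throw the weights off; the clipped update keeps the estimate near the true impulse response.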
Stable chaos in fluctuation driven neural circuits
International Nuclear Information System (INIS)
Angulo-Garcia, David; Torcini, Alessandro
2014-01-01
Highlights: • Nonlinear instabilities in fluctuation driven (balanced) neural circuits are studied. • Balanced networks display chaos and stable phases at different post-synaptic widths. • Linear instabilities coexist with nonlinear ones in the chaotic regime. • Erratic motion appears also in the linearly stable phase due to stable chaos. - Abstract: We study the dynamical stability of pulse coupled networks of leaky integrate-and-fire neurons against infinitesimal and finite perturbations. In particular, we compare mean versus fluctuation driven networks; the former (latter) are realized by considering purely excitatory (inhibitory) sparse neural circuits. In the excitatory case the instabilities of the system can be completely captured by a usual linear stability (Lyapunov) analysis, whereas the inhibitory networks can display the coexistence of linear and nonlinear instabilities. The nonlinear effects are associated with finite-amplitude instabilities, which have been characterized in terms of suitable indicators. For inhibitory coupling one observes a transition from chaotic to non-chaotic dynamics by decreasing the pulse-width. For sufficiently fast synapses the system, despite showing an erratic evolution, is linearly stable, thus representing a prototypical example of stable chaos
Metabolic studies in man using stable isotopes
International Nuclear Information System (INIS)
Faust, H.; Jung, K.; Krumbiegel, P.
1993-01-01
In this project, stable isotope compounds and stable isotope pharmaceuticals were used (with emphasis on the application of 15 N) to study several aspects of nitrogen metabolism in man. Of the many methods available, the 15 N stable isotope tracer technique holds a special position because the methodology for application and nitrogen isotope analysis is proven and reliable. Valid routine methods using 15 N analysis by emission spectrometry have been demonstrated. Several methods for the preparation of biological material were developed during our participation in the Coordinated Research Programme. In these studies, direct procedures (i.e. use of diluted urine as samples without chemical preparation) or rapid isolation methods were favoured. Within the scope of the Analytical Quality Control Service (AQCS), enriched stable isotope reference materials for medical and biological studies were prepared and are now available through the International Atomic Energy Agency. The materials are of special importance as the increasing application of stable isotopes as tracers in medical, biological and agricultural studies has focused interest on reliable measurements of biological material of different origin. 24 refs
Implicit Kalman filter algorithm for nuclear reactor analysis
International Nuclear Information System (INIS)
Hassberger, J.A.; Lee, J.C.
1986-01-01
Artificial intelligence (AI) is currently the hot topic in nuclear power plant diagnostics and control. Recently, researchers have considered the use of simulation as knowledge in which faster than real-time best-estimate simulations based on first principles are tightly coupled with AI systems for analyzing power plant transients on-line. On-line simulations can be improved through a Kalman filter, a mathematical technique for obtaining the optimal estimate of a system state given the information contained in the equations of system dynamics and measurements made on the system. Filtering can be used to systemically adjust parameters of a low-order simulation model to obtain reasonable agreement between the model and actual plant dynamics. The authors present here a general Kalman filtering algorithm that derives its information of system dynamics implicitly and naturally from the discrete time step-series of state estimates available from a simulation program. Previous research has demonstrated that models adjusted on past data can be coupled with an intelligent controller to predict the future time-course of plant transients
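As background, one predict/update cycle of a linear Kalman filter, the building block such a simulation-adjusting scheme relies on, can be sketched as follows (generic textbook form, not the implicit algorithm of the paper; the scalar tracking example is invented):

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: prior state estimate and covariance; z: new measurement;
    A: state transition; H: observation; Q, R: process/measurement noise."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: blend prediction and measurement via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a scalar state that decays toward zero, from noisy measurements
A = np.array([[0.95]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[0.5]])
rng = np.random.default_rng(4)
x_true, x_est, P = 10.0, np.array([0.0]), np.array([[1.0]])
for _ in range(200):
    x_true = 0.95 * x_true
    z = np.array([x_true + rng.normal(0.0, np.sqrt(0.5))])
    x_est, P = kalman_step(x_est, P, z, A, H, Q, R)
print(x_est[0], x_true)
```

In the setting the abstract describes, the "state" would include adjustable parameters of the low-order simulation model, updated in the same predict/correct fashion from plant measurements.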
Backtrack Orbit Search Algorithm
Knowles, K.; Swick, R.
2002-12-01
A Mathematical Solution to a Mathematical Problem. With the dramatic increase in satellite-borne sensor resolution, traditional methods of spatially searching for orbital data have become inadequate. As data volumes increase, end-users of the data have become increasingly intolerant of false positives. And, as computing power rapidly increases, end-users have come to expect equally rapid search speeds. Meanwhile, data archives have an interest in delivering the minimum amount of data that meets users' needs. This keeps their costs down and allows them to serve more users in a more timely manner. Many methods of spatial search for orbital data have been tried in the past and found wanting. The ever-popular lat/lon bounding box on a flat Earth is highly inaccurate. Spatial search based on nominal "orbits" is somewhat more accurate at much higher implementation cost and slower performance. Spatial search of orbital data based on predict orbit models is very accurate at a much higher maintenance cost and slower performance. This poster describes the Backtrack Orbit Search Algorithm, an alternative spatial search method for orbital data. Backtrack has a degree of accuracy that rivals predict methods while being faster, less costly to implement, and less costly to maintain than other methods.
Diagnostic algorithm for syncope.
Mereu, Roberto; Sau, Arunashis; Lim, Phang Boon
2014-09-01
Syncope is a common symptom with many causes. Affecting a large proportion of the population, both young and old, it represents a significant healthcare burden. The diagnostic approach to syncope should be focused on the initial evaluation, which includes a detailed clinical history, physical examination and 12-lead electrocardiogram. Following the initial evaluation, patients should be risk-stratified into high or low-risk groups in order to guide further investigations and management. Patients with high-risk features should be investigated further to exclude significant structural heart disease or arrhythmia. The ideal currently-available investigation should allow ECG recording during a spontaneous episode of syncope, and when this is not possible, an implantable loop recorder may be considered. In the emergency room setting, acute causes of syncope must also be considered including severe cardiovascular compromise due to pulmonary, cardiac or vascular pathology. While not all patients will receive a conclusive diagnosis, risk-stratification in patients to guide appropriate investigations in the context of a diagnostic algorithm should allow a benign prognosis to be maintained. Copyright © 2014 Elsevier B.V. All rights reserved.
Toward an Algorithmic Pedagogy
Directory of Open Access Journals (Sweden)
Holly Willis
2007-01-01
The demand for an expanded definition of literacy to accommodate visual and aural media is not particularly new, but it gains urgency as college students transform, becoming producers of media in many of their everyday social activities. The response among those who grapple with these issues as instructors has been to advocate for new definitions of literacy and, particularly, an understanding of visual literacy. These efforts are exemplary, and promote a much needed rethinking of literacy and models of pedagogy. However, in what is more akin to a manifesto than a polished argument, this essay argues that we need to push farther: What if we moved beyond visual rhetoric, as well as a game-based pedagogy and the adoption of a broad range of media tools on campus, toward a pedagogy grounded fundamentally in a media ecology? Framing this investigation in terms of a media ecology allows us to take account of the multiply determining relationships wrought not just by individual media, but by the interrelationships, dependencies and symbioses that take place within the dynamic system that is today’s high-tech university. An ecological approach allows us to examine what happens when new media practices collide with computational models, providing a glimpse of possible transformations not only in ways of being but in ways of teaching and learning. How, then, may pedagogical practices be transformed computationally or algorithmically and to what ends?
Streaming Algorithms for Line Simplification
DEFF Research Database (Denmark)
Abam, Mohammad; de Berg, Mark; Hachenberger, Peter
2010-01-01
this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...
Echo Cancellation I: Algorithms Simulation
Directory of Open Access Journals (Sweden)
P. Sovka
2000-04-01
An echo cancellation system used in mobile communications is analyzed. Convergence behavior and misadjustment of several LMS algorithms are compared. The misadjustment means errors in filter weight estimation. The resulting echo suppression for the discussed algorithms with simulated as well as real speech signals is evaluated. The optimal echo cancellation configuration is suggested.
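A minimal sketch of the adaptive-filter core of such a system: a normalized LMS (NLMS) filter estimates the echo path from the far-end signal and subtracts its echo estimate from the microphone signal (illustrative only; the echo path, signals and parameters are invented):

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, taps=32, mu=0.5, eps=1e-6):
    """Normalised LMS adaptive filter: adapt taps w so that w * far_end
    reproduces the echo in mic; the output is the echo-free residual."""
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps - 1, len(mic)):
        x = far_end[n - taps + 1:n + 1][::-1]   # most recent samples first
        e = mic[n] - w @ x                      # error = mic minus echo estimate
        w += mu * e * x / (x @ x + eps)         # normalised weight update
        out[n] = e
    return out, w

rng = np.random.default_rng(2)
far = rng.standard_normal(4000)
true_path = np.array([0.8, 0.0, -0.4, 0.2])     # unknown 4-tap echo path
echo = np.convolve(far, true_path)[:4000]       # microphone picks up the echo
residual, w = nlms_echo_canceller(far, echo, taps=8)
print(np.mean(residual[2000:] ** 2))            # residual echo power after convergence
```

The misadjustment the abstract refers to shows up here as the gap between the converged taps w and the true echo path; the normalisation makes the step size insensitive to the far-end signal power.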
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant, measurement-free implementation of the Toffoli and σ_z^{1/4} gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
International Nuclear Information System (INIS)
Grady, M.
1986-01-01
I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs
Global alignment algorithms implementations | Fatumo ...
African Journals Online (AJOL)
In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM) under SUSE versions 9.2 and 10.1.
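A minimal Python sketch of the Needleman-Wunsch global-alignment recurrence (the scoring parameters and test strings are illustrative, not taken from the paper):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score of strings a and b via dynamic programming."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                       # align a[:i] against gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap                       # align b[:j] against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    return F[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU", match=1, mismatch=-1, gap=-1))
```

A traceback over the same matrix would recover the alignment itself; only the optimal score is computed here.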
Recovery Rate of Clustering Algorithms
Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S
2009-01-01
This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old
Quantum algorithms and learning theory
Arunachalam, S.
2018-01-01
This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. (1) Consider a search space of N elements. One of these elements is "marked" and our goal is to find it. We describe a quantum algorithm to solve this problem
Where are the parallel algorithms?
Voigt, R. G.
1985-01-01
Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
Algorithms in combinatorial design theory
Colbourn, CJ
1985-01-01
The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.
Executable Pseudocode for Graph Algorithms
B. Ó Nualláin (Breanndán)
2015-01-01
Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the
On exact algorithms for treewidth
Bodlaender, H.L.; Fomin, F.V.; Koster, A.M.C.A.; Kratsch, D.; Thilikos, D.M.
2006-01-01
We give experimental and theoretical results on the problem of computing the treewidth of a graph by exact exponential time algorithms using exponential space or using only polynomial space. We first report on an implementation of a dynamic programming algorithm for computing the treewidth of a
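As a rough illustration of exponential-time treewidth computation (not the authors' dynamic programming algorithm), one can use the characterization of treewidth as the minimum, over all elimination orderings, of the maximum degree of a vertex at the moment it is eliminated. A brute-force Python sketch for tiny graphs:

```python
from itertools import permutations

def treewidth_bruteforce(n, edges):
    """Treewidth via exhaustive search over elimination orderings (O(n!))."""
    best = n - 1
    for order in permutations(range(n)):
        # rebuild the adjacency structure for each candidate ordering
        adj = {v: set() for v in range(n)}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        width = 0
        for v in order:
            nbrs = adj[v]
            width = max(width, len(nbrs))
            # eliminating v turns its neighborhood into a clique
            for a in nbrs:
                for b in nbrs:
                    if a != b:
                        adj[a].add(b)
            for a in nbrs:
                adj[a].discard(v)
            del adj[v]
        best = min(best, width)
    return best
```

The dynamic programming algorithms studied in the paper improve on this factorial search, trading exponential space (or, in the polynomial-space variants, extra time) for a far better exponential base.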
Cascade Error Projection Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
On some topological properties of stable measures
DEFF Research Database (Denmark)
Nielsen, Carsten Krabbe
1996-01-01
Summary The paper shows that the set of stable probability measures and the set of Rational Beliefs relative to a given stationary measure are closed in the strong topology, but not closed in the topology of weak convergence. However, subsets of the set of stable probability measures which are characterized by uniformity of convergence of the empirical distribution are closed in the topology of weak convergence. It is demonstrated that such subsets exist. In particular, there is an increasing sequence of sets of SIDS measures whose union is the set of all SIDS measures generated by a particular system and such that each subset consists of stable measures. The uniformity requirement has a natural interpretation in terms of plausibility of Rational Beliefs...
Concentration of stable elements in food products
Energy Technology Data Exchange (ETDEWEB)
Montford, M.A.; Shank, K.E.; Hendricks, C.; Oakes, T.W.
1980-01-01
Food samples were taken from commercial markets and analyzed for stable element content. The concentrations of most stable elements (Ag, Al, As, Au, Ba, Br, Ca, Ce, Cl, Co, Cr, Cs, Cu, Fe, Hf, I, K, La, Mg, Mn, Mo, Na, Rb, Sb, Sc, Se, Sr, Ta, Th, Ti, V, Zn, Zr) were determined using multiple-element neutron activation analysis, while the concentrations of other elements (Cd, Hg, Ni, Pb) were determined using atomic absorption. The relevance of the concentrations found are noted in relation to other literature values. An earlier study was extended to include the determination of the concentration of stable elements in home-grown products in the vicinity of the Oak Ridge National Laboratory. Comparisons between the commercial and local food-stuff values are discussed.
Margnat, Florent; Fortuné, Véronique
2010-10-01
An iterative algorithm is developed for the computation of aeroacoustic integrals in the time domain. It is specially designed for the generation of acoustic images, thus giving access to the wavefront pattern radiated by an unsteady flow when large size source fields are considered. It is based on an iterative selection of source-observer pairs involved in the radiation process at a given time-step. It is written as an advanced-time approach, allowing easy connection with flow simulation tools. Its efficiency is related to the fraction of an observer grid step that a sound-wave covers during one time step. Test computations were performed, showing the CPU-time to be 30 to 50 times smaller than with a classical non-iterative procedure. The algorithm is applied to compute the sound radiated by a spatially evolving mixing-layer flow: it is used to compute and visualize contributions to the acoustic field from the different terms obtained by a decomposition of the Lighthill source term.
Stability Analysis of Learning Algorithms for Ontology Similarity Computation
Directory of Open Access Journals (Sweden)
Wei Gao
2013-01-01
Full Text Available Ontology, as a useful tool, is widely applied in many areas such as social science, computer science, and medical science. Ontology concept similarity calculation is the key part of the algorithms in these applications. A recent approach is to make use of similarity between vertices on ontology graphs: instead of pairwise computations, it is based on a function that maps the vertex set of an ontology graph to real numbers. To obtain such a function, the ranking learning problem plays an important and essential role, especially the k-partite ranking algorithm, which is suitable for solving some ontology problems. A ranking function is usually used to map the vertices of an ontology graph to numbers and assign ranks to the vertices through their scores. Such a function can be learned from a training sample, which contains a subset of the vertices of the ontology graph. A good ranking function means small ranking mistakes and good stability. For ranking algorithms in a well-stable state, we study generalization bounds via some concepts of algorithmic stability. We also find that kernel-based ranking algorithms stated as regularization schemes in reproducing kernel Hilbert spaces satisfy stability conditions and have great generalization abilities.
Novel medical image enhancement algorithms
Agaian, Sos; McClendon, Stephen A.
2010-01-01
In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
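The alpha-trimmed mean filter that serves as the backbone of the first algorithm is straightforward to sketch. The window size and trim count below are illustrative defaults, not the paper's settings, and the pure-Python nested loops stand in for what would normally be a vectorized implementation:

```python
def alpha_trimmed_mean_filter(img, size=3, trim=1):
    """Alpha-trimmed mean: sort each window, drop the `trim` smallest and
    largest samples, and average the remainder (edges handled by clamping)."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            window.sort()
            kept = window[trim:len(window) - trim]
            out[y][x] = sum(kept) / len(kept)
    return out
```

With trim = 0 this reduces to the ordinary mean filter, and with maximal trim to the median filter; intermediate values reject impulse noise while still averaging Gaussian noise.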
Elementary functions algorithms and implementation
Muller, Jean-Michel
2016-01-01
This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
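A classic member of the "shift-and-add" family covered in Part II is the CORDIC rotation for sine and cosine. The floating-point Python sketch below is only illustrative; hardware implementations use fixed-point arithmetic, tabulated arctangent constants, and true bit shifts:

```python
import math

def cordic_sincos(theta, n=32):
    """CORDIC rotation mode: computes (cos(theta), sin(theta)) for
    |theta| < pi/2 using additions and multiplications by powers of two."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated rotation gain
    x, y, z = K, 0.0, theta                     # pre-scale by 1/gain
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0             # rotate toward residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y
```

Each iteration adds roughly one bit of accuracy, which is why shift-and-add methods suit hardware where multipliers are expensive.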
Mixture reduction algorithms for target tracking in clutter
Salmond, David J.
1990-10-01
The Bayesian solution of the problem of tracking a target in random clutter gives rise to Gaussian mixture distributions, which are composed of an ever increasing number of components. To implement such a tracking filter, the growth of components must be controlled by approximating the mixture distribution. A popular and economical scheme is the Probabilistic Data Association Filter (PDAF), which reduces the mixture to a single Gaussian component at each time step. However, this approximation may destroy valuable information, especially if several significant, well-spaced components are present. In this paper, two new algorithms for reducing Gaussian mixture distributions are presented. These techniques preserve the mean and covariance of the mixture, and the final approximation is itself a Gaussian mixture. The reduction is achieved by successively merging pairs of components or groups of components until their number is reduced to some specified limit. Further reduction will then proceed while the approximation to the main features of the original distribution is still good. The performance of the most economical of these algorithms has been compared with that of the PDAF for the problem of tracking a single target which moves in a plane according to a second-order model. A linear sensor which measures target position is corrupted by uniformly distributed clutter. Given a detection probability of unity and perfect knowledge of initial target position and velocity, this problem depends on only two non-dimensional parameters. Monte Carlo simulation has been employed to identify the region of this parameter space where significant performance improvement is obtained over the PDAF.
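The core step of such reduction schemes, the moment-preserving merge of a pair of components, is easy to state. A one-dimensional sketch (the paper works with multivariate Gaussians, where the means become vectors and the variances covariance matrices, but the formula has the same shape):

```python
def merge_components(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two 1-D Gaussian components
    (weight, mean, variance): the result matches the pair's total
    weight, mean and variance exactly."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    # variance = weighted within-component variance + between-means spread
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v
```

Repeatedly merging the "closest" pair (under some dissimilarity measure) until the component count reaches the specified limit yields the reduced mixture; collapsing everything to a single component in one step recovers the PDAF approximation.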
Stable isotopes in Lithuanian bioarcheological material
Skipityte, Raminta; Jankauskas, Rimantas; Remeikis, Vidmantas
2015-04-01
Investigation of bioarcheological material of ancient human populations allows us to understand the subsistence behavior associated with various adaptations to the environment. Feeding habits are essential to the survival and growth of ancient populations. Stable isotope analysis is an accepted tool in paleodiet (Schutkowski et al, 1999) and paleoenvironmental (Zernitskaya et al, 2014) studies. However, stable isotopes can be useful not only in investigating human feeding habits but also in describing the social and cultural structure of past populations (Le Huray and Schutkowski, 2005). Only a few stable isotope investigations have been performed before in the Lithuanian region, suggesting a quite uniform diet between males and females and protein intake from freshwater fish and animal protein. Previously, stable isotope analysis had only been used to study a Stone Age population; more recently, studies have been conducted on Iron Age and Late medieval samples (Jacobs et al, 2009). Nevertheless, more precise examination was needed. In this study, stable isotope analyses were performed on human bone collagen and apatite samples. Data represented various ages (from 5-7th cent. to 18th cent.). Stable carbon and nitrogen isotope analysis on medieval populations indicated that individuals at the studied sites in Lithuania were almost exclusively consuming C3 plants, C3-fed terrestrial animals, and some freshwater resources. The current investigation demonstrated social differences between elites and country people and is promising for paleodietary and daily life reconstruction. Acknowledgement I thank prof. dr. G. Grupe, Director of the Anthropological and Palaeoanatomical State Collection in Munich for providing the opportunity to work in her laboratory. Part of this work was funded by DAAD. Antanaitis-Jacobs, Indre, et al. "Diet in early Lithuanian prehistory and the new stable isotope evidence." Archaeologia Baltica 12 (2009): 12-30. Le Huray, Jonathan D., and Holger
Bordism, stable homotopy and adams spectral sequences
Kochman, Stanley O
1996-01-01
This book is a compilation of lecture notes that were prepared for the graduate course "Adams Spectral Sequences and Stable Homotopy Theory" given at The Fields Institute during the fall of 1995. The aim of this volume is to prepare students with a knowledge of elementary algebraic topology to study recent developments in stable homotopy theory, such as the nilpotence and periodicity theorems. Suitable as a text for an intermediate course in algebraic topology, this book provides a direct exposition of the basic concepts of bordism, characteristic classes, Adams spectral sequences, Brown-Peter
Modelling stable water isotopes: Status and perspectives
Directory of Open Access Journals (Sweden)
Werner M.
2010-12-01
Full Text Available Modelling of stable water isotopes H₂¹⁸O and HDO within various parts of the Earth's hydrological cycle has clearly improved our understanding of the interplay between climatic variations and related isotope fractionation processes. In this article key principles and major research results of stable water isotope modelling studies are described. Emphasis is put on research work using explicit isotope diagnostics within general circulation models as this highly complex model setup bears many resemblances with studies using simpler isotope modelling approaches.
Energy Technology Data Exchange (ETDEWEB)
Joseph, Ilon [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-05-27
Jacobian-free Newton-Krylov (JFNK) algorithms are a potentially powerful class of methods for solving the problem of coupling codes that address different physics models. As communication capability between individual submodules varies, different choices of coupling algorithms are required. The more communication that is available, the more possible it becomes to exploit the simple sparsity pattern of the Jacobian, albeit of a large system. The less communication that is available, the more dense the Jacobian matrices become and new types of preconditioners must be sought to efficiently take large time steps. In general, methods that use constrained or reduced subsystems can offer a compromise in complexity. The specific problem of coupling a fluid plasma code to a kinetic neutrals code is discussed as an example.
Energy Technology Data Exchange (ETDEWEB)
Osei-Kuffuor, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-01-01
We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10⁻⁴ Ha/Bohr.
Numerically stable fluid–structure interactions between compressible flow and solid structures
Grétarsson, Jón Tómas
2011-04-01
We propose a novel method to implicitly two-way couple Eulerian compressible flow to volumetric Lagrangian solids. The method works for both deformable and rigid solids and for arbitrary equations of state. The method exploits the formulation of [11] which solves compressible fluid in a semi-implicit manner, solving for the advection part explicitly and then correcting the intermediate state to time tn+1 using an implicit pressure, obtained by solving a modified Poisson system. Similar to previous fluid-structure interaction methods, we apply pressure forces to the solid and enforce a velocity boundary condition on the fluid in order to satisfy a no-slip constraint. Unlike previous methods, however, we apply these coupled interactions implicitly by adding the constraint to the pressure system and combining it with any implicit solid forces in order to obtain a strongly coupled, symmetric indefinite system (similar to [17], which only handles incompressible flow). We also show that, under a few reasonable assumptions, this system can be made symmetric positive-definite by following the methodology of [16]. Because our method handles the fluid-structure interactions implicitly, we avoid introducing any new time step restrictions and obtain stable results even for high density-to-mass ratios, where explicit methods struggle or fail. We exactly conserve momentum and kinetic energy (thermal fluid-structure interactions are not considered) at the fluid-structure interface, and hence naturally handle highly non-linear phenomenon such as shocks, contacts and rarefactions. © 2011 Elsevier Inc.
International Nuclear Information System (INIS)
Johnston, Hans; Liu Jianguo
2004-01-01
We present numerical schemes for the incompressible Navier-Stokes equations based on a primitive variable formulation in which the incompressibility constraint has been replaced by a pressure Poisson equation. The pressure is treated explicitly in time, completely decoupling the computation of the momentum and kinematic equations. The result is a class of extremely efficient Navier-Stokes solvers. Full time accuracy is achieved for all flow variables. The key to the schemes is a Neumann boundary condition for the pressure Poisson equation which enforces the incompressibility condition for the velocity field. Irrespective of explicit or implicit time discretization of the viscous term in the momentum equation the explicit time discretization of the pressure term does not affect the time step constraint. Indeed, we prove unconditional stability of the new formulation for the Stokes equation with explicit treatment of the pressure term and first or second order implicit treatment of the viscous term. Systematic numerical experiments for the full Navier-Stokes equations indicate that a second order implicit time discretization of the viscous term, with the pressure and convective terms treated explicitly, is stable under the standard CFL condition. Additionally, various numerical examples are presented, including both implicit and explicit time discretizations, using spectral and finite difference spatial discretizations, demonstrating the accuracy, flexibility and efficiency of this class of schemes. In particular, a Galerkin formulation is presented requiring only C⁰ elements to implement
Study of variations of stable isotopes in precipitation: case of Antananarivo
International Nuclear Information System (INIS)
Randrianarivola, M.
2014-01-01
The isotopic signature of precipitation is the input signal in any study of the hydrological cycle. The scientific objective of this work is to better understand the isotopic variations in precipitation and identify their processes. We used the GNIP (Global Network of Isotopes in Precipitation) measurement network, in which data are acquired by the International Atomic Energy Agency through the isotope hydrology laboratory at INSTN-Madagascar. Analyses of stable isotopes (¹⁸O and ²H) were performed at a monthly time step. We were able to confirm the relative importance of different mechanisms governing the isotopic composition of precipitation. The spatial distribution of abundance ratios of Antananarivo rain is in fact dictated by temperature, which follows indirectly from the effects of altitude and seasonal variations. At the monthly scale, the local meteoric water line δ²H versus δ¹⁸O shows the specificity of Antananarivo (deuterium excess of 17.5‰). Additionally, the seasonal variation in precipitation is related to temperature, with a deuterium excess of d = 15‰ in summer and d = 18‰ in winter.
An energy-stable time-integrator for phase-field models
Vignal, Philippe
2016-12-27
We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.
An Implementable First-Order Primal-Dual Algorithm for Structured Convex Optimization
Directory of Open Access Journals (Sweden)
Feng Ma
2014-01-01
Full Text Available Many application problems of practical interest can be posed as structured convex optimization models. In this paper, we study a new first-order primal-dual algorithm. The method is easily implementable, provided that the resolvent operators of the component objective functions are simple to evaluate. We show that the proposed method can be interpreted as a proximal point algorithm with a customized metric proximal parameter. Convergence property is established under the analytic contraction framework. Finally, we verify the efficiency of the algorithm by solving the stable principal component pursuit problem.
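The premise that resolvent (proximal) operators of the component objectives are simple to evaluate can be illustrated with the best-known example, soft-thresholding for the l1 norm, which arises in sparse problems such as stable principal component pursuit. A minimal sketch, not tied to the paper's specific algorithm:

```python
def prox_l1(v, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding), applied
    componentwise: shrink each entry toward zero by lam, clipping at zero."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
            for x in v]
```

Because this closed form costs O(n), each iteration of a first-order primal-dual method built on it stays cheap, which is exactly the structure the paper exploits.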
A molecular dynamics algorithm for simulation of field theories in the canonical ensemble
International Nuclear Information System (INIS)
Kogut, J.B.; Sinclair, D.K.
1986-01-01
We add a single scalar degree of freedom (''demon'') to the microcanonical ensemble which converts its molecular dynamics into a simulation method for the canonical ensemble (euclidean path integral) of the underlying field theory. This generalization of the microcanonical molecular dynamics algorithm simulates the field theory at fixed coupling with a completely deterministic procedure. We discuss the finite size effects of the method, the equipartition theorem and ergodicity. The method is applied to the planar model in two dimensions and SU(3) lattice gauge theory with four species of light, dynamical quarks in four dimensions. The method is much less sensitive to its discrete time step than conventional Langevin equation simulations of the canonical ensemble. The method is a straightforward generalization of a procedure introduced by S. Nose for molecular physics. (orig.)
An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium
Palmer, Grant
1988-01-01
An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively large time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.
Recent developments in structure-preserving algorithms for oscillatory differential equations
Wu, Xinyuan
2018-01-01
The main theme of this book is recent progress in structure-preserving algorithms for solving initial value problems of oscillatory differential equations arising in a variety of research areas, such as astronomy, theoretical physics, electronics, quantum mechanics and engineering. It systematically describes the latest advances in the development of structure-preserving integrators for oscillatory differential equations, such as structure-preserving exponential integrators, functionally fitted energy-preserving integrators, exponential Fourier collocation methods, trigonometric collocation methods, and symmetric and arbitrarily high-order time-stepping methods. Most of the material presented here is drawn from the recent literature. Theoretical analysis of the newly developed schemes shows their advantages in the context of structure preservation. All the new methods introduced in this book are proven to be highly effective compared with the well-known codes in the scientific literature. This book also addre...
Gnoffo, Peter A.; Johnston, Christopher O.
2011-01-01
Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove correction factors or film cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered - the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. A successive substitution is employed and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with stability of the shape change algorithm caused by explicit time step limits are also discussed.
An Efficient Surface Algorithm for Random-Particle Simulation of Vorticity and Heat Transport
Smith, P. A.; Stansby, P. K.
1989-04-01
A new surface algorithm has been incorporated into the random-vortex method for the simulation of 2-dimensional laminar flow, in which vortex particles are deleted rather than reflected as they cross a solid surface. This involves a modification to the strength and random walk of newly created vortex particles. Computations of the early stages of symmetric, impulsively started flow around a circular cylinder for a wide range of Reynolds numbers demonstrate that the number of vortices required for convergence is substantially reduced. The method has been further extended to accommodate forced convective heat transfer where temperature particles are created at a surface to satisfy the condition of constant surface temperature. Vortex and temperature particles are handled together throughout each time step. For long runs, in which a steady state is reached, comparison is made with some time-averaged experimental heat transfer data for Reynolds numbers up to a few hundred. A Karman vortex street occurs at the higher Reynolds numbers.
Progress in parallel implementation of the multilevel plane wave time domain algorithm
Liu, Yang
2013-07-01
The computational complexity and memory requirements of classical schemes for evaluating transient electromagnetic fields produced by N_s dipoles active for N_t time steps scale as O(N_t N_s^2) and O(N_s^2), respectively. The multilevel plane wave time domain (PWTD) algorithm [A.A. Ergin et al., Antennas and Propagation Magazine, IEEE, vol. 41, pp. 39-52, 1999], viz. the extension of the frequency domain fast multipole method (FMM) to the time domain, reduces the above costs to O(N_t N_s log^2 N_s) and O(N_s^α), with α = 1.5 for surface current distributions and α = 4/3 for volumetric ones. Its favorable computational and memory costs notwithstanding, serial implementations of the PWTD scheme unfortunately remain somewhat limited in scope and ill-suited to tackle complex real-world scattering problems, and parallel implementations are called for. © 2013 IEEE.
Algorithm for calculating spectral intensity due to charged particles in arbitrary motion
Directory of Open Access Journals (Sweden)
A. G. R. Thomas
2010-02-01
Full Text Available An algorithm for calculating the spectral intensity of radiation due to the coherent addition of many particles with arbitrary trajectories is described. Direct numerical integration of the Liénard-Wiechert potentials, in the far field, for extremely high photon energies and many particles is made computationally feasible by a mixed analytic and numerical method. Exact integrals of spectral intensity are made between discretely sampled trajectories, by assuming the space-time four-vector is a quadratic function of proper time. The integral Fourier transform of the trajectory with respect to time, the modulus squared of which comprises the spectral intensity, can then be formed by piecewise summation of exact integrals between discrete points. Because of this, the calculation is not restricted by discrete sampling bandwidth theory and, hence, for smooth trajectories, time steps many orders larger than the inverse of the frequency of interest can be taken.
Portable Health Algorithms Test System
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Pelvic radiography in ATLS algorithms: A diminishing role?
Directory of Open Access Journals (Sweden)
Buggy Donald J
2008-03-01
Full Text Available Abstract Background Pelvic x-ray is a routine part of the primary survey of polytraumatized patients according to Advanced Trauma Life Support (ATLS) guidelines. However, pelvic CT is the gold standard imaging technique in the diagnosis of pelvic fractures. This study was conducted to confirm the safety of a modified ATLS algorithm omitting pelvic x-ray in hemodynamically stable polytraumatized patients with a clinically stable pelvis in favour of later pelvic examination by CT scan. Methods We conducted a retrospective analysis of all polytraumatized patients in our emergency room between 01.07.2004 and 31.01.2006. Inclusion criteria were blunt abdominal trauma, initial hemodynamic stability and a stable pelvis on clinical examination. We excluded patients requiring immediate intervention because of hemodynamic instability. Results We reviewed the records of n = 452 polytraumatized patients, of which n = 91 fulfilled the inclusion criteria (56% male, mean age = 45 years). The mechanism of trauma included 43% road traffic accidents and 47% falls. In 68/91 (75%) patients, both a pelvic x-ray and a CT examination were performed; the remainder had only pelvic CT. In 6/68 (9%) patients, pelvic fracture was diagnosed by pelvic x-ray. None of these 6 patients had a false positive pelvic x-ray, i.e. in no case was the fracture absent on the pelvic CT scan. In 3/68 (4%) cases a fracture was missed on the pelvic x-ray but confirmed on CT (false negative on x-ray). None of the diagnosed fractures needed an immediate therapeutic intervention. 5 (56%) were classified as type A fractures, and another 4 (44%) as B 2.1 on computed tomography (AO classification). One A 2.1 fracture was found in a clinically stable patient who only received a CT scan (1/23). Conclusion While pelvic x-ray is an integral part of ATLS assessment, this retrospective study suggests that in hemodynamically stable patients with a clinically stable pelvis, its sensitivity is only 67% and it may safely be omitted in
Stable isotope analysis of dynamic lipidomics.
Brandsma, Joost; Bailey, Andrew P; Koster, Grielof; Gould, Alex P; Postle, Anthony D
2017-08-01
Metabolic pathway flux is a fundamental element of biological activity, which can be quantified using a variety of mass spectrometric techniques to monitor incorporation of stable isotope-labelled substrates into metabolic products. This article contrasts developments in electrospray ionisation mass spectrometry (ESI-MS) for the measurement of lipid metabolism with more established gas chromatography mass spectrometry and isotope ratio mass spectrometry methodologies. ESI-MS combined with diagnostic tandem MS/MS scans permits the sensitive and specific analysis of stable isotope-labelled substrates into intact lipid molecular species without the requirement for lipid hydrolysis and derivatisation. Such dynamic lipidomic methodologies using non-toxic stable isotopes can be readily applied to quantify lipid metabolic fluxes in clinical and metabolic studies in vivo. However, a significant current limitation is the absence of appropriate software to generate kinetic models of substrate incorporation into multiple products in the time domain. Finally, we discuss the future potential of stable isotope-mass spectrometry imaging to quantify the location as well as the extent of lipid synthesis. This article is part of a Special Issue entitled: BBALIP_Lipidomics Opinion Articles edited by Sepp Kohlwein. Copyright © 2017 Elsevier B.V. All rights reserved.
Petrography, compositional characteristics and stable isotope ...
African Journals Online (AJOL)
Petrography, compositional characteristics and stable isotope geochemistry of the Ewekoro formation from Ibese Corehole, eastern Dahomey basin, southwestern Nigeria. ME Nton, MO ... Preserved pore types such as intercrystalline, moldic and vuggy pores were observed as the predominant conduits for fluids. The major ...
petrography, compositional characteristics and stable isotope ...
African Journals Online (AJOL)
PROF EKWUEME
Subsurface samples of the predominantly carbonate Ewekoro Formation, obtained from Ibese core hole within the Dahomey basin were used in this study. Investigations entail petrographic, elemental composition as well as stable isotopes (carbon and oxygen) geochemistry in order to deduce the different microfacies and ...
Substitution of stable isotopes in Chlorella
Flaumenhaft, E.; Katz, J. J.; Uphaus, R. A.
1969-01-01
Replacement of biologically important isotopes in the alga Chlorella by corresponding heavier stable isotopes produces increasingly greater deviations from the normal cell size and changes the quality and distribution of certain cellular components. The usefulness of isotopically altered organisms increases interest in the study of such permuted organisms.
Champion Island, Galapagos Stable Oxygen Calibration Data
National Oceanic and Atmospheric Administration, Department of Commerce — Galapagos Coral Stable Oxygen Calibration Data. Sites: Bartolome Island: 0 deg, 17 min S, 90 deg 33 min W. Champion Island: 1 deg, 15 min S, 90 deg, 05 min W. Urvina...
Stable propagation of 'selfish' genetic elements
Indian Academy of Sciences (India)
Unknown
viruses such as the Epstein-Barr virus (Harris et al 1985; Kanda et al 2001) and bovine papilloma virus (Lehman and Botchan 1998; Ilves et al 1999), which exist predominantly as extrachromosomal episomes, have been shown to utilize chromosome tethering as a means for stable segregation. The tethering mechanism ...
Facies, dissolution seams and stable isotope compositions
Indian Academy of Sciences (India)
Stable isotope analysis of the limestone shows that δ13C and δ18O values are compatible with the early Mesoproterozoic open seawater composition. The ribbon limestone facies in the Rohtas Limestone is characterized by micritic beds, each decoupled into a lower band enriched and an upper band depleted in dissolution ...
Connected domination stable graphs upon edge addition ...
African Journals Online (AJOL)
A set S of vertices in a graph G is a connected dominating set of G if S dominates G and the subgraph induced by S is connected. We study the graphs for which adding any edge does not change the connected domination number. Keywords: Connected domination, connected domination stable, edge addition ...
Stable magnetic remanence in antiferromagnetic goethite.
Strangway, D W; McMahon, B E; Honea, R M
1967-11-10
Goethite, known to be antiferromagnetic, acquires thermoremanent magnetization at its Néel temperature of 120 degrees C. This remanence is extremely stable and is due to the presence of unbalanced spins in the antiferromagnetic structure; the spins may result from grain size, imperfections, or impurities.
American option valuation under time changed tempered stable Lévy processes
Gong, Xiaoli; Zhuang, Xintian
2017-01-01
Given that the underlying assets in financial markets exhibit stylized facts such as leptokurtosis, asymmetry, clustering properties and the heteroskedasticity effect, this paper presents a novel model for pricing American options under the assumption that the stock price process is governed by a time-changed tempered stable Lévy process. As this model is constructed by introducing random time changes into tempered stable (TS) processes, which specifically refer to the normal tempered stable (NTS) distribution as well as the classical tempered stable (CTS) distribution, it permits infinite jumps as well as capturing randomly varying time in stochastic volatility, consequently taking into account empirical facts such as leptokurtosis, skewness and volatility clustering behaviors. We employ the Fourier-cosine technique to compute American option prices and propose the improved Particle Swarm Optimization (IPSO) intelligent algorithm for model calibration. To demonstrate the advantage of the constructed model, we carry out empirical research on American index options in financial markets across a wide range of models, with the time-changed normal tempered stable distribution model yielding superior performance to the others.
Learning from nature: Nature-inspired algorithms
DEFF Research Database (Denmark)
Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin
2016-01-01
During last decade, the nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc...
Stable ischemic heart disease in women: current perspectives
Directory of Open Access Journals (Sweden)
Samad F
2017-09-01
Fatima Samad,1 Anushree Agarwal,2 Zainab Samad3 1Aurora Cardiovascular Services, Aurora Sinai/Aurora St Luke's Medical Centers, University of Wisconsin School of Medicine and Public Health, Milwaukee, WI, 2Division of Cardiology, Department of Medicine, University of California San Francisco, San Francisco, CA, 3Division of Cardiology, Department of Medicine, Duke University Medical Center, Durham, NC, USA Abstract: Cardiovascular disease is the leading cause of death in women, accounting for 1 in every 4 female deaths. The pathophysiology of ischemic heart disease in women includes epicardial coronary artery disease, endothelial dysfunction, coronary vasospasm, plaque erosion and spontaneous coronary artery dissection. Angina is the most common presentation of stable ischemic heart disease (SIHD) in women. Risk factors for SIHD include traditional risks such as older age, obesity (body mass index [BMI] >25 kg/m2), smoking, hypertension, dyslipidemia, cerebrovascular and peripheral vascular disease, sedentary lifestyle, family history of premature coronary artery disease, metabolic syndrome and diabetes mellitus, and nontraditional risk factors, such as gestational diabetes, insulin resistance/polycystic ovarian disease, pregnancy-induced hypertension, pre-eclampsia, eclampsia, menopause, mental stress and autoimmune diseases. Diagnostic testing can be used effectively to risk stratify women. Guideline-directed medical therapy including aspirin, statins, beta-blocker therapy, calcium channel blockers and ranolazine should be instituted for symptom and ischemia management. Despite robust evidence regarding the adverse outcomes seen in women with ischemic heart disease, knowledge gaps exist in several areas. Future research needs to be directed toward a greater understanding of the role of nontraditional risk factors for SIHD in women, gaining deeper insights into the sex differences in therapeutic effects and formulating a sex-specific algorithm for the
Complex networks an algorithmic perspective
Erciyes, Kayhan
2014-01-01
Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every r
An investigation of genetic algorithms
International Nuclear Information System (INIS)
Douglas, S.R.
1995-04-01
Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
Instance-specific algorithm configuration
Malitsky, Yuri
2014-01-01
This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,
Quantum Computations: Fundamentals and Algorithms
International Nuclear Information System (INIS)
Duplij, S.A.; Shapoval, I.I.
2007-01-01
Basic concepts of quantum information theory and the principles of quantum computation are discussed, together with the possibility of building on this basis a device unique in computational power and operating principle: the quantum computer. The main blocks of quantum logic and schemes for implementing quantum computations are presented, as well as some effective quantum algorithms known today that realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described
Algorithms Design Techniques and Analysis
Alsuwaiyel, M H
1999-01-01
Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) solution of the formulated problem. One can solve a problem on its own using ad hoc techniques or follow those techniques that have produced efficient solutions to similar problems. This requires the understanding of various algorithm design techniques, how and when to use them to formulate solutions and the context appropriate for each of them. This book advocates the study of algorithm design techniques by presenting most of the useful algorithm desi
Subcubic Control Flow Analysis Algorithms
DEFF Research Database (Denmark)
Midtgaard, Jan; Van Horn, David
We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the 'cubic bottleneck', we apply known set compression techniques to obtain an algorithm ... that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from ...
Strontium stable isotope behaviour accompanying basalt weathering
Burton, K. W.; Parkinson, I. J.; Gíslason, S. G. R.
2016-12-01
The strontium (Sr) stable isotope composition of rivers is strongly controlled by the balance of carbonate to silicate weathering (Krabbenhöft et al. 2010; Pearce et al. 2015). However, rivers draining silicate catchments possess distinctly heavier Sr stable isotope values than their bedrock compositions, pointing to significant fractionation during weathering. Some have argued for preferential release of heavy Sr from primary phases during chemical weathering, others for the formation of secondary weathering minerals that incorporate light isotopes. This study presents high-precision double-spike Sr stable isotope data for soils, rivers, ground waters and estuarine waters from Iceland, reflecting both natural weathering and societal impacts on those environments. The bedrock in Iceland is dominantly basaltic, δ88/86Sr ≈ +0.27, extending to lighter values for rhyolites. Geothermal waters range from basaltic Sr stable compositions to those akin to seawater. Soil pore waters reflect a balance of input from primary mineral weathering, precipitation and litter recycling and removal into secondary phases and vegetation. Rivers and ground waters possess a wide range of δ88/86Sr compositions from +0.101 to +0.858. Elemental and isotope data indicate that this fractionation primarily results from the formation or dissolution of secondary zeolite (δ88/86Sr ≈ +0.10), but also carbonate (δ88/86Sr ≈ +0.22) and sometimes anhydrite (δ88/86Sr ≈ -0.73), driving the residual waters to heavier or lighter values, respectively. Estuarine waters largely reflect mixing with seawater, but are also affected by adsorption onto particulates, again driving the water to heavier values. Overall, these data indicate that the stability and nature of secondary weathering phases exert a strong control on the Sr stable isotope composition of silicate rivers. [1] Krabbenhöft et al. (2010) Geochim. Cosmochim. Acta 74, 4097-4109. [2] Pearce et al. (2015) Geochim. Cosmochim. Acta 157, 125-146.
Development of a Safety Management Web Tool for Horse Stables.
Leppälä, Jarkko; Kolstrup, Christina Lunner; Pinzke, Stefan; Rautiainen, Risto; Saastamoinen, Markku; Särkijärvi, Susanna
2015-11-12
Managing a horse stable involves risks, which can have serious consequences for the stable, employees, clients, visitors and horses. Existing industrial or farm production risk management tools are not directly applicable to horse stables and they need to be adapted for use by managers of different types of stables. As a part of the InnoEquine project, an innovative web tool, InnoHorse, was developed to support horse stable managers in business, safety, pasture and manure management. A literature review, empirical horse stable case studies, expert panel workshops and stakeholder interviews were carried out to support the design. The InnoHorse web tool includes a safety section containing a horse stable safety map, stable safety checklists, and examples of good practices in stable safety, horse handling and rescue planning. This new horse stable safety management tool can also help in organizing work processes in horse stables in general.
Noise filtering algorithm for the MFTF-B computer based control system
International Nuclear Information System (INIS)
Minor, E.G.
1983-01-01
An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions
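The change-detection logic the abstract describes (report an analog reading only when it differs from the last reported value by more than a noise threshold, using only subtraction and comparison, with one stored value per channel) can be sketched as follows. This is an illustrative sketch, not the MFTF-B implementation; the function names and threshold are hypothetical:

```python
def make_deadband_filter(threshold):
    """Report a new reading only when it differs from the last
    reported value by more than the threshold (noise is rejected)."""
    last_reported = {}  # per-channel last reported value

    def filter_reading(channel, value):
        prev = last_reported.get(channel)
        if prev is None or abs(value - prev) > threshold:
            last_reported[channel] = value
            return value      # significant change: pass to the database
        return None           # noise: suppress the message

    return filter_reading

f = make_deadband_filter(threshold=0.5)
updates = [f("ch0", v) for v in [10.0, 10.1, 9.9, 11.0, 11.2, 12.0]]
# only 10.0, 11.0 and 12.0 are reported; the small excursions are suppressed
```

Note that the comparison is against the last *reported* value rather than the previous sample, so a slow drift is still eventually reported once it accumulates past the threshold.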
EDGA: A Population Evolution Direction-Guided Genetic Algorithm for Protein-Ligand Docking.
Guan, Boxin; Zhang, Changsheng; Ning, Jiaxu
2016-07-01
Protein-ligand docking can be formulated as a search algorithm associated with an accurate scoring function. However, most current search algorithms cannot show good performance in docking problems, especially for highly flexible docking. To overcome this drawback, this article presents a novel and robust optimization algorithm (EDGA) based on the Lamarckian genetic algorithm (LGA) for solving flexible protein-ligand docking problems. This method applies a population evolution direction-guided model of genetics, in which the search direction evolves toward the optimum solution. The method is more efficient at finding the lowest energy of protein-ligand docking. We consider four search methods (a traditional genetic algorithm, LGA, SODOCK, and EDGA) and compare their performance on six protein-ligand docking problems. The results show that EDGA is the most stable, reliable, and successful.
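To illustrate the Lamarckian idea underlying LGA-style methods (a local refinement whose result is written back into the genotype before selection), here is a minimal sketch on a toy one-dimensional "energy" function. The function, operators and parameters are hypothetical stand-ins, not the docking method of the paper:

```python
import random

def toy_energy(x):
    # stand-in for a docking scoring function (lower is better)
    return (x - 3.0) ** 2

def local_search(x, step=0.05, iters=20):
    # Lamarckian refinement: hill-climb, then keep the refined
    # value as the individual's new "genotype"
    for _ in range(iters):
        for cand in (x - step, x + step):
            if toy_energy(cand) < toy_energy(x):
                x = cand
    return x

def lamarckian_ga(pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_search(x) for x in pop]       # refine everyone
        pop.sort(key=toy_energy)
        parents = pop[: pop_size // 2]             # truncation selection
        children = [
            (rng.choice(parents) + rng.choice(parents)) / 2
            + rng.gauss(0, 0.1)                    # crossover + mutation
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return min(pop, key=toy_energy)

best = lamarckian_ga()  # converges near the minimum at x = 3
```

The distinguishing feature is the write-back in `local_search`: a purely Darwinian GA would use the refined score for selection but keep the unrefined genotype.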
A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations
Energy Technology Data Exchange (ETDEWEB)
Osei-Kuffuor, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-01-01
Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^3) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
Self-Identification Algorithm for the Autonomous Control of Lateral Vibration in Flexible Rotors
Directory of Open Access Journals (Sweden)
Thiago Malta Buttini
2012-01-01
the shaft. For that, frequency response functions of the system are automatically identified experimentally by the algorithm. It is demonstrated that regions of stable gains can be easily plotted, and the most suitable gains can be found to minimize the resonant peak of the system in an autonomous way, without human intervention.
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
Designing algorithms using CAD technologies
Directory of Open Access Journals (Sweden)
Alin IORDACHE
2008-01-01
A representative example of an eLearning-platform modular application, 'Logical diagrams', is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of this domain: algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings; these are called blocks and are connected between them to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.
A quantum causal discovery algorithm
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Multiagent scheduling models and algorithms
Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur
2014-01-01
This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.
Efficient Algorithms for Subgraph Listing
Directory of Open Access Journals (Sweden)
Niklas Zechner
2014-05-01
Subgraph isomorphism is a fundamental problem in graph theory. In this paper we focus on listing subgraphs isomorphic to a given pattern graph. First, we look at the algorithm due to Chiba and Nishizeki for listing complete subgraphs of fixed size, and show that it cannot be extended to general subgraphs of fixed size. Then, we consider the algorithm due to Gąsieniec et al. for finding multiple witnesses of a Boolean matrix product, and use it to design a new output-sensitive algorithm for listing all triangles in a graph. As a corollary, we obtain an output-sensitive algorithm for listing subgraphs and induced subgraphs isomorphic to an arbitrary fixed pattern graph.
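For context, the standard baseline that output-sensitive triangle listers improve upon orients every edge from its lower-ranked to its higher-ranked endpoint (rank by degree, ties by label) so each triangle is enumerated exactly once from its lowest-ranked vertex. This is a textbook scheme, not the authors' algorithm:

```python
from itertools import combinations

def list_triangles(edges):
    """List each triangle once by orienting edges from lower- to
    higher-ranked vertices (rank = degree, ties broken by label)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    rank = {v: (len(nbrs), v) for v, nbrs in adj.items()}
    # forward neighbours: only edges pointing "up" in the ranking
    fwd = {v: {u for u in nbrs if rank[u] > rank[v]}
           for v, nbrs in adj.items()}
    triangles = []
    for v in adj:
        for a, b in combinations(sorted(fwd[v]), 2):
            if b in fwd[a] or a in fwd[b]:
                triangles.append(tuple(sorted((v, a, b))))
    return sorted(triangles)

tris = list_triangles([(1, 2), (2, 3), (1, 3), (3, 4), (2, 4)])
# → [(1, 2, 3), (2, 3, 4)]
```

Because only the lowest-ranked vertex of a triangle has both other vertices among its forward neighbours, no triangle is reported twice.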
A retrodictive stochastic simulation algorithm
International Nuclear Information System (INIS)
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-01-01
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
Autonomous algorithms for image restoration
Griniasty, Meir
1994-01-01
We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach known as 'Deterministic Annealing', and is reminiscent of the 'Deterministic Boltzmann Machine'. The algorithm is less time consuming in comparison with its simulated annealing alternative. We apply the theory to several architectures and compare their performances.
New algorithms for parallel MRI
International Nuclear Information System (INIS)
Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A
2008-01-01
Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and its flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we will use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
When the greedy algorithm fails
Bang-Jensen, Jørgen; Gutin, Gregory; Yeo, Anders
2004-01-01
We provide a characterization of the cases when the greedy algorithm may produce the unique worst possible solution for the problem of finding a minimum weight base in an independence system when the weights are taken from a finite range. We apply this theorem to TSP and the minimum bisection problem. The practical message of this paper is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting s...
A* Algorithm for Graphics Processors
Inam, Rafia; Cederman, Daniel; Tsigas, Philippas
2010-01-01
Today's computer games have thousands of agents moving at the same time in areas inhabited by a large number of obstacles. In such an environment it is important to be able to calculate multiple shortest paths concurrently in an efficient manner. The highly parallel nature of the graphics processor suits this scenario perfectly. We have implemented a graphics processor based version of the A* path finding algorithm together with three algorithmic improvements that allow it to work faster and ...
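A minimal CPU-side sketch of A* on a 4-connected grid, assuming unit step costs and a Manhattan-distance heuristic; the paper's GPU version and its three algorithmic improvements are not reproduced here:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Manhattan distance is admissible for unit-cost moves."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]   # (f = g + h, g, node)
    g = {start: 0}
    parent = {}
    while open_heap:
        f, cost, node = heapq.heappop(open_heap)
        if node == goal:                 # first pop of goal is optimal
            path = [node]
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        if cost > g.get(node, float("inf")):
            continue                     # stale heap entry, skip
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_cost
                    parent[(nr, nc)] = node
                    heapq.heappush(
                        open_heap,
                        (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None                          # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))      # routes around the obstacle row
```

The GPU setting described in the abstract runs many such searches concurrently, one per agent; the per-search logic is unchanged.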
Algorithm for programming function generators
International Nuclear Information System (INIS)
Bozoki, E.
1981-01-01
The present paper deals with a mathematical problem, encountered when driving a fully programmable μ-processor controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional restrictions (hardware imposed) are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described
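The core approximation step (covering a desired function with straight segments while keeping the deviation within a tolerance) can be sketched with a greedy longest-segment rule. The tolerance test below is illustrative only and ignores the hardware-imposed restrictions the paper addresses:

```python
def piecewise_linear(xs, ys, tol):
    """Greedy segmentation: extend each segment as far as possible
    while every intermediate sample stays within `tol` of the chord."""
    segments = []        # list of (x_start, y_start, x_end, y_end)
    i, n = 0, len(xs)
    while i < n - 1:
        j = i + 1
        while j + 1 < n:
            # chord from sample i to candidate endpoint j+1
            x0, y0, x1, y1 = xs[i], ys[i], xs[j + 1], ys[j + 1]
            slope = (y1 - y0) / (x1 - x0)
            err = max(abs(y0 + slope * (xs[k] - x0) - ys[k])
                      for k in range(i + 1, j + 1))
            if err > tol:
                break    # extension would violate the tolerance
            j += 1
        segments.append((xs[i], ys[i], xs[j], ys[j]))
        i = j            # next segment starts where this one ended
    return segments

xs = [k / 10 for k in range(21)]   # samples on 0.0 .. 2.0
ys = [x * x for x in xs]           # approximate y = x^2
segs = piecewise_linear(xs, ys, tol=0.05)
```

Because each segment is extended until just before the tolerance is exceeded, curvier regions automatically receive shorter segments, which keeps the segment count low for a given error bound.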
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Directory of Open Access Journals (Sweden)
Shan Zhong
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
Cascade Error Projection: A New Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.
1995-01-01
A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.
Energy Technology Data Exchange (ETDEWEB)
Souza, Bruno B.; Neto, Oriane M. [Universidade Federal de Minas Gerais - Department of Electrical Engineering (Brazil); Carrano, Eduardo G. [Centro Federal de Educacao Tecnologica de Minas Gerais - Department of Computer Engineering (Brazil); Takahashi, Ricardo H.C. [Universidade Federal de Minas Gerais - Department of Mathematics (Brazil)
2011-02-15
A recent paper has proposed a methodology for taking into account uncertainties in the load evolution within the design of electric distribution networks. That paper presented an immunological algorithm that is used for finding a set of solutions which are sub-optimal under the viewpoint of the 'mean scenario' load conditions, and which are submitted to a sensitivity analysis for the load uncertainty. This paper presents a further development of that algorithm, now employing a memetic algorithm (an algorithm endowed with local search operators) instead of the original immunological algorithm. The new algorithm is shown to present a better behavior, achieving a better set of candidate solutions, which dominate the solution set of the former algorithm. The solution set of the proposed algorithm is also stable, in the sense that: (i) the same set of solutions is found systematically; and (ii) the merit function values associated with those solutions vary smoothly from one solution to another. It can be concluded that the design procedure proposed in the earlier work should be performed preferentially with the algorithm proposed here. (author)
Directory of Open Access Journals (Sweden)
GRANDIN, P. H.
2014-06-01
Recommendation systems based on collaborative filtering are open by nature, which makes them vulnerable to profile injection attacks that insert biased evaluations into the system database in order to manipulate recommendations. In this paper we evaluate the stability and robustness of collaborative filtering algorithms applied to semantic web service recommendation when submitted to random and segment profile injection attacks. We evaluated four algorithms: (1) IMEAN, which makes predictions using the average of the evaluations received by the target item; (2) UMEAN, which makes predictions using the average of the evaluations made by the target user; (3) an algorithm based on the k-nearest neighbor (k-NN) method; and (4) an algorithm based on the k-means clustering method. The experiments showed that the UMEAN algorithm is not affected by the attacks and that IMEAN is the most vulnerable of all the algorithms tested. Nevertheless, both UMEAN and IMEAN have little practical application due to the low precision of their predictions. Among the algorithms with intermediate tolerance to attacks but good prediction performance, the algorithm based on k-NN proved to be more robust and stable than the algorithm based on k-means.
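The two baseline predictors follow directly from their definitions; a minimal sketch with a toy rating matrix (the data and function names are illustrative, not from the paper):

```python
import numpy as np

# Toy user-item rating matrix (0 marks a missing rating).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
])

def imean(R, item):
    """IMEAN: predict with the average rating received by the target item."""
    col = R[:, item]
    return col[col > 0].mean()

def umean(R, user):
    """UMEAN: predict with the average rating given by the target user."""
    row = R[user]
    return row[row > 0].mean()

print(imean(R, 0))  # mean of ratings 5, 4, 1
print(umean(R, 1))  # mean of ratings 4, 1
```

Because both predictors ignore the identity of the other party entirely, injected profiles shift IMEAN only through the item averages and leave UMEAN untouched, which is consistent with the reported attack results.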
Rotational Invariant Dimensionality Reduction Algorithms.
Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David
2017-11-01
A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the norm as the metric. In this paper, a series of methods based on the -norm are proposed for linear dimensionality reduction. Since the -norm based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper shows that the optimization problems have globally optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with previous norm-based subspace learning algorithms.
Artificial Flora (AF) Optimization Algorithm
Directory of Open Access Journals (Sweden)
Long Cheng
2018-02-01
Inspired by the migration and reproduction of flora, this paper proposes a novel artificial flora (AF) algorithm. The algorithm can be used to solve complex, non-linear, discrete optimization problems. Although a plant cannot move, it can spread seeds within a certain range to let its offspring find the most suitable environment. This stochastic process is easy to reproduce, and the spreading space is vast; therefore, it is well suited to intelligent optimization algorithms. First, the algorithm randomly generates the original plant, including its position and propagation distance. Then, the position and propagation distance of the original plant are passed as parameters to the propagation function to generate offspring plants. Finally, the optimal offspring is selected as the new original plant through the selection function, and the previous original plant becomes the former plant. The iteration continues until the optimal solution is found. In this paper, six classical evaluation functions are used as benchmark functions. The simulation results show that the proposed algorithm achieves higher accuracy and stability than the classical particle swarm optimization and artificial bee colony algorithms.
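The generate-propagate-select loop above can be sketched as follows. The objective function, the shrink factor, and all parameter values are illustrative assumptions; the actual AF algorithm maintains a population of plants with more elaborate propagation and survival rules.

```python
import random

def sphere(x):
    """Benchmark objective to minimize (one of the classical test functions)."""
    return sum(xi * xi for xi in x)

def artificial_flora_sketch(dim=2, n_offspring=20, d0=2.0, iters=50, seed=0):
    """Minimal AF-style loop: each generation, the original plant spreads
    offspring uniformly within its propagation distance; the best offspring
    becomes the new original plant, and the distance shrinks over time."""
    rng = random.Random(seed)
    plant = [rng.uniform(-5, 5) for _ in range(dim)]
    dist = d0
    for _ in range(iters):
        offspring = [[p + rng.uniform(-dist, dist) for p in plant]
                     for _ in range(n_offspring)]
        best = min(offspring, key=sphere)
        if sphere(best) < sphere(plant):   # selection: keep the fitter plant
            plant = best
        dist *= 0.9                        # gradually shrink the spreading range
    return plant

sol = artificial_flora_sketch()
print(sphere(sol))
```

Shrinking the propagation distance trades exploration for exploitation, which is the same intuition behind the distance-evolution rules of the full algorithm.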
Design of optically stable image reflector system.
Tsai, Chung-Yu
2013-08-01
The design of a partially optically stable (POS) reflector system, in which the exit ray direction and image pose are unchanged as the reflector system rotates about a specific directional vector, was presented in an earlier study by the current group [Appl. Phys. B 100, 883-890 (2010)]. The present study further proposes an optically stable image (OSI) reflector system, in which not only is the optical stability property of the POS system retained, but the image position and total ray path length are also fixed. An analytical method is proposed for the design of OSI reflector systems comprising multiple reflectors. The validity of the proposed approach is demonstrated by means of two illustrative examples.
Stable microfluidic flow focusing using hydrostatics.
Gnyawali, Vaskar; Saremi, Mohammadali; Kolios, Michael C; Tsai, Scott S H
2017-05-01
We present a simple technique to generate stable hydrodynamically focused flows by driving the flow with hydrostatic pressure from liquid columns connected to the inlets of a microfluidic device. Importantly, we compare the focused flows generated by hydrostatic pressure and classical syringe pump driven flows and find that the stability of the hydrostatic pressure driven technique is significantly better than the stability achieved via syringe pumps, providing fluctuation-free focused flows that are suitable for sensitive microfluidic flow cytometry applications. We show that the degree of flow focusing with the hydrostatic method can be accurately controlled by the simple tuning of the liquid column heights. We anticipate that this approach to stable flow focusing will find many applications in microfluidic cytometry technologies.
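The column-height control rests on the hydrostatic relation P = ρgh; a minimal sketch with illustrative water-column values (not taken from the paper):

```python
# Hydrostatic driving pressure from a liquid column: P = rho * g * h.
# Values are illustrative (water at room temperature); in the device,
# raising or lowering the inlet columns tunes the relative sheath/sample
# pressures and hence the width of the focused stream.
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def column_pressure(height_m):
    """Gauge pressure (Pa) at the base of a liquid column of the given height."""
    return RHO_WATER * G * height_m

print(column_pressure(0.10))  # a 10 cm water column supplies roughly 981 Pa
```

Because the columns drain slowly relative to the experiment time, the driving pressure is essentially constant, which is why this source is free of the pulsation a stepper-driven syringe pump introduces.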
Utilization of stable isotopes in medicine
International Nuclear Information System (INIS)
1980-11-01
The ten lectures given at this round table are presented together with a discussion. Five lectures, relating to studies in which deuterium oxide was employed as a tracer of body water, dealt with pulmonary water measurements in man and animals, the total water pool in adipose subjects, and liquid compartments in children undergoing hemodialysis. The heavy water is analysed by infrared spectrometry, and a new double spectrodoser is described. Two studies using 13C as a tracer described the diagnosis of liver disorders and diabetes, respectively. A general review of the prospects for applying stable isotopes in clinical medicine is followed by a comparison of the use of stable and radioactive isotopes in France.
Thermally Stable, Latent Olefin Metathesis Catalysts
Thomas, Renee M.; Fedorov, Alexey; Keitz, Benjamin K.; Grubbs, Robert H.
2011-01-01
Highly thermally stable N-aryl,N-alkyl N-heterocyclic carbene (NHC) ruthenium catalysts were designed and synthesized for latent olefin metathesis. These catalysts showed excellent latent behavior toward metathesis reactions, whereby the complexes were inactive at ambient temperature and initiated at elevated temperatures, a challenging property to achieve with second generation catalysts. A sterically hindered N-tert-butyl substituent on the NHC ligand of the ruthenium complex was found to i...
The nature of stable insomnia phenotypes.
Pillai, Vivek; Roth, Thomas; Drake, Christopher L
2015-01-01
We examined the 1-y stability of four insomnia symptom profiles: sleep onset insomnia; sleep maintenance insomnia; combined onset and maintenance insomnia; and neither criterion (i.e., insomnia cases that do not meet quantitative thresholds for onset or maintenance problems). Insomnia cases that exhibited the same symptom profile over a 1-y period were considered to be phenotypes, and were compared in terms of clinical and demographic characteristics. Design: Longitudinal. Setting: Urban, community-based. Participants: Nine hundred fifty-four adults with current insomnia based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (46.6 ± 12.6 y; 69.4% female). Interventions: None. At baseline, participants were divided into four symptom profile groups based on quantitative criteria. Follow-up assessment 1 y later revealed that approximately 60% of participants retained the same symptom profile, and were hence judged to be phenotypes. Stability varied significantly by phenotype, such that sleep onset insomnia (SOI) was the least stable (42%), whereas combined insomnia (CI) was the most stable (69%). Baseline symptom groups (cross-sectionally defined) differed significantly across various clinical indices, including daytime impairment, depression, and anxiety. Importantly, however, a comparison of stable phenotypes (longitudinally defined) did not reveal any differences in impairment or comorbid psychopathology. Another interesting finding was that whereas all other insomnia phenotypes showed evidence of an elevated wake drive both at night and during the day, the 'neither criterion' phenotype did not; this latter phenotype exhibited significantly higher daytime sleepiness despite subthreshold onset and maintenance difficulties. By adopting a stringent, stability-based definition, this study offers timely and important data on the longitudinal trajectory of specific insomnia phenotypes. With the exception of daytime sleepiness, few clinical differences are apparent across stable phenotypes.
A belief-based evolutionarily stable strategy
Deng, Xinyang; Wang, Zhen; Liu, Qi; Deng, Yong; Mahadevan, Sankaran
2014-01-01
As an equilibrium refinement of the Nash equilibrium, the evolutionarily stable strategy (ESS) is a key concept in evolutionary game theory and has attracted growing interest. An ESS can be either a pure strategy or a mixed strategy. Even though randomness is allowed in a mixed strategy, the selection probability of a pure strategy within a mixed strategy may fluctuate due to the impact of many factors. The fluctuation can lead to more uncertainty. In this paper, such uncertainty involved in mixed st...
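For a two-strategy game, the classical (non-belief-based) ESS conditions can be checked directly; the Hawk-Dove payoffs below are a standard textbook example, not data from this paper:

```python
# Standard ESS test for a two-strategy game (Maynard Smith conditions):
# strategy i is an ESS against j if E(i,i) > E(j,i), or
# E(i,i) == E(j,i) and E(i,j) > E(j,j).
def is_ess(payoff, i):
    j = 1 - i
    if payoff[i][i] > payoff[j][i]:
        return True
    return payoff[i][i] == payoff[j][i] and payoff[i][j] > payoff[j][j]

# Hawk-Dove game with resource value V = 2 and fight cost C = 4.
V, C = 2.0, 4.0
hawk_dove = [
    [(V - C) / 2, V],     # Hawk vs Hawk, Hawk vs Dove
    [0.0,         V / 2], # Dove vs Hawk, Dove vs Dove
]
# With C > V, neither pure strategy is an ESS; the ESS is the mixed
# strategy that plays Hawk with probability V/C.
print(is_ess(hawk_dove, 0), is_ess(hawk_dove, 1))
```

This mixed-ESS case is exactly where the pure-strategy selection probabilities matter, which is the setting whose fluctuations the paper sets out to model.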
Algebraic Algorithm Design and Local Search
National Research Council Canada - National Science Library
Graham, Robert
1996-01-01
.... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...
Stable iodine prophylaxis. Recommendations of the 2nd UK Working Group on Stable Iodine Prophylaxis
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-07-01
The Working Group reviewed the revised WHO guidance and the information published since 1991 on the risks of thyroid cancer in children from radioiodine and the risks of side effects from stable iodine. In particular, it reviewed data compiled on the incidence of thyroid cancers in children following the accident at the Chernobyl nuclear power plant in 1986. It considered whether the NRPB Emergency Reference Levels (ERLs) were still appropriate in the light of the new data. It also reviewed a range of other recommendations given by the 1st Working Group, concerning the chemical form of stable iodine tablets and practical issues concerning the implementation of stable iodine prophylaxis. Finally, it reviewed the Patient Information Leaflet that is required by law to be included in each box of tablets, and provided suggestions for information to be included in a separate leaflet to be handed out to the public when stable iodine tablets are distributed.