WorldWideScience

Sample records for velocity verlet algorithm

  1. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation

    Science.gov (United States)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-01

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
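
    The definition above hinges on where velocities live within a velocity Verlet step. As a rough illustration only (not the authors' implementation), the sketch below performs one velocity Verlet step and records the half-step velocity v(t + Δt/2); the kinetic-energy helper simply averages the kinetic energies evaluated at the two half-step velocities, which is one plausible reading of the definition, and the exact combination used by Jung et al. should be taken from the paper itself.

        import numpy as np

        def velocity_verlet_step(x, v, m, force, dt):
            """One velocity Verlet step; also returns the half-step velocity v(t + dt/2).

            `force(x)` is any callable returning forces; this is a generic sketch,
            not the implementation described in the abstract.
            """
            a = force(x) / m
            v_half = v + 0.5 * dt * a                        # v(t + dt/2)
            x_new = x + dt * v_half                          # x(t + dt)
            v_new = v_half + 0.5 * dt * force(x_new) / m     # v(t + dt)
            return x_new, v_new, v_half

        def half_step_kinetic_energy(m, v_half_prev, v_half_next):
            # Illustrative only: average of kinetic energies at t - dt/2 and t + dt/2.
            ke_prev = 0.5 * np.sum(m * v_half_prev**2)
            ke_next = 0.5 * np.sum(m * v_half_next**2)
            return 0.5 * (ke_prev + ke_next)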

  2. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation.

    Science.gov (United States)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-28

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.

  3. Modeling of diatomic molecule using the Morse potential and the Verlet algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fidiani, Elok [Department of Physics, Parahyangan Catholic University, Bandung-Jawa Barat (Indonesia)

    2016-03-11

    Molecular modeling is usually performed with specialized Molecular Dynamics (MD) software such as GROMACS, NAMD or JMOL. Molecular dynamics is a computational method to calculate the time-dependent behavior of a molecular system. In this work, MATLAB was used as the numerical tool for a simple modeling of some diatomic molecules: HCl, H₂ and O₂. MATLAB is matrix-based numerical software; in order to do the numerical analysis, all the functions and equations describing the properties of the atoms and molecules must be developed manually in MATLAB. In this work, a Morse potential was used to describe the bond interaction between the two atoms. In order to analyze the motion of the molecules, the Verlet algorithm derived from Newton's equations of motion (classical mechanics) was applied. Both the Morse potential and the Verlet algorithm were implemented in MATLAB to derive physical properties and the trajectories of the molecules. The data computed by MATLAB are always in the form of a matrix; to visualize them, Visual Molecular Dynamics (VMD) was used. This method is useful for developing and testing some types of interaction on a molecular scale. Besides, it can be very helpful for describing some basic principles of molecular interaction for educational purposes.
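
    As a companion to the MATLAB workflow described above, here is a minimal Python sketch of the same idea: a Morse bond force integrated with the basic Verlet scheme, reduced to the relative bond coordinate with a reduced mass. The Morse parameters and time step are placeholders chosen for illustration, not the values used by the author.

        import numpy as np

        # Placeholder Morse parameters: well depth De, width a, equilibrium length re,
        # and reduced mass mu (illustrative values and units only).
        De, a, re, mu = 4.6, 1.9, 1.27, 0.98

        def morse_force(r):
            """F = -dV/dr for the Morse potential V(r) = De * (1 - exp(-a*(r - re)))**2."""
            e = np.exp(-a * (r - re))
            return -2.0 * De * a * (1.0 - e) * e

        dt, nsteps = 1.0e-3, 5000
        r_prev = r = re + 0.1           # start slightly stretched, initially at rest
        bond_lengths = []
        for _ in range(nsteps):
            # Basic Verlet: r_{n+1} = 2 r_n - r_{n-1} + (F_n / mu) dt^2
            r_next = 2.0 * r - r_prev + (morse_force(r) / mu) * dt**2
            r_prev, r = r, r_next
            bond_lengths.append(r)      # trajectory of the bond length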

  4. Modeling of diatomic molecule using the Morse potential and the Verlet algorithm

    International Nuclear Information System (INIS)

    Fidiani, Elok

    2016-01-01

    Molecular modeling is usually performed with specialized Molecular Dynamics (MD) software such as GROMACS, NAMD or JMOL. Molecular dynamics is a computational method to calculate the time-dependent behavior of a molecular system. In this work, MATLAB was used as the numerical tool for a simple modeling of some diatomic molecules: HCl, H₂ and O₂. MATLAB is matrix-based numerical software; in order to do the numerical analysis, all the functions and equations describing the properties of the atoms and molecules must be developed manually in MATLAB. In this work, a Morse potential was used to describe the bond interaction between the two atoms. In order to analyze the motion of the molecules, the Verlet algorithm derived from Newton's equations of motion (classical mechanics) was applied. Both the Morse potential and the Verlet algorithm were implemented in MATLAB to derive physical properties and the trajectories of the molecules. The data computed by MATLAB are always in the form of a matrix; to visualize them, Visual Molecular Dynamics (VMD) was used. This method is useful for developing and testing some types of interaction on a molecular scale. Besides, it can be very helpful for describing some basic principles of molecular interaction for educational purposes.

  5. Time-Reversible Velocity Predictors for Verlet Integration with Velocity-Dependent Right-Hand Side

    Czech Academy of Sciences Publication Activity Database

    Kolafa, J.; Lísal, Martin

    2011-01-01

    Roč. 7, č. 11 (2011), s. 3596-3607 ISSN 1549-9618 R&D Projects: GA ČR GA104/08/0600 Grant - others:IGA J.E.PU(CZ) 53222 15 0006 01 Institutional research plan: CEZ:AV0Z40720504 Keywords : molecular dynamics * nose-hoover thermostat * verlet integrator Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.215, year: 2011

  6. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    Science.gov (United States)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

    The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first- and second-order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics for most of the cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
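
    For readers unfamiliar with the Green-Kubo step mentioned at the end of the abstract, the fragment below shows the generic post-processing: build a velocity autocorrelation function by averaging over time origins and particles, then integrate it to obtain the self-diffusion coefficient D = (1/3) ∫⟨v(0)·v(t)⟩dt. It assumes a velocity array of shape (n_frames, n_particles, 3) sampled every `dt`, and is not tied to any of the five integrators compared in the paper.

        import numpy as np

        def self_diffusion_coefficient(vel, dt, max_lag):
            """Green-Kubo estimate of D from stored particle velocities.

            vel: array of shape (n_frames, n_particles, 3); dt: sampling interval;
            max_lag: number of correlation lags to accumulate (max_lag < n_frames).
            """
            n_frames = vel.shape[0]
            vacf = np.zeros(max_lag)
            for lag in range(max_lag):
                # <v(t0) . v(t0 + lag)> averaged over time origins and particles
                dots = np.sum(vel[:n_frames - lag] * vel[lag:], axis=-1)
                vacf[lag] = dots.mean()
            # D = (1/3) * integral of the VACF (trapezoidal rule)
            integral = dt * (0.5 * vacf[0] + vacf[1:-1].sum() + 0.5 * vacf[-1])
            return integral / 3.0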

  7. Restauration de Sapho ou Le Chant de Raoul Verlet

    Directory of Open Access Journals (Sweden)

    Lucie Courtiade

    2012-06-01

    Full Text Available The study of Sapho ou Le Chant, a plaster foundry model created by Raoul Verlet and held by the Musée d'Angoulême since 1926, addressed a common problem: plaster casts fractured by the expansion of corroding internal steel armatures, itself caused by humid conservation conditions. The main intervention focused on reassembling the fragments, which required the design and fabrication of a stainless-steel reassembly structure. These structural interventions were completed by simple or reinforced adhesive bonding of the large fragments.

  8. A New Filtering Algorithm Utilizing Radial Velocity Measurement

    Institute of Scientific and Technical Information of China (English)

    LIU Yan-feng; DU Zi-cheng; PAN Quan

    2005-01-01

    Pulse Doppler radar measurements consist of range, azimuth, elevation and radial velocity. Most radar tracking algorithms in engineering utilize only the position measurements. The extended Kalman filter with radial velocity measurement is presented first; then a new filtering algorithm utilizing the radial velocity measurement is proposed to improve tracking results, and a theoretical analysis is also given. Simulation results of the new algorithm, the converted measurement Kalman filter and the extended Kalman filter are compared. The effectiveness of the new algorithm is verified by the simulation results.
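
    The essential ingredient is a measurement model that includes the radial velocity (range rate) alongside the position measurements. A minimal 2-D sketch of such a model and its Jacobian, for a constant-velocity state [x, y, vx, vy], is given below; it only illustrates the extended-Kalman-filter linearization and is not the specific filter proposed in the paper.

        import numpy as np

        def radar_measurement(state):
            """h(x) = [range, azimuth, radial velocity] for state [x, y, vx, vy]."""
            x, y, vx, vy = state
            r = np.hypot(x, y)
            return np.array([r, np.arctan2(y, x), (x * vx + y * vy) / r])

        def radar_jacobian(state):
            """Jacobian H = dh/dx used in the EKF update."""
            x, y, vx, vy = state
            r2 = x**2 + y**2
            r = np.sqrt(r2)
            rdot = (x * vx + y * vy) / r
            return np.array([
                [x / r,                   y / r,                   0.0,   0.0  ],
                [-y / r2,                 x / r2,                  0.0,   0.0  ],
                [vx / r - x * rdot / r2,  vy / r - y * rdot / r2,  x / r, y / r],
            ])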

  9. A fast algorithm for 3D azimuthally anisotropic velocity scan

    KAUST Repository

    Hu, Jingwei; Fomel, Sergey; Ying, Lexing

    2014-01-01

    © 2014 European Association of Geoscientists & Engineers. The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm is roughly O(N³ log N), as opposed to O(N⁵) for the straightforward velocity scan, with N being representative of the number of points in a particular dimension of either data space or parameter space. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.

  10. A fast algorithm for 3D azimuthally anisotropic velocity scan

    KAUST Repository

    Hu, Jingwei

    2014-11-11

    © 2014 European Association of Geoscientists & Engineers. The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm is roughly O(N³ log N), as opposed to O(N⁵) for the straightforward velocity scan, with N being representative of the number of points in a particular dimension of either data space or parameter space. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.

  11. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the numerator of the NURBS parametric curve representation. Then, the velocity planning is combined with the NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems. The interpolation of the NURBS curve can then be completed.

  12. Velocity of climate change algorithms for guiding conservation and management.

    Science.gov (United States)

    Hamann, Andreas; Roberts, David R; Barber, Quinn E; Carroll, Carlos; Nielsen, Scott E

    2015-02-01

    The velocity of climate change is an elegant analytical concept that can be used to evaluate the exposure of organisms to climate change. In essence, one divides the rate of climate change by the rate of spatial climate variability to obtain a speed at which species must migrate over the surface of the earth to maintain constant climate conditions. However, to apply the algorithm for conservation and management purposes, additional information is needed to improve realism at local scales. For example, destination information is needed to ensure that vectors describing speed and direction of required migration do not point toward a climatic cul-de-sac by pointing beyond mountain tops. Here, we present an analytical approach that conforms to standard velocity algorithms if climate equivalents are nearby. Otherwise, the algorithm extends the search for climate refugia, which can be expanded to search for multivariate climate matches. With source and destination information available, forward and backward velocities can be calculated allowing useful inferences about conservation of species (present-to-future velocities) and management of species populations (future-to-present velocities). © 2014 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
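
    The core of the concept, dividing the temporal rate of climate change by the local spatial climate gradient, can be written in a few lines. The sketch below computes a standard forward climate velocity from two gridded temperature rasters; it does not include the destination-aware refinements described above, and the grid spacing and time interval are placeholder assumptions.

        import numpy as np

        def climate_velocity(temp_t1, temp_t2, years, cell_km):
            """Standard climate velocity: temporal trend divided by spatial gradient.

            temp_t1, temp_t2: 2-D temperature grids `years` apart; cell_km: grid
            spacing in km. Returns a grid of speeds in km per year.
            """
            temporal_rate = (temp_t2 - temp_t1) / years            # deg C per year
            gy, gx = np.gradient(temp_t1, cell_km)                 # deg C per km
            spatial_gradient = np.hypot(gx, gy)
            spatial_gradient = np.maximum(spatial_gradient, 1e-6)  # avoid division by zero
            return temporal_rate / spatial_gradient                # km per year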

  13. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
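
    The second experiment rests on the classical Dix conversion from RMS to interval velocities, which is exactly the ill-conditioned step that the TV regularization stabilizes. For reference, a direct, unregularized Dix implementation looks like the following sketch (generic, not the authors' code).

        import numpy as np

        def dix_interval_velocities(t, v_rms):
            """Dix formula: v_int^2 = (t_n v_n^2 - t_{n-1} v_{n-1}^2) / (t_n - t_{n-1}).

            t: zero-offset two-way times (increasing); v_rms: RMS velocities at those
            times. Returns the interval velocity of each layer between samples.
            """
            t = np.asarray(t, dtype=float)
            v_rms = np.asarray(v_rms, dtype=float)
            num = t[1:] * v_rms[1:]**2 - t[:-1] * v_rms[:-1]**2
            den = t[1:] - t[:-1]
            return np.sqrt(num / den)   # small errors in v_rms easily make this unstable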

  14. Thermal Fluctuations in Smooth Dissipative Particle Dynamics simulation of mesoscopic thermal systems

    Science.gov (United States)

    Gatsonis, Nikolaos; Yang, Jun

    2013-11-01

    The SDPD-DV is implemented in our work for arbitrary 3D wall-bounded geometries. The particle position and momentum equations are integrated with a velocity-Verlet algorithm and the entropy equation is integrated with a Runge-Kutta algorithm. Simulations of nitrogen gas are performed to evaluate the effects of timestep and particle scale on temperature, self-diffusion coefficient and shear viscosity. The hydrodynamic fluctuations in temperature, density, pressure and velocity from the SDPD-DV simulations are evaluated and compared with theoretical predictions. Steady planar thermal Couette flows are simulated and compared with analytical solutions. Simulations cover the hydrodynamic and mesoscopic regimes and show thermal fluctuations and their dependence on particle size.

  15. Energy Demodulation Algorithm for Flow Velocity Measurement of Oil-Gas-Water Three-Phase Flow

    Directory of Open Access Journals (Sweden)

    Yingwei Li

    2014-01-01

    Full Text Available Flow velocity measurement is an important part of oil-gas-water three-phase flow parameter measurement. In order to satisfy the increasing demands on flow detection technology, the paper presents a gas-liquid phase flow velocity measurement method based on an energy demodulation algorithm combined with time-delay estimation technology. First, a gas-liquid phase separation method for oil-gas-water three-phase flow based on the energy demodulation algorithm and blind signal separation technology is proposed. The separation of the oil-gas-water three-phase signals sampled by the conductance sensor performed well, so the gas-phase signal and the liquid-phase signal were obtained. Second, time-delay estimation technology was used to obtain the delay times of the gas-phase and liquid-phase signals, respectively, from which the gas-phase velocity and the liquid-phase velocity were derived. Finally, experiments were performed on an oil-gas-water three-phase flow loop, and the results indicated that the measurement errors met the requirements of velocity measurement. The method therefore provides a feasible approach to gas-liquid phase velocity measurement in oil-gas-water three-phase flow.
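
    The velocity step described above ultimately reduces to estimating the delay between two sensor signals and dividing the sensor spacing by that delay. A generic cross-correlation delay estimator is sketched below as an illustration; it is not the energy-demodulation front end of the paper, and the names `spacing` and `fs` are assumptions for the example.

        import numpy as np

        def velocity_from_delay(upstream, downstream, fs, spacing):
            """Estimate flow velocity from the delay between two sensor signals.

            upstream, downstream: equal-length signals from sensors `spacing` apart,
            sampled at `fs` Hz; velocity = spacing / delay.
            """
            n = len(upstream)
            xcorr = np.correlate(downstream - downstream.mean(),
                                 upstream - upstream.mean(), mode="full")
            lag = np.argmax(xcorr) - (n - 1)     # samples by which downstream lags
            delay = lag / fs
            return spacing / delay if delay > 0 else float("nan")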

  16. Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV).

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kolda, Tamara G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlberg, Kevin Thomas [Wake Forest Univ., Winston-Salem, NC (United States); Ballard, Grey [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mahoney, Michael [Univ. of California, Berkeley, CA (United States)

    2018-01-01

    Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is to develop unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.

  17. Experimental investigation of the velocity field in buoyant diffusion flames using PIV and TPIV algorithm

    Science.gov (United States)

    L. Sun; X. Zhou; S.M. Mahalingam; D.R. Weise

    2005-01-01

    We investigated a simultaneously temporally and spatially resolved 2-D velocity field above a burning circular pan of alcohol using particle image velocimetry (PIV). The results obtained from PIV were used to assess a thermal particle image velocimetry (TPIV) algorithm previously developed to approximate the velocity field using the temperature field, simultaneously...

  18. The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation

    Science.gov (United States)

    Chen, Jundong

    2018-03-01

    Molecular dynamics is an integrated technique that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation method and a powerful tool for studying condensed matter systems. The technique not only yields the trajectories of the atoms, but also reveals the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure and the motion of the particles and their relation to the macroscopic behavior of the material, and we can study more conveniently the relationship between the interactions and the macroscopic properties. Monte Carlo simulation, similar to molecular dynamics, is a tool for studying the nature of systems at the molecular and particle level. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific numerical integration methods are summarized, including the Verlet, leap-frog and velocity Verlet methods. At the same time, the method and principle of Monte Carlo simulation are introduced. Finally, the similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.
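
    To make the three integrators named above concrete, their one-step update rules are written out below in generic form (not tied to any particular simulation package): basic Verlet advances positions from the two previous positions, leap-frog staggers velocities by half a step, and velocity Verlet keeps positions and velocities synchronized at the same time points.

        def verlet_step(x, x_prev, a, dt):
            """Basic Verlet: x_{n+1} = 2 x_n - x_{n-1} + a_n dt^2 (no explicit velocity)."""
            return 2.0 * x - x_prev + a * dt**2

        def leapfrog_step(x, v_half, a, dt):
            """Leap-frog: velocities live at half steps, positions at full steps."""
            v_half_new = v_half + a * dt          # v_{n+1/2} = v_{n-1/2} + a_n dt
            x_new = x + v_half_new * dt           # x_{n+1}   = x_n + v_{n+1/2} dt
            return x_new, v_half_new

        def velocity_verlet_step(x, v, a, accel, dt):
            """Velocity Verlet: positions and velocities defined at the same times."""
            x_new = x + v * dt + 0.5 * a * dt**2
            a_new = accel(x_new)                  # acceleration at the new positions
            v_new = v + 0.5 * (a + a_new) * dt
            return x_new, v_new, a_new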

  19. Improvements in seismic event locations in a deep western U.S. coal mine using tomographic velocity models and an evolutionary search algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Adam Lurka; Peter Swanson [Central Mining Institute, Katowice (Poland)

    2009-09-15

    Methods of improving seismic event locations were investigated as part of a research study aimed at reducing ground control safety hazards. Seismic event waveforms collected with a 23-station three-dimensional sensor array during longwall coal mining provide the data set used in the analyses. A spatially variable seismic velocity model is constructed using seismic event sources in a passive tomographic method. The resulting three-dimensional velocity model is used to relocate seismic event positions. An evolutionary optimization algorithm is implemented and used in both the velocity model development and in seeking improved event location solutions. Results obtained using the different velocity models are compared. The combination of the tomographic velocity model development and evolutionary search algorithm provides improvement to the event locations. 13 refs., 5 figs., 4 tabs.
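
    As a toy version of the evolutionary location search described above, the sketch below finds a hypocentre and origin time by minimizing squared arrival-time residuals with SciPy's differential-evolution optimizer. A uniform velocity is assumed for brevity, and the station coordinates, picks and velocity value are made-up illustrations; the study itself used a tomographic (spatially variable) velocity model and its own evolutionary algorithm.

        import numpy as np
        from scipy.optimize import differential_evolution

        stations = np.array([[0.0, 0.0, 0.0], [800.0, 0.0, 10.0],
                             [0.0, 900.0, 20.0], [700.0, 850.0, 5.0]])  # metres (made up)
        t_obs = np.array([0.31, 0.26, 0.29, 0.22])   # observed P arrivals, s (made up)
        v_p = 3500.0                                  # assumed uniform P velocity, m/s

        def misfit(params):
            """Sum of squared travel-time residuals for hypocentre (x, y, z) and origin time t0."""
            x, y, z, t0 = params
            dist = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
            return np.sum((t_obs - (t0 + dist / v_p))**2)

        bounds = [(-200.0, 1200.0), (-200.0, 1200.0), (0.0, 1500.0), (-1.0, 1.0)]
        result = differential_evolution(misfit, bounds, seed=0)
        x_best, y_best, z_best, t0_best = result.x   # best-fitting location and origin time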

  20. An ML-Based Radial Velocity Estimation Algorithm for Moving Targets in Spaceborne High-Resolution and Wide-Swath SAR Systems

    Directory of Open Access Journals (Sweden)

    Tingting Jin

    2017-04-01

    Full Text Available Multichannel synthetic aperture radar (SAR) is a significant breakthrough with respect to the inherent trade-off between high resolution and wide swath (HRWS) in conventional SAR. Moving target indication (MTI) is an important application of spaceborne HRWS SAR systems. In contrast to previous studies of SAR MTI, HRWS SAR mainly faces the problem of under-sampled data in each channel, which makes single-channel imaging and processing infeasible. In this study, the estimation of velocity is equivalent to the estimation of the cone angle according to their relationship. A maximum likelihood (ML) based algorithm is proposed to estimate the radial velocity in the presence of Doppler ambiguities. After that, the signal reconstruction and the compensation for the phase offset caused by the radial velocity are processed for a moving target. Finally, a traditional imaging algorithm is applied to obtain a focused image of the moving target. Experiments are conducted to evaluate the accuracy and effectiveness of the estimator under different signal-to-noise ratios (SNR). Furthermore, the performance is analyzed with respect to a moving ship that experiences interference due to different distributions of sea clutter. The results verify that the proposed algorithm is accurate and efficient with low computational complexity. This paper aims at providing a solution to the velocity estimation problem in future HRWS SAR systems with multiple receive channels.

  1. Algorithms for estimating blood velocities using ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2000-01-01

    Ultrasound has been used intensively for the last 15 years for studying the hemodynamics of the human body. Systems for determining both the velocity distribution at one point of interest (spectral systems) and for displaying a map of velocity in real time have been constructed. A number of schemes have been developed for performing the estimation, and the various approaches are described. The current systems only display the velocity along the ultrasound beam direction and a velocity transverse to the beam is not detected. This is a major problem in these systems, since most blood vessels are parallel to the skin surface. Angling the transducer will often disturb the flow, and new techniques for finding transverse velocities are needed. The various approaches for determining transverse velocities will be explained. This includes techniques using two-dimensional correlation (speckle tracking...
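
    One of the classic estimation schemes referred to above is the lag-one autocorrelation (Kasai) estimator used in colour-flow systems, which converts the mean phase shift between successive echoes into an axial velocity. A hedged sketch follows; `iq` is assumed to be a vector of complex baseband samples from one range gate across the pulse ensemble, and the sign convention is arbitrary here.

        import numpy as np

        def kasai_velocity(iq, prf, f0, c=1540.0):
            """Axial velocity from the lag-1 autocorrelation of an I/Q ensemble.

            iq: complex samples from one depth across the pulse ensemble;
            prf: pulse repetition frequency (Hz); f0: transmit centre frequency (Hz);
            c: assumed speed of sound in tissue (m/s).
            """
            r1 = np.sum(iq[1:] * np.conj(iq[:-1]))   # lag-one autocorrelation
            mean_phase = np.angle(r1)                # mean Doppler phase shift per pulse
            return c * prf * mean_phase / (4.0 * np.pi * f0)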

  2. Parallelising a molecular dynamics algorithm on a multi-processor workstation

    Science.gov (United States)

    Müller-Plathe, Florian

    1990-12-01

    The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transferred in parts or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
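
    For orientation, the data structure being parallelised is sketched below in serial form: a Verlet neighbour list stores all pairs within the cutoff plus a skin distance and is reused until some particle has moved more than half the skin. This is a generic brute-force construction in Python, not the Fortran 77 shared-memory implementation described in the record.

        import numpy as np

        def build_neighbour_list(pos, r_cut, skin):
            """All pairs closer than r_cut + skin (O(N^2) brute force, for illustration)."""
            r_list = r_cut + skin
            pairs = []
            n = len(pos)
            for i in range(n - 1):
                d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
                for j in np.nonzero(d < r_list)[0]:
                    pairs.append((i, i + 1 + j))
            return pairs, pos.copy()               # remember positions at build time

        def needs_rebuild(pos, pos_at_build, skin):
            """Rebuild once any particle has moved more than half the skin distance."""
            disp = np.linalg.norm(pos - pos_at_build, axis=1)
            return disp.max() > 0.5 * skin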

  3. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

    Science.gov (United States)

    Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

    2018-05-01

    A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensors and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from the INS and GPS with different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced to reduce the number of filter states and simplify the propagation processes. Furthermore, assuming a small-angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.

  4. Spacecraft angular velocity estimation algorithm for star tracker based on optical flow techniques

    Science.gov (United States)

    Tang, Yujie; Li, Jian; Wang, Gangyi

    2018-02-01

    An integrated navigation system often uses a traditional gyro together with a star tracker for high-precision navigation, at the cost of large volume, heavy weight and high price. With the development of autonomous navigation for deep space and small spacecraft, star trackers have gradually been used for attitude calculation and direct angular velocity measurement. At the same time, given the dynamic imaging requirements of remote sensing satellites and other imaging satellites, how to measure the angular velocity under dynamic conditions to improve the accuracy of the star tracker is a focus of future research. We propose an approach to measure the angular rate without a gyro and improve the dynamic performance of the star tracker. First, a star extraction algorithm based on morphology is used to extract the star regions, and the stars in the two images are matched according to an angular-distance voting method. The displacement of the star image is measured by an improved optical flow method. Finally, the triaxial angular velocity of the star tracker is calculated from the star vectors using the least squares method. The method has the advantages of fast matching speed, strong noise immunity, and good dynamic performance. The triaxial angular velocity of the star tracker can be obtained accurately with these methods, so the star tracker can achieve better tracking performance and dynamic attitude positioning accuracy, laying a good foundation for the wide application of various satellites and complex space missions.

  5. Feasibility of waveform inversion of Rayleigh waves for shallow shear-wave velocity using a genetic algorithm

    Science.gov (United States)

    Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.

    2011-01-01

    Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to only laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and hence does not have such a limitation. Waveforms of Rayleigh waves are highly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully with errors no more than 10% for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of the initial models. For earth models containing lateral heterogeneity, where conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.

  6. Diffraction imaging and velocity analysis using oriented velocity continuation

    KAUST Repository

    Decker, Luke

    2014-08-05

    We perform seismic diffraction imaging and velocity analysis by separating diffractions from specular reflections and decomposing them into slope components. We image slope components using extrapolation in migration velocity in time-space-slope coordinates. The extrapolation is described by a convection-type partial differential equation and implemented efficiently in the Fourier domain. Synthetic and field data experiments show that the proposed algorithm is able to detect accurate time-migration velocities by automatically measuring the flatness of events in dip-angle gathers.

  7. Crustal velocity structure of central Gansu Province from regional seismic waveform inversion using firework algorithm

    Science.gov (United States)

    Chen, Yanyang; Wang, Yanbin; Zhang, Yuansheng

    2017-04-01

    The firework algorithm (FWA) is a novel swarm intelligence-based method recently proposed for the optimization of multi-parameter, nonlinear functions. Numerical waveform inversion experiments using a synthetic model show that the FWA performs well in both solution quality and efficiency. We apply the FWA in this study to crustal velocity structure inversion using regional seismic waveform data of central Gansu on the northeastern margin of the Qinghai-Tibet plateau. Seismograms recorded from the moment magnitude (MW) 5.4 Minxian earthquake enable obtaining an average crustal velocity model for this region. We initially carried out a series of FWA robustness tests in regional waveform inversion at the same earthquake and station positions across the study region, inverting two velocity structure models, with and without a low-velocity crustal layer; the accuracy of our average inversion results and their standard deviations reveal the advantages of the FWA for the inversion of regional seismic waveforms. We applied the FWA across our study area using three component waveform data recorded by nine broadband permanent seismic stations with epicentral distances ranging between 146 and 437 km. These inversion results show that the average thickness of the crust in this region is 46.75 km, while thicknesses of the sedimentary layer, and the upper, middle, and lower crust are 3.15, 15.69, 13.08, and 14.83 km, respectively. Results also show that the P-wave velocities of these layers and the upper mantle are 4.47, 6.07, 6.12, 6.87, and 8.18 km/s, respectively.

  8. Thermal particle image velocity estimation of fire plume flow

    Science.gov (United States)

    Xiangyang Zhou; Lulu Sun; Shankar Mahalingam; David R. Weise

    2003-01-01

    For the purpose of studying wildfire spread in living vegetation such as chaparral in California, a thermal particle image velocity (TPIV) algorithm for nonintrusively measuring flame gas velocities through thermal infrared (IR) imagery was developed. By tracing thermal particles in successive digital IR images, the TPIV algorithm can estimate the velocity field in a...

  9. Surface wave velocity tracking by bisection method

    International Nuclear Information System (INIS)

    Maeda, T.

    2005-01-01

    Calculation of surface wave velocity is a classic problem dating back to the well-known Haskell transfer matrix method, which contributes to solutions of elastic wave propagation, global subsurface structure evaluation by simulating observed earthquake group velocities, and on-site evaluation of subsurface structure by simulating phase velocity dispersion curves and/or H/V spectra obtained by micro-tremor observation. Recently, inversion analysis of micro-tremor observations has required an efficient method of generating many model candidates and also stable, accurate, and fast computation of dispersion curves and the Rayleigh wave trajectory. The original Haskell transfer matrix method has been improved with respect to its divergence tendency, mainly by the generalized transmission and reflection matrix method with a formulation available for surface wave velocity; however, the root-finding algorithm has not been fully discussed, except for the approach of setting a threshold on the absolute value of the complex characteristic function. Since the surface wavenumber (the frequency divided by the surface wave velocity) is a root of a complex-valued characteristic function, it is intractable to use a general root-finding algorithm. We examine the characteristic function in the phase plane to construct a two-dimensional bisection algorithm, with consideration of the layer to be evaluated and an algorithm for tracking roots down along the frequency axis. (author)
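
    The bisection idea can be reduced to a small root-tracking loop: bracket a sign change of the characteristic function along the velocity axis at each frequency, refine the bracket by bisection, and reuse the previous root as the starting point for the next frequency. The sketch below shows this generic pattern for a user-supplied real-valued `char_fn(f, c)`; it does not reproduce the paper's two-dimensional phase-plane construction, and the scan step and tolerance are arbitrary choices.

        import numpy as np

        def track_dispersion(char_fn, freqs, c_min, c_max, dc=5.0, tol=1e-3):
            """Follow a surface-wave dispersion curve by bracketing plus bisection.

            char_fn(f, c): characteristic function whose root in c is the phase velocity.
            Returns one phase velocity per frequency (NaN where no bracket is found).
            """
            curve = []
            c_start = c_min
            for f in freqs:
                c_lo, root = c_start, float("nan")
                while c_lo < c_max:                       # scan for a sign change
                    c_hi = c_lo + dc
                    if char_fn(f, c_lo) * char_fn(f, c_hi) < 0.0:
                        while c_hi - c_lo > tol:          # bisection refinement
                            c_mid = 0.5 * (c_lo + c_hi)
                            if char_fn(f, c_lo) * char_fn(f, c_mid) < 0.0:
                                c_hi = c_mid
                            else:
                                c_lo = c_mid
                        root = 0.5 * (c_lo + c_hi)
                        break
                    c_lo = c_hi
                curve.append(root)
                if np.isfinite(root):                     # start the next frequency near this root
                    c_start = max(c_min, root - 10.0 * dc)
            return np.array(curve)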

  10. Velocity Estimate Following Air Data System Failure

    National Research Council Canada - National Science Library

    McLaren, Scott A

    2008-01-01

    .... A velocity estimator (VEST) algorithm was developed to combine the inertial and wind velocities to provide an estimate of the aircraft's current true velocity to be used for command path gain scheduling and for display in the cockpit...

  11. Molecular Dynamics Simulations of Collisional Cooling and Ordering of Multiply Charged Ions in a Penning Trap

    International Nuclear Information System (INIS)

    Holder, J.P.; Church, D.A.; Gruber, L.; DeWitt, H.E.; Beck, B.R.; Schneider, D.

    2000-01-01

    Molecular dynamics simulations are used to help design new experiments by modeling the cooling of small numbers of trapped multiply charged ions by Coulomb interactions with laser-cooled Be⁺ ions. A Verlet algorithm is used to integrate the equations of motion of two species of point ions interacting in an ideal Penning trap. We use a time step short enough to follow the cyclotron motion of the ions. Axial and radial temperatures for each species are saved periodically. Direct heating and cooling of each species in the simulation can be performed by periodically rescaling velocities. Of interest are Fe¹¹⁺, due to an EUV-optical double resonance for imaging and manipulating the ions, and Ca¹⁴⁺, since a ground-state fine structure transition has a convenient wavelength in the tunable laser range
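
    The periodic velocity rescaling mentioned above is the simplest form of temperature control: compare the instantaneous kinetic temperature of a species with the target and scale all of its velocities by the square root of the ratio. A generic sketch (not the authors' code) is given below.

        import numpy as np

        K_B = 1.380649e-23   # Boltzmann constant, J/K

        def rescale_velocities(v, m, t_target):
            """Scale velocities of one species so its kinetic temperature equals t_target.

            v: (n, 3) velocities; m: particle mass; uses T = sum(m v^2) / (3 N k_B).
            """
            n = len(v)
            t_now = np.sum(m * v**2) / (3.0 * n * K_B)
            if t_now <= 0.0:
                return v
            return v * np.sqrt(t_target / t_now)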

  12. Waveform inversion of lateral velocity variation from wavefield source location perturbation

    KAUST Repository

    Choi, Yun Seok

    2013-09-22

    It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, that is, the derivative of velocity with respect to horizontal distance, combined with well-log data can be used to update the deep part of the velocity model more precisely. We develop a waveform inversion algorithm to obtain the lateral velocity variation by inverting the wavefield variation associated with a lateral perturbation of the shot location. The gradient of the new waveform inversion algorithm is obtained by the adjoint-state method. Our inversion algorithm focuses on resolving the lateral changes of the velocity model with respect to a fixed reference vertical velocity profile given by a well log. We apply the method to a simple dome model to highlight its potential.

  13. Hybrid ANFIS with ant colony optimization algorithm for prediction of shear wave velocity from a carbonate reservoir in Iran

    Directory of Open Access Journals (Sweden)

    Hadi Fattahi

    2016-12-01

    Full Text Available Shear wave velocity (Vs) data are key information for petrophysical, geophysical and geomechanical studies. Although compressional wave velocity (Vp) measurements exist in almost all wells, shear wave velocity was not recorded for most older wells due to the lack of technological tools. Furthermore, measurement of shear wave velocity is somewhat costly. This study proposes a novel methodology to overcome these problems by use of a hybrid adaptive neuro-fuzzy inference system (ANFIS) with an ant colony optimization algorithm (ACO) based on fuzzy c-means clustering (FCM) and subtractive clustering (SCM). The ACO is combined with two ANFIS models for determining the optimal values of their user-defined parameters. The optimization by the ACO significantly improves the generalization ability of the ANFIS models. These models are used in this study to convert conventional well log data into Vs in a quick, cheap, and accurate manner. A total of 3030 data points was used for model construction and 833 data points were employed for assessment of the ANFIS models. Finally, a comparison among the ANFIS models and six well-known empirical correlations demonstrated that the ANFIS models outperformed the other methods. This strategy was successfully applied in the Marun reservoir, Iran.

  14. Application of forking genetic algorithm to the estimation of an S-wave-velocity structure from Rayleigh-wave dispersion data. With special reference to an exploration method using microtremors; Rayleigh ha no bunsan data kara S ha sokudo kozo wo suiteisuru inversion mondai eno kotaigun tansaku bunkigata identeki algorithm no tekiyo. Bido tansaho ni kanrenshite

    Energy Technology Data Exchange (ETDEWEB)

    Cho, I; Nakanishi, I [Kyoto University, Kyoto (Japan); Ling, S [Nihon Nessui Corp., Tokyo (Japan); Okada, H [Hokkaido University, Sapporo (Japan)

    1997-10-22

    Discussions were given on a genetic algorithm as a means to solve simultaneously the problems of solution stability and initial-model dependence in estimating subsurface structures using the microtremor exploration method. In the study, a population-searching forking genetic algorithm (fGA) was applied to optimizing simulations for a velocity structure model to discuss whether the algorithm can be used practically. Simulation No. 1 was performed with four layers in both the given velocity structure and the optimized model. Simulation No. 2, on the other hand, used more layers in the given velocity structure than in the optimized model. As a result, it was verified that wide-range exploration of the velocity structure model is possible, and that a large number of candidate velocity structure models can be proposed. In either case, the exploration capability of the fGA exceeded that of the standard simple genetic algorithm. 8 refs., 4 figs., 2 tabs.

  15. Structure, single-particle and many-particle coefficients of Lennard ...

    Indian Academy of Sciences (India)

    Molecular dynamics calculations; viscosity of liquids; structure of liquids; simple liquids and ... (UNAABMD) that uses the Verlet algorithm to perform the integration of equa- ... The input parameters for the Lennard–Jones model are σ = 2.62 Å and .... viscosity. This has been calculated using also the Green–Kubo relation and.

  16. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    Science.gov (United States)

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
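
    For orientation, one widely used member of the family of splittings discussed above is the BAOAB scheme of Leimkuhler and Matthews, whose deterministic B and A sub-steps coincide with the velocity Verlet half-kicks and half-drifts and whose O sub-step is an exact Ornstein-Uhlenbeck update. The sketch below shows that generic scheme as context; it is not necessarily the particular splitting or the time-step rescaling recommended by Sivak, Chodera and Crooks.

        import numpy as np

        def baoab_step(x, v, m, force, dt, gamma, kT, rng):
            """One BAOAB Langevin step (B: half kick, A: half drift, O: OU update)."""
            v = v + 0.5 * dt * force(x) / m                  # B
            x = x + 0.5 * dt * v                             # A
            c1 = np.exp(-gamma * dt)                         # O: exact Ornstein-Uhlenbeck
            c2 = np.sqrt((1.0 - c1**2) * kT / m)
            v = c1 * v + c2 * rng.standard_normal(np.shape(v))
            x = x + 0.5 * dt * v                             # A
            v = v + 0.5 * dt * force(x) / m                  # B (force at updated positions)
            return x, v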

  17. The velocity of a radioactive bolus in the oesophagus evaluated by means of an image segmentation algorithm

    International Nuclear Information System (INIS)

    Miquelin, Charlie A; Dantas, Roberto O; Oliveira, Ricardo B; Braga, Francisco José H. N

    2002-01-01

    Classical scintigraphic evaluation of a radioactive bolus passing through the oesophagus is based on regions of interest and time/activity curves, which only give information about the total time required for the bolus to cross the organ. Instantaneous parameters can be obtained if the exact position (centroid) of the bolus is known. For that, one needs to know the co-ordinates of the centre of mass of the bolus radioactivity distribution, from which the velocity at each instant can be obtained. Obtaining such a new parameter would be important to determine whether the anatomical differences among the three thirds of the oesophagus have a functional correspondence or not. We studied 5 normal volunteers (4 males, 1 female, 33-68 years old). Each volunteer swallowed (single swallow) 40 MBq of 99mTc-phytate in 10 ml of water. Eighty frames (0.3 s) were acquired on a scintillation camera. External marks were used to separate the pharynx from the oesophagus. Images were converted to bitmap by means of a Sophy Medical processing module and analysed by means of the algorithm, which determines the co-ordinates of the centroid (horizontal and vertical) for each frame and the instantaneous velocities through the organ. Different velocities were found in typical evaluations. Curves representing the successive positions of the bolus centroid and the corresponding velocities were obtained. Different velocities of the bolus were detected during the pharyngeal phase and in the proximal, mid and distal parts of the oesophagus. Larger studies are necessary, but it seems that the velocity of a radioactive bolus changes in the different parts of the oesophagus. It is reasonable to say that there is a functional correspondence to the anatomical differences in the organ (Au)

  18. Velocity control of servo systems using an integral retarded algorithm.

    Science.gov (United States)

    Ramírez, Adrián; Garrido, Rubén; Mondié, Sabine

    2015-09-01

    This paper presents a design technique for the delay-based controller called Integral Retarded (IR) and its application to velocity control of servo systems. Using spectral analysis, the technique yields a tuning strategy for the IR by assigning a triple real dominant root for the closed-loop system. This result ultimately guarantees a desired exponential decay rate σd while expressing the IR tuning as an explicit function of σd and the system parameters. The intentional introduction of delay allows using noisy velocity measurements without additional filtering. The structure of the controller is also able to avoid velocity measurements by using position information instead. The IR is compared to a classical PI, both tested on a laboratory prototype. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Ultrasonic particle image velocimetry for improved flow gradient imaging: algorithms, methodology and validation

    International Nuclear Information System (INIS)

    Niu Lili; Qian Ming; Yu Wentao; Jin Qiaofeng; Ling Tao; Zheng Hairong; Wan Kun; Gao Shen

    2010-01-01

    This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) that improves the accuracy and efficiency of flow velocity measurement in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, a sub-pixel method, filtering and interpolation, and a spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow and in vivo rat carotid arterial flow. Results for the simulated images show that the new algorithm produces a much smaller bias from the known displacements. For laminar flow, the new algorithm results in a 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average by 6.6% from the Doppler-measured peak velocities, compared to 15% for the conventional algorithm. The new Echo PIV algorithm is able to effectively improve the measurement accuracy when imaging flow fields with high velocity gradients.

  20. Tracking Lagrangian trajectories in position–velocity space

    International Nuclear Information System (INIS)

    Xu, Haitao

    2008-01-01

    Lagrangian particle-tracking algorithms are susceptible to intermittent loss of particle images on the sensors. The measured trajectories are often interrupted into short segments and the long-time Lagrangian statistics are difficult to obtain. We present an algorithm to connect the segments of Lagrangian trajectories from common particle-tracking algorithms. Our algorithm tracks trajectory segments in the six-dimensional position and velocity space. We describe the approach to determine parameters in the algorithm and demonstrate the validity of the algorithm with data from numerical simulations and the improvement of long-time Lagrangian statistics on experimental data. The algorithm has important applications in measurements with high particle seeding density and in obtaining multi-particle Lagrangian statistics

  1. Iterative reflectivity-constrained velocity estimation for seismic imaging

    Science.gov (United States)

    Masaya, Shogo; Verschuur, D. J. Eric

    2018-03-01

    This paper proposes a reflectivity constraint for velocity estimation to optimally solve the inverse problem in active seismic imaging. This constraint is based on the velocity model derived from the definition of reflectivity and acoustic impedance. The constraint does not require any prior information about the subsurface or large extra computational costs, such as the calculation of so-called Hessian matrices. We incorporate this constraint into the Joint Migration Inversion algorithm, which simultaneously estimates both the reflectivity and the velocity model of the subsurface in an iterative process. Using so-called full wavefield modeling, the misfit between forward-modeled and measured data is minimized. Numerical and field data examples are given to demonstrate the validity of our proposed algorithm in cases where accurate initial models and the low-frequency components of the observed seismic data are absent.

  2. Bulk velocity extraction for nano-scale Newtonian flows

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenfei, E-mail: zwenfei@gmail.com [Key Laboratory of Mechanical Reliability for Heavy Equipments and Large Structures of Hebei Province, Yanshan University, Qinhuangdao 066004 (China); Sun, Hongyu [Key Laboratory of Mechanical Reliability for Heavy Equipments and Large Structures of Hebei Province, Yanshan University, Qinhuangdao 066004 (China)

    2012-04-16

    The conventional velocity extraction algorithm in the MDS method has difficulty determining small flow velocities. This study proposes a new method to calculate the bulk velocity in nano-flows. Based on Newton's law of viscosity, the flow velocity can be obtained by numerical integration from the calculated viscosities and shear stresses. This new method overcomes the difficulty that exists in the conventional MDS method and improves the stability of the computational process. Numerical results show that this method is effective for the extraction of the bulk velocity, whether the bulk velocity is large or small. -- Highlights: ► Proposed a new method to calculate the bulk velocity in nano-flows. ► It is effective for the extraction of small bulk velocities. ► The accuracy, convergence and stability of the new method are good.
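
    The extraction idea above, recovering the bulk velocity profile by integrating the shear rate du/dy = τ(y)/μ(y) obtained from the simulation, can be sketched in a few lines. This illustrates only the numerical-integration step, with a no-slip condition assumed at the first grid point; it is not the authors' MDS post-processing code.

        import numpy as np

        def bulk_velocity_profile(y, tau, mu):
            """Integrate Newton's law of viscosity, du/dy = tau/mu, away from the wall.

            y: wall-normal bin centres (increasing); tau, mu: shear stress and
            viscosity in each bin; assumes u = 0 at y[0] (no slip).
            """
            shear_rate = np.asarray(tau, dtype=float) / np.asarray(mu, dtype=float)
            u = np.zeros_like(shear_rate)
            # cumulative trapezoidal integration from the wall outward
            u[1:] = np.cumsum(0.5 * (shear_rate[1:] + shear_rate[:-1]) * np.diff(y))
            return u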

  3. Bulk velocity extraction for nano-scale Newtonian flows

    International Nuclear Information System (INIS)

    Zhang, Wenfei; Sun, Hongyu

    2012-01-01

    The conventional velocity extraction algorithm in the MDS method has difficulty determining small flow velocities. This study proposes a new method to calculate the bulk velocity in nano-flows. Based on Newton's law of viscosity, the flow velocity can be obtained by numerical integration from the calculated viscosities and shear stresses. This new method overcomes the difficulty that exists in the conventional MDS method and improves the stability of the computational process. Numerical results show that this method is effective for the extraction of the bulk velocity, whether the bulk velocity is large or small. -- Highlights: ► Proposed a new method to calculate the bulk velocity in nano-flows. ► It is effective for the extraction of small bulk velocities. ► The accuracy, convergence and stability of the new method are good.

  4. Uncertainty on PIV mean and fluctuating velocity due to bias and random errors

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Particle image velocimetry is a powerful and flexible fluid velocity measurement tool. In spite of its widespread use, the uncertainty of PIV measurements has not been sufficiently addressed to date. The calculation and propagation of local, instantaneous uncertainties on PIV results into the measured mean and Reynolds stresses are demonstrated for four PIV error sources that impact uncertainty through the vector computation: particle image density, diameter, displacement and velocity gradients. For the purpose of this demonstration, velocity data are acquired in a rectangular jet. Hot-wire measurements are compared to PIV measurements with velocity fields computed using two PIV algorithms. Local uncertainty on the velocity mean and Reynolds stress for these algorithms are automatically estimated using a previously published method. Previous work has shown that PIV measurements can become ‘noisy’ in regions of high shear as well as regions of small displacement. This paper also demonstrates the impact of these effects by comparing PIV data to data acquired using hot-wire anemometry, which does not suffer from the same issues. It is confirmed that flow gradients, large particle images and insufficient particle image displacements can result in elevated measurements of turbulence levels. The uncertainty surface method accurately estimates the difference between hot-wire and PIV measurements for most cases. The uncertainty based on each algorithm is found to be unique, motivating the use of algorithm-specific uncertainty estimates. (paper)

  5. Measurement of transient two-phase flow velocity using statistical signal analysis of impedance probe signals

    International Nuclear Information System (INIS)

    Leavell, W.H.; Mullens, J.A.

    1981-01-01

    A computational algorithm has been developed to measure transient, phase-interface velocity in two-phase, steam-water systems. The algorithm will be used to measure the transient velocity of a steam-water mixture during simulated PWR reflood experiments. By utilizing signals produced by two spatially separated impedance probes immersed in a two-phase mixture, the algorithm computes the average transit time of mixture fluctuations moving between the two probes. This transit time is computed by first measuring the phase shift between the two probe signals after transformation to the frequency domain, and then computing the phase-shift slope by a weighted least-squares fitting technique. Our algorithm, which has been tested with both simulated and real data, is able to accurately track velocity transients as fast as 4 m/s/s
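
    The phase-slope step can be illustrated compactly: form the cross-spectrum of the two probe signals, unwrap its phase, and fit a weighted straight line whose slope gives the transit time. The generic sketch below uses the cross-spectrum magnitude as the least-squares weight, which is an assumption for the example rather than the weighting used in the original algorithm.

        import numpy as np

        def transit_time_from_phase_slope(x1, x2, fs, f_max=None):
            """Transit time between two probe signals from the cross-spectral phase slope.

            x1, x2: equal-length signals from the two probes sampled at fs (Hz).
            If x2(t) = x1(t - tau), the cross-spectrum phase is -2*pi*f*tau.
            """
            n = len(x1)
            f = np.fft.rfftfreq(n, d=1.0 / fs)
            g12 = np.conj(np.fft.rfft(x1)) * np.fft.rfft(x2)   # cross-spectrum
            keep = (f > 0) if f_max is None else (f > 0) & (f < f_max)
            phase = np.unwrap(np.angle(g12[keep]))
            slope = np.polyfit(f[keep], phase, 1, w=np.abs(g12[keep]))[0]
            return -slope / (2.0 * np.pi)    # tau; velocity = probe spacing / tau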

  6. An analytical phantom for the evaluation of medical flow imaging algorithms

    International Nuclear Information System (INIS)

    Pashaei, A; Fatouraee, N

    2009-01-01

    Blood flow characteristics (e.g. velocity, pressure, shear stress, streamlines and volumetric flow rate) are effective tools in the diagnosis of cardiovascular diseases such as atherosclerotic plaque, aneurysm and cardiac muscle failure. Noninvasive estimation of cardiovascular blood flow characteristics is mostly limited to the measurement of velocity components by medical imaging modalities. Once the velocity field is obtained from the images, other flow characteristics within the cardiovascular system can be determined using algorithms relating them to the velocity components. In this work, we propose an analytical flow phantom to evaluate these algorithms accurately. The Navier-Stokes equations are used to derive this flow phantom. The exact solution of these equations yields analytical expressions for the flow characteristics inside the domain. Features such as pulsatility, incompressibility and viscosity of the flow are included in a three-dimensional domain. The velocity field of the resulting system is presented as reference images. These images can be employed to evaluate the performance of different flow characteristic algorithms. In this study, we also present some applications of the obtained phantom: the calculation of the pressure field from velocity data, the volumetric flow rate, the wall shear stress and particle traces are the characteristics whose algorithms are evaluated here. We also present the application of this phantom to the analysis of noisy and low-resolution images. The presented phantom can be considered a benchmark test to compare the accuracy of different flow characteristic algorithms.

  7. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to that of the standard MART, with the benefit of reduced computational time.

  8. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    International Nuclear Information System (INIS)

    Martins, Fabio J W A; Foucaut, Jean-Marc; Stanislas, Michel; Thomas, Lionel; Azevedo, Luis F A

    2015-01-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to that of the standard MART, with the benefit of reduced computational time. (paper)

  9. Evolution of semilocal string networks. II. Velocity estimators

    Science.gov (United States)

    Lopez-Eiguren, A.; Urrestilla, J.; Achúcarro, A.; Avgoustidis, A.; Martins, C. J. A. P.

    2017-07-01

    We continue a comprehensive numerical study of semilocal string networks and their cosmological evolution. These can be thought of as hybrid networks comprised of (nontopological) string segments, whose core structure is similar to that of Abelian Higgs vortices, and whose ends have long-range interactions and behavior similar to that of global monopoles. Our study provides further evidence of a linear scaling regime, already reported in previous studies, for the typical length scale and velocity of the network. We introduce a new algorithm to identify the position of the segment cores. This allows us to determine the length and velocity of each individual segment and follow their evolution in time. We study the statistical distribution of segment lengths and velocities for radiation- and matter-dominated evolution in the regime where the strings are stable. Our segment detection algorithm gives higher length values than previous studies based on indirect detection methods. The statistical distribution shows no evidence of (anti)correlation between the speed and the length of the segments.

  10. Ensemble simulations with discrete classical dynamics

    DEFF Research Database (Denmark)

    Toxværd, Søren

    2013-01-01

    For discrete classical molecular dynamics (MD) obtained by the "Verlet" algorithm (VA) with the time increment $h$ there exists a shadow Hamiltonian $\tilde{H}$ with energy $\tilde{E}(h)$, for which the discrete particle positions lie on the analytic trajectories for $\tilde{H}$. $\tilde{E}(h)$ is employed to determine the relation with the corresponding energy $E$ for the analytic dynamics with $h=0$ and the zero-order estimate $E_0(h)$ of the energy for discrete dynamics, appearing in the literature for MD with VA. We derive a corresponding time-reversible VA algorithm for canonical dynamics...
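
    For reference, a minimal velocity Verlet step is sketched below for a one-dimensional harmonic oscillator; it is only meant to illustrate the discrete dynamics (with time increment h) whose shadow Hamiltonian is discussed in this record, not the canonical-dynamics variant derived there.

        # Minimal velocity Verlet sketch (1-D harmonic oscillator as an assumed example).
        import numpy as np

        def velocity_verlet(force, x0, v0, h, n_steps, m=1.0):
            x, v = x0, v0
            f = force(x)
            traj = []
            for _ in range(n_steps):
                x = x + h * v + 0.5 * h * h * f / m     # position update
                f_new = force(x)
                v = v + 0.5 * h * (f + f_new) / m       # velocity update with averaged force
                f = f_new
                traj.append((x, v))
            return np.array(traj)

        # usage: harmonic oscillator; the discrete energy stays bounded (no secular drift)
        traj = velocity_verlet(lambda x: -x, x0=1.0, v0=0.0, h=0.05, n_steps=2000)
        energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2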

  11. Description of a Normal-Force In-Situ Turbulence Algorithm for Airplanes

    Science.gov (United States)

    Stewart, Eric C.

    2003-01-01

    A normal-force in-situ turbulence algorithm for potential use on commercial airliners is described. The algorithm can produce information that can be used to predict hazardous accelerations of airplanes or to aid meteorologists in forecasting weather patterns. The algorithm uses normal acceleration and other measures of the airplane state to approximate the vertical gust velocity. That is, the fundamental, yet simple, relationship between normal acceleration and the change in normal force coefficient is exploited to produce an estimate of the vertical gust velocity. This simple approach is robust and produces a time history of the vertical gust velocity that would be intuitively useful to pilots. With proper processing, the time history can be transformed into the eddy dissipation rate that would be useful to meteorologists. Flight data for a simplified research implementation of the algorithm are presented for a severe turbulence encounter of the NASA ARIES Boeing 757 research airplane. The results indicate that the algorithm has potential for producing accurate in-situ turbulence measurements. However, more extensive tests and analysis are needed with an operational implementation of the algorithm to make comparisons with other algorithms or methods.
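
    A heavily simplified sketch of the normal-force relation described above is given below; the symbols (load factor nz, wing area S, normal-force-coefficient slope CN_alpha) and the small-angle assumptions are ours for illustration, not the exact formulation used in the flight algorithm.

        # Hedged sketch: approximate vertical gust velocity from the change in
        # normal-force coefficient (all symbol names and assumptions are illustrative).
        def vertical_gust_estimate(nz, weight, rho, V_true, S, CN_alpha):
            """nz: normal load factor (g); returns an approximate gust velocity (m/s)."""
            qbar = 0.5 * rho * V_true ** 2            # dynamic pressure
            dCN = (nz - 1.0) * weight / (qbar * S)    # change in normal-force coefficient
            dalpha = dCN / CN_alpha                   # equivalent change in angle of attack
            return dalpha * V_true                    # small-angle gust velocity estimate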

  12. Quantum-circuit model of Hamiltonian search algorithms

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    We analyze three different quantum search algorithms, namely, the traditional circuit-based Grover's algorithm, its continuous-time analog by Hamiltonian evolution, and the quantum search by local adiabatic evolution. We show that these algorithms are closely related in the sense that they all perform a rotation, at a constant angular velocity, from a uniform superposition of all states to the solution state. This makes it possible to implement the two Hamiltonian-evolution algorithms on a conventional quantum circuit, while keeping the quadratic speedup of Grover's original algorithm. It also clarifies the link between the adiabatic search algorithm and Grover's algorithm

  13. VizieR Online Data Catalog: HD20794 HARPS radial velocities (Feng+, 2017)

    Science.gov (United States)

    Feng, F.; Tuomi, M.; Jones, H. R. A.

    2017-05-01

    HARPS radial velocities, activity indices and differential radial velocities for HD 20794. The HARPS spectra are available in the European Southern Observatory archive, and are processed using the TERRA algorithm (Anglada-Escude and Butler, 2012, Cat. J/ApJS/200/15). (1 data file).

  14. High Dynamic Velocity Range Particle Image Velocimetry Using Multiple Pulse Separation Imaging

    Directory of Open Access Journals (Sweden)

    Tadhg S. O’Donovan

    2010-12-01

    The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range, however flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods.

  15. High dynamic velocity range particle image velocimetry using multiple pulse separation imaging.

    Science.gov (United States)

    Persoons, Tim; O'Donovan, Tadhg S

    2011-01-01

    The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range, however flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods.
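
    The composite-field idea can be sketched as follows (an illustration under assumed validity criteria, not the authors' implementation): for every vector location, the largest pulse separation whose displacement and correlation pass simple checks is retained.

        # Illustrative MPS composition: pick, per vector, the largest valid pulse separation.
        import numpy as np

        def compose_mps(displacements, peak_ratios, dts, window=32, min_peak_ratio=1.5):
            """displacements: list of (ny, nx, 2) pixel fields, one per pulse separation dt."""
            ny, nx, _ = displacements[0].shape
            velocity = np.full((ny, nx, 2), np.nan)
            order = np.argsort(dts)[::-1]                    # try the largest dt first
            for k in order:
                disp, dt = displacements[k], dts[k]
                mag = np.linalg.norm(disp, axis=-1)
                ok = (mag < window / 4.0) & (peak_ratios[k] > min_peak_ratio)
                fill = ok & np.isnan(velocity[..., 0])
                velocity[fill] = disp[fill] / dt             # locally optimal dt wins
            return velocity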

  16. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver centered circular motion model. Then, in the single model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple models (MM) framework, the algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
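
    For context, the standard coordinated-turn transition for the state [x, vx, y, vy] with known turn rate omega over a sampling interval T can be written as in the sketch below; sign conventions for the turn direction vary between references, and this is a textbook form rather than the paper's specific parameterization.

        # Standard CT transition matrix with known turn rate (illustrative).
        import numpy as np

        def ct_transition(omega, T):
            # assumes omega != 0; the model tends to constant velocity as omega -> 0
            s, c = np.sin(omega * T), np.cos(omega * T)
            return np.array([
                [1.0, s / omega,         0.0, -(1.0 - c) / omega],
                [0.0, c,                 0.0, -s],
                [0.0, (1.0 - c) / omega, 1.0, s / omega],
                [0.0, s,                 0.0, c],
            ])

        # usage: x_next = ct_transition(np.deg2rad(3.0), T=1.0) @ x   (x = [x, vx, y, vy])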

  17. Rayleigh wave group velocity and shear wave velocity structure in the San Francisco Bay region from ambient noise tomography

    Science.gov (United States)

    Li, Peng; Thurber, Clifford

    2018-06-01

    We derive new Rayleigh wave group velocity models and a 3-D shear wave velocity model of the upper crust in the San Francisco Bay region using an adaptive grid ambient noise tomography algorithm and 6 months of continuous seismic data from 174 seismic stations from multiple networks. The resolution of the group velocity models is 0.1°-0.2° for short periods (˜3 s) and 0.3°-0.4° for long periods (˜10 s). The new shear wave velocity model of the upper crust reveals a number of important structures. We find distinct velocity contrasts at the Golden Gate segment of the San Andreas Fault, the West Napa Fault, the central part of the Hayward Fault and the southern part of the Calaveras Fault. Low shear wave velocities are mainly located in Tertiary and Quaternary basins, for instance, La Honda Basin, Livermore Valley and the western and eastern edges of Santa Clara Valley. Low shear wave velocities are also observed at the Sonoma volcanic field. Areas of high shear wave velocity include the Santa Lucia Range, the Gabilan Range and Ben Lomond Plutons, and the Diablo Range, where Franciscan Complex or Salinian rocks are exposed.

  18. A finite difference approach to despiking in-stationary velocity data - tested on a triple-lidar

    DEFF Research Database (Denmark)

    Meyer Forsting, Alexander Raul; Troldborg, Niels

    2016-01-01

    A novel despiking method is presented for in-stationary wind lidar velocity measurements. A finite difference approach yields the upper and lower bounds for a valid velocity reading. The sole input to the algorithm is the velocity series and optionally a far-field reference to the temporal
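
    A minimal sketch of the finite-difference bounding idea is given below, assuming the bound is expressed as a maximum admissible acceleration a_max; the replacement rule and parameter names are assumptions for illustration, not the published method.

        # Hedged sketch: flag samples whose implied acceleration exceeds a_max.
        import numpy as np

        def despike(u, dt, a_max):
            """Replace velocity samples outside the finite-difference bounds by interpolation."""
            u = np.asarray(u, dtype=float).copy()
            for i in range(1, len(u) - 1):
                lower = u[i - 1] - a_max * dt
                upper = u[i - 1] + a_max * dt
                if not (lower <= u[i] <= upper):          # invalid reading -> spike
                    u[i] = 0.5 * (u[i - 1] + u[i + 1])    # simple local repair
            return u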

  19. Blob sizes and velocities in the Alcator C-Mod scrape-off layer

    DEFF Research Database (Denmark)

    Kube, R.; Garcia, O.E.; LaBombard, B.

    A new blob-tracking algorithm for the GPI diagnostic installed in the outboard-midplane of Alcator C-Mod is developed. It tracks large-amplitude fluctuations propagating through the scrape-off layer and calculates blob sizes and velocities. We compare the results of this method to a blob velocity

  20. Analyzing angular distributions for two-step dissociation mechanisms in velocity map imaging.

    Science.gov (United States)

    Straus, Daniel B; Butler, Lynne M; Alligood, Bridget W; Butler, Laurie J

    2013-08-15

    Increasingly, velocity map imaging is becoming the method of choice to study photoinduced molecular dissociation processes. This paper introduces an algorithm to analyze the measured net speed, P(v_net), and angular, β(v_net), distributions of the products from a two-step dissociation mechanism, where the first step but not the second is induced by absorption of linearly polarized laser light. Typically, this might be the photodissociation of a C-X bond (X = halogen or other atom) to produce an atom and a momentum-matched radical that has enough internal energy to subsequently dissociate (without the absorption of an additional photon). It is this second step, the dissociation of the unstable radicals, that one wishes to study, but the measured net velocity of the final products is the vector sum of the velocity imparted to the radical in the primary photodissociation (which is determined by taking data on the momentum-matched atomic cophotofragment) and the additional velocity vector imparted in the subsequent dissociation of the unstable radical. The algorithm allows one to determine, from the forward-convolution fitting of the net velocity distribution, the distribution of velocity vectors imparted in the second step of the mechanism. One can thus deduce the secondary velocity distribution, characterized by a speed distribution P(v_{1,2°}) and an angular distribution I(θ_{2°}), where θ_{2°} is the angle between the dissociating radical's velocity vector and the additional velocity vector imparted to the product detected from the subsequent dissociation of the radical.
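
    The vector-sum structure of the problem can be illustrated with a small Monte Carlo forward convolution (our sketch, not the published fitting code): sample a primary recoil velocity, add a secondary velocity drawn from a trial secondary speed distribution (an isotropic secondary angular distribution is assumed here for simplicity rather than the fitted one), and histogram the resulting net speeds for comparison with the measurement.

        # Forward-convolution sketch: net speed from the sum of primary and secondary vectors.
        import numpy as np

        rng = np.random.default_rng(0)

        def sample_isotropic(n):
            """n unit vectors distributed uniformly on the sphere."""
            u = rng.uniform(-1.0, 1.0, n)               # cos(theta)
            phi = rng.uniform(0.0, 2.0 * np.pi, n)
            s = np.sqrt(1.0 - u ** 2)
            return np.column_stack([s * np.cos(phi), s * np.sin(phi), u])

        def net_speeds(primary_speeds, secondary_speeds, n=100000):
            v1 = rng.choice(primary_speeds, n)[:, None] * sample_isotropic(n)   # photolysis step
            v2 = rng.choice(secondary_speeds, n)[:, None] * sample_isotropic(n) # radical dissociation
            return np.linalg.norm(v1 + v2, axis=1)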

  1. Distributed leader-follower flocking control for multi-agent dynamical systems with time-varying velocities

    NARCIS (Netherlands)

    Yu, Wenwu; Chen, Guanrong; Cao, Ming

    Using tools from algebraic graph theory and nonsmooth analysis in combination with ideas of collective potential functions, velocity consensus and navigation feedback, a distributed leader-follower flocking algorithm for multi-agent dynamical systems with time-varying velocities is developed where

  2. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

    Kim, Dong Yun

    1997-02-01

    In this research, we propose a fuzzy gain scheduler (FGS) with an intelligent learning algorithm for reactor control. In the proposed algorithm, the gradient descent method is used to generate the rule bases of a fuzzy algorithm by learning. These rule bases are obtained by minimizing an objective function, which is called a performance cost function. The objective of the FGS with an intelligent learning algorithm is to generate adequate gains, which minimize the error of the system. The proposed algorithm can reduce the time and effort required for obtaining the fuzzy rules through the intelligent learning function. The evolutionary programming algorithm is modified and adopted as the method for finding the optimal gains, which are used as the initial gains of the FGS with learning function. It is applied to reactor control of a nuclear power plant (NPP), and the results are compared with those of a conventional PI controller with fixed gains. As a result, it is shown that the proposed algorithm is superior to the conventional PI controller.
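
    The gradient-descent element of the scheme can be illustrated with the toy sketch below, where J is a user-supplied performance cost evaluated by simulation; the finite-difference gradients, the learning rate and the restriction to two gains are illustrative assumptions, not the paper's rule-base learning procedure.

        # Toy gradient-descent tuning of PI gains against a performance cost J(Kp, Ki).
        def tune_gains(J, Kp, Ki, lr=0.01, eps=1e-4, iters=200):
            for _ in range(iters):
                dKp = (J(Kp + eps, Ki) - J(Kp - eps, Ki)) / (2 * eps)   # numeric gradients
                dKi = (J(Kp, Ki + eps) - J(Kp, Ki - eps)) / (2 * eps)
                Kp, Ki = Kp - lr * dKp, Ki - lr * dKi                   # descend the cost
            return Kp, Ki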

  3. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

    Dong Yun Kim; Poong Hyun Seong

    1997-01-01

    In this research, we propose a fuzzy gain scheduler (FGS) with an intelligent learning algorithm for reactor control. In the proposed algorithm, the gradient descent method is used to generate the rule bases of a fuzzy algorithm by learning. These rule bases are obtained by minimizing an objective function, which is called a performance cost function. The objective of the FGS with an intelligent learning algorithm is to generate gains which minimize the error of the system. The proposed algorithm can reduce the time and effort required for obtaining the fuzzy rules through the intelligent learning function. It is applied to reactor control of a nuclear power plant (NPP), and the results are compared with those of a conventional PI controller with fixed gains. As a result, it is shown that the proposed algorithm is superior to the conventional PI controller. (author)

  4. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    Science.gov (United States)

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but the convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model by including acceleration and velocity errors to make the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test are carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.

  5. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    Science.gov (United States)

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.

  6. Novel mathematical algorithm for pupillometric data analysis.

    Science.gov (United States)

    Canver, Matthew C; Canver, Adam C; Revere, Karen E; Amado, Defne; Bennett, Jean; Chung, Daniel C

    2014-01-01

    Pupillometry is used clinically to evaluate retinal and optic nerve function by measuring the pupillary response to light stimuli. We have developed a mathematical algorithm to automate and expedite the analysis of non-filtered, non-calculated pupillometric data obtained from mouse pupillary light reflex recordings, i.e., dynamic pupillary diameter recordings following exposure to varying light intensities. The non-filtered, non-calculated pupillometric data are filtered through a low-pass finite impulse response (FIR) filter. Thresholding is used to remove data caused by eye blinking, loss of pupil tracking, and/or head movement. Twelve physiologically relevant parameters were extracted from the collected data: (1) baseline diameter, (2) minimum diameter, (3) response amplitude, (4) re-dilation amplitude, (5) percent of baseline diameter, (6) response time, (7) re-dilation time, (8) average constriction velocity, (9) average re-dilation velocity, (10) maximum constriction velocity, (11) maximum re-dilation velocity, and (12) onset latency. No significant differences were noted between parameters derived from algorithm-calculated values and manually derived results (p ≥ 0.05). This mathematical algorithm will expedite endpoint data derivation and eliminate human error in the manual calculation of pupillometric parameters from non-filtered, non-calculated pupillometric values. Subsequently, these values can be used as reference metrics for characterizing the natural history of retinal disease. Furthermore, it will be instrumental in the assessment of functional visual recovery in humans and pre-clinical models of retinal degeneration and optic nerve disease following pharmacological or gene-based therapies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
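
    A condensed sketch of such a processing chain is shown below; the FIR cut-off, blink threshold, sampling assumptions and the subset of parameters extracted are placeholders for illustration rather than the published algorithm's values.

        # Hedged sketch of the FIR filtering, thresholding and parameter extraction steps.
        import numpy as np
        from scipy import signal

        def pupil_parameters(diameter, fs, light_onset_idx, blink_threshold=0.2):
            taps = signal.firwin(numtaps=51, cutoff=4.0, fs=fs)     # low-pass FIR (assumed cut-off)
            d = signal.filtfilt(taps, [1.0], diameter)              # assumes a long recording
            d[d < blink_threshold] = np.nan                         # blink / lost tracking removal
            baseline = np.nanmean(d[:light_onset_idx])
            minimum = np.nanmin(d[light_onset_idx:])
            velocity = np.gradient(d, 1.0 / fs)                     # diameter rate of change
            return {
                "baseline": baseline,
                "minimum": minimum,
                "response_amplitude": baseline - minimum,
                "percent_of_baseline": 100.0 * minimum / baseline,
                "max_constriction_velocity": np.nanmin(velocity[light_onset_idx:]),  # most negative slope
            }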

  7. Near-Surface Seismic Velocity Data: A Computer Program For ...

    African Journals Online (AJOL)

    A computer program (NESURVELANA) has been developed in the Visual Basic programming language to carry out near-surface velocity analysis. The method of analysis includes algorithm design and Visual Basic code generation for plotting arrival time (ms) against geophone depth (m), employing the ...

  8. Two-stage open-loop velocity compensating method applied to multi-mass elastic transmission system

    Directory of Open Access Journals (Sweden)

    Zhang Deli

    2014-02-01

    In this paper, a novel vibration-suppression open-loop control method for a multi-mass system is proposed, which uses a two-stage velocity compensating algorithm and a fuzzy I + P controller. This compensating method is based on model-based control theory in order to provide a damping effect on the mechanical part of the system. The mathematical model of the multi-mass system is built and reduced to estimate the velocities of the masses. The velocity difference between adjacent masses is calculated dynamically. A 3-mass system is regarded as the composition of two 2-mass systems in order to realize the two-stage compensating algorithm. Instead of using a typical PI controller in the velocity compensating loop, a fuzzy I + P controller is designed and its input variables are decided according to their impact on the system, which is different from the conventional fuzzy PID controller design rules. Simulations and experimental results show that the proposed velocity compensating method is effective in suppressing vibration on a 3-mass system and that it performs better when the designed fuzzy I + P controller is utilized in the control system.

  9. Application of velocity filtering to optical-flow passive ranging

    Science.gov (United States)

    Barniv, Yair

    1992-01-01

    The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.

  10. Pre- and post-processing filters for improvement of blood velocity estimation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2000-01-01

    with different signal-to-noise ratios (SNR). The exact extent of the vessel and the true velocities are thereby known. Velocity estimates were obtained by employing Kasai's autocorrelator on the data. The post-processing filter was used on the computed 2D velocity map. An improvement of the RMS error ... velocity in the vessels. Post-processing is beneficial to obtain an image that minimizes the variation and presents the important information to the clinicians. Applying the theory of fluid mechanics introduces restrictions on the variations possible in a flow field. Neighboring estimates in time and space ... should be highly correlated, since transitions should occur smoothly. This idea is the basis of the algorithm developed in this study. From Bayesian image processing theory an a posteriori probability distribution for the velocity field is computed based on constraints on smoothness. An estimate ...

  11. Toward precise solution of one-dimensional velocity inverse problems

    International Nuclear Information System (INIS)

    Gray, S.; Hagin, F.

    1980-01-01

    A family of one-dimensional inverse problems are considered with the goal of reconstructing velocity profiles to reasonably high accuracy. The travel-time variable change is used together with an iteration scheme to produce an effective algorithm for computation. Under modest assumptions the scheme is shown to be convergent

  12. An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.

    Science.gov (United States)

    Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun

    2017-09-01

    The selection of swarm leaders (i.e., the personal best and global best), is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and then each particle is assigned accordingly to optimize each subproblem. A novel archive-guided velocity update method is designed to guide the swarm for exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.

  13. Path following mobile robot in the presence of velocity constraints

    DEFF Research Database (Denmark)

    Bak, Martin; Poulsen, Niels Kjølstad; Ravn, Ole

    2001-01-01

    This paper focuses on path following algorithms for mobile robots with velocity constraints on the wheels. The path considered consists of straight lines intersected with given angles. We present a fast real-time receding horizon controller which anticipates the intersections and smoothly control...

  14. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

    Kim, Dong Yun; Seong, Poong Hyun

    1996-01-01

    In this study, we proposed a fuzzy gain scheduler with an intelligent learning algorithm for reactor control. In the proposed algorithm, we used the gradient descent method to learn the rule bases of a fuzzy algorithm. These rule bases are learned toward minimizing an objective function, which is called a performance cost function. The objective of the fuzzy gain scheduler with an intelligent learning algorithm is the generation of adequate gains, which minimize the error of the system. The condition of every plant generally changes as time goes by. That is, the initial gains obtained through the analysis of the system are no longer suitable for the changed plant, and we need to set new gains that minimize the error stemming from the changed condition of the plant. In this paper, we applied this strategy to reactor control of a nuclear power plant (NPP), and the results were compared with those of a simple PI controller, which has fixed gains. As a result, it was shown that the proposed algorithm was superior to the simple PI controller.

  15. Transport coefficients of multi-particle collision algorithms with velocity-dependent collision rules

    International Nuclear Information System (INIS)

    Ihle, Thomas

    2008-01-01

    Detailed calculations of the transport coefficients of a recently introduced particle-based model for fluid dynamics with a non-ideal equation of state are presented. Excluded volume interactions are modeled by means of biased stochastic multi-particle collisions which depend on the local velocities and densities. Momentum and energy are exactly conserved locally. A general scheme to derive transport coefficients for such biased, velocity-dependent collision rules is developed. Analytic expressions for the self-diffusion coefficient and the shear viscosity are obtained, and very good agreement is found with numerical results at small and large mean free paths. The viscosity turns out to be proportional to the square root of temperature, as in a real gas. In addition, the theoretical framework is applied to a two-component version of the model, and expressions for the viscosity and the difference in diffusion of the two species are given

  16. VeLoc: Finding Your Car in Indoor Parking Structures.

    Science.gov (United States)

    Gao, Ruipeng; He, Fangpu; Li, Teng

    2018-05-02

    While WiFi-based indoor localization is attractive, there are many indoor places without WiFi coverage that have a strong demand for localization capability. This paper describes a system and associated algorithms to address the indoor vehicle localization problem without the installation of additional infrastructure. In this paper, we propose VeLoc, which utilizes the sensor data of smartphones in the vehicle together with the floor map of the parking structure to track the vehicle in real time. VeLoc simultaneously harnesses constraints imposed by the map and environment sensing. All these cues are codified into a novel augmented particle filtering framework to estimate the position of the vehicle. Experimental results show that VeLoc performs well even when the initial position and the initial heading direction of the vehicle are completely unknown.

  17. A review of velocity-type PSO variants

    OpenAIRE

    Ivo Sousa-Ferreira; Duarte Sousa

    2017-01-01

    This paper presents a review of the particular variants of particle swarm optimization, based on the velocity-type class. The original particle swarm optimization algorithm was developed as an unconstrained optimization technique, which lacks a model that is able to handle constrained optimization problems. The particle swarm optimization and its inapplicability in constrained optimization problems are solved using the dynamic-objective constraint-handling method. The dynamic-objective constr...

  18. Field Programmable Gate Array Based Parallel Strapdown Algorithm Design for Strapdown Inertial Navigation Systems

    Directory of Open Access Journals (Sweden)

    Long-Hua Ma

    2011-08-01

    A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on the single-speed structure, in which all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Different from existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows increasing the updating rate of the coning and sculling compensation, using more gyro incremental angle and accelerometer incremental velocity samples, in order to improve the accuracy of the system. Then, in order to implement the new strapdown algorithm in a single FPGA chip, the parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform on the basis of some fighter data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm to meet the real-time and high-precision requirements of the system in a high dynamic environment, relative to the existing implementation on the DSP platform.

  19. A simple fall detection algorithm for Powered Two Wheelers

    OpenAIRE

    BOUBEZOUL, Abderrahmane; ESPIE, Stéphane; LARNAUDIE, Bruno; BOUAZIZ, Samir

    2013-01-01

    The aim of this study is to evaluate a low-complexity fall detection algorithm that uses both acceleration and angular velocity signals to trigger an alert system or to inflate an airbag jacket. The proposed fall detection algorithm is a threshold-based algorithm, using data from three accelerometers and three gyroscopes mounted on the motorcycle. During the first step, the common fall accident configurations were selected and analyzed in order to identify the main causation factors. On the...
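
    In the spirit of the threshold-based approach described above, a minimal sketch is given below; the acceleration and roll-rate thresholds are illustrative values, not those calibrated from the accident configurations in the study.

        # Minimal threshold-based fall detector sketch (thresholds are assumptions).
        import numpy as np

        def detect_fall(acc, gyro, acc_thresh=3.0 * 9.81, roll_rate_thresh=2.0):
            """acc, gyro: (n, 3) arrays in m/s^2 and rad/s; returns first trigger index or None."""
            a_mag = np.linalg.norm(acc, axis=1)           # shock on impact / loss of control
            roll_rate = np.abs(gyro[:, 0])                # fast roll is typical of a capsize
            trigger = (a_mag > acc_thresh) | (roll_rate > roll_rate_thresh)
            return int(np.argmax(trigger)) if trigger.any() else None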

  20. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units

    Directory of Open Access Journals (Sweden)

    Sang Cheol Lee

    2016-12-01

    This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed for using velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high accuracy optic gyro, which was employed as core attitude equipment in the helicopter.

  1. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units

    Science.gov (United States)

    Lee, Sang Cheol; Hong, Sung Kyung

    2016-01-01

    This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed for using velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high accuracy optic gyro, which was employed as core attitude equipment in the helicopter. PMID:27973429
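
    The core of the velocity-aiding idea can be sketched as below: a turn-induced term omega x v, built from the gyro rates and an airspeed-derived body velocity, is subtracted from the accelerometer output before conventional tilt formulas are applied. Frames, sign conventions and noise handling are simplified assumptions here, not the paper's filter design.

        # Hedged sketch: correct the accelerometer for turn acceleration, then compute tilt.
        import numpy as np

        def attitude_from_corrected_accel(acc_body, gyro_body, v_body):
            """All inputs are 3-vectors in the body frame (m/s^2, rad/s, m/s)."""
            f = np.asarray(acc_body) - np.cross(gyro_body, v_body)   # remove omega x v turn term
            roll = np.arctan2(f[1], f[2])                            # common accelerometer tilt
            pitch = np.arctan2(-f[0], np.hypot(f[1], f[2]))          # formulas; conventions vary
            return roll, pitch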

  2. Metaheuristic optimization approaches to predict shear-wave velocity from conventional well logs in sandstone and carbonate case studies

    Science.gov (United States)

    Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi

    2018-06-01

    Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (V_P) is acquired using conventional acoustic logging tools in many drilled wells. But the shear-wave velocity (V_S) is recorded using advanced logging tools only in a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming. So, alternative methods are often used to estimate V_S. Heretofore, several empirical correlations that predict V_S by using well logging measurements and petrophysical data such as V_P, porosity and density have been proposed. However, these empirical relations can only be used in limited cases. The use of intelligent systems and optimization algorithms is an inexpensive, fast and efficient approach for predicting V_S. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting V_S: teaching–learning based optimization, imperialist competitive and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms to predict V_S using conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the estimated V_S using each of the employed metaheuristic approaches with observed V_S and also with those predicted by Greenberg–Castagna relations. The results indicate that, for both sandstone and carbonate case studies, all three implemented metaheuristic algorithms are more efficient and reliable than the empirical correlation to predict V_S. The results also demonstrate that in both sandstone and carbonate case studies, the performance of an artificial bee
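
    As a simple baseline against which such predictors are judged (not the metaheuristic workflow of the paper), one can fit a linear V_P-V_S regression in wells where both logs exist and apply it where only V_P was recorded, in the spirit of empirical Greenberg–Castagna-type relations; the sketch below assumes velocities given in consistent units.

        # Illustrative baseline: linear Vs predictor fitted from wells with both logs.
        import numpy as np

        def fit_vs_predictor(vp_train, vs_train):
            a, b = np.polyfit(vp_train, vs_train, deg=1)     # Vs ~ a * Vp + b
            return lambda vp: a * np.asarray(vp) + b

        # usage: predictor = fit_vs_predictor(vp_logged, vs_logged); vs_hat = predictor(vp_only)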

  3. Velocity Tracking Control of Wheeled Mobile Robots by Iterative Learning Control

    Directory of Open Access Journals (Sweden)

    Xiaochun Lu

    2016-05-01

    This paper presents an iterative learning control (ILC) strategy to resolve the trajectory tracking problem of wheeled mobile robots (WMRs) based on the dynamic model. In previous studies of WMR trajectory tracking, ILC was usually applied to the kinematic model of WMRs under the assumption that the desired velocity can be tracked immediately. However, this assumption cannot be realized in the real world. The kinematic and dynamic models of WMRs are deduced in this chapter, and a novel combination of the D-type ILC algorithm and the dynamic model of the WMR with random bounded disturbances is presented. To analyze the convergence of the algorithm, the contraction mapping method is adopted, which shows that the designed controller can make the velocity tracking errors converge to zero completely when the number of iterations tends to infinity. Simulation results show the effectiveness of D-type ILC in the trajectory tracking problem of WMRs, demonstrating the effectiveness and robustness of the algorithm under random bounded disturbances. A comparative study conducted between D-type ILC and a compound cosine function neural network (NN) controller also demonstrates the effectiveness of the ILC strategy.
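
    The D-type update at the heart of the scheme can be written compactly as u_{k+1}(t) = u_k(t) + Gamma * de_k(t)/dt; the sketch below assumes a scalar learning gain and a uniformly sampled error trajectory, and is only an illustration of the update form, not the paper's controller.

        # Minimal D-type ILC update sketch.
        import numpy as np

        def d_type_ilc_update(u_k, e_k, dt, gamma=0.5):
            """u_k, e_k: input and velocity-tracking error over one iteration (1-D arrays)."""
            de_k = np.gradient(e_k, dt)          # d/dt of the error trajectory
            return u_k + gamma * de_k            # next-iteration input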

  4. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search

    Directory of Open Access Journals (Sweden)

    Xingwang Huang

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases, this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numeric results obtained in the benchmark function experiments prove that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Compared with several other heuristic algorithms on zero-one knapsack problems, it is also verified that the proposed algorithm is more able to avoid local minima.

  5. VeLoc: Finding Your Car in Indoor Parking Structures

    Directory of Open Access Journals (Sweden)

    Ruipeng Gao

    2018-05-01

    While WiFi-based indoor localization is attractive, there are many indoor places without WiFi coverage that have a strong demand for localization capability. This paper describes a system and associated algorithms to address the indoor vehicle localization problem without the installation of additional infrastructure. In this paper, we propose VeLoc, which utilizes the sensor data of smartphones in the vehicle together with the floor map of the parking structure to track the vehicle in real time. VeLoc simultaneously harnesses constraints imposed by the map and environment sensing. All these cues are codified into a novel augmented particle filtering framework to estimate the position of the vehicle. Experimental results show that VeLoc performs well even when the initial position and the initial heading direction of the vehicle are completely unknown.

  6. Stress wave velocity patterns in the longitudinal-radial plane of trees for defect diagnosis

    Science.gov (United States)

    Guanghui Li; Xiang Weng; Xiaocheng Du; Xiping Wang; Hailin Feng

    2016-01-01

    Acoustic tomography for urban tree inspection typically uses stress wave data to reconstruct tomographic images of the trunk cross section using an interpolation algorithm. This traditional technique does not take into account the stress wave velocity patterns along the tree height. In this study, we proposed an analytical model for the wave velocity in the longitudinal–...

  7. Zero-crossing detection algorithm for arrays of optical spatial filtering velocimetry sensors

    DEFF Research Database (Denmark)

    Jakobsen, Michael Linde; Pedersen, Finn; Hanson, Steen Grüner

    2008-01-01

    This paper presents a zero-crossing detection algorithm for arrays of compact low-cost optical sensors based on spatial filtering for measuring fluctuations in angular velocity of rotating solid structures. The algorithm is applicable for signals with moderate signal-to-noise ratios, and delivers...... repeating the same measurement error for each revolution of the target, and to gain high performance measurement of angular velocity. The traditional zero-crossing detection is extended by 1) inserting an appropriate band-pass filter before the zero-crossing detection, 2) measuring time periods between zero...
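
    A bare-bones sketch of band-pass filtering followed by zero-crossing timing is shown below; the filter order, cut-off frequencies and crossing-detection rule are assumptions for illustration, not the sensor-array algorithm itself.

        # Hedged sketch: band-pass filter, then time periods between zero crossings.
        import numpy as np
        from scipy import signal

        def zero_crossing_periods(x, fs, f_lo, f_hi):
            sos = signal.butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
            y = signal.sosfiltfilt(sos, x)                                    # zero-phase band-pass
            idx = np.flatnonzero(np.signbit(y[:-1]) != np.signbit(y[1:]))     # sign changes
            return np.diff(idx) / fs                                          # periods between crossings (s)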

  8. Study of the mode of angular velocity damping for a spacecraft at non-standard situation

    Science.gov (United States)

    Davydov, A. A.; Sazonov, V. V.

    2012-07-01

    Non-standard situation on a spacecraft (Earth's satellite) is considered, when there are no measurements of the spacecraft's angular velocity component relative to one of its body axes. Angular velocity measurements are used in controlling spacecraft's attitude motion by means of flywheels. The arising problem is to study the operation of standard control algorithms in the absence of some necessary measurements. In this work this problem is solved for the algorithm ensuring the damping of spacecraft's angular velocity. Such a damping is shown to be possible not for all initial conditions of motion. In the general case one of two possible final modes is realized, each described by stable steady-state solutions of the equations of motion. In one of them, the spacecraft's angular velocity component relative to the axis, for which the measurements are absent, is nonzero. The estimates of the regions of attraction are obtained for these steady-state solutions by numerical calculations. A simple technique is suggested that allows one to eliminate the initial conditions of the angular velocity damping mode from the attraction region of an undesirable solution. Several realizations of this mode that have taken place are reconstructed. This reconstruction was carried out using approximations of telemetry values of the angular velocity components and the total angular momentum of flywheels, obtained at the non-standard situation, by solutions of the equations of spacecraft's rotational motion.

  9. Wake Component Detection in X-Band SAR Images for Ship Heading and Velocity Estimation

    Directory of Open Access Journals (Sweden)

    Maria Daniela Graziano

    2016-06-01

    A new algorithm for ship wake detection is developed with the aim of ship heading and velocity estimation. It exploits the Radon transform and utilizes merit indexes in the intensity domain to validate the detected linear features as real components of the ship wake. Finally, ship velocity is estimated by state-of-the-art techniques of azimuth shift and Kelvin arm wavelength. The algorithm is applied to 13 X-band SAR images from the TerraSAR-X and COSMO/SkyMed missions with different polarization and incidence angles. Results show that the vast majority of wake features are correctly detected and validated also in critical situations, i.e., when multiple wake appearances or dark areas not related to wake features are imaged. The ship route estimations are validated with truth-at-sea in seven cases. Finally, it is also verified that the algorithm does not detect wakes in the surroundings of 10 ships without wake appearances.

  10. A two pressure-velocity approach for immersed boundary methods in three dimensional incompressible flows

    International Nuclear Information System (INIS)

    Sabir, O; Ahmad, Norhafizan; Nukman, Y; Tuan Ya, T M Y S

    2013-01-01

    This paper describes an innovative method for computing fluid-solid interaction using immersed boundary methods with two-stage pressure-velocity corrections. The algorithm calculates the interactions between incompressible viscous flows and a solid shape in a three-dimensional domain. The fractional step method is used to solve the Navier-Stokes equations in finite difference schemes. Most IBMs are concerned with the exchange of momentum between the Eulerian variables (fluid) and the Lagrangian nodes (solid). To address that concern, a new algorithm that corrects the pressure and the velocity using the Simplified Marker and Cell method is added. This scheme is applied on a staggered grid to simulate the flow past a circular cylinder and to study the effect of the new stage on the calculation cost. To evaluate the accuracy of the computations, the results are compared with previous software results. The paper confirms the capacity of the new algorithm for accurate and robust simulation of fluid-solid interaction with respect to the pressure field.

  11. Constraining fault interpretation through tomographic velocity gradients: application to northern Cascadia

    Directory of Open Access Journals (Sweden)

    K. Ramachandran

    2012-02-01

    Spatial gradients of tomographic velocities are seldom used in interpretation of subsurface fault structures. This study shows that spatial velocity gradients can be used effectively in identifying subsurface discontinuities in the horizontal and vertical directions. Three-dimensional velocity models constructed through tomographic inversion of active source and/or earthquake traveltime data are generally built from an initial 1-D velocity model that varies only with depth. Regularized tomographic inversion algorithms impose constraints on the roughness of the model that help to stabilize the inversion process. Final velocity models obtained from regularized tomographic inversions have smooth three-dimensional structures that are required by the data. Final velocity models are usually analyzed and interpreted either as a perturbation velocity model or as an absolute velocity model. Compared to perturbation velocity model, absolute velocity models have an advantage of providing constraints on lithology. Both velocity models lack the ability to provide sharp constraints on subsurface faults. An interpretational approach utilizing spatial velocity gradients applied to northern Cascadia shows that subsurface faults that are not clearly interpretable from velocity model plots can be identified by sharp contrasts in velocity gradient plots. This interpretation resulted in inferring the locations of the Tacoma, Seattle, Southern Whidbey Island, and Darrington Devil's Mountain faults much more clearly. The Coast Range Boundary fault, previously hypothesized on the basis of sedimentological and tectonic observations, is inferred clearly from the gradient plots. Many of the fault locations imaged from gradient data correlate with earthquake hypocenters, indicating their seismogenic nature.
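
    The gradient computation itself is straightforward; a short sketch is given below, in which the grid spacings, the axis ordering and the choice of the horizontal gradient magnitude as the interpretation attribute are assumptions for illustration.

        # Simple sketch: gradient attributes of a 3-D tomographic velocity model.
        import numpy as np

        def velocity_gradients(v, dx, dy, dz):
            """v: 3-D velocity model indexed (z, y, x); returns gradient magnitudes."""
            dvdz, dvdy, dvdx = np.gradient(v, dz, dy, dx)
            horizontal = np.hypot(dvdx, dvdy)      # sharp lateral contrasts -> candidate faults
            return horizontal, np.abs(dvdz)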

  12. Optimal PID Controller Design Using Adaptive VURPSO Algorithm

    Science.gov (United States)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization algorithm (VURPSO). The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system to reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster and in less computation time to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.
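
    For orientation, the canonical PSO velocity and position update that the adaptive momentum (inertia) factor acts on is sketched below; the decay schedule shown in the usage comment is an assumption, not the AVURPSO rule.

        # Canonical PSO step with an externally supplied inertia (momentum) factor w.
        import numpy as np

        def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, rng=np.random.default_rng()):
            """One swarm update; x, v, pbest are (n_particles, dim), gbest is (dim,)."""
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
            return x + v, v

        # usage: decay the inertia each iteration, e.g. w = 0.9 - 0.5 * it / max_iter (assumed schedule)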

  13. Microseismic Velocity Imaging of the Fracturing Zone

    Science.gov (United States)

    Zhang, H.; Chen, Y.

    2015-12-01

    Hydraulic fracturing of low-permeability reservoirs can induce microseismic events during fracture development. For this reason, microseismic monitoring using sensors on the surface or in boreholes has been widely used to delineate the spatial distribution of fractures and to understand fracturing mechanisms. It is often the case that the stimulated reservoir volume (SRV) is determined solely from microseismic locations. However, it is known that some fracture development stages are associated with long-period, long-duration events rather than microseismic events. In addition, because microseismic events are inherently weak and there are different sources of noise during monitoring, some microseismic events cannot be detected and thus located. Therefore the estimation of the SRV is biased if it is determined solely by microseismic locations. Because the presence of fluids and fractures decreases the seismic velocity of reservoir layers, we have developed a near-real-time seismic velocity tomography method to characterize velocity changes associated with the fracturing process. The method is based on the double-difference seismic tomography algorithm and images the fracturing zone where microseismic events occur by using differential arrival times from microseismic event pairs. To take into account varying data distribution for different fracking stages, the method solves the velocity model in the wavelet domain so that different scales of model features can be obtained according to the data distribution. We have applied this real-time tomography method to both acoustic emission data from a lab experiment and microseismic data from a downhole microseismic monitoring project for a shale gas hydraulic fracturing treatment. The tomography results from the lab data clearly show the velocity changes associated with different rock fracturing stages. For the field data application, it shows that microseismic events are located in low-velocity anomalies. By

  14. Moveout analysis of wide-azimuth data in the presence of lateral velocity variation

    KAUST Repository

    Takanashi, Mamoru

    2012-05-01

    Moveout analysis of wide-azimuth reflection data seldom takes into account lateral velocity variations on the scale of spreadlength. However, velocity lenses (such as channels and reefs) in the overburden can cause significant, laterally varying errors in the moveout parameters and distortions in data interpretation. Here, we present an analytic expression for the normal-moveout (NMO) ellipse in stratified media with lateral velocity variation. The contribution of lateral heterogeneity (LH) is controlled by the second derivatives of the interval vertical traveltime with respect to the horizontal coordinates, along with the depth and thickness of the LH layer. This equation provides a quick estimate of the influence of velocity lenses and can be used to substantially mitigate the lens-induced distortions in the effective and interval NMO ellipses. To account for velocity lenses in nonhyperbolic moveout inversion of wide-azimuth data, we propose a prestack correction algorithm that involves computation of the lens-induced traveltime distortion for each recorded trace. The overburden is assumed to be composed of horizontal layers (one of which contains the lens), but the target interval can be laterally heterogeneous with dipping or curved interfaces. Synthetic tests for horizontally layered models confirm that our algorithm accurately removes lens-related azimuthally varying traveltime shifts and errors in the moveout parameters. The developed methods should increase the robustness of seismic processing of wide-azimuth surveys, especially those acquired for fracture-characterization purposes. © 2012 Society of Exploration Geophysicists.

  15. Moveout analysis of wide-azimuth data in the presence of lateral velocity variation

    KAUST Repository

    Takanashi, Mamoru; Tsvankin, Ilya

    2012-01-01

    Moveout analysis of wide-azimuth reflection data seldom takes into account lateral velocity variations on the scale of spreadlength. However, velocity lenses (such as channels and reefs) in the overburden can cause significant, laterally varying errors in the moveout parameters and distortions in data interpretation. Here, we present an analytic expression for the normal-moveout (NMO) ellipse in stratified media with lateral velocity variation. The contribution of lateral heterogeneity (LH) is controlled by the second derivatives of the interval vertical traveltime with respect to the horizontal coordinates, along with the depth and thickness of the LH layer. This equation provides a quick estimate of the influence of velocity lenses and can be used to substantially mitigate the lens-induced distortions in the effective and interval NMO ellipses. To account for velocity lenses in nonhyperbolic moveout inversion of wide-azimuth data, we propose a prestack correction algorithm that involves computation of the lens-induced traveltime distortion for each recorded trace. The overburden is assumed to be composed of horizontal layers (one of which contains the lens), but the target interval can be laterally heterogeneous with dipping or curved interfaces. Synthetic tests for horizontally layered models confirm that our algorithm accurately removes lens-related azimuthally varying traveltime shifts and errors in the moveout parameters. The developed methods should increase the robustness of seismic processing of wide-azimuth surveys, especially those acquired for fracture-characterization purposes. © 2012 Society of Exploration Geophysicists.

  16. Investigation of 1-D crustal velocity structure beneath Izmir Gulf and surroundings by using local earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Polat, Orhan, E-mail: orhan.polat@deu.edu.tr [Dokuz Eylul University, Faculty of Engineering, Geophysical Engineering Department, Izmir (Turkey); Özer, Çaglar, E-mail: caglar.ozer@deu.edu.tr [Dokuz Eylul University, Faculty of Engineering, Geophysical Engineering Department, Izmir (Turkey); Dokuz Eylul University, The Graduate School of Natural and Applied Sciences, Department of Geophysical Engineering, Izmir-Turkey (Turkey)

    2016-04-18

    In this study, we examined the one-dimensional crustal velocity structure of the Izmir gulf and surroundings. We used nearly one thousand high-quality (A and B class) earthquake records from the Disaster and Emergency Management Presidency (AFAD) [1], Bogazici University (BU-KOERI) [2] and the National Observatory of Athens (NOA) [3,4]. We ran several synthetic tests to assess the robustness of the new velocity structure, and examined phase residuals, RMS values and shifting tests. After evaluating these tests, we determined a one-dimensional velocity structure and obtained minimum 1-D P wave velocities, hypocentral parameters and earthquake locations from the VELEST algorithm. The distribution of earthquakes was visibly improved by using the new minimum velocity structure.

  17. Investigation of 1-D crustal velocity structure beneath Izmir Gulf and surroundings by using local earthquakes

    International Nuclear Information System (INIS)

    Polat, Orhan; Özer, Çaglar

    2016-01-01

    In this study, we examined the one-dimensional crustal velocity structure of the Izmir gulf and surroundings. We used nearly one thousand high-quality (A and B class) earthquake records from the Disaster and Emergency Management Presidency (AFAD) [1], Bogazici University (BU-KOERI) [2] and the National Observatory of Athens (NOA) [3,4]. We ran several synthetic tests to assess the robustness of the new velocity structure, and examined phase residuals, RMS values and shifting tests. After evaluating these tests, we determined a one-dimensional velocity structure and obtained minimum 1-D P wave velocities, hypocentral parameters and earthquake locations from the VELEST algorithm. The distribution of earthquakes was visibly improved by using the new minimum velocity structure.

  18. A fast autofocus algorithm for synthetic aperture radar processing

    DEFF Research Database (Denmark)

    Dall, Jørgen

    1992-01-01

    High-resolution synthetic aperture radar (SAR) imaging requires the motion of the radar platform to be known very accurately. Otherwise, phase errors are induced in the processing of the raw SAR data, and bad focusing results. In particular, a constant error in the measured along-track velocity o...... of magnitude lower than that of other algorithms providing comparable accuracies is presented. The algorithm has been tested on data from the Danish Airborne SAR, and the performance is compared with that of the traditional map drift algorithm...

  19. The radial velocities of planetary nebulae in NGC 3379

    Science.gov (United States)

    Ciardullo, Robin; Jacoby, George H.; Dejonghe, Herwig B.

    1993-09-01

    We present the results of a radial velocity survey of planetary nebulae (PNs) in the normal elliptical galaxy NGC 3379 performed with the Kitt Peak 4 m telescope and the NESSIE multifiber spectrograph. In two half-nights, we measured 29 PNs with projected galactocentric distances between 0.4 and 3.8 effective radii with an observational uncertainty of about 7 km/s. These data extend three times farther into the halo than any previous absorption-line velocity study. The velocity dispersion and photometric profile of the galaxy agrees extremely well with that expected from a constant mass-to-light ratio, isotropic orbit Jaffe model with M/L(B) about 7; the best-fitting anisotropic models from a quadratic programming algorithm also give M/L(B) about 7. The data are consistent with models that contain no dark matter within 3.5 effective radii of the galaxy's nucleus.

  20. Simulation of spreading depolarization trajectories in cerebral cortex: Correlation of velocity and susceptibility in patients with aneurysmal subarachnoid hemorrhage

    Directory of Open Access Journals (Sweden)

    Denny Milakara

    2017-01-01

    Full Text Available In many cerebral grey matter structures including the neocortex, spreading depolarization (SD) is the principal mechanism of the near-complete breakdown of the transcellular ion gradients with abrupt water influx into neurons. Accordingly, SDs are abundantly recorded in patients with traumatic brain injury, spontaneous intracerebral hemorrhage, aneurysmal subarachnoid hemorrhage (aSAH) and malignant hemispheric stroke using subdural electrode strips. SD is observed as a large slow potential change, spreading in the cortex at velocities between 2 and 9 mm/min. Velocity and SD susceptibility typically correlate positively in various animal models. In patients monitored in neurocritical care, the Co-Operative Studies on Brain Injury Depolarizations (COSBID) recommends several variables to quantify SD occurrence and susceptibility, although accurate measures of SD velocity have not been possible. Therefore, we developed an algorithm to estimate SD velocities based on reconstructing SD trajectories of the wave-front's curvature center from magnetic resonance imaging scans and time-of-SD-arrival-differences between subdural electrode pairs. We then correlated variables indicating SD susceptibility with algorithm-estimated SD velocities in twelve aSAH patients. Highly significant correlations supported the algorithm's validity. The trajectory search failed significantly more often for SDs recorded directly over emerging focal brain lesions, suggesting that in humans, as in animals, the complexity of SD propagation paths increases in tissue undergoing injury.
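
    The per-patient trajectory reconstruction described above requires MRI geometry and wave-front modelling. As a much simpler illustration of the underlying idea, the sketch below estimates a propagation speed from the arrival-time difference between two subdural electrode contacts with a known along-cortex separation; the spacing and arrival times are made-up numbers, not patient data.

```python
def sd_velocity_mm_per_min(separation_mm, t_arrival_a_s, t_arrival_b_s):
    """Crude spreading-depolarization speed estimate from two electrodes.

    Assumes the wave front travels along the straight path between the two
    contacts; the cited study's trajectory reconstruction instead accounts
    for the curved cortical surface and the wave-front geometry.
    """
    dt_min = abs(t_arrival_b_s - t_arrival_a_s) / 60.0
    if dt_min == 0:
        raise ValueError("identical arrival times: velocity undefined")
    return separation_mm / dt_min

# Hypothetical example: contacts 10 mm apart, arrival times 120 s apart,
# giving 5 mm/min, within the 2-9 mm/min range quoted in the abstract.
print(sd_velocity_mm_per_min(10.0, 0.0, 120.0))
```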

  1. Distributed Extended Kalman Filter for Position, Velocity, Time, Estimation in Satellite Navigation Receivers

    Directory of Open Access Journals (Sweden)

    O. Jakubov

    2013-09-01

    Full Text Available Common techniques for position-velocity-time estimation in satellite navigation, iterative least squares and the extended Kalman filter, involve matrix operations. The matrix inversion and inclusion of a matrix library pose requirements on a computational power and operating platform of the navigation processor. In this paper, we introduce a novel distributed algorithm suitable for implementation in simple parallel processing units each for a tracked satellite. Such a unit performs only scalar sum, subtraction, multiplication, and division. The algorithm can be efficiently implemented in hardware logic. Given the fast position-velocity-time estimator, frequent estimates can foster dynamic performance of a vector tracking receiver. The algorithm has been designed from a factor graph representing the extended Kalman filter by splitting vector nodes into scalar ones resulting in a cyclic graph with few iterations needed. Monte Carlo simulations have been conducted to investigate convergence and accuracy. Simulation case studies for a vector tracking architecture and experimental measurements with a real-time software receiver developed at CTU in Prague were conducted. The algorithm offers compromises in stability, accuracy, and complexity depending on the number of iterations. In scenarios with a large number of tracked satellites, it can outperform the traditional methods at low complexity.
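
    The distributed factor-graph algorithm itself is not reproduced here. As an illustration of the kind of matrix-free arithmetic the per-satellite processing units are said to be limited to, the sketch below runs a one-dimensional Kalman filter update using only scalar addition, subtraction, multiplication, and division; the random-walk state model and the noise values are illustrative assumptions.

```python
def scalar_kalman_step(x, p, z, q, r):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances
    Only scalar +, -, *, / are used, mirroring the operations available to
    the distributed per-satellite units described in the record above.
    """
    # Predict (random-walk state model, assumed for illustration)
    p = p + q
    # Update
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # corrected estimate
    p = (1.0 - k) * p        # corrected variance
    return x, p

# Hypothetical measurements converging on a constant value.
x, p = 0.0, 100.0
for z in [10.2, 9.8, 10.1, 10.05, 9.95]:
    x, p = scalar_kalman_step(x, p, z, q=0.01, r=1.0)
print(x, p)
```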

  2. RESPONSE OF STRUCTURES TO HIGH VELOCITY IMPACTS: A GENERALIZED ALGORITHM

    Directory of Open Access Journals (Sweden)

    Aversh'ev Anatoliy Sergeevich

    2012-10-01

    Full Text Available In this paper, a high velocity impact produced by a spherical striker and a target are considered; different stages of loading and unloading, target deformations and propagation of non-stationary wave surfaces within the target are analyzed. The problem of the strike modeling and subsequent deformations is solved by using not only the equations of mechanics of deformable rigid bodies, but also fluid mechanics equations. The target material is simulated by means of an ideal "plastic gas". Modeling results and theoretical calculations are compared to the experimental results. The crater depth, its correlation with the striker diameter, values of the pressure and deformations of the target underneath the contact area are determined as the main characteristics of dynamic interaction.

  3. Homogenization and implementation of a 3D regional velocity model in Mexico for its application in moment tensor inversion of intermediate-magnitude earthquakes

    Science.gov (United States)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Caló, Marco

    2017-04-01

    Moment tensor inversions for intermediate and small earthquakes (M. < 4.5) are challenging as they principally excite relatively short period seismic waves that interact strongly with local heterogeneities. Incorporating detailed regional 3D velocity models permits obtaining realistic synthetic seismograms and recover the seismic source parameters these smaller events. Two 3D regional velocity models have recently been developed for Mexico, using surface waves and seismic noise tomography (Spica et al., 2016; Gaite et al., 2015), which could be used to model the waveforms of intermediate magnitud earthquakes in this region. Such models are parameterized as layered velocity profiles and for some of the profiles, the velocity difference between two layers are considerable. The "jump" in velocities between two layers is inconvenient for some methods and algorithms that calculate synthetic waveforms, in particular for the method that we are using, the spectral element method (SPECFEM3D GLOBE, Komatitsch y Tromp, 2000), when the mesh does not follow the layer boundaries. In order to make the velocity models more easily implementec in SPECFEM3D GLOBE it is neccesary to apply a homogenization algorithm (Capdeville et al., 2015) such that the (now anisotropic) layer velocities are smoothly varying with depth. In this work, we apply a homogenization algorithm to the regional velocity models in México for implementing them in SPECFEM3D GLOBE, calculate synthetic waveforms for intermediate-magnitude earthquakes in México and invert them for the seismic moment tensor.

  4. Description of multiple processes on the basis of triangulation in the velocity space

    International Nuclear Information System (INIS)

    Baldin, A.M.; Baldin, A.A.

    1986-01-01

    A method of the construction of polyhedrons in the relative four-velocity space is suggested which gives a complete description of multiple processes. A method of the consideration of a general case, when the total number of the relative velocity variables exceeds the number of the degrees of freedom, is also given. The account of the particular features of the polyhedrons due to the clusterization in the velocity space, as well as the account of the existence of intermediate asymptotics and the correlation depletion principle makes it possible to propose an algorithm for processing much larger bulk of experimental information on multiple processes as compared to the inclusive approach

  5. Numerical calculation of velocity distribution near a vertical flat plate immersed in bubble flow

    International Nuclear Information System (INIS)

    Matsuura, Akihiro; Nakamura, Hajime; Horihata, Hideyuki; Hiraoka, Setsuro; Aragaki, Tsutomu; Yamada, Ikuho; Isoda, Shinji.

    1992-01-01

    Liquid and gas velocity distributions for bubble flow near a vertical flat plate were calculated numerically by using the SIMPLER method, where the flow was assumed to be laminar, two-dimensional, and at steady state. The two-fluid flow model was used in the numerical analysis. To calculate the drag force on a small bubble, Stokes' law for a rigid sphere is applicable. The dimensionless velocity distributions which were arranged with characteristic boundary layer thickness and maximum liquid velocity were adjusted with a single line and their forms were similar to that for single-phase wall-jet flow. The average wall shear stress derived from the velocity gradient at the plate wall was strongly affected by bubble diameter but not by inlet liquid velocity. The present dimensionless velocity distributions obtained numerically agreed well with previous experimental results, and the proposed numerical algorithm was validated. (author)

  6. Continuous Data Assimilation for a 2D Bénard Convection System Through Horizontal Velocity Measurements Alone

    Science.gov (United States)

    Farhat, Aseel; Lunasin, Evelyn; Titi, Edriss S.

    2017-06-01

    In this paper we propose a continuous data assimilation (downscaling) algorithm for a two-dimensional Bénard convection problem. Specifically we consider the two-dimensional Boussinesq system of a layer of incompressible fluid between two solid horizontal walls, with no-normal flow and stress-free boundary conditions on the walls, and the fluid is heated from the bottom and cooled from the top. In this algorithm, we incorporate the observables as a feedback (nudging) term in the evolution equation of the horizontal velocity. We show that under an appropriate choice of the nudging parameter and the size of the spatial coarse mesh observables, and under the assumption that the observed data are error free, the solution of the proposed algorithm converges at an exponential rate, asymptotically in time, to the unique exact unknown reference solution of the original system, associated with the observed data on the horizontal component of the velocity.
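
    The Bénard convection analysis above is a PDE-level result. As a toy illustration of the nudging idea only, the sketch below assimilates observations of a single component of a low-order ODE system (the Lorenz-63 model, used here purely as a stand-in) by adding a feedback term -mu*(x - x_obs) to the observed component, analogous to the feedback term added to the horizontal-velocity equation. The choice of model, nudging parameter mu, and time step are assumptions for illustration.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudged_run(obs_x, dt=0.01, mu=50.0):
    """Toy continuous data assimilation: nudge only the observed component.

    obs_x : observations of the x-component, one per time step.
    The feedback -mu*(x - obs) plays the role of the nudging term in the
    cited downscaling algorithm (which acts on the horizontal velocity).
    """
    s = np.array([1.0, 1.0, 1.0])         # deliberately wrong initial condition
    for xo in obs_x:
        ds = lorenz_rhs(s)
        ds[0] += -mu * (s[0] - xo)        # nudging feedback on observed variable
        s = s + dt * ds                   # forward Euler step (illustrative)
    return s

# Generate a "truth" run, observe its x-component, and assimilate it.
truth = np.array([8.0, 0.0, 30.0])
xs_obs = []
for _ in range(2000):
    truth = truth + 0.01 * lorenz_rhs(truth)
    xs_obs.append(truth[0])
final = nudged_run(xs_obs)
print(np.abs(final - truth))   # errors in all components should have decayed
```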

  7. Handoff Triggering and Network Selection Algorithms for Load-Balancing Handoff in CDMA-WLAN Integrated Networks

    Directory of Open Access Journals (Sweden)

    Khalid Qaraqe

    2008-10-01

    Full Text Available This paper proposes a novel vertical handoff algorithm between WLAN and CDMA networks to enable the integration of these networks. The proposed vertical handoff algorithm assumes a handoff decision process (handoff triggering and network selection). The handoff trigger is decided based on the received signal strength (RSS). To reduce the likelihood of unnecessary false handoffs, the distance criterion is also considered. As a network selection mechanism, based on the wireless channel assignment algorithm, this paper proposes a context-based network selection algorithm and the corresponding communication algorithms between WLAN and CDMA networks. This paper focuses on a handoff triggering criterion which uses both the RSS and distance information, and a network selection method which uses context information such as the dropping probability, blocking probability, GoS (grade of service), and number of handoff attempts. As a decision-making criterion, the velocity threshold is determined to optimize the system performance. The optimal velocity threshold is adjusted to assign the available channels to the mobile stations using four handoff strategies. The four handoff strategies are evaluated and compared with each other in terms of GoS. Finally, the proposed scheme is validated by computer simulations.

  8. Handoff Triggering and Network Selection Algorithms for Load-Balancing Handoff in CDMA-WLAN Integrated Networks

    Directory of Open Access Journals (Sweden)

    Kim Jang-Sub

    2008-01-01

    Full Text Available This paper proposes a novel vertical handoff algorithm between WLAN and CDMA networks to enable the integration of these networks. The proposed vertical handoff algorithm assumes a handoff decision process (handoff triggering and network selection). The handoff trigger is decided based on the received signal strength (RSS). To reduce the likelihood of unnecessary false handoffs, the distance criterion is also considered. As a network selection mechanism, based on the wireless channel assignment algorithm, this paper proposes a context-based network selection algorithm and the corresponding communication algorithms between WLAN and CDMA networks. This paper focuses on a handoff triggering criterion which uses both the RSS and distance information, and a network selection method which uses context information such as the dropping probability, blocking probability, GoS (grade of service), and number of handoff attempts. As a decision-making criterion, the velocity threshold is determined to optimize the system performance. The optimal velocity threshold is adjusted to assign the available channels to the mobile stations using four handoff strategies. The four handoff strategies are evaluated and compared with each other in terms of GoS. Finally, the proposed scheme is validated by computer simulations.
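
    As a toy illustration of the RSS-plus-distance trigger described in the two records above, the sketch below fires a WLAN-to-CDMA handoff only when the received signal strength drops below a threshold, the mobile is far enough from the access point, and its speed exceeds a velocity threshold. All threshold values and the decision structure are assumptions for illustration; the papers' context-based network selection (GoS, blocking/dropping probabilities) is not reproduced.

```python
def should_handoff(rss_dbm, distance_m, speed_mps,
                   rss_threshold_dbm=-80.0, distance_threshold_m=90.0,
                   velocity_threshold_mps=5.0):
    """Toy vertical-handoff trigger combining RSS, distance, and speed.

    Returns True when the WLAN link is weak AND the mobile is near the
    coverage edge AND it is moving fast enough that staying on the WLAN
    would likely lead to a dropped connection.  Thresholds are illustrative.
    """
    weak_signal = rss_dbm < rss_threshold_dbm
    near_edge = distance_m > distance_threshold_m
    fast_user = speed_mps > velocity_threshold_mps
    return weak_signal and near_edge and fast_user

# A slow pedestrian near the edge stays on WLAN; a fast vehicle hands off.
print(should_handoff(-85.0, 95.0, 1.2))   # False (slow user)
print(should_handoff(-85.0, 95.0, 12.0))  # True  (fast user)
```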

  9. Factors controlling the field settling velocity of cohesive sediment in estuaries

    DEFF Research Database (Denmark)

    Pejrup, Morten; Mikkelsen, Ole

    2010-01-01

    in the correlation of the description of W-50 and the controlling parameters from each area can be obtained. A generic algorithm describing the data from all the investigated areas is suggested. It works well within specific tidal areas but fails to give a generic description of the field settling velocity....

  10. Linac design algorithm with symmetric segments

    International Nuclear Information System (INIS)

    Takeda, Harunori; Young, L.M.; Nath, S.; Billen, J.H.; Stovall, J.E.

    1996-01-01

    The cell lengths in linacs of traditional design are typically graded as a function of particle velocity. By making groups of cells and individual cells symmetric in both the CCDTL and CCL, the cavity design as well as the mechanical design and fabrication is simplified without compromising the performance. We have implemented a design algorithm in the PARMILA code in which cells and multi-cavity segments are made symmetric, significantly reducing the number of unique components. Using the symmetric algorithm, a sample linac design was generated and its performance compared with that of a similar one of conventional design.

  11. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  12. Fast simulated annealing inversion of surface waves on pavement using phase-velocity spectra

    Science.gov (United States)

    Ryden, N.; Park, C.B.

    2006-01-01

    The conventional inversion of surface waves depends on modal identification of measured dispersion curves, which can be ambiguous. It is possible to avoid mode-number identification and extraction by inverting the complete phase-velocity spectrum obtained from a multichannel record. We use the fast simulated annealing (FSA) global search algorithm to minimize the difference between the measured phase-velocity spectrum and that calculated from a theoretical layer model, including the field setup geometry. Results show that this algorithm can help one avoid getting trapped in local minima while searching for the best-matching layer model. The entire procedure is demonstrated on synthetic and field data for asphalt pavement. The viscoelastic properties of the top asphalt layer are taken into account, and the inverted asphalt stiffness as a function of frequency compares well with laboratory tests on core samples. The thickness and shear-wave velocity of the deeper embedded layers are resolved within 10% deviation from those values measured separately during pavement construction. The proposed method may be equally applicable to normal soil site investigation and in the field of ultrasonic testing of materials. ?? 2006 Society of Exploration Geophysicists.
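
    The sketch below is a generic fast-simulated-annealing style minimization loop applied to a stand-in misfit function. In the cited work the misfit is the difference between measured and modeled phase-velocity spectra, which is not reproduced here; the Cauchy-distributed proposal and the 1/(1+k) cooling schedule are common FSA choices used as assumptions, not necessarily the authors' exact settings.

```python
import numpy as np

def fsa_minimize(misfit, x0, lower, upper, n_iter=5000, t0=1.0, seed=0):
    """Fast-simulated-annealing style global search (illustrative).

    misfit : callable returning the scalar objective to minimize
             (in the cited work: mismatch between measured and modeled
             phase-velocity spectra; here any function will do).
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    f = misfit(x)
    best_x, best_f = x.copy(), f
    for k in range(n_iter):
        t = t0 / (1.0 + k)                       # fast cooling schedule
        step = t * rng.standard_cauchy(x.size)   # heavy-tailed proposal
        cand = np.clip(x + 0.1 * step * (upper - lower), lower, upper)
        fc = misfit(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if fc < f or rng.random() < np.exp(-(fc - f) / max(t, 1e-12)):
            x, f = cand, fc
            if f < best_f:
                best_x, best_f = x.copy(), f
    return best_x, best_f

# Stand-in "layer model" with two parameters (thickness, shear velocity).
target = np.array([0.15, 1200.0])                # pretend true layer values
misfit = lambda m: np.sum(((m - target) / target) ** 2)
x, f = fsa_minimize(misfit, x0=[0.3, 800.0],
                    lower=np.array([0.05, 300.0]),
                    upper=np.array([0.5, 2500.0]))
print(x, f)
```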

  13. A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm

    Science.gov (United States)

    Ab Aziz, Nor Azlina; Mubin, Marizan; Mohamad, Mohd Saberi; Ab Aziz, Kamarulzaman

    2014-01-01

    In the original particle swarm optimisation (PSO) algorithm, the particles' velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm's best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real world optimisation problem are used to study the performance of SA-PSO, which is compared with the performances of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO has performed consistently well. PMID:25121109

  14. Control of baker’s yeast fermentation : PID and fuzzy algorithms

    OpenAIRE

    Machado, Carlos; Gomes, Pedro; Soares, Rui; Pereira, Silvia; Soares, Filomena

    2001-01-01

    A MATLAB/SIMULINK-based simulator was employed for studies concerning the control of baker's yeast fed-batch fermentation. Four control algorithms were implemented and compared: the classical PID control, two discrete versions (the modified velocity and position algorithms), and a fuzzy law. The simulation package was seen to be an efficient tool for simulating and testing control strategies for the non-linear process.
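
    The record above mentions two discrete PID variants, the position and the velocity (incremental) algorithms. A minimal sketch of both textbook difference equations follows; the sampling time and gains are placeholders and this is not the cited simulator's implementation.

```python
class PositionPID:
    """Discrete 'position' form: u_k = Kp*e_k + Ki*Ts*sum(e) + Kd*(e_k - e_{k-1})/Ts."""
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.e_prev = 0.0

    def step(self, error):
        self.integral += error * self.ts
        derivative = (error - self.e_prev) / self.ts
        self.e_prev = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


class VelocityPID:
    """Discrete 'velocity' (incremental) form:
    du_k = Kp*(e_k - e_{k-1}) + Ki*Ts*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2})/Ts,
    and the new output is u_{k-1} + du_k (no separate integrator state)."""
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.e1 = self.e2 = 0.0
        self.u = 0.0

    def step(self, error):
        du = (self.kp * (error - self.e1)
              + self.ki * self.ts * error
              + self.kd * (error - 2 * self.e1 + self.e2) / self.ts)
        self.e2, self.e1 = self.e1, error
        self.u += du
        return self.u

# Both forms applied to the same error sequence (illustrative gains).
pos, vel = PositionPID(1.2, 0.5, 0.05, 0.1), VelocityPID(1.2, 0.5, 0.05, 0.1)
for e in [1.0, 0.8, 0.5, 0.2, 0.0]:
    print(round(pos.step(e), 3), round(vel.step(e), 3))
```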

  15. A First Layered Crustal Velocity Model for the Western Solomon Islands: Inversion of Measured Group Velocity of Surface Waves using Ambient Noise Cross-Correlation

    Science.gov (United States)

    Ku, C. S.; Kuo, Y. T.; Chao, W. A.; You, S. H.; Huang, B. S.; Chen, Y. G.; Taylor, F. W.; Yih-Min, W.

    2017-12-01

    Two earthquakes, MW 8.1 in 2007 and MW 7.1 in 2010, hit the Western Province of the Solomon Islands and caused extensive damage, but also motivated us to set up the first seismic network in this area. During the first phase, eight broadband seismic stations (BBS) were installed around the rupture zone of the 2007 earthquake. Using one year of seismic records, we cross-correlated the vertical component of ambient noise recorded at our BBS and calculated Rayleigh-wave group velocity dispersion curves for inter-station paths. A genetic algorithm is applied to invert for a one-dimensional crustal velocity model by fitting the averaged dispersion curves. The one-dimensional crustal velocity model consists of two layers over a half-space, representing the upper crust, lower crust, and uppermost mantle, respectively. The resulting thicknesses of the upper and lower crust are 6.4 and 14.2 km, respectively. Shear-wave velocities (VS) of the upper crust, lower crust, and uppermost mantle are 2.53, 3.57 and 4.23 km/s, with VP/VS ratios of 1.737, 1.742 and 1.759, respectively. This first layered crustal velocity model can be used as a preliminary reference for further studies of seismic sources such as earthquake activity and tectonic tremor.

  16. Universal algorithms and programs for calculating the motion parameters in the two-body problem

    Science.gov (United States)

    Bakhshiyan, B. T.; Sukhanov, A. A.

    1979-01-01

    The algorithms and FORTRAN programs for computing positions and velocities, orbital elements and first and second partial derivatives in the two-body problem are presented. The algorithms are applicable for any value of eccentricity and are convenient for computing various navigation parameters.
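
    The cited report covers universal-variable formulations valid for any eccentricity; the sketch below does only the simplest piece of that task, solving Kepler's equation for an elliptical orbit by Newton iteration and returning in-plane position and velocity. The gravitational parameter (Earth value) and orbit numbers are illustrative, and the report's FORTRAN routines are not reproduced.

```python
import math

def kepler_E(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration (ellipse only)."""
    E = M if e < 0.8 else math.pi
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def position_velocity(a, e, M, mu=398600.4418):
    """In-plane position (km) and velocity (km/s) on an elliptical orbit.

    a: semi-major axis [km], e: eccentricity (<1), M: mean anomaly [rad],
    mu: gravitational parameter (Earth value assumed for illustration).
    """
    E = kepler_E(M, e)
    r = a * (1.0 - e * math.cos(E))          # orbital radius
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1.0 - e * e) * math.sin(E)
    n = math.sqrt(mu / a**3)                 # mean motion
    vx = -a * n * math.sin(E) / (1.0 - e * math.cos(E))
    vy = a * n * math.sqrt(1.0 - e * e) * math.cos(E) / (1.0 - e * math.cos(E))
    return (x, y), (vx, vy), r

print(position_velocity(a=7000.0, e=0.1, M=1.0))
```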

  17. A Biologically-Inspired Power Control Algorithm for Energy-Efficient Cellular Networks

    Directory of Open Access Journals (Sweden)

    Hyun-Ho Choi

    2016-03-01

    Full Text Available Most of the energy used to operate a cellular network is consumed by the base station (BS), and reducing the transmission power of a BS can therefore afford a substantial reduction in the amount of energy used in a network. In this paper, we propose a distributed transmit power control (TPC) algorithm inspired by bird flocking behavior as a means of improving the energy efficiency of a cellular network. Just as each bird in a flock attempts to match its velocity with the average velocity of adjacent birds, in the proposed algorithm each mobile station (MS) in a cell matches its rate with the average rate of the co-channel MSs in adjacent cells by controlling the transmit power of its serving BS. We verify that this bio-inspired TPC algorithm using a local rate-averaging process achieves exponential convergence and maximizes the minimum rate of the MSs concerned. Simulation results show that the proposed TPC algorithm follows the same convergence properties as the flocking algorithm and also effectively reduces the power consumption at the BSs while maintaining a low outage probability as the inter-cell interference increases; in so doing, it significantly improves the energy efficiency of a cellular network.
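
    A toy version of the rate-averaging idea in the record above: each cell iteratively nudges its BS transmit power so its user's rate moves toward the average rate of the co-channel users. The interference model, channel gains, step size, and number of cells below are illustrative assumptions and not the paper's algorithm or its convergence analysis.

```python
import math

def rate(p_own, p_others, gain_own=1.0, gain_cross=0.1, noise=0.1):
    """Shannon-style rate with co-channel interference (toy model)."""
    sinr = gain_own * p_own / (noise + gain_cross * sum(p_others))
    return math.log2(1.0 + sinr)

def flocking_tpc(n_cells=4, n_iter=200, step=0.05, p0=1.0):
    """Each BS adjusts its power so its user's rate approaches the average
    rate of the co-channel users -- the 'velocity matching' analogy of the
    cited work.  Purely illustrative dynamics."""
    p = [p0 * (i + 1) for i in range(n_cells)]       # unequal starting powers
    rates = []
    for _ in range(n_iter):
        rates = [rate(p[i], [p[j] for j in range(n_cells) if j != i])
                 for i in range(n_cells)]
        avg = sum(rates) / n_cells
        # Raise power if below the local average rate, lower it if above.
        p = [max(0.05, p[i] + step * (avg - rates[i])) for i in range(n_cells)]
    return p, rates

powers, rates = flocking_tpc()
print([round(x, 3) for x in powers], [round(r, 3) for r in rates])
```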

  18. S-velocity structure in Cimandiri fault zone derived from neighbourhood inversion of teleseismic receiver functions

    Science.gov (United States)

    Syuhada; Anggono, T.; Febriani, F.; Ramdhan, M.

    2018-03-01

    The availability of information about a realistic Earth velocity model in the fault zone is crucial for quantifying seismic hazard, for example in ground motion modelling and in the determination of earthquake locations and focal mechanisms. In this report, we use teleseismic receiver functions to invert for the S-velocity model beneath a seismic station located in the Cimandiri fault zone using the neighbourhood algorithm inversion method. The result suggests that the crustal thickness beneath the station is about 32-38 km. Furthermore, low-velocity layers with high Vp/Vs exist in the lower crust, which may indicate the presence of hot material ascending from the subducted slab.

  19. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    Science.gov (United States)

    Vlasenko, Andrey; Steele, Edward C. C.; Nimmo-Smith, W. Alex M.

    2015-06-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements become too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user-background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data is able to be restored in accordance with its hydrodynamical basis. The use of this is not dependent on types of flow, types of gaps or noise in measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers causes an increase in the noise level that, without such denoising, will contaminate the measurements.

  20. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    International Nuclear Information System (INIS)

    Vlasenko, Andrey; Steele, Edward C C; Nimmo-Smith, W Alex M

    2015-01-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements become too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user-background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data is able to be restored in accordance with its hydrodynamical basis. The use of this is not dependent on types of flow, types of gaps or noise in measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers causes an increase in the noise level that, without such denoising, will contaminate the measurements. (paper)

  1. The impact of groundwater velocity fields on streamlines in an aquifer system with a discontinuous aquitard (Inner Mongolia, China)

    Science.gov (United States)

    Wu, Qiang; Zhao, Yingwang; Xu, Hua

    2018-04-01

    Many numerical methods that simulate groundwater flow, particularly the continuous Galerkin finite element method, do not produce velocity information directly. Many algorithms have been proposed to improve the accuracy of velocity fields computed from hydraulic potentials. The differences in the streamlines generated from velocity fields obtained using different algorithms are presented in this report. The superconvergence method employed by FEFLOW, a popular commercial code, and some dual-mesh methods proposed in recent years are selected for comparison. Applications in which streamlines are used to depict hydrogeologic conditions are considered, and errors in streamlines are shown to lead to notable errors in boundary conditions, the locations of material interfaces, fluxes and conductivities. Furthermore, the effects of the procedures used in these two types of methods, including velocity integration and local conservation, are analyzed. The method of interpolating velocities across edges using fluxes is shown to be able to eliminate errors associated with refraction points that are not located along material interfaces and with streamlines ending at no-flow boundaries. Local conservation is shown to be a crucial property of velocity fields and can result in more accurate streamline densities. A case study involving both three-dimensional and two-dimensional cross-sectional models of a coal mine in Inner Mongolia, China, is used to support the conclusions presented.
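
    The comparison of edge-flux interpolation schemes is not reproduced here. As a minimal illustration of the downstream step, tracing a streamline through a given velocity field, the sketch below integrates dx/ds = v(x) with fixed-step classical RK4 on an analytic 2-D field; the field and step size are placeholders, and a real application would instead interpolate nodal or edge-based velocities from the numerical model.

```python
import numpy as np

def trace_streamline(velocity, x0, step=0.01, n_steps=500):
    """Trace a streamline with classical RK4 on dx/ds = v(x).

    velocity : callable mapping a 2-D point to a 2-D velocity vector
               (here an analytic field; in a groundwater model this would
               be interpolated from the computed velocity field).
    """
    x = np.array(x0, float)
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * step * k1)
        k3 = velocity(x + 0.5 * step * k2)
        k4 = velocity(x + step * k3)
        x = x + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)

# Illustrative divergence-free field: solid-body rotation about the origin.
rotation = lambda p: np.array([-p[1], p[0]])
path = trace_streamline(rotation, x0=[1.0, 0.0])
print(path[-1], np.linalg.norm(path[-1]))   # stays near radius 1 on a circle
```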

  2. Time dependent response of low velocity impact induced composite conical shells under multiple delamination

    Science.gov (United States)

    Dey, Sudip; Karmakar, Amit

    2014-02-01

    This paper presents the time dependent response of multiple delaminated angle-ply composite pretwisted conical shells subjected to low velocity normal impact. The finite element formulation is based on Mindlin's theory incorporating rotary inertia and effects of transverse shear deformation. An eight-noded isoparametric plate bending element is employed to satisfy the compatibility of deformation and equilibrium of resultant forces and moments at the delamination crack front. A multipoint constraint algorithm is incorporated which leads to asymmetric stiffness matrices. The modified Hertzian contact law which accounts for permanent indentation is utilized to compute the contact force, and the time dependent equations are solved by Newmark's time integration algorithm. Parametric studies are conducted with respect to triggering parameters like laminate configuration, location of delamination, angle of twist, velocity of impactor, and impactor's displacement for centrally impacted shells.
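
    The shell finite-element model above is beyond a short example, but the time-stepping ingredient it names, Newmark's integration, can be sketched on a single-degree-of-freedom system. The sketch uses the standard average-acceleration parameters (beta = 1/4, gamma = 1/2); the Hertzian contact law and multipoint constraints of the cited formulation are not included, and the load and system values are illustrative.

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Newmark time integration for m*u'' + c*u' + k*u = f(t), single DOF.

    f is an array of force samples, one per time step.  With beta=1/4 and
    gamma=1/2 this is the unconditionally stable average-acceleration
    scheme; FE codes apply the same family of updates to assembled matrices.
    """
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v0 - k * u0) / m
    keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n - 1):
        rhs = (f[i + 1]
               + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                      + (0.5 / beta - 1.0) * a[i])
               + c * (gamma / (beta * dt) * u[i]
                      + (gamma / beta - 1.0) * v[i]
                      + dt * (gamma / (2 * beta) - 1.0) * a[i]))
        u[i + 1] = rhs / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Short impulsive load on a 1-DOF oscillator (illustrative numbers).
dt, steps = 1e-3, 2000
force = np.zeros(steps); force[:50] = 100.0
u, v, a = newmark_sdof(m=1.0, c=0.5, k=400.0, f=force, dt=dt)
print(u.max())
```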

  3. Numerical simulation of a high velocity impact on fiber reinforced materials

    International Nuclear Information System (INIS)

    Thoma, Klaus; Vinckier, David

    1994-01-01

    Whereas the calculation of a high velocity impact on isotropic materials can be done on a routine basis, the simulation of the impact and penetration process into non-isotropic materials such as reinforced concrete or fiber reinforced materials is still a research task. We present the calculation of an impact of a metallic fragment on a modern protective wall structure. Such lightweight protective walls typically consist of two layers, a first outer layer made of a material with high hardness and a backing layer. The materials for the backing layer are preferably fiber reinforced materials. Such types of walls offer protection against fragments in a wide velocity range. For our calculations we used a non-linear finite element Lagrange code with explicit time integration. To be able to simulate the high velocity penetration process with continuous erosion of the impacting metallic fragment, we used our newly developed contact algorithm with eroding surfaces. This contact algorithm is vectorized to a high degree and is especially robust, as it was developed to work for a wide range of contact-impact problems. To model the behavior of the fiber reinforced material under the highly dynamic loads, we present a material model which was initially developed to calculate the crash behavior (automotive applications) of modern high strength fiber-matrix systems. The model can describe the failure and the post-failure behavior up to complete material crushing. A detailed simulation shows the impact of a metallic fragment with a velocity of 750 m/s on a protective wall with two layers, the deformation and erosion of the fragment and wall material, and the failure of the fiber reinforced material. ((orig.))

  4. High-velocity two-phase flow two-dimensional modeling

    International Nuclear Information System (INIS)

    Mathes, R.; Alemany, A.; Thilbault, J.P.

    1995-01-01

    The two-phase flow in the nozzle of a LMMHD (liquid metal magnetohydrodynamic) converter has been studied numerically and experimentally. A two-dimensional model for two-phase flow has been developed including the viscous terms (dragging and turbulence) and the interfacial mass, momentum and energy transfer between the phases. The numerical results were obtained by a finite volume method based on the SIMPLE algorithm. They have been verified by an experimental facility using air-water as a simulation pair and a phase Doppler particle analyzer for velocity and droplet size measurement. The numerical simulation of a lithium-cesium high-temperature pair showed that a nearly homogeneous and isothermal expansion of the two phases is possible with small pressure losses and high kinetic efficiencies. In the throat region a careful profiling is necessary to reduce the inertial effects on the liquid velocity field

  5. Smartphone-Based Indoor Integrated WiFi/MEMS Positioning Algorithm in a Multi-Floor Environment

    Directory of Open Access Journals (Sweden)

    Zengshan Tian

    2015-03-01

    Full Text Available Indoor positioning in a multi-floor environment by using a smartphone is considered in this paper. The positioning accuracy and robustness of WiFi fingerprinting-based positioning are limited due to the unexpected variation of WiFi measurements between floors. On this basis, we propose a novel smartphone-based integrated WiFi/MEMS positioning algorithm based on the robust extended Kalman filter (EKF). The proposed algorithm first relies on the gait detection approach and a quaternion algorithm to estimate the velocity and heading angles of the target. Second, the velocity and heading angles, together with the results of WiFi fingerprinting-based positioning, are considered as the input of the robust EKF for the sake of conducting two-dimensional (2D) positioning. Third, the proposed algorithm calculates the height of the target by using the real-time recorded barometer and geographic data. Finally, the experimental results show that the proposed algorithm achieves positioning accuracy with root mean square errors (RMSEs) less than 1 m in an actual multi-floor environment.

  6. Accuracy Analysis of Lunar Lander Terminal Guidance Algorithm

    Directory of Open Access Journals (Sweden)

    E. K. Li

    2017-01-01

    Full Text Available This article studies a proposed analytical algorithm of terminal guidance for a lunar lander. The analytical solution, which forms the basis of the algorithm, was obtained for a constant-acceleration trajectory and thrust vector orientation programs that are essentially linear with time. The main feature of the proposed algorithm is a completely analytical solution that provides the lander terminal guidance to the desired spot in 3D space when landing on an atmosphereless body, with no numerical procedures. To reach the 6 terminal conditions (the components of the position and velocity vectors at the final time), 6 guidance law parameters are used, namely time-to-go, the desired value of braking deceleration, the initial values of the pitch and yaw angles, and the rates of their change. In accordance with the principle of flexible trajectories, this algorithm assumes the implementation of a regularly updated control program that ensures reaching the terminal conditions from the current state corresponding to the control program update time. The guidance law parameters, which ensure that the terminal conditions are reached, are generated as a function of the current phase coordinates of the lander. The article examines the accuracy and reliability of the proposed analytical algorithm, which provides terminal guidance of the lander in 3D space, through mathematical modeling of the lander guidance from the circumlunar pre-landing orbit to the desired spot near the lunar surface. The desired terminal position of the lunar lander is specified by the selenographic latitude, longitude and altitude above the lunar surface. The impact of variations in orbital parameters on the terminal guidance accuracy has been studied. By varying the five initial orbit parameters (obliquity, ascending node longitude, argument of periapsis, periapsis height, apoapsis height) when the terminal spot is fixed, the statistical characteristics of the terminal guidance algorithm error according to the terminal

  7. A Numerical Instability in an ADI Algorithm for Gyrokinetics

    International Nuclear Information System (INIS)

    Belli, E.A.; Hammett, G.W.

    2004-01-01

    We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v parallel ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms

  8. A neural circuit for angular velocity computation

    Directory of Open Access Journals (Sweden)

    Samuel B Snider

    2010-12-01

    Full Text Available In one of the most remarkable feats of motor control in the animal world, some Diptera, such as the housefly, can accurately execute corrective flight maneuvers in tens of milliseconds. These reflexive movements are achieved by the halteres, gyroscopic force sensors, in conjunction with rapidly-tunable wing-steering muscles. Specifically, the mechanosensory campaniform sensilla located at the base of the halteres transduce and transform rotation-induced gyroscopic forces into information about the angular velocity of the fly's body. But how exactly does the fly's neural architecture generate the angular velocity from the lateral strain forces on the left and right halteres? To explore potential algorithms, we built a neuro-mechanical model of the rotation detection circuit. We propose a neurobiologically plausible method by which the fly could accurately separate and measure the three-dimensional components of an imposed angular velocity. Our model assumes a single sign-inverting synapse and formally resembles some models of directional selectivity by the retina. Using multidimensional error analysis, we demonstrate the robustness of our model under a variety of input conditions. Our analysis reveals the maximum information available to the fly given its physical architecture and the mathematics governing the rotation-induced forces at the haltere's end knob.

  9. PARALLEL ALGORITHM FOR THREE-DIMENSIONAL STOKES FLOW SIMULATION USING BOUNDARY ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    D. G. Pribytok

    2016-01-01

    Full Text Available A parallel computing technique for modeling three-dimensional viscous flow (Stokes flow) using the direct boundary element method is presented. The problem is solved in three phases: sampling and construction of the system of linear algebraic equations (SLAE), its solution, and finding the velocity of the liquid at predetermined points. For construction of the system and for finding the velocity, parallel algorithms using the CUDA graphics card programming technology have been developed and implemented. To solve the system of linear algebraic equations, the implemented software libraries are used. A comparison of time consumption for the three main algorithms is performed on the example of the calculation of viscous fluid motion in a three-dimensional cavity.

  10. Three-Dimensional Velocity Field De-Noising using Modal Projection

    Science.gov (United States)

    Frank, Sarah; Ameli, Siavash; Szeri, Andrew; Shadden, Shawn

    2017-11-01

    PCMRI and Doppler ultrasound are common modalities for imaging velocity fields inside the body (e.g. blood, air, etc) and PCMRI is increasingly being used for other fluid mechanics applications where optical imaging is difficult. This type of imaging is typically applied to internal flows, which are strongly influenced by domain geometry. While these technologies are evolving, it remains that measured data is noisy and boundary layers are poorly resolved. We have developed a boundary modal analysis method to de-noise 3D velocity fields such that the resulting field is divergence-free and satisfies no-slip/no-penetration boundary conditions. First, two sets of divergence-free modes are computed based on domain geometry. The first set accounts for flow through ``truncation boundaries'', and the second set of modes has no-slip/no-penetration conditions imposed on all boundaries. The modes are calculated by minimizing the velocity gradient throughout the domain while enforcing a divergence-free condition. The measured velocity field is then projected onto these modes using a least squares algorithm. This method is demonstrated on CFD simulations with artificial noise. Different degrees of noise and different numbers of modes are tested to reveal the capabilities of the approach. American Heart Association Award 17PRE33660202.

  11. A multiresolution remeshed Vortex-In-Cell algorithm using patches

    DEFF Research Database (Denmark)

    Rasmussen, Johannes Tophøj; Cottet, Georges-Henri; Walther, Jens Honore

    2011-01-01

    We present a novel multiresolution Vortex-In-Cell algorithm using patches of varying resolution. The Poisson equation relating the fluid vorticity and velocity is solved using Fast Fourier Transforms subject to free space boundary conditions. Solid boundaries are implemented using the semi...

  12. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    Science.gov (United States)

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distribution in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm computational efficiency and indicate that the method robustly can reconstruct electromagnetic (EM) wave velocity and attenuation distribution in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the finding of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to effectively find the optimal smoothness criterion in applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. ?? 2001 Elsevier Science B.V. All rights reserved.

  13. FrFT-CSWSF: Estimating cross-range velocities of ground moving targets using multistatic synthetic aperture radar

    Directory of Open Access Journals (Sweden)

    Li Chenlei

    2014-10-01

    Full Text Available Estimating cross-range velocity is a challenging task for space-borne synthetic aperture radar (SAR), which is important for ground moving target indication (GMTI). Because the velocity of a target is very small compared with that of the satellite, it is difficult to estimate it correctly using a conventional monostatic platform algorithm. To overcome this problem, a novel method employing multistatic SAR is presented in this letter. The proposed hybrid method, which is based on an extended space-time model (ESTIM) of the azimuth signal, has two steps: first, a set of finite impulse response (FIR) filter banks based on a fractional Fourier transform (FrFT) is used to separate multiple targets within a range gate; second, a cross-correlation spectrum weighted subspace fitting (CSWSF) algorithm is applied to each of the separated signals in order to estimate their respective parameters. As verified through computer simulation with the Cartwheel, Pendulum and Helix constellations, this proposed time-frequency-subspace method effectively improves the estimation precision of the cross-range velocities of multiple targets.

  14. An Approach to Predict Debris Flow Average Velocity

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2017-03-01

    Full Text Available Debris flow is one of the major threats to the sustainability of environmental and social development. The velocity directly determines the impact on the vulnerability. This study focuses on an approach using a radial basis function (RBF) neural network and the gravitational search algorithm (GSA) for predicting debris flow velocity. A total of 50 debris flow events were investigated in the Jiangjia gully. These data were used for building the GSA-based RBF approach (GSA-RBF). Eighty percent (40 groups) of the measured data were selected randomly as the training database. The other 20% (10 groups) of the data were used as testing data. Finally, the approach was applied to predict the velocities of six debris flow gullies in the Wudongde Dam site area, where environmental conditions are similar to those of the Jiangjia gully. The modified Dongchuan empirical equation (MDEE) and the pulled particle analysis of debris flow (PPA) approach were used for comparison and validation. The results showed that: (i) the GSA-RBF predicted debris flow velocity values are very close to the measured values, performing better than those obtained using the RBF neural network alone; (ii) the GSA-RBF results and the MDEE results are similar for the Jiangjia gully debris flow velocity prediction, with GSA-RBF performing better; (iii) in the study area, the GSA-RBF results are validated as reliable; and (iv) more variables could be considered in predicting the debris flow velocity by using GSA-RBF on the basis of measured data in other areas, making the approach more widely applicable. Because the GSA-RBF approach is more accurate, both the numerical simulation and the empirical equation can be taken into consideration for constructing debris flow mitigation works; they can be complementary and verified against each other.

  15. Multisensors Cooperative Detection Task Scheduling Algorithm Based on Hybrid Task Decomposition and MBPSO

    Directory of Open Access Journals (Sweden)

    Changyun Liu

    2017-01-01

    Full Text Available A multisensor scheduling algorithm based on hybrid task decomposition and modified binary particle swarm optimization (MBPSO) is proposed. Firstly, aiming at the complex relationship between sensor resources and tasks, a hybrid task decomposition method is presented, and the resource scheduling problem is decomposed into subtasks; the sensor resource scheduling problem is thereby changed into a matching problem between sensors and subtasks. Secondly, a resource-match optimization model based on the sensor resources and tasks is established, which considers several factors such as the target priority, detection benefit, handover times, and resource load. Finally, the MBPSO algorithm is proposed to solve the match optimization model effectively; it is based on improved updating of each particle's velocity and position through a doubt factor and a modified sigmoid function. The experimental results show that the proposed algorithm performs better in terms of convergence speed, searching capability, solution accuracy, and efficiency.

  16. Automatic annotation of head velocity and acceleration in Anvil

    DEFF Research Database (Denmark)

    Jongejan, Bart

    2012-01-01

    We describe an automatic face tracker plugin for the ANVIL annotation tool. The face tracker produces data for velocity and for acceleration in two dimensions. We compare the annotations generated by the face tracking algorithm with independently made manual annotations for head movements....... The annotations are a useful supplement to manual annotations and may help human annotators to quickly and reliably determine onset of head movements and to suggest which kind of head movement is taking place....

  17. Wide-field absolute transverse blood flow velocity mapping in vessel centerline

    Science.gov (United States)

    Wu, Nanshou; Wang, Lei; Zhu, Bifeng; Guan, Caizhong; Wang, Mingyi; Han, Dingan; Tan, Haishu; Zeng, Yaguang

    2018-02-01

    We propose a wide-field absolute transverse blood flow velocity measurement method in vessel centerline based on absorption intensity fluctuation modulation effect. The difference between the light absorption capacities of red blood cells and background tissue under low-coherence illumination is utilized to realize the instantaneous and average wide-field optical angiography images. The absolute fuzzy connection algorithm is used for vessel centerline extraction from the average wide-field optical angiography. The absolute transverse velocity in the vessel centerline is then measured by a cross-correlation analysis according to instantaneous modulation depth signal. The proposed method promises to contribute to the treatment of diseases, such as those related to anemia or thrombosis.

  18. On the effect of grain burnback on STS-SRM fragment velocity

    International Nuclear Information System (INIS)

    Eck, M.B.; Mukunda, M.

    1991-01-01

    Concerns raised during the Ulysses Final Safety Analysis Review (FSAR) process called the solid rocket motor (SRM) fragment velocity prediction model into question. The specific area of concern was that there was a section of the SRM casing which was exposed to SRM chamber pressure as the grain (fuel) was consumed. These questions centered on the velocity of fragments which originated from the field joint region given that failure occurred between 37 and 72 seconds mission elapsed time (MET). Two dimensional coupled Eulerian-Lagrangian calculations were performed to assess the hot gas flow field which resulted from SRM casing fragmentation. The fragment to gas interface-pressure time-history obtained from these analyses was reduced to a boundary condition algorithm which was applied to an explicit-time-integration, finite element, three dimensional shell model of the SRM casing and unburned fuel. The results of these calculations showed that the velocity of fragments originating in the field joint was adequately described by the range of velocities given in the Shuttle Data Book (1988). Based on these results, no further analyses were required, and approval was obtained from the Launch Abort Subpanel of the Interagency Nuclear Safety Review Panel to use the SRM fragment velocity environments presented in the Ulysses FSAR (1990)

  19. Seismic tomography with the reversible jump algorithm

    Science.gov (United States)

    Bodin, Thomas; Sambridge, Malcolm

    2009-09-01

    The reversible jump algorithm is a statistical method for Bayesian inference with a variable number of unknowns. Here, we apply this method to the seismic tomography problem. The approach lets us consider the issue of model parametrization (i.e. the way of discretizing the velocity field) as part of the inversion process. The model is parametrized using Voronoi cells with mobile geometry and number. The size, position and shape of the cells defining the velocity model are directly determined by the data. The inverse problem is tackled within a Bayesian framework and explicit regularization of model parameters is not required. The mobile position and number of cells means that global damping procedures, controlled by an optimal regularization parameter, are avoided. Many velocity models with variable numbers of cells are generated via a transdimensional Markov chain and information is extracted from the ensemble as a whole. As an aid to interpretation we visualize the expected earth model that is obtained via Monte Carlo integration in a straightforward manner. The procedure is particularly adept at imaging rapid changes or discontinuities in wave speed. While each velocity model in the final ensemble consists of many discontinuities at cell boundaries, these are smoothed out in the averaged ensemble solution while those required by the data are reinforced. The ensemble of models can also be used to produce uncertainty estimates and experiments with synthetic data suggest that they represent actual uncertainty surprisingly well. We use the fast marching method in order to iteratively update the ray geometry and account for the non-linearity of the problem. The method is tested here with synthetic data in a 2-D application and compared with a subspace method that is a more standard matrix-based inversion scheme. Preliminary results illustrate the advantages of the reversible jump algorithm. A real data example is also shown where a tomographic image of Rayleigh wave

  20. SIMULATIONS OF HIGH-VELOCITY CLOUDS. I. HYDRODYNAMICS AND HIGH-VELOCITY HIGH IONS

    International Nuclear Information System (INIS)

    Kwak, Kyujin; Henley, David B.; Shelton, Robin L.

    2011-01-01

    We present hydrodynamic simulations of high-velocity clouds (HVCs) traveling through the hot, tenuous medium in the Galactic halo. A suite of models was created using the FLASH hydrodynamics code, sampling various cloud sizes, densities, and velocities. In all cases, the cloud-halo interaction ablates material from the clouds. The ablated material falls behind the clouds where it mixes with the ambient medium to produce intermediate-temperature gas, some of which radiatively cools to less than 10,000 K. Using a non-equilibrium ionization algorithm, we track the ionization levels of carbon, nitrogen, and oxygen in the gas throughout the simulation period. We present observation-related predictions, including the expected H I and high ion (C IV, N V, and O VI) column densities on sightlines through the clouds as functions of evolutionary time and off-center distance. The predicted column densities overlap those observed for Complex C. The observations are best matched by clouds that have interacted with the Galactic environment for tens to hundreds of megayears. Given the large distances across which the clouds would travel during such time, our results are consistent with Complex C having an extragalactic origin. The destruction of HVCs is also of interest; the smallest cloud (initial mass ∼ 120 M sun ) lost most of its mass during the simulation period (60 Myr), while the largest cloud (initial mass ∼ 4 x 10 5 M sun ) remained largely intact, although deformed, during its simulation period (240 Myr).

  1. Remote determination of the velocity index and mean streamwise velocity profiles

    Science.gov (United States)

    Johnson, E. D.; Cowen, E. A.

    2017-09-01

    When determining volumetric discharge from surface measurements of currents in a river or open channel, the velocity index is typically used to convert surface velocities to depth-averaged velocities. The velocity index is given by, k=Ub/Usurf, where Ub is the depth-averaged velocity and Usurf is the local surface velocity. The USGS (United States Geological Survey) standard value for this coefficient, k = 0.85, was determined from a series of laboratory experiments and has been widely used in the field and in laboratory measurements of volumetric discharge despite evidence that the velocity index is site-specific. Numerous studies have documented that the velocity index varies with Reynolds number, flow depth, and relative bed roughness and with the presence of secondary flows. A remote method of determining depth-averaged velocity and hence the velocity index is developed here. The technique leverages the findings of Johnson and Cowen (2017) and permits remote determination of the velocity power-law exponent thereby, enabling remote prediction of the vertical structure of the mean streamwise velocity, the depth-averaged velocity, and the velocity index.

  2. A high-precision algorithm for axisymmetric flow

    Directory of Open Access Journals (Sweden)

    A. Gokhman

    1995-01-01

    Full Text Available We present a new algorithm for highly accurate computation of axisymmetric potential flow. The principal feature of the algorithm is the use of orthogonal curvilinear coordinates. These coordinates are used to write down the equations and to specify quadrilateral elements following the boundary. In particular, boundary conditions for the Stokes' stream-function are satisfied exactly. The velocity field is determined by differentiating the stream-function. We avoid the use of quadratures in the evaluation of Galerkin integrals, and instead use splining of the boundaries of elements to take the double integrals of the shape functions in closed form. This is very accurate and not time consuming.

  3. On measuring surface wave phase velocity from station–station cross-correlation of ambient signal

    DEFF Research Database (Denmark)

    Boschi, Lapo; Weemstra, Cornelis; Verbeke, Julie

    2012-01-01

    We apply two different algorithms to measure surface wave phase velocity, as a function of frequency, from seismic ambient noise recorded at pairs of stations from a large European network. The two methods are based on consistent theoretical formulations, but differ in the implementation: one met...

  4. Dense velocity reconstruction from tomographic PTV with material derivatives

    Science.gov (United States)

    Schneiders, Jan F. G.; Scarano, Fulvio

    2016-09-01

    A method is proposed to reconstruct the instantaneous velocity field from time-resolved volumetric particle tracking velocimetry (PTV, e.g., 3D-PTV, tomographic PTV and Shake-the-Box), employing both the instantaneous velocity and the velocity material derivative of the sparse tracer particles. The constraint to the measured temporal derivative of the PTV particle tracks improves the consistency of the reconstructed velocity field. The method is christened as pouring time into space, as it leverages temporal information to increase the spatial resolution of volumetric PTV measurements. This approach becomes relevant in cases where the spatial resolution is limited by the seeding concentration. The method solves an optimization problem to find the vorticity and velocity fields that minimize a cost function, which includes next to instantaneous velocity, also the velocity material derivative. The velocity and its material derivative are related through the vorticity transport equation, and the cost function is minimized using the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. The procedure is assessed numerically with a simulated PTV experiment in a turbulent boundary layer from a direct numerical simulation (DNS). The experimental validation considers a tomographic particle image velocimetry (PIV) experiment in a similar turbulent boundary layer and the additional case of a jet flow. The proposed technique (`vortex-in-cell plus', VIC+) is compared to tomographic PIV analysis (3D iterative cross-correlation), PTV interpolation methods (linear and adaptive Gaussian windowing) and to vortex-in-cell (VIC) interpolation without the material derivative. A visible increase in resolved details in the turbulent structures is obtained with the VIC+ approach, both in numerical simulations and experiments. This results in a more accurate determination of the turbulent stresses distribution in turbulent boundary layer investigations. Data from a jet

  5. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    Science.gov (United States)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which effectively makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated, and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single direction angular rate gyro output is used to classify gait features. The angular rate data are modeled into a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected through eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect different walking gaits of zero velocity interval. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.

  6. Precession feature extraction of ballistic missile warhead with high velocity

    Science.gov (United States)

    Sun, Huixia

    2018-04-01

    This paper establishes the precession model of ballistic missile warhead, and derives the formulas of micro-Doppler frequency induced by the target with precession. In order to obtain micro-Doppler feature of ballistic missile warhead with precession, micro-Doppler bandwidth estimation algorithm, which avoids velocity compensation, is presented based on high-resolution time-frequency transform. The results of computer simulations confirm the effectiveness of the proposed method even with low signal-to-noise ratio.

  7. Trajectory generation algorithm for smooth movement of a hybrid-type robot Rocker-Pillar

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Seung Min; Choi, Dong Kyu; Kim, Jong Won [School of Mechanical and Aerospace Engineering, Seoul National University, Seoul (Korea, Republic of); Kim, Hwa Soo [Dept. of Mechanical System Engineering, Kyonggi University, Suwon (Korea, Republic of)

    2016-11-15

    While traveling on rough terrain, smooth movement of a mobile robot plays an important role in carrying out the given tasks successfully. This paper describes the trajectory generation algorithm for smooth movement of hybrid-type mobile robot Rocker-Pillar by adjusting the angular velocity of its caterpillar as well as each wheel velocity in such a manner to minimize a proper index for smoothness. To this end, a new Smoothness index (SI) is first suggested to evaluate the smoothness of movement of Rocker-Pillar. Then, the trajectory generation algorithm is proposed to reduce the undesired oscillations of its Center of mass (CoM). The experiment are performed to examine the movement of Rocker-Pillar climbing up the step whose height is twice larger than its wheel radius. It is verified that the resulting SI is improved by more than 40 % so that the movement of Rocker-Pillar becomes much smoother by the proposed trajectory algorithm.

  8. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion, a two-stage staggered explicit-implicit numerical algorithm, are treated which takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques, the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices to which Schur complement form was derived. By fully exploiting the sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the systems equations written in Schur complement form. A software testbed was designed and implemented in both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speed up of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  9. A parabolic velocity-decomposition method for wind turbines

    Science.gov (United States)

    Mittal, Anshul; Briley, W. Roger; Sreenivas, Kidambi; Taylor, Lafayette K.

    2017-02-01

    An economical parabolized Navier-Stokes approximation for steady incompressible flow is combined with a compatible wind turbine model to simulate wind turbine flows, both upstream of the turbine and in downstream wake regions. The inviscid parabolizing approximation is based on a Helmholtz decomposition of the secondary velocity vector and physical order-of-magnitude estimates, rather than an axial pressure gradient approximation. The wind turbine is modeled by distributed source-term forces incorporating time-averaged aerodynamic forces generated by a blade-element momentum turbine model. A solution algorithm is given whose dependent variables are streamwise velocity, streamwise vorticity, and pressure, with secondary velocity determined by two-dimensional scalar and vector potentials. In addition to laminar and turbulent boundary-layer test cases, solutions for a streamwise vortex-convection test problem are assessed by mesh refinement and comparison with Navier-Stokes solutions using the same grid. Computed results for a single turbine and a three-turbine array are presented using the NREL offshore 5-MW baseline wind turbine. These are also compared with an unsteady Reynolds-averaged Navier-Stokes solution computed with full rotor resolution. On balance, the agreement in turbine wake predictions for these test cases is very encouraging given the substantial differences in physical modeling fidelity and computer resources required.

  10. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-01-01

    Full Text Available For predicting the key technology indicators (concentrate grade and tailings recovery rate of flotation process, a feed-forward neural network (FNN based soft-sensor model optimized by the hybrid algorithm combining particle swarm optimization (PSO algorithm and gravitational search algorithm (GSA is proposed. Although GSA has better optimization capability, it has slow convergence velocity and is easy to fall into local optimum. So in this paper, the velocity vector and position vector of GSA are adjusted by PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate to meet the online soft-sensor requirements of the real-time control in the flotation process.

  11. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1998-12-31

    The H{sub {infinity}} robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is of nonconvex, the genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits the direct control of the power tracking performances. In addition, the actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm results in the better performances with the realistic constraints. Also, it is found that the genetic algorithm could be used as an effective tool in the robust design. 4 refs., 6 figs. (Author)

  12. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1997-12-31

    The H{sub {infinity}} robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is of nonconvex, the genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits the direct control of the power tracking performances. In addition, the actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm results in the better performances with the realistic constraints. Also, it is found that the genetic algorithm could be used as an effective tool in the robust design. 4 refs., 6 figs. (Author)

  13. Development of an optimal velocity selection method with velocity obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)

    2015-08-15

    The Velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic obstacles and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concept of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available velocity map for a robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that will cause a collision in the future can be chosen as a final velocity if the pass-time is sufficiently long enough.

  14. Genetic algorithms and their use in Geophysical Problems

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Paul B. [Univ. of California, Berkeley, CA (United States)

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems

  15. Field test and theoretical analysis of electromagnetic pulse propagation velocity on crossbonded cable systems

    DEFF Research Database (Denmark)

    Jensen, Christian Flytkjær; Bak, Claus Leth; Gudmundsdottir, Unnur Stella

    2014-01-01

    In this paper, the electromagnetic pulse propagation velocity on a three-phase cable system, consisting of three single core (SC) cables in flat formation with an earth continuity conductor is under study. The propagation velocity is an important parameter for most travelling wave off- and online...... fault location methods and needs to be exactly known for optimal performance of these algorithm types. Field measurements are carried out on a 6.9 km and a 31.4 km 245 kV crossbonded cable system, and the results are analysed using the modal decomposition theory. Several ways for determining...

  16. Study on time-varying velocity measurement with self-mixing laser diode based on Discrete Chirp-Fourier Transform

    International Nuclear Information System (INIS)

    Zhang Zhaoyun; Gao Yang; Zhao Xinghai; Zhao Xiang

    2011-01-01

    Laser's optical output power and frequency are modulated when the optical beam is back-scattered into the active cavity of the laser. By signal processing, the Doppler frequency can be acquired, and the target's velocity can be calculated. Based on these properties, an interferometry velocity sensor can be designed. When target move in time-varying velocity mode, it is difficult to extract the target's velocity. Time-varying velocity measurement by self-mixing laser diode is explored. A mathematics model was proposed for the time-varying velocity (invariable acceleration) measurement by self-mixing laser diode. Based on this model, a Discrete Chirp-Fourier Transform (DCFT) method was applied, DCFT is analogous to DFT. We show that when the signal length N is prime, the magnitudes of all the side lobes are 1, whereas the magnitudes of the main lobe is √N, And the coordinates of the main lobe shows the target's velocity and acceleration information. The simulation results prove the validity of the algorithm even in the situation of low SNR when N is prime.

  17. A 1DVAR-based snowfall rate retrieval algorithm for passive microwave radiometers

    Science.gov (United States)

    Meng, Huan; Dong, Jun; Ferraro, Ralph; Yan, Banghua; Zhao, Limin; Kongoli, Cezar; Wang, Nai-Yu; Zavodsky, Bradley

    2017-06-01

    Snowfall rate retrieval from spaceborne passive microwave (PMW) radiometers has gained momentum in recent years. PMW can be so utilized because of its ability to sense in-cloud precipitation. A physically based, overland snowfall rate (SFR) algorithm has been developed using measurements from the Advanced Microwave Sounding Unit-A/Microwave Humidity Sounder sensor pair and the Advanced Technology Microwave Sounder. Currently, these instruments are aboard five polar-orbiting satellites, namely, NOAA-18, NOAA-19, Metop-A, Metop-B, and Suomi-NPP. The SFR algorithm relies on a separate snowfall detection algorithm that is composed of a satellite-based statistical model and a set of numerical weather prediction model-based filters. There are four components in the SFR algorithm itself: cloud properties retrieval, computation of ice particle terminal velocity, ice water content adjustment, and the determination of snowfall rate. The retrieval of cloud properties is the foundation of the algorithm and is accomplished using a one-dimensional variational (1DVAR) model. An existing model is adopted to derive ice particle terminal velocity. Since no measurement of cloud ice distribution is available when SFR is retrieved in near real time, such distribution is implicitly assumed by deriving an empirical function that adjusts retrieved SFR toward radar snowfall estimates. Finally, SFR is determined numerically from a complex integral. The algorithm has been validated against both radar and ground observations of snowfall events from the contiguous United States with satisfactory results. Currently, the SFR product is operationally generated at the National Oceanic and Atmospheric Administration and can be obtained from that organization.

  18. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    Science.gov (United States)

    Chand, S.; Minshull, T.A.; Priest, J.A.; Best, A.I.; Clayton, C.R.I.; Waite, W.F.

    2006-01-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L–38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  19. On protecting the planet against cosmic attack: Ultrafast real-time estimate of the asteroid's radial velocity

    Science.gov (United States)

    Zakharchenko, V. D.; Kovalenko, I. G.

    2014-05-01

    A new method for the line-of-sight velocity estimation of a high-speed near-Earth object (asteroid, meteorite) is suggested. The method is based on the use of fractional, one-half order derivative of a Doppler signal. The algorithm suggested is much simpler and more economical than the classical one, and it appears preferable for use in orbital weapon systems of threat response. Application of fractional differentiation to quick evaluation of mean frequency location of the reflected Doppler signal is justified. The method allows an assessment of the mean frequency in the time domain without spectral analysis. An algorithm structure for the real-time estimation is presented. The velocity resolution estimates are made for typical asteroids in the X-band. It is shown that the wait time can be shortened by orders of magnitude compared with similar value in the case of a standard spectral processing.

  20. Effects of Turbulence on Settling Velocities of Synthetic and Natural Particles

    Science.gov (United States)

    Jacobs, C.; Jendrassak, M.; Gurka, R.; Hackett, E. E.

    2014-12-01

    For large-scale sediment transport predictions, an important parameter is the settling or terminal velocity of particles because it plays a key role in determining the concentration of sediment particles within the water column as well as the deposition rate of particles onto the seabed. The settling velocity of particles is influenced by the fluid dynamic environment as well as attributes of the particle, such as its size, shape, and density. This laboratory study examines the effects of turbulence, generated by an oscillating grid, on both synthetic and natural particles for a range of flow conditions. Because synthetic particles are spherical, they serve as a reference for the natural particles that are irregular in shape. Particle image velocimetry (PIV) and high-speed imaging systems were used simultaneously to study the interaction between the fluid mechanics and sediment particles' dynamics in a tank. The particles' dynamics were analyzed using a custom two-dimensional tracking algorithm used to obtain distributions of the particle's velocity and acceleration. Turbulence properties, such as root-mean-square turbulent velocity and vorticity, were calculated from the PIV data. Results are classified by Stokes number, which was based-on the integral scale deduced from the auto-correlation function of velocity. We find particles with large Stokes numbers are unaffected by the turbulence, while particles with small Stokes numbers primarily show an increase in settling velocity in comparison to stagnant flow. The results also show an inverse relationship between Stokes number and standard deviation of the settling velocity. This research enables a better understanding of the interdependence between particles and turbulent flow, which can be used to improve parameterizations in large-scale sediment transport models.

  1. Pressure algorithm for elliptic flow calculations with the PDF method

    Science.gov (United States)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  2. Decompositions of bubbly flow PIV velocity fields using discrete wavelets multi-resolution and multi-section image method

    International Nuclear Information System (INIS)

    Choi, Je-Eun; Takei, Masahiro; Doh, Deog-Hee; Jo, Hyo-Jae; Hassan, Yassin A.; Ortiz-Villafuerte, Javier

    2008-01-01

    Currently, wavelet transforms are widely used for the analyses of particle image velocimetry (PIV) velocity vector fields. This is because the wavelet provides not only spatial information of the velocity vectors, but also of the time and frequency domains. In this study, a discrete wavelet transform is applied to real PIV images of bubbly flows. The vector fields obtained by a self-made cross-correlation PIV algorithm were used for the discrete wavelet transform. The performances of the discrete wavelet transforms were investigated by changing the level of power of discretization. The images decomposed by wavelet multi-resolution showed conspicuous characteristics of the bubbly flows for the different levels. A high spatial bubble concentrated area could be evaluated by the constructed discrete wavelet transform algorithm, in which high-leveled wavelets play dominant roles in revealing the flow characteristics

  3. Adaptive PID formation control of nonholonomic robots without leader's velocity information.

    Science.gov (United States)

    Shen, Dongbin; Sun, Weijie; Sun, Zhendong

    2014-03-01

    This paper proposes an adaptive proportional integral derivative (PID) algorithm to solve a formation control problem in the leader-follower framework where the leader robot's velocities are unknown for the follower robots. The main idea is first to design some proper ideal control law for the formation system to obtain a required performance, and then to propose the adaptive PID methodology to approach the ideal controller. As a result, the formation is achieved with much more enhanced robust formation performance. The stability of the closed-loop system is theoretically proved by Lyapunov method. Both numerical simulations and physical vehicle experiments are presented to verify the effectiveness of the proposed adaptive PID algorithm. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. The challenge associated with the robust computation of meteor velocities from video and photographic records

    Science.gov (United States)

    Egal, A.; Gural, P. S.; Vaubaillon, J.; Colas, F.; Thuillot, W.

    2017-09-01

    The CABERNET project was designed to push the limits for obtaining accurate measurements of meteoroids orbits from photographic and video meteor camera recordings. The discrepancy between the measured and theoretic orbits of these objects heavily depends on the semi-major axis determination, and thus on the reliability of the pre-atmospheric velocity computation. With a spatial resolution of 0.01° per pixel and a temporal resolution of up to 10 ms, CABERNET should be able to provide accurate measurements of velocities and trajectories of meteors. To achieve this, it is necessary to improve the precision of the data reduction processes, and especially the determination of the meteor's velocity. In this work, most of the steps of the velocity computation are thoroughly investigated in order to reduce the uncertainties and error contributions at each stage of the reduction process. The accuracy of the measurement of meteor centroids is established and results in a precision of 0.09 pixels for CABERNET, which corresponds to 3.24‧‧. Several methods to compute the velocity were investigated based on the trajectory determination algorithms described in Ceplecha (1987) and Borovicka (1990), as well as the multi-parameter fitting (MPF) method proposed by Gural (2012). In the case of the MPF, many optimization methods were implemented in order to find the most efficient and robust technique to solve the minimization problem. The entire data reduction process is assessed using simulated meteors, with different geometrical configurations and deceleration behaviors. It is shown that the multi-parameter fitting method proposed by Gural(2012)is the most accurate method to compute the pre-atmospheric velocity in all circumstances. Many techniques that assume constant velocity at the beginning of the path as derived from the trajectory determination using Ceplecha (1987) or Borovicka (1990) can lead to large errors for decelerating meteors. The MPF technique also allows one to

  5. Magnetic particle imaging for in vivo blood flow velocity measurements in mice

    Science.gov (United States)

    Kaul, Michael G.; Salamon, Johannes; Knopp, Tobias; Ittrich, Harald; Adam, Gerhard; Weller, Horst; Jung, Caroline

    2018-03-01

    Magnetic particle imaging (MPI) is a new imaging technology. It is a potential candidate to be used for angiographic purposes, to study perfusion and cell migration. The aim of this work was to measure velocities of the flowing blood in the inferior vena cava of mice, using MPI, and to evaluate it in comparison with magnetic resonance imaging (MRI). A phantom mimicking the flow within the inferior vena cava with velocities of up to 21 cm s‑1 was used for the evaluation of the applied analysis techniques. Time–density and distance–density analyses for bolus tracking were performed to calculate flow velocities. These findings were compared with the calibrated velocities set by a flow pump, and it can be concluded that velocities of up to 21 cm s‑1 can be measured by MPI. A time–density analysis using an arrival time estimation algorithm showed the best agreement with the preset velocities. In vivo measurements were performed in healthy FVB mice (n  =  10). MRI experiments were performed using phase contrast (PC) for velocity mapping. For MPI measurements, a standardized injection of a superparamagnetic iron oxide tracer was applied. In vivo MPI data were evaluated by a time–density analysis and compared to PC MRI. A Bland–Altman analysis revealed good agreement between the in vivo velocities acquired by MRI of 4.0  ±  1.5 cm s‑1 and those measured by MPI of 4.8  ±  1.1 cm s‑1. Magnetic particle imaging is a new tool with which to measure and quantify flow velocities. It is fast, radiation-free, and produces 3D images. It therefore offers the potential for vascular imaging.

  6. A Turn-Projected State-Based Conflict Resolution Algorithm

    Science.gov (United States)

    Butler, Ricky W.; Lewis, Timothy A.

    2013-01-01

    State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis on current state information without the use of additional intent information from aircraft flight plans. Therefore, the prediction of the trajectory of aircraft is based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, the past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.

  7. Application of multiple signal classification algorithm to frequency estimation in coherent dual-frequency lidar

    Science.gov (United States)

    Li, Ruixiao; Li, Kun; Zhao, Changming

    2018-01-01

    Coherent dual-frequency Lidar (CDFL) is a new development of Lidar which dramatically enhances the ability to decrease the influence of atmospheric interference by using dual-frequency laser to measure the range and velocity with high precision. Based on the nature of CDFL signals, we propose to apply the multiple signal classification (MUSIC) algorithm in place of the fast Fourier transform (FFT) to estimate the phase differences in dual-frequency Lidar. In the presence of Gaussian white noise, the simulation results show that the signal peaks are more evident when using MUSIC algorithm instead of FFT in condition of low signal-noise-ratio (SNR), which helps to improve the precision of detection on range and velocity, especially for the long distance measurement systems.

  8. Optimal Velocity Control for a Battery Electric Vehicle Driven by Permanent Magnet Synchronous Motors

    Directory of Open Access Journals (Sweden)

    Dongbin Lu

    2014-01-01

    Full Text Available The permanent magnet synchronous motor (PMSM has high efficiency and high torque density. Field oriented control (FOC is usually used in the motor to achieve maximum efficiency control. In the electric vehicle (EV application, the PMSM efficiency model, combined with the EV and road load system model, is used to study the optimal energy-saving control strategy, which is significant for the economic operation of EVs. With the help of GPS, IMU, and other information technologies, the road conditions can be measured in advance. Based on this information, the optimal velocity of the EV driven by PMSM can be obtained through the analytical algorithm according to the efficiency model of PMSM and the vehicle dynamic model in simple road conditions. In complex road conditions, considering the dynamic characteristics, the economic operating velocity trajectory of the EV can be obtained through the dynamic programming (DP algorithm. Simulation and experimental results show that the minimum energy consumption and global energy optimization can be achieved when the EV operates in the economic operation area.

  9. A PSO-Optimized Reciprocal Velocity Obstacles Algorithm for Navigation of Multiple Mobile Robots

    Directory of Open Access Journals (Sweden)

    Ziyad Allawi

    2015-03-01

    Full Text Available In this paper, a new optimization method for the Reciprocal Velocity Obstacles (RVO is proposed. It uses the well-known Particle Swarm Optimization (PSO for navigation control of multiple mobile robots with kinematic constraints. The RVO is used for collision avoidance between the robots, while PSO is used to choose the best path for the robot maneuver to avoid colliding with other robots and to get to its goal faster. This method was applied on 24 mobile robots facing each other. Simulation results have shown that this method outperforms the ordinary RVO when the path is heuristically chosen.

  10. Study on polarized optical flow algorithm for imaging bionic polarization navigation micro sensor

    Science.gov (United States)

    Guan, Le; Liu, Sheng; Li, Shi-qi; Lin, Wei; Zhai, Li-yuan; Chu, Jin-kui

    2018-05-01

    At present, both the point source and the imaging polarization navigation devices only can output the angle information, which means that the velocity information of the carrier cannot be extracted from the polarization field pattern directly. Optical flow is an image-based method for calculating the velocity of pixel point movement in an image. However, for ordinary optical flow, the difference in pixel value as well as the calculation accuracy can be reduced in weak light. Polarization imaging technology has the ability to improve both the detection accuracy and the recognition probability of the target because it can acquire the extra polarization multi-dimensional information of target radiation or reflection. In this paper, combining the polarization imaging technique with the traditional optical flow algorithm, a polarization optical flow algorithm is proposed, and it is verified that the polarized optical flow algorithm has good adaptation in weak light and can improve the application range of polarization navigation sensors. This research lays the foundation for day and night all-weather polarization navigation applications in future.

  11. Moving Object Tracking and Avoidance Algorithm for Differential Driving AGV Based on Laser Measurement Technology

    Directory of Open Access Journals (Sweden)

    Pandu Sandi Pratama

    2012-12-01

    Full Text Available This paper proposed an algorithm to track the obstacle position and avoid the moving objects for differential driving Automatic Guided Vehicles (AGV system in industrial environment. This algorithm has several abilities such as: to detect the moving objects, to predict the velocity and direction of moving objects, to predict the collision possibility and to plan the avoidance maneuver. For sensing the local environment and positioning, the laser measurement system LMS-151 and laser navigation system NAV-200 are applied. Based on the measurement results of the sensors, the stationary and moving obstacles are detected and the collision possibility is calculated. The velocity and direction of the obstacle are predicted using Kalman filter algorithm. Collision possibility, time, and position can be calculated by comparing the AGV movement and obstacle prediction result obtained by Kalman filter. Finally the avoidance maneuver using the well known tangent Bug algorithm is decided based on the calculation data. The effectiveness of proposed algorithm is verified using simulation and experiment. Several examples of experiment conditions are presented using stationary obstacle, and moving obstacles. The simulation and experiment results show that the AGV can detect and avoid the obstacles successfully in all experimental condition. [Keywords— Obstacle avoidance, AGV, differential drive, laser measurement system, laser navigation system].

  12. Auditory velocity discrimination in the horizontal plane at very high velocities.

    Science.gov (United States)

    Frissen, Ilja; Féron, François-Xavier; Guastavino, Catherine

    2014-10-01

    We determined velocity discrimination thresholds and Weber fractions for sounds revolving around the listener at very high velocities. Sounds used were a broadband white noise and two harmonic sounds with fundamental frequencies of 330 Hz and 1760 Hz. Experiment 1 used velocities ranging between 288°/s and 720°/s in an acoustically treated room and Experiment 2 used velocities between 288°/s and 576°/s in a highly reverberant hall. A third experiment addressed potential confounds in the first two experiments. The results show that people can reliably discriminate velocity at very high velocities and that both thresholds and Weber fractions decrease as velocity increases. These results violate Weber's law but are consistent with the empirical trend observed in the literature. While thresholds for the noise and 330 Hz harmonic stimulus were similar, those for the 1760 Hz harmonic stimulus were substantially higher. There were no reliable differences in velocity discrimination between the two acoustical environments, suggesting that auditory motion perception at high velocities is robust against the effects of reverberation. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Study on improved Ip-iq APF control algorithm and its application in micro grid

    Science.gov (United States)

    Xie, Xifeng; Shi, Hua; Deng, Haiyingv

    2018-01-01

    In order to enhance the tracking velocity and accuracy of harmonic detection by ip-iq algorithm, a novel ip-iq control algorithm based on the Instantaneous reactive power theory is presented, the improved algorithm adds the lead correction link to adjust the zero point of the detection system, the Fuzzy Self-Tuning Adaptive PI control is introduced to dynamically adjust the DC-link Voltage, which meets the requirement of the harmonic compensation of the micro grid. Simulation and experimental results verify the proposed method is feasible and effective in micro grid.

  14. Velocity-independent layer stripping of PP and PS reflection traveltimes

    Digital Repository Service at National Institute of Oceanography (India)

    Dewangan, P.; Tsvankin, I.

    The principle of the PP + PS = SS method can be used to carry out exact layer stripping for both pure and mode-converted waves in anisotropic media. The main assumptions of the algorithm intro- duced here are that the overburden is laterally homogeneous and has... horizontal and dipping interfaces in each layer H20849Alkhalifah and Tsvankin, 1995; Tsvankin, 2005H20850. This requirement, which is often difficult to satisfy in practice, is no long- er needed if the interval moveout is computed by the velocity...

  15. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster.Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).

  16. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    Science.gov (United States)

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  17. Prediction of Compressional, Shear, and Stoneley Wave Velocities from Conventional Well Log Data Using a Committee Machine with Intelligent Systems

    Science.gov (United States)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2012-01-01

    Measurement of compressional, shear, and Stoneley wave velocities, carried out by dipole sonic imager (DSI) logs, provides invaluable data in geophysical interpretation, geomechanical studies and hydrocarbon reservoir characterization. The presented study proposes an improved methodology for making a quantitative formulation between conventional well logs and sonic wave velocities. First, sonic wave velocities were predicted from conventional well logs using artificial neural network, fuzzy logic, and neuro-fuzzy algorithms. Subsequently, a committee machine with intelligent systems was constructed by virtue of hybrid genetic algorithm-pattern search technique while outputs of artificial neural network, fuzzy logic and neuro-fuzzy models were used as inputs of the committee machine. It is capable of improving the accuracy of final prediction through integrating the outputs of aforementioned intelligent systems. The hybrid genetic algorithm-pattern search tool, embodied in the structure of committee machine, assigns a weight factor to each individual intelligent system, indicating its involvement in overall prediction of DSI parameters. This methodology was implemented in Asmari formation, which is the major carbonate reservoir rock of Iranian oil field. A group of 1,640 data points was used to construct the intelligent model, and a group of 800 data points was employed to assess the reliability of the proposed model. The results showed that the committee machine with intelligent systems performed more effectively compared with individual intelligent systems performing alone.

  18. Application of Plenoptic PIV for 3D Velocity Measurements Over Roughness Elements in a Refractive Index Matched Facility

    Science.gov (United States)

    Thurow, Brian; Johnson, Kyle; Kim, Taehoon; Blois, Gianluca; Best, Jim; Christensen, Ken

    2014-11-01

    The application of Plenoptic PIV in a Refractive Index Matched (RIM) facility housed at Illinois is presented. Plenoptic PIV is an emerging 3D diagnostic that exploits the light-field imaging capabilities of a plenoptic camera. Plenoptic cameras utilize a microlens array to measure the position and angle of light rays captured by the camera. 3D/3C velocity fields are determined through application of the MART algorithm for volume reconstruction and a conventional 3D cross-correlation PIV algorithm. The RIM facility is a recirculating tunnel with a 62.5% aqueous solution of sodium iodide used as the working fluid. Its resulting index of 1.49 is equal to that of acrylic. Plenoptic PIV was used to measure the 3D velocity field of a turbulent boundary layer flow over a smooth wall, a single wall-mounted hemisphere and a full array of hemispheres (i.e. a rough wall) with k/δ ~ 4.6. Preliminary time averaged and instantaneous 3D velocity fields will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1235726.

  19. Luminescent two-color tracer particles for simultaneous velocity and temperature measurements in microfluidics

    International Nuclear Information System (INIS)

    Massing, J; Kähler, C J; Cierpka, C; Kaden, D

    2016-01-01

    The simultaneous and non-intrusive measurement of temperature and velocity fields in flows is of great scientific and technological interest. To sample the velocity and temperature, tracer particle based approaches have been developed, where the velocity is measured using PIV or PTV and the temperature is obtained from the intensity (LIF, thermographic phosphors) or frequency (TLC) of the light emitted or reflected by the tracer particles. In this article, a measurement technique is introduced that relates the luminescent intensity ratio of individual dual-color luminescent tracer particles to temperature. Different processing algorithms are tested on synthetic particle images and compared with respect to their accuracy in estimating the intensity ratio. Furthermore, polymer particles which are doped with the temperature-sensitive dye europium (III) thenoyltrifluoroacetonate (EuTTA) and the nearly temperature-insensitive reference dye perylene are characterized as valid tracers. The results show a reduction of the temperature measurement uncertainty of almost 40% (95% confidence interval) compared to previously reported luminescent particle based measurement techniques for microfluidics. (paper)

  20. Superconducting RF for Low-Velocity and Intermediate-Velocity Beams

    CERN Document Server

    Grimm, Terry L

    2005-01-01

    Existing superconducting radio frequency (SRF) linacs are used to accelerate ions (protons through uranium) with velocities less than about 15% the speed of light, or electrons with velocities approximately equal to the speed of light. In the last ten years, prototype SRF cavities have completely covered the remaining range of velocities. They have demonstrated that SRF linacs will be capable of accelerating electrons from rest up to the speed of light, and ions from less than 1% up to the speed of light. When the Spallation Neutron Source is operational, SRF ion linacs will have covered the full range of velocities except for v/c ~ 0.15 to v/c ~ 0.5. A number of proposed projects (RIA, EURISOL) would span the latter range of velocities. Future SRF developments will have to address the trade-offs associated with a number of issues, including high gradient operation, longitudinal and transverse acceptance, microphonics, Lorentz detuning, operating temperature, cryogenic load, number of gaps or cells per cavity...

  1. Critical velocities in He II for independently varied superfluid and normal fluid velocities

    International Nuclear Information System (INIS)

    Baehr, M.L.

    1984-01-01

    Experiments were performed to measure the critical velocity in pure superflow and compare to the theoretical prediction; to measure the first critical velocity for independently varied superfluid and normal fluid velocities; and to investigate the propagation of the second critical velocity from the thermal counterflow line through the V_n, -V_s quadrant. The experimental apparatus employed a thermal counterflow heater to adjust the normal fluid velocity, a fountain pump to vary the superfluid velocity, and a level sensing capacitor to measure the superfluid velocity. The results of the pure superfluid critical velocity measurements indicate that this velocity is temperature independent contrary to Schwarz's theory. It was found that the first critical velocity for independently varied V_n and V_s could be described by a linear function of V_n and was otherwise temperature independent. It was found that the second critical velocity could only be distinguished near the thermal counterflow line.

  2. User's Manual for the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    Science.gov (United States)

    Gnoffo, Peter A.; Cheatwood, F. McNeil

    1996-01-01

    This user's manual provides detailed instructions for the installation and the application of version 4.1 of the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA). LAURA simulates flow fields in thermochemical nonequilibrium around vehicles traveling at hypersonic velocities through the atmosphere. Earlier versions of LAURA were predominantly research codes, and they had minimal (or no) documentation. This manual describes UNIX-based utilities for customizing the code for special applications that also minimize system resource requirements. The algorithm is reviewed, and the various program options are related to specific equations and variables in the theoretical development.

  3. Migration velocity analysis using a transversely isotropic medium with tilt normal to the reflector dip

    KAUST Repository

    Alkhalifah, T.

    2010-06-13

    A transversely isotropic model in which the tilt is constrained to be normal to the dip (DTI model) allows for simplifications in the imaging and velocity model building efforts as compared to a general TTI model. Though this model cannot always be represented physically, as in the case of conflicting dips, it handles all dips under the assumption of a symmetry axis normal to the dip, so that areas meeting this condition are handled properly. We use efficient downward continuation algorithms that utilize the reflection features of such a model. Phase shift migration can easily be extended to approximately handle lateral inhomogeneity, because unlike the general TTI case the DTI model reduces to VTI for zero dip. We also equip these continuation algorithms with tools that expose inaccuracies in the velocity. We test this model on synthetic data of general TTI nature and show its resilience even when coping with complex models such as the recently released anisotropic BP model.

  4. Blind test of methods for obtaining 2-D near-surface seismic velocity models from first-arrival traveltimes

    Science.gov (United States)

    Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.

    2013-01-01

    Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.

  5. Mean Velocity vs. Mean Propulsive Velocity vs. Peak Velocity: Which Variable Determines Bench Press Relative Load With Higher Reliability?

    Science.gov (United States)

    García-Ramos, Amador; Pestaña-Melero, Francisco L; Pérez-Castilla, Alejandro; Rojas, Francisco J; Gregory Haff, G

    2018-05-01

    García-Ramos, A, Pestaña-Melero, FL, Pérez-Castilla, A, Rojas, FJ, and Haff, GG. Mean velocity vs. mean propulsive velocity vs. peak velocity: which variable determines bench press relative load with higher reliability? J Strength Cond Res 32(5): 1273-1279, 2018-This study aimed to compare between 3 velocity variables (mean velocity [MV], mean propulsive velocity [MPV], and peak velocity [PV]): (a) the linearity of the load-velocity relationship, (b) the accuracy of general regression equations to predict relative load (%1RM), and (c) the between-session reliability of the velocity attained at each percentage of the 1-repetition maximum (%1RM). The full load-velocity relationship of 30 men was evaluated by means of linear regression models in the concentric-only and eccentric-concentric bench press throw (BPT) variants performed with a Smith machine. The 2 sessions of each BPT variant were performed within the same week separated by 48-72 hours. The main findings were as follows: (a) the MV showed the strongest linearity of the load-velocity relationship (median r = 0.989 for concentric-only BPT and 0.993 for eccentric-concentric BPT), followed by MPV (median r = 0.983 for concentric-only BPT and 0.980 for eccentric-concentric BPT), and finally PV (median r = 0.974 for concentric-only BPT and 0.969 for eccentric-concentric BPT); (b) the accuracy of the general regression equations to predict relative load (%1RM) from movement velocity was higher for MV (SEE = 3.80-4.76%1RM) than for MPV (SEE = 4.91-5.56%1RM) and PV (SEE = 5.36-5.77%1RM); and (c) the PV showed the lowest within-subjects coefficient of variation (3.50%-3.87%), followed by MV (4.05%-4.93%), and finally MPV (5.11%-6.03%). Taken together, these results suggest that the MV could be the most appropriate variable for monitoring the relative load (%1RM) in the BPT exercise performed in a Smith machine.
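
    A minimal sketch of the load-velocity idea underlying such monitoring: fit a linear relationship between mean velocity and relative load (%1RM), then invert it to estimate the relative load of a new repetition. The numbers below are illustrative, not the study's data.

```python
# Linear load-velocity fit and %1RM prediction from a measured mean velocity.
import numpy as np

load_pct = np.array([20, 30, 40, 50, 60, 70, 80, 90])          # %1RM
mean_vel = np.array([1.55, 1.40, 1.23, 1.06, 0.89, 0.72, 0.55, 0.38])  # m/s

a, b = np.polyfit(mean_vel, load_pct, deg=1)    # %1RM = a * MV + b
r = np.corrcoef(mean_vel, load_pct)[0, 1]
print(f"%1RM = {a:.1f} * MV + {b:.1f}   (r = {r:.3f})")

# Predict the relative load of a repetition performed at 0.80 m/s.
print("Predicted load at MV = 0.80 m/s:", round(a * 0.80 + b, 1), "%1RM")
```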

  6. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors

    Science.gov (United States)

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero velocity (ZV) detector algorithm to accurately calculate stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model based on the measurements of inertial sensors and kinesiology knowledge to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases by 80% compared with the traditional method at high walking speed. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude aspect. PMID:25831086
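
    A minimal sketch of a stance-phase (zero-velocity) detector: flag samples where both the accelerometer magnitude stays near gravity and the gyroscope magnitude stays small over a short sliding window. This is a generic threshold detector for illustration, not the Bayesian-network detector of the paper; thresholds and data are invented.

```python
import numpy as np

def detect_zero_velocity(acc, gyro, fs, win=0.1, acc_tol=0.5, gyro_tol=0.5, g=9.81):
    """acc, gyro: (N, 3) arrays in m/s^2 and rad/s; fs: sample rate in Hz."""
    n = len(acc)
    half = max(1, int(win * fs) // 2)
    acc_dev = np.abs(np.linalg.norm(acc, axis=1) - g)
    gyro_mag = np.linalg.norm(gyro, axis=1)
    zv = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        zv[i] = acc_dev[lo:hi].max() < acc_tol and gyro_mag[lo:hi].max() < gyro_tol
    return zv

# Toy usage: 1 s standing still followed by 1 s of motion, sampled at 100 Hz.
fs = 100
still = np.tile([0.0, 0.0, 9.81], (fs, 1))
moving = still + np.random.default_rng(2).normal(0, 2.0, (fs, 3))
acc = np.vstack([still, moving])
gyro = np.vstack([np.zeros((fs, 3)), np.random.default_rng(3).normal(0, 2.0, (fs, 3))])
print("Fraction of samples flagged stationary:", detect_zero_velocity(acc, gyro, fs).mean())
```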

  7. Flocking algorithm for autonomous flying robots.

    Science.gov (United States)

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
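
    A minimal sketch of the velocity-alignment ("viscous friction-like") term mentioned above: each agent relaxes its velocity toward the mean velocity of neighbours inside a communication radius. The 2D setting, gains and radii are illustrative, not the authors' model or parameters.

```python
import numpy as np

def alignment_step(pos, vel, radius=5.0, gain=1.0, dt=0.05):
    """pos, vel: (N, 2) arrays; returns the updated (pos, vel)."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        neigh = (dist < radius) & (dist > 0)
        if neigh.any():
            # friction-like term: pull toward the neighbours' mean velocity
            new_vel[i] += gain * (vel[neigh].mean(axis=0) - vel[i]) * dt
    return pos + new_vel * dt, new_vel

rng = np.random.default_rng(4)
pos = rng.uniform(0.0, 10.0, (20, 2))
vel = rng.normal(0.0, 1.0, (20, 2))
for _ in range(300):
    pos, vel = alignment_step(pos, vel)

# Polar order parameter: values near 1 mean the velocities are nearly aligned.
headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
print("Polar order parameter:", round(float(np.linalg.norm(headings.mean(axis=0))), 3))
```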

  8. Study on velocity distribution in a pool by submersible mixers

    International Nuclear Information System (INIS)

    Tian, F; Shi, W D; Lu, X N; Chen, B; Jiang, H

    2012-01-01

    To study the placement of submersible mixers and their agitating effect in a sewage treatment pool, Pro/E software was utilized to build the three-dimensional model. The large-scale computational fluid dynamics software FLUENT 6.3 was then used, with ICEM software employed to build an unstructured grid of the sewage treatment pool. The pool was numerically simulated using dynamic coordinate system technology, the RNG k-ε turbulence model, and the PISO algorithm. The macroscopic flow field and the velocity distribution in each section were analyzed to assess the efficiency of each submersible mixer, and the average velocity and mixing area in the pool were studied simultaneously. The results show that the preferred scheme is scheme B, in which both submersible mixers run at 980 r/min with setting angles of 30°; the mixed region then covers more than 95% of the pool. Under the action of the two mixers, the fluid in the pool forms a continuous circulating flow, is mixed adequately, and has an average velocity of around 0.241 m/s, which meets the operating requirements. Consequently, this method can provide a reference for the practical engineering application of submersible mixers.

  9. Real-Time Attitude Control Algorithm for Fast Tumbling Objects under Torque Constraint

    Science.gov (United States)

    Tsuda, Yuichi; Nakasuka, Shinichi

    This paper describes a new control algorithm for achieving any arbitrary attitude and angular velocity states of a rigid body, even fast and complicated tumbling rotations, under some practical constraints. This technique is expected to be applied for attitude motion synchronization to capture a non-cooperative, tumbling object in such missions as removal of debris from orbit, servicing of broken-down satellites for repair or inspection, rescue of manned vehicles, etc. For this objective, we introduced a novel control algorithm called the Free Motion Path Method (FMPM) in a previous paper, which was formulated as an open-loop controller. The next step of this consecutive work is to derive a closed-loop FMPM controller, and as a preliminary step toward that objective, this paper derives a conservative state-variable representation of rigid-body dynamics. Six-dimensional conservative state variables are introduced in place of the general angular velocity-attitude angle representation, and how to convert between the two representations is shown in this paper.

  10. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    Science.gov (United States)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive

  11. Interface Based on Electrooculography for Velocity Control of a Robot Arm

    Directory of Open Access Journals (Sweden)

    Eduardo Iáñez

    2010-01-01

    This paper describes a technique based on electrooculography to control a robot arm. The technique detects the movement of the eyes by measuring the potential difference between the cornea and the retina with electrodes placed around the ocular area. The processing algorithm developed to obtain the eye position and the blink of the user is explained. Apart from the direction, the output of the processing algorithm offers four different values (zero to three) to control the velocity of the robot arm according to how far the user is looking in one direction. This allows two degrees of freedom of a robot arm to be controlled with eye movements. Blinks have been used to mark targets in the tests. In this paper, the experimental results obtained with a real robot arm are shown.
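
    A toy sketch of the discretization idea described above, mapping a signed EOG amplitude to a direction plus a velocity level from 0 to 3. The thresholds and units are hypothetical; the paper's actual processing pipeline is not reproduced.

```python
def eog_to_command(eog_microvolts, thresholds=(50, 150, 300)):
    """Return (velocity_level, direction) from a signed EOG amplitude."""
    direction = 1 if eog_microvolts >= 0 else -1
    magnitude = abs(eog_microvolts)
    level = sum(magnitude > t for t in thresholds)   # 0, 1, 2 or 3
    return level, direction

for sample in (20, -120, 260, -500):
    print(sample, "->", eog_to_command(sample))
```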

  12. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    Science.gov (United States)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

    Application of distributed acoustic sensing (DAS) has been studied in several areas in seismology. One of the areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small number of high-SNR events will be detected across the large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated to time and space to determine hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to a small aperture, a minimum aperture threshold is employed. The algorithm refines the location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic

  13. Numerical simulation of bubble behavior in subcooled flow boiling under velocity and temperature gradient

    International Nuclear Information System (INIS)

    Bahreini, Mohammad; Ramiar, Abas; Ranjbar, Ali Akbar

    2015-01-01

    Highlights: • Condensing bubble is numerically investigated using the VOF model in the OpenFOAM package. • Bubble mass reduces as it goes through condensation and achieves higher velocities. • At a certain time, the slope of the bubble diameter versus time curve changes suddenly. • Larger bubbles experience more lateral migration to higher velocity regions. • Bubbles migrate back to a lower velocity region for higher liquid subcooling rates. - Abstract: In this paper, numerical simulation of bubble condensation in subcooled boiling flow is performed. The interface between the two phases is tracked via the volume of fluid (VOF) method with the continuous surface force (CSF) model, implemented in the open source OpenFOAM CFD package. In order to simulate the condensing bubble with the OpenFOAM code, the original energy equation and the mass transfer model for phase change have been modified and a new solver has been developed. The Newtonian flow is solved using a finite volume scheme based on the pressure implicit with splitting of operators (PISO) algorithm. Comparison of the simulation results with previous experimental data revealed that the model predicted the behavior of the actual condensing bubble well. The bubble lifetime is almost proportional to the initial bubble size and is prolonged by increasing the system pressure. In addition, the initial bubble size, liquid subcooling, and velocity gradient play an important role in the bubble deformation behavior. The velocity gradient makes the bubble move to the higher velocity region, and the subcooling makes it move back to the lower velocity region.

  14. Numerical simulation of bubble behavior in subcooled flow boiling under velocity and temperature gradient

    Energy Technology Data Exchange (ETDEWEB)

    Bahreini, Mohammad, E-mail: m.bahreini1990@gmail.com; Ramiar, Abas, E-mail: aramiar@nit.ac.ir; Ranjbar, Ali Akbar, E-mail: ranjbar@nit.ac.ir

    2015-11-15

    Highlights: • Condensing bubble is numerically investigated using the VOF model in the OpenFOAM package. • Bubble mass reduces as it goes through condensation and achieves higher velocities. • At a certain time, the slope of the bubble diameter versus time curve changes suddenly. • Larger bubbles experience more lateral migration to higher velocity regions. • Bubbles migrate back to a lower velocity region for higher liquid subcooling rates. - Abstract: In this paper, numerical simulation of bubble condensation in subcooled boiling flow is performed. The interface between the two phases is tracked via the volume of fluid (VOF) method with the continuous surface force (CSF) model, implemented in the open source OpenFOAM CFD package. In order to simulate the condensing bubble with the OpenFOAM code, the original energy equation and the mass transfer model for phase change have been modified and a new solver has been developed. The Newtonian flow is solved using a finite volume scheme based on the pressure implicit with splitting of operators (PISO) algorithm. Comparison of the simulation results with previous experimental data revealed that the model predicted the behavior of the actual condensing bubble well. The bubble lifetime is almost proportional to the initial bubble size and is prolonged by increasing the system pressure. In addition, the initial bubble size, liquid subcooling, and velocity gradient play an important role in the bubble deformation behavior. The velocity gradient makes the bubble move to the higher velocity region, and the subcooling makes it move back to the lower velocity region.

  15. GSpecDisp: A matlab GUI package for phase-velocity dispersion measurements from ambient-noise correlations

    Science.gov (United States)

    Sadeghisorkhani, Hamzeh; Gudmundsson, Ólafur; Tryggvason, Ari

    2018-01-01

    We present a graphical user interface (GUI) package to facilitate phase-velocity dispersion measurements of surface waves in noise-correlation traces. The package, called GSpecDisp, provides an interactive environment for the measurements and presentation of the results. The selection of a dispersion curve can be done automatically or manually within the package. The data are time-domain cross-correlations in SAC format, but GSpecDisp measures phase velocity in the spectral domain. Two types of phase-velocity dispersion measurements can be carried out with GSpecDisp; (1) average velocity of a region, and (2) single-pair phase velocity. Both measurements are done by matching the real part of the cross-correlation spectrum with the appropriate Bessel function. Advantages of these two types of measurements are that no prior knowledge about surface-wave dispersion in the region is needed, and that phase velocity can be measured up to that period for which the inter-station distance corresponds to one wavelength. GSpecDisp can measure the phase velocity of Rayleigh and Love waves from all possible components of the noise correlation tensor. First, we briefly present the theory behind the methods that are used, and then describe different modules of the package. Finally, we validate the developed algorithms by applying them to synthetic and real data, and by comparison with other methods. The source code of GSpecDisp can be downloaded from: https://github.com/Hamzeh-Sadeghi/GSpecDisp
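
    A minimal sketch of the spectral-matching idea behind such phase-velocity measurements: the real part of the inter-station cross-spectrum is compared with the Bessel function J0(omega * r / c), and the best-fitting velocity is picked by a grid search. For simplicity a single, frequency-independent average velocity is estimated from synthetic data; this is an illustration of the principle, not the GSpecDisp implementation.

```python
import numpy as np
from scipy.special import j0

r = 30_000.0                                   # inter-station distance (m)
freqs = np.linspace(0.05, 0.30, 200)           # Hz
c_true = 3000.0                                # synthetic average phase velocity (m/s)
observed = j0(2 * np.pi * freqs * r / c_true)  # "observed" real cross-spectrum

c_grid = np.linspace(1500.0, 4500.0, 601)
misfit = [np.sum((j0(2 * np.pi * freqs * r / c) - observed) ** 2) for c in c_grid]
c_est = c_grid[int(np.argmin(misfit))]
print(f"Estimated average phase velocity: {c_est:.0f} m/s (true: {c_true:.0f} m/s)")
```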

  16. A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows

    International Nuclear Information System (INIS)

    Cardwell, Nicholas D; Vlachos, Pavlos P; Thole, Karen A

    2011-01-01

    Multiphase flows (MPFs) offer a rich area of fundamental study with many practical applications. Examples of such flows range from the ingestion of foreign particulates in gas turbines to transport of particles within the human body. Experimental investigation of MPFs, however, is challenging, and requires techniques that simultaneously resolve both the carrier and discrete phases present in the flowfield. This paper presents a new multi-parametric particle-pairing algorithm for particle tracking velocimetry (MP3-PTV) in MPFs. MP3-PTV improves upon previous particle tracking algorithms by employing a novel variable pair-matching algorithm which utilizes displacement preconditioning in combination with estimated particle size and intensity to more effectively and accurately match particle pairs between successive images. To improve the method's efficiency, a new particle identification and segmentation routine was also developed. Validation of the new method was initially performed on two artificial data sets: a traditional single-phase flow published by the Visualization Society of Japan (VSJ) and an in-house generated MPF data set having a bi-modal distribution of particles diameters. Metrics of the measurement yield, reliability and overall tracking efficiency were used for method comparison. On the VSJ data set, the newly presented segmentation routine delivered a twofold improvement in identifying particles when compared to other published methods. For the simulated MPF data set, measurement efficiency of the carrier phases improved from 9% to 41% for MP3-PTV as compared to a traditional hybrid PTV. When employed on experimental data of a gas–solid flow, the MP3-PTV effectively identified the two particle populations and reported a vector efficiency and velocity measurement error comparable to measurements for the single-phase flow images. Simultaneous measurement of the dispersed particle and the carrier flowfield velocities allowed for the calculation of

  17. Referencing geostrophic velocities using ADCP data

    Directory of Open Access Journals (Sweden)

    Isis Comas-Rodríguez

    2010-06-01

    Acoustic Doppler Current Profilers (ADCPs) have proven to be a useful oceanographic tool in the study of ocean dynamics. Data from D279, a transatlantic hydrographic cruise carried out in spring 2004 along 24.5°N, were processed, and lowered ADCP (LADCP) bottom track data were used to assess the choice of reference velocity for geostrophic calculations. The reference velocities from different combinations of ADCP data were compared to one another and a reference velocity was chosen based on the LADCP data. The barotropic tidal component was subtracted to provide a final reference velocity estimated from the LADCP data. The results of the velocity fields are also shown. Further studies involving inverse solutions will include the reference velocity calculated here.

  18. Solid phase stability of molybdenum under compression: Sound velocity measurements and first-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiulu [Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, P.O. Box 919-102, 621900 Mianyang, Sichuan (China); Laboratory for Extreme Conditions Matter Properties, Southwest University of Science and Technology, 621010 Mianyang, Sichuan (China); Liu, Zhongli [Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, P.O. Box 919-102, 621900 Mianyang, Sichuan (China); College of Physics and Electric Information, Luoyang Normal University, 471022 Luoyang, Henan (China); Jin, Ke; Xi, Feng; Yu, Yuying; Tan, Ye; Dai, Chengda; Cai, Lingcang [Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, P.O. Box 919-102, 621900 Mianyang, Sichuan (China)

    2015-02-07

    The high-pressure solid phase stability of molybdenum (Mo) has been the center of a long-standing controversy on its high-pressure melting. In this work, experimental and theoretical researches have been conducted to check its solid phase stability under compression. First, we performed sound velocity measurements from 38 to 160 GPa using the two-stage light gas gun and explosive loading in backward- and forward-impact geometries, along with the high-precision velocity interferometry. From the sound velocities, we found no solid-solid phase transition in Mo before shock melting, which does not support the previous solid-solid phase transition conclusion inferred from the sharp drops of the longitudinal sound velocity [Hixson et al., Phys. Rev. Lett. 62, 637 (1989)]. Then, we searched its structures globally using the multi-algorithm collaborative crystal structure prediction technique combined with the density functional theory. By comparing the enthalpies of body centered cubic structure with those of the metastable structures, we found that bcc is the most stable structure in the range of 0–300 GPa. The present theoretical results together with previous ones greatly support our experimental conclusions.

  19. Application of Genetic Algorithms in Seismic Tomography

    Science.gov (United States)

    Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet; Papazachos, Constantinos

    2010-05-01

    The application of hybrid genetic algorithms in seismic tomography is examined, and the efficiency of least-squares and genetic methods, as representatives of local and global optimization respectively, is presented and evaluated. The robustness of both optimization methods has been tested and compared for the same source-receiver geometry and characteristics of the model structure (anomalies, etc.). A set of synthetic (noise free) seismic refraction data was used for modeling. Specifically, cross-well, down-hole and typical refraction studies using 24 geophones and 5 shots were used to confirm the applicability of genetic algorithms in seismic tomography. To solve the forward problem and estimate the traveltimes, the revisited ray bending method was used, supplemented by an approximate computation of the first Fresnel volume. The root mean square (rms) error was used as the misfit function and was calculated for each random velocity model in each generation. At the end of each generation, and based on the misfit of the individuals (velocity models), the typical genetic-algorithm steps of selection, crossover and mutation were applied to encode the new generation. To optimize the computation time, since the whole procedure is quite time consuming, the Matlab Distributed Computing Environment (MDCE) was used in a multicore engine. During the tests, we noticed that the fast convergence that the algorithm initially exhibits (first 5 generations) is followed by progressively slower improvements of the reconstructed velocity models. Thus, to improve the final tomographic models, a hybrid genetic algorithm (GA) approach was adopted by combining the GAs with a local optimization method after several generations, on the basis of the convergence of the resulting models. This approach is shown to be efficient, as it directs the solution search towards a model region close to the global minimum solution.
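
    A minimal sketch of the selection/crossover/mutation loop driven by an rms traveltime misfit, run on a toy three-cell straight-ray problem. It stands in for, and is much simpler than, the bending-ray GA described above; all geometry and values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
L = rng.uniform(50, 300, size=(12, 3))           # ray path lengths in 3 cells (m)
s_true = np.array([1/1500, 1/2500, 1/4000])      # true slownesses (s/m)
t_obs = L @ s_true                               # observed traveltimes (s)

def rms(pop):                                    # rms misfit of each individual
    return np.sqrt(np.mean((L @ pop.T - t_obs[:, None]) ** 2, axis=0))

pop = rng.uniform(1/6000, 1/1000, size=(60, 3))  # random initial slowness models
for gen in range(200):
    fit = rms(pop)
    order = np.argsort(fit)
    parents = pop[order[:30]]                            # selection: keep best half
    mates = parents[rng.permutation(30)]
    alpha = rng.uniform(size=(30, 1))
    children = alpha * parents + (1 - alpha) * mates      # arithmetic crossover
    children += rng.normal(0, 1e-5, children.shape)       # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin(rms(pop))]
print("Recovered velocities (m/s):", np.round(1 / best, 0))
```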

  20. Free-surface velocity measurements using an optically recording velocity interferometer

    International Nuclear Information System (INIS)

    Lu Jianxin; Wang Zhao; Liang Jing; Shan Yusheng; Zhou Chuangzhi; Xiang Yihuai; Lu Ze; Tang Xiuzhang

    2006-01-01

    An optically recording velocity interferometer system (ORVIS) was developed for free-surface velocity measurements in equation of state experiments. The time history of the free-surface velocity is recorded by an electronic streak camera. In the experiments, ORVIS achieved a time resolution of 179 ps, and a higher time resolution could be obtained by minimizing the delay time. The equation of state experiments were carried out on the high-power excimer laser system 'Heaven I', with a laser wavelength of 248.4 nm, a pulse duration of 25 ns and a maximum energy of 158 J. The free-surface velocity of 20 μm thick iron reached 3.86 km/s at a laser intensity of 6.24 x 10^11 W·cm^-2, and the free-surface velocity of 100 μm thick aluminum with a 100 μm CH foil at the front reached 2.87 km/s at a laser intensity of 7.28 x 10^11 W·cm^-2. (authors)

  1. A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis

    Directory of Open Access Journals (Sweden)

    Jonathan Bruce Shepherd

    2016-08-01

    With the rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost-effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though a variety of algorithms have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximating trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research is conducted under Griffith University Ethics (GU Ref No: 2016/294).

  2. On linear relationship between shock velocity and particle velocity

    International Nuclear Information System (INIS)

    Dandache, H.

    1986-11-01

    We attempt to derive the linear relationship between shock velocity U_s and particle velocity U_p from thermodynamic considerations, taking into account an ideal gas equation of state and a Mie-Grueneisen equation of state for solids. 23 refs
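
    For reference, a compact statement of the standard relations involved (general shock-physics background, not the paper's derivation): the Rankine-Hugoniot jump conditions for mass and momentum, together with the widely used empirical linear U_s-U_p fit.

```latex
% Standard jump conditions and the empirical linear Hugoniot fit.
\begin{align}
  \rho_0 U_s &= \rho\,(U_s - U_p)      && \text{(conservation of mass)} \\
  P - P_0    &= \rho_0\, U_s\, U_p     && \text{(conservation of momentum)} \\
  U_s        &\approx c_0 + s\, U_p    && \text{(empirical linear fit)}
\end{align}
```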

  3. Determination of the filtration velocities and mean velocity in ground waters using radiotracers

    International Nuclear Information System (INIS)

    Duran P, Oscar; Diaz V, Francisco; Heresi M, Nelida

    1994-01-01

    An experimental method to determine the filtration (Darcy) velocity and the mean velocity in groundwater using radiotracers is described. After selecting the most appropriate tracers among 6 chemical compounds for measuring water velocity, a method to measure the filtration velocity was developed. By fully labelling the water column with 2 radioisotopes, Br and tritium, almost identical values were obtained for the aquifer filtration velocity in sounding S1. This value was 0.04 m/d. Field porosity was calculated at 11% and the mean velocity at 0.37 m/d. With the filtration velocity value and knowing the hydraulic variation between soundings S1 and S2, placed 10 meters apart, field permeability was estimated at 2.4 x 10 m/s. (author)
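
    The quoted figures are mutually consistent through the standard relation between Darcy (filtration) velocity, effective porosity, and mean (seepage) velocity; a one-line check:

```python
# Mean (seepage) velocity = Darcy velocity / effective porosity.
darcy_velocity = 0.04   # m/d, from the record
porosity = 0.11         # 11%
print(f"Mean velocity = {darcy_velocity / porosity:.2f} m/d")  # ~0.36 m/d, close to the quoted 0.37 m/d
```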

  4. The Algorithm of Determination of Eye Fundus Vessels Blood Flow Characteristics on Videosequence

    Directory of Open Access Journals (Sweden)

    O. V. Nedzvedz

    2018-01-01

    A method for determining dynamic characteristics such as the change in vessel diameter and the linear and volumetric blood velocities in the vessels of the eye fundus is considered. Such characteristics make it possible to detect blood flow changes in the microvasculature that affect the blood flow in the brain, kidneys and coronary vessels. The developed algorithm includes four stages: stabilization of the video sequence, segmentation of the vessels with the help of a neural network, determination of the instantaneous velocity in the vessels based on optical flow, and analysis of the results.

  5. Frequentist and Bayesian Orbital Parameter Estimation from Radial Velocity Data Using RVLIN, BOOTTRAN, and RUN DMC

    Science.gov (United States)

    Nelson, Benjamin Earl; Wright, Jason Thomas; Wang, Sharon

    2015-08-01

    For this hack session, we will present three tools used in analyses of radial velocity exoplanet systems. RVLIN is a set of IDL routines used to quickly fit an arbitrary number of Keplerian curves to radial velocity data to find adequate parameter point estimates. BOOTTRAN is an IDL-based extension of RVLIN to provide orbital parameter uncertainties using bootstrap based on a Keplerian model. RUN DMC is a highly parallelized Markov chain Monte Carlo algorithm that employs an n-body model, primarily used for dynamically complex or poorly constrained exoplanet systems. We will compare the performance of these tools and their applications to various exoplanet systems.
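
    A minimal sketch of fitting a radial-velocity curve, assuming a circular orbit so that RV(t) = K sin(2 pi t / P + phi) + gamma. This is a toy stand-in for the Keplerian fits performed by RVLIN or RUN DMC, not those codes themselves; all data and starting values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def rv_model(t, period, K, phase, gamma):
    return K * np.sin(2 * np.pi * t / period + phase) + gamma

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 200, 40))                                   # epochs (days)
rv = rv_model(t, 37.5, 55.0, 0.4, -3.0) + rng.normal(0, 3.0, t.size)   # m/s

p0 = [37.0, 50.0, 0.0, 0.0]                                            # initial guesses
popt, pcov = curve_fit(rv_model, t, rv, p0=p0, sigma=np.full(t.size, 3.0))
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(["P (d)", "K (m/s)", "phase", "gamma"], popt, perr):
    print(f"{name}: {val:.2f} +/- {err:.2f}")
```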

  6. Radial velocity asymmetries from jets with variable velocity profiles

    International Nuclear Information System (INIS)

    Cerqueira, A. H.; Vasconcelos, M. J.; Velazquez, P. F.; Raga, A. C.; De Colle, F.

    2006-01-01

    We have computed a set of 3-D numerical simulations of radiatively cooling jets including variabilities in both the ejection direction (precession) and the jet velocity (intermittence), using the Yguazu-a code. In order to investigate the effects of jet rotation on the shape of the line profiles, we also introduce an initial toroidal rotation velocity profile. Since the Yguazu-a code includes an atomic/ionic network, we are able to compute the emission coefficients for several emission lines, and we generate line profiles for the Hα, [O I]λ6300, [S II]λ6716 and [N II]λ6548 lines. Using initial parameters that are suitable for the DG Tau microjet, we show that the computed radial velocity shift for the medium-velocity component of the line profile as a function of distance from the jet axis is strikingly similar for rotating and non-rotating jet models

  7. VSMURF: A Novel Sliding Window Cleaning Algorithm for RFID Networks

    Directory of Open Access Journals (Sweden)

    He Xu

    2017-01-01

    Radio Frequency Identification (RFID) is one of the key technologies of the Internet of Things (IoT) and is used in many areas, such as mobile payments, public transportation, smart locks, and environmental protection. However, the performance of RFID equipment can be easily affected by the surrounding environment, such as electronic products and metal appliances. These can affect the RF signal, which makes the collection of RFID data unreliable. Usually, the unreliability of RFID source data includes three aspects: false negatives, false positives, and dirty data. False negatives are the key problem, as the probability of false positives and dirty data occurrence is relatively small. This paper proposes a novel sliding window cleaning algorithm called VSMURF, which builds on the traditional SMURF algorithm and combines the dynamic change of tags with a value analysis of confidence. Experimental results show that the VSMURF algorithm performs better in most conditions, whether the tag's speed is low or high. In particular, if the velocity parameter is set to 2 m/epoch, our proposed VSMURF algorithm performs better than SMURF. The results also show that the VSMURF algorithm has better performance than other algorithms in solving the problem of false negatives for RFID networks.

  8. The Measurement of cloud velocity using the pulsed laser and image tracking technique

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Baik, Seung-Hoon; Park, Seung-Kyu; Park, Nak-Gyu; Kim, Dong-lyul; Ahn, Yong-Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    The height of clouds is also important for the three-dimensional radiative interaction of aerosols and clouds, since the radiative effects vary strongly depending on whether the cloud is above, below or even embedded in an aerosol layer. Clouds play an important role in climate change, in the prediction of local weather, and also in aviation safety when instrument-assisted flying is unavailable. Presently, various ground-based instruments are used for measurements of the cloud base height or velocity. Lidar techniques are powerful and have many applications in climate studies, including cloud temperature measurement and aerosol particle properties, but they are very limited for cloud velocity measurements. In this paper, we present a method for measuring cloud altitude and velocity using the lidar's range detection together with an image tracking system. For the lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter to measure the distance to the target clouds, and we used the DIC system to track the cloud image and calculate the actual displacement per unit time. The configured lidar system acquired the lidar signal of clouds at a distance of about 4 km. The fast correlation tracking algorithm, developed to follow relatively fast-moving clouds, was efficient for measuring the cloud velocity in real time. The measurement values had a linear distribution.

  9. Low velocity gunshot wounds result in significant contamination regardless of ballistic characteristics.

    Science.gov (United States)

    Weinstein, Joseph; Putney, Emily; Egol, Kenneth

    2014-01-01

    Controversy exists among the orthopedic community regarding the treatment of gunshot injuries. No consistent algorithm exists for the treatment of low-energy gunshot wound (GSW) trauma. The purpose of this study was to critically examine the wound contamination following low velocity GSW based upon bullet caliber and clothing fiber type found within the injury track. Four types of handguns were fired at ballistic gel from a 10-foot distance. Various clothing materials (denim, cotton, polyester, and wool) were applied circumferentially around the tissue agar in a loose manner. A total of 32 specimens were examined. Each caliber handgun was fired a minimum of 5 times into a gel. Regardless of bullet caliber there was gross contamination of the entire bullet track in 100% of specimens in all scenarios and for all fiber types. Furthermore, as would be expected, the degree of contamination appeared to increase as the size of the bullet increased. Low velocity GSWs result in significant contamination regardless of bullet caliber and jacket type. Based upon our results, further investigation of low velocity GSW tracks is warranted. Further clinical investigation should focus on the degree to which debridement should be undertaken.

  10. Propagation of the Semidiurnal Internal Tide: Phase Velocity Versus Group Velocity

    Science.gov (United States)

    Zhao, Zhongxiang

    2017-12-01

    The superposition of two waves of slightly different wavelengths has long been used to illustrate the distinction between phase velocity and group velocity. The first-mode M2 and S2 internal tides exemplify such a two-wave model in the natural ocean. The M2 and S2 tidal frequencies are 1.932 and 2 cycles per day, respectively, and their superposition forms a spring-neap cycle in the semidiurnal band. The spring-neap cycle acts like a wave, with its frequency, wave number, and phase being the differences of the M2 and S2 internal tides. The spring-neap cycle and energy of the semidiurnal internal tide propagate at the group velocity. Long-range propagation of M2 and S2 internal tides in the North Pacific is observed by satellite altimetry. Along a 3,400 km beam spanning 24°-54°N, the M2 and S2 travel times are 10.9 and 11.2 days, respectively. For comparison, it takes the spring-neap cycle 21.1 days to travel over this distance. Spatial maps of the M2 phase velocity, the S2 phase velocity, and the group velocity are determined from phase gradients of the corresponding satellite observed internal tide fields. The observed phase and group velocities agree with theoretical values estimated using the World Ocean Atlas 2013 annual-mean ocean stratification.
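
    A minimal numerical sketch of the two-wave picture described above: phase speeds implied by the quoted M2/S2 travel times over the 3,400 km beam give wavenumbers, and the envelope (spring-neap) speed follows from delta_omega / delta_k. The simple average-speed treatment is only approximate, so the envelope travel time comes out near, but not exactly at, the 21.1 days quoted in the abstract.

```python
import numpy as np

distance = 3.4e6                     # beam length (m)
day = 86400.0
f_m2, f_s2 = 1.932, 2.0              # frequencies (cycles per day)
t_m2, t_s2 = 10.9 * day, 11.2 * day  # observed phase travel times (s)

c_m2, c_s2 = distance / t_m2, distance / t_s2                # phase speeds (m/s)
w_m2, w_s2 = 2 * np.pi * f_m2 / day, 2 * np.pi * f_s2 / day  # rad/s
k_m2, k_s2 = w_m2 / c_m2, w_s2 / c_s2                        # wavenumbers (rad/m)

group_speed = (w_s2 - w_m2) / (k_s2 - k_m2)
print(f"M2 phase speed: {c_m2:.2f} m/s, S2 phase speed: {c_s2:.2f} m/s")
print(f"Spring-neap (beat) period: {1.0 / (f_s2 - f_m2):.1f} days")
print(f"Group speed of the envelope: {group_speed:.2f} m/s")
print(f"Envelope travel time over the beam: {distance / group_speed / day:.1f} days")
```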

  11. Digital signal processing for velocity measurements in dynamical material's behaviour studies

    International Nuclear Information System (INIS)

    Devlaminck, Julien; Luc, Jerome; Chanal, Pierre-Yves

    2014-01-01

    In this work, we describe different configurations of optical fiber interferometers (Michelson and Mach-Zehnder types) used to measure velocities in studies of dynamic material behaviour. We detail the processing algorithms developed and optimized to improve the performance of these interferometers, especially in terms of time and frequency resolution. Three methods for analyzing the interferometric signals were studied. For Michelson interferometers, time-frequency analysis of the signals by the Short-Time Fourier Transform (STFT) is compared with time-frequency analysis by the Continuous Wavelet Transform (CWT). The results show that the CWT was more suitable than the STFT for signals with a low signal-to-noise ratio and for regions of low velocity and high acceleration. For Mach-Zehnder interferometers, the measurement is carried out by analyzing the phase shift between three interferometric signals (triature processing). These three methods of digital signal processing were evaluated, their measurement uncertainties estimated, and their restrictions or operational limitations specified from experimental results obtained on a pulsed power machine. (authors)
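
    A minimal sketch of a time-frequency velocity extraction of the STFT type discussed above, assuming a heterodyne-style configuration in which the beat frequency is proportional to velocity (v = lambda * f / 2, as in photon Doppler velocimetry). The signal, sample rate and velocity history are synthetic and illustrative, not the authors' setup.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1.0e9                                    # sample rate (Hz)
lam = 1550e-9                                 # laser wavelength (m)
t = np.arange(0, 50e-6, 1 / fs)
v_true = 200.0 * (1 - np.exp(-t / 10e-6))     # surface velocity history (m/s)
phase = 2 * np.pi * np.cumsum(2 * v_true / lam) / fs  # beat phase, f_beat = 2 v / lambda
signal = np.cos(phase) + 0.2 * np.random.default_rng(7).normal(size=t.size)

f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=2048, noverlap=1536)
v_est = lam * f[np.argmax(Sxx, axis=0)] / 2.0  # spectrogram ridge -> velocity

print(f"final true velocity:      {v_true[-1]:.1f} m/s")
print(f"final estimated velocity: {v_est[-1]:.1f} m/s")
```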

  12. Determination of velocity correction factors for real-time air velocity monitoring in underground mines.

    Science.gov (United States)

    Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-12-01

    When air velocity sensors are installed in the mining industry for real-time airflow monitoring, a question arises as to how the monitored air velocity at a fixed location corresponds to the average air velocity, which together with the cross-sectional area determines the volume flow rate of air in an entry. Correction factors have been employed in practice to convert a measured centerline air velocity to the average air velocity. However, studies on recommended correction factors relating the sensor-measured air velocity to the average air velocity over a cross section are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods: single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed.
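
    A minimal sketch of deriving such a correction factor: the average velocity over a traverse of the entry cross-section divided by the velocity recorded at the sensor's fixed (here, centerline) location. All point velocities and the entry area are illustrative.

```python
import numpy as np

# Hypothetical point velocities (m/s) measured on a 3 x 5 traverse grid.
traverse = np.array([
    [1.8, 2.0, 2.1, 2.0, 1.7],
    [2.0, 2.3, 2.4, 2.2, 1.9],
    [1.7, 1.9, 2.0, 1.9, 1.6],
])
avg_velocity = traverse.mean()
sensor_velocity = traverse[1, 2]          # the centerline reading

correction = avg_velocity / sensor_velocity
print(f"Correction factor: {correction:.2f}")
print(f"Estimated volume flow for a 6 m^2 entry: {correction * sensor_velocity * 6:.1f} m^3/s")
```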

  13. Inter- and Intrasubject Similarity of Muscle Synergies During Bench Press With Slow and Fast Velocity.

    Science.gov (United States)

    Samani, Afshin; Kristiansen, Mathias

    2018-01-01

    We investigated the effect of low and high bar velocity on inter- and intrasubject similarity of muscle synergies during bench press. A total of 13 trained male subjects underwent two exercise conditions: a slow- and a fast-velocity bench press. Surface electromyography was recorded from 13 muscles, and muscle synergies were extracted using a nonnegative matrix factorization algorithm. The intrasubject similarity across conditions and intersubject similarity within conditions were computed for muscle synergy vectors and activation coefficients. Two muscle synergies were sufficient to describe the dataset variability. For the second synergy activation coefficient, the intersubject similarity within the fast-velocity condition was greater than the intrasubject similarity of the activation coefficient across the conditions. An opposite pattern was observed for the first muscle synergy vector. We concluded that the activation coefficients are robust within conditions, indicating a robust temporal pattern of muscular activity across individuals, but the muscle synergy vector seemed to be individually assigned.
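
    A minimal sketch of extracting muscle synergies with non-negative matrix factorization: an EMG envelope matrix (muscles x time) is factorized into synergy vectors W and activation coefficients H. Synthetic data stand in for the 13-muscle bench-press recordings described above, and the variance-accounted-for check is a generic quality metric, not the study's analysis.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)
n_muscles, n_samples, n_synergies = 13, 500, 2
W_true = rng.uniform(0, 1, (n_muscles, n_synergies))
t = np.linspace(0, 1, n_samples)
H_true = np.vstack([np.abs(np.sin(np.pi * t)), np.abs(np.sin(2 * np.pi * t))])
emg = W_true @ H_true + 0.02 * rng.uniform(0, 1, (n_muscles, n_samples))

model = NMF(n_components=n_synergies, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(emg)      # muscle synergy vectors (13 x 2)
H = model.components_             # activation coefficients  (2 x 500)
vaf = 1 - np.linalg.norm(emg - W @ H) ** 2 / np.linalg.norm(emg) ** 2
print(f"Variance accounted for by {n_synergies} synergies: {vaf:.3f}")
```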

  14. MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning

    International Nuclear Information System (INIS)

    Morris, E.D.; Glide-Hurst, C.; Klahr, P.

    2016-01-01

    Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines neighboring data to a phase reconstruction point and the temporal gate is widened until the difference between the reconstruction point and waveform exceeds a threshold value—defined as percent difference between maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived such as the liver. Conclusion: We demonstrated that patient

  15. MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning

    Energy Technology Data Exchange (ETDEWEB)

    Morris, E.D.; Glide-Hurst, C. [Henry Ford Health System, Detroit, MI (United States); Wayne State University, Detroit, MI (United States); Klahr, P. [Philips Healthcare, Cleveland, Ohio (United States)

    2016-06-15

    Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines neighboring data to a phase reconstruction point and the temporal gate is widened until the difference between the reconstruction point and waveform exceeds a threshold value—defined as percent difference between maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived such as the liver. Conclusion: We demonstrated that patient

  16. Uncertainty assessment of 3D instantaneous velocity model from stack velocities

    Science.gov (United States)

    Emanuele Maesano, Francesco; D'Ambrogi, Chiara

    2015-04-01

    3D modelling is a powerful tool that is experiencing increasing applications in data analysis and dissemination. At the same time, quantitative uncertainty evaluation is strongly requested in many aspects of the geological sciences and by the stakeholders. In many cases the starting point for 3D model building is the interpretation of seismic profiles that provide indirect information about the geology of the subsurface in the domain of time. The most problematic step in the 3D modelling construction is the conversion of the horizons and faults interpreted in the time domain to the depth domain. In this step the dominant variable that could lead to significantly different results is the velocity. The knowledge of the subsurface velocities is related mainly to point data (sonic logs) that are often sparsely distributed in the areas covered by the seismic interpretation. The extrapolation of velocity information to widely extended horizons is thus a critical step to obtain a 3D model in depth that can be used for predictive purposes. In the EU-funded GeoMol Project, the availability of a dense network of seismic lines (confidentially provided by ENI S.p.A.) in the Central Po Plain, is paired with the presence of 136 well logs, but few of them have sonic logs and in some portions of the area the wells are very widely spaced. The depth conversion of the 3D model in the time domain has been performed by testing different strategies for the use and the interpolation of velocity data. The final model has been obtained using a four-layer-cake 3D instantaneous velocity model that considers both the initial velocity (v0) in every reference horizon and the gradient of velocity variation with depth (k). Using this method it is possible to consider both the geological constraint given by the geometries of the horizons and the geo-statistical approach to the interpolation of velocities and gradients. Here we present an experiment based on the use of a set of pseudo-wells obtained from the
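    For a layer with an instantaneous velocity law v(z) = v0 + k·z, the depth of a horizon picked at two-way time t below the top of the layer follows in closed form, which is presumably the kind of conversion referred to above. The sketch below is generic and uses made-up values of v0, k and travel time; it is not the GeoMol workflow or its calibration.

```python
import numpy as np

def depth_from_twt(twt_s, v0, k):
    """Depth below the reference horizon for a pick at two-way time twt_s,
    assuming an instantaneous velocity law v(z) = v0 + k*z within the layer."""
    t_one_way = twt_s / 2.0
    if abs(k) < 1e-9:                 # constant-velocity limit
        return v0 * t_one_way
    return (v0 / k) * np.expm1(k * t_one_way)

# made-up example: v0 = 1800 m/s at the reference horizon, gradient k = 0.4 1/s
for twt in (0.5, 1.0, 1.5):           # two-way times in seconds
    print(f"TWT {twt:.1f} s -> depth {depth_from_twt(twt, 1800.0, 0.4):.0f} m")
```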

  17. Velocity Feedback Experiments

    Directory of Open Access Journals (Sweden)

    Chiu Choi

    2017-02-01

    Full Text Available Transient response such as ringing in a control system can be reduced or removed by velocity feedback. It is a useful control technique that should be covered in the relevant engineering laboratory courses. We developed velocity feedback experiments using two different low-cost technologies, viz., operational amplifiers and microcontrollers. These experiments can be easily integrated into laboratory courses on feedback control systems or microcontroller applications. The intent of developing these experiments was to illustrate the ringing problem and to offer effective, low-cost solutions for removing it. In this paper the pedagogical approach for these velocity feedback experiments is described, and the advantages and disadvantages of the two different implementations of velocity feedback are also discussed.
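    The hardware details of the paper are not reproduced here, but the effect of velocity feedback on ringing can be shown with a discrete-time simulation of a simple plant. The sketch below uses a double-integrator plant and arbitrary gains chosen only for illustration; it is not the op-amp or microcontroller implementation described in the experiments.

```python
import numpy as np

def step_response(kp, kv, dt=0.001, t_end=5.0, setpoint=1.0):
    """Step response of a double-integrator plant x'' = u under proportional
    control with optional velocity feedback: u = kp*(r - x) - kv*x_dot."""
    x, v = 0.0, 0.0
    xs = np.empty(int(t_end / dt))
    for i in range(xs.size):
        u = kp * (setpoint - x) - kv * v   # velocity feedback term adds damping
        v += u * dt                        # semi-implicit Euler integration
        x += v * dt
        xs[i] = x
    return xs

kp = 25.0                                  # arbitrary illustrative gains
print("peak without velocity feedback:", step_response(kp, kv=0.0).max())   # rings, ~2x overshoot
print("peak with velocity feedback:   ", step_response(kp, kv=10.0).max())  # ringing removed
```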

  18. The Parameters Selection of PSO Algorithm influencing On performance of Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    He Yan

    2016-01-01

    Full Text Available Particle swarm optimization (PSO) is an algorithm based on swarm intelligence. Parameter selection for PSO plays an important role in the performance and efficiency of the algorithm. In this paper, the performance of PSO is analyzed as the control parameters vary, including the number of particles, the acceleration constants, the inertia weight and the maximum velocity limit. PSO with dynamic parameters is then applied to neural network training for gearbox fault diagnosis, and the results obtained with different PSO parameters are compared and analyzed. Finally, some suggestions for parameter selection are proposed to improve the performance of PSO.
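    For reference, the parameters discussed above (particle number, acceleration constants, inertia weight and the velocity limit) all enter the standard PSO update step. The sketch below is a generic, minimal PSO applied to a toy sphere function, not the authors' fault-diagnosis setup; all parameter values are placeholders.

```python
import numpy as np

def pso(fobj, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, v_max=0.5):
    """Minimal particle swarm optimizer showing the role of the main control
    parameters: inertia weight w, acceleration constants c1/c2, swarm size,
    and the velocity clamp v_max."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fobj(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)          # maximum velocity limit
        x = x + v
        f = np.array([fobj(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best, best_f = pso(lambda p: np.sum(p ** 2), dim=5)   # toy sphere function
print(best_f)
```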

  19. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  20. Algorithm for automatic analysis of electro-oculographic data.

    Science.gov (United States)

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

    Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement.
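    The exact feature set behind the auto-calibration is not given in the abstract; the sketch below only illustrates the general idea of deriving an amplitude threshold from the recorded signal itself and then flagging candidate events, using a synthetic trace and an arbitrary robust-statistics rule (a multiple of the median absolute deviation of the signal derivative).

```python
import numpy as np

def detect_events(eog, fs, k=6.0):
    """Toy EOG event detector: derive a velocity threshold from the signal's own
    statistics (auto-calibration) and flag samples exceeding it."""
    d = np.diff(eog) * fs                         # approximate EOG velocity
    mad = np.median(np.abs(d - np.median(d)))     # robust spread estimate
    thr = k * mad                                 # assumed auto-calibration rule
    idx = np.flatnonzero(np.abs(d) > thr)
    events = []                                   # merge consecutive samples into events
    if idx.size:
        start = prev = idx[0]
        for i in idx[1:]:
            if i > prev + 1:
                events.append((start, prev))
                start = i
            prev = i
        events.append((start, prev))
    return thr, events

fs = 250.0                                        # assumed sampling rate, Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
eog = np.where(t > 2.0, 1.0, 0.0)                 # step mimicking a saccade at t = 2 s
eog = eog + 0.01 * np.random.default_rng(1).standard_normal(t.size)
print(detect_events(eog, fs)[1])                  # one event near sample 500
```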

  1. Kalker's algorithm Fastsim solves tangential contact problems with slip-dependent friction and friction anisotropy

    Science.gov (United States)

    Piotrowski, J.

    2010-07-01

    This paper presents two extensions of Kalker's algorithm Fastsim of the simplified theory of rolling contact. The first extension is for solving tangential contact problems with the coefficient of friction depending on slip velocity. Two friction laws have been considered: with and without recuperation of the static friction. According to the tribological hypothesis of shear failure for metallic bodies, the friction law without recuperation of static friction is more suitable for wheel and rail than the other one. Sample results present local quantities inside the contact area (division into slip and adhesion areas, traction) as well as global ones (creep forces as functions of creepages and rolling velocity). For a coefficient of friction diminishing with slip, the creep forces decay after reaching their maximum and they depend on the rolling velocity. The second extension is for solving tangential contact problems with friction anisotropy characterised by a convex set of the permissible tangential tractions. The effect of the anisotropy has been shown on examples of rolling without spin and in the presence of pure spin for the elliptical set. The friction anisotropy influences tangential tractions and creep forces. Sample results present local and global quantities. Both extensions have been described with the same language of formulation and they may be merged into one, joint algorithm.

  2. Nerve conduction velocity

    Science.gov (United States)

    Nerve conduction velocity (NCV) is a test to see ... (//medlineplus.gov/ency/article/003927.htm)

  3. On the Spatial Distribution of High Velocity Al-26 Near the Galactic Center

    Science.gov (United States)

    Sturner, Steven J.

    2000-01-01

    We present results of simulations of the distribution of 1809 keV radiation from the decay of Al-26 in the Galaxy. Recent observations of this emission line using the Gamma Ray Imaging Spectrometer (GRIS) have indicated that the bulk of the Al-26 must have a velocity of approx. 500 km/s. We have previously shown that a velocity this large could be maintained over the 10(exp 6) year lifetime of the Al-26 if it is trapped in dust grains that are reaccelerated periodically in the ISM. Here we investigate whether a dust grain velocity of approx. 500 km/s will produce a distribution of 1809 keV emission in latitude that is consistent with the narrow distribution seen by COMPTEL. We find that dust grain velocities in the range 275 - 1000 km/s are able to reproduce the COMPTEL 1809 keV emission maps reconstructed using the Richardson-Lucy and Maximum Entropy image reconstruction methods, while the emission map reconstructed using the Multiresolution Regularized Expectation Maximization algorithm is not well fit by any of our models. The Al-26 production rate that is needed to reproduce the observed 1809 keV intensity yields a Galactic mass of Al-26 of approx. 1.5 - 2 solar masses, which is in good agreement with both other observations and theoretical production rates.

  4. Using virtual environment for autonomous vehicle algorithm validation

    Science.gov (United States)

    Levinskis, Aleksandrs

    2018-04-01

    This paper describes the possible use of a modern game engine for validating and proving the concept of algorithm design. As a result, a simple visual odometry algorithm is presented to show the concept and go over all workflow stages. Some of the stages involve the use of a Kalman filter in such a way that it estimates the optical flow velocity as well as the position of a moving camera located on the vehicle body. In particular, the Unreal Engine 4 game engine is used for generating optical flow patterns and the ground truth path. For optical flow determination the Horn and Schunck method is applied. It is shown that such a method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The RMS displacement error is calculated between the estimated and actual positions.

  5. Remote measurement of surface-water velocity using infrared videography and PIV: a proof-of-concept for Alaskan rivers

    Science.gov (United States)

    Kinzel, Paul J.; Legleiter, Carl; Nelson, Jonathan M.; Conaway, Jeffrey S.

    2017-01-01

    Thermal cameras with high sensitivity to medium and long wavelengths can resolve features at the surface of flowing water arising from turbulent mixing. Images acquired by these cameras can be processed with particle image velocimetry (PIV) to compute surface velocities based on the displacement of thermal features as they advect with the flow. We conducted a series of field measurements to test this methodology for remote sensing of surface velocities in rivers. We positioned an infrared video camera at multiple stations across bridges that spanned five rivers in Alaska. Simultaneous non-contact measurements of surface velocity were collected with a radar gun. In situ velocity profiles were collected with Acoustic Doppler Current Profilers (ADCP). Infrared image time series were collected at a frequency of 10Hz for a one-minute duration at a number of stations spaced across each bridge. Commercial PIV software used a cross-correlation algorithm to calculate pixel displacements between successive frames, which were then scaled to produce surface velocities. A blanking distance below the ADCP prevents a direct measurement of the surface velocity. However, we estimated surface velocity from the ADCP measurements using a program that normalizes each ADCP transect and combines those normalized transects to compute a mean measurement profile. The program can fit a power law to the profile and in so doing provides a velocity index, the ratio between the depth-averaged and surface velocity. For the rivers in this study, the velocity index ranged from 0.82 – 0.92. Average radar and extrapolated ADCP surface velocities were in good agreement with average infrared PIV calculations.
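    The velocity-index step described above amounts to scaling each PIV-derived surface velocity by the ratio of depth-averaged to surface velocity. A minimal sketch with made-up station values (the index 0.86 is simply a value inside the 0.82 - 0.92 range reported, not a measurement):

```python
# Convert PIV surface velocities to depth-averaged velocities with a velocity
# index (depth-averaged / surface). All numbers are illustrative placeholders.
surface_velocities = [1.42, 1.65, 1.80]   # m/s, hypothetical PIV results at stations
velocity_index = 0.86                     # assumed value within the reported 0.82-0.92 range

for v_s in surface_velocities:
    v_d = velocity_index * v_s
    print(f"surface {v_s:.2f} m/s -> depth-averaged {v_d:.2f} m/s")
```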

  6. Determination of velocity correction factors for real-time air velocity monitoring in underground mines

    OpenAIRE

    Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-01-01

    When air velocity sensors are installed in the mining industry for real-time airflow monitoring, a problem exists in how the monitored air velocity at a fixed location corresponds to the average air velocity, which, together with the cross-sectional area, is used to determine the volume flow rate of air in an entry. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction fac...

  7. Multiple Model Adaptive Attitude Control of LEO Satellite with Angular Velocity Constraints

    Science.gov (United States)

    Shahrooei, Abolfazl; Kazemi, Mohammad Hosein

    2018-04-01

    In this paper, the multiple model adaptive control is utilized to improve the transient response of attitude control system for a rigid spacecraft. An adaptive output feedback control law is proposed for attitude control under angular velocity constraints and its almost global asymptotic stability is proved. The multiple model adaptive control approach is employed to counteract large uncertainty in parameter space of the inertia matrix. The nonlinear dynamics of a low earth orbit satellite is simulated and the proposed control algorithm is implemented. The reported results show the effectiveness of the suggested scheme.

  8. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    Directory of Open Access Journals (Sweden)

    Chih-Feng Chao

    2015-01-01

    Full Text Available Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality, so these images typically have a low signal-to-noise ratio. As a consequence, traditional motion estimation algorithms are not well suited to measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches few candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.

  9. Simulasi Sifat Fisis Model Molekuler Dinamik Gas Argon dengan Potensial Lennard-Jones

    Directory of Open Access Journals (Sweden)

    Wira Bahari Nurdin

    2014-01-01

    Full Text Available A simulation of the physical properties of argon gas has been built and tested using molecular dynamics with the Lennard-Jones potential in an isolated system (microcanonical ensemble). The number of molecules, the total energy of the system and the size of the simulation box were varied. The Verlet algorithm was used to compute the position updates. The physical properties determined in the simulation are the temperature and the total energy of the system, used to detect the presence of a phase transition. The simulation results are consistent with argon gas, and no phase transition was observed. Keywords: molecular dynamics simulation, argon, Lennard-Jones potential, microcanonical ensemble, Verlet algorithm
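    As a companion to this record (and to the Verlet-based MD examples elsewhere in this collection), a minimal Lennard-Jones molecular dynamics loop in reduced units is sketched below. It uses the velocity Verlet form of the integrator, a small cubic lattice of atoms and periodic boundaries; it is only an illustration of the method, not the code used in the paper.

```python
import numpy as np

def lj_forces(pos, box, eps=1.0, sigma=1.0):
    """Lennard-Jones forces with minimum-image periodic boundaries (reduced units)."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            rij -= box * np.round(rij / box)          # minimum-image convention
            r2 = np.dot(rij, rij)
            sr6 = (sigma * sigma / r2) ** 3
            f_ij = 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2 * rij
            f[i] += f_ij
            f[j] -= f_ij
    return f

def velocity_verlet(pos, vel, box, dt, steps):
    """Integrate Newton's equations of motion with the velocity Verlet scheme."""
    f = lj_forces(pos, box)
    for _ in range(steps):
        vel += 0.5 * dt * f                           # first half kick (unit masses)
        pos = (pos + dt * vel) % box                  # drift with periodic wrap
        f = lj_forces(pos, box)
        vel += 0.5 * dt * f                           # second half kick
    return pos, vel

box = 5.0
side = (np.arange(3) + 0.5) * box / 3                 # 3x3x3 cubic lattice, 27 atoms
pos = np.array([[x, y, z] for x in side for y in side for z in side])
vel = np.random.default_rng(0).normal(0.0, 0.5, pos.shape)
pos, vel = velocity_verlet(pos, vel, box, dt=0.005, steps=200)
print("kinetic energy per atom:", 0.5 * np.mean(np.sum(vel ** 2, axis=1)))
```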

  10. An Enhanced Hybrid Social Based Routing Algorithm for MANET-DTN

    Directory of Open Access Journals (Sweden)

    Martin Matis

    2016-01-01

    Full Text Available A new routing algorithm for mobile ad hoc networks is proposed in this paper: an Enhanced Hybrid Social Based Routing (HSBR) algorithm for MANET-DTN, intended as an optimal solution both for well-connected multihop mobile networks (MANETs) and for poorly connected MANETs with a low node density and/or MANETs fragmented by mobility into two or more subnetworks or islands. The proposed HSBR algorithm is fully decentralized, combining the main features of both the Dynamic Source Routing (DSR) and Social Based Opportunistic Routing (SBOR) algorithms. The proposed scheme is simulated and evaluated by replaying real-life traces which exhibit this highly dynamic topology. The new HSBR algorithm was evaluated by comparison with DSR and SBOR. All methods were simulated with different node velocities. The results show that HSBR has the highest packet delivery success, with a higher delay than DSR but a much lower delay than SBOR. The simulation results indicate that the HSBR approach is applicable in networks where MANET or DTN solutions are separately useless or ineffective. This method provides delivery of the message in every possible situation in areas without infrastructure and can be used as a backup method in disaster situations when the infrastructure is destroyed.

  11. Collective cell migration without proliferation: density determines cell velocity and wave velocity

    Science.gov (United States)

    Tlili, Sham; Gauquelin, Estelle; Li, Brigitte; Cardoso, Olivier; Ladoux, Benoît; Delanoë-Ayari, Hélène; Graner, François

    2018-05-01

    Collective cell migration contributes to embryogenesis, wound healing and tumour metastasis. Cell monolayer migration experiments help in understanding what determines the movement of cells far from the leading edge. Inhibiting cell proliferation limits cell density increase and prevents jamming; we observe long-duration migration and quantify space-time characteristics of the velocity profile over large length scales and time scales. Velocity waves propagate backwards and their frequency depends only on cell density at the moving front. Both cell average velocity and wave velocity increase linearly with the cell effective radius regardless of the distance to the front. Inhibiting lamellipodia decreases cell velocity while waves either disappear or have a lower frequency. Our model combines conservation laws, monolayer mechanical properties and a phenomenological coupling between strain and polarity: advancing cells pull on their followers, which then become polarized. With reasonable values of parameters, this model agrees with several of our experimental observations. Together, our experiments and model disentangle the respective contributions of active velocity and of proliferation in monolayer migration, explain how cells maintain their polarity far from the moving front, and highlight the importance of strain-polarity coupling and density in long-range information propagation.

  12. Consideration of some difficulties in migration velocity analysis; Migration velocity analysis no shomondai ni kansuru kento

    Energy Technology Data Exchange (ETDEWEB)

    Akama, K [Japan National Oil Corp., Tokyo (Japan). Technology Research Center; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)

    1997-10-22

    Concerning migration velocity analysis in the seismic exploration method, two typical techniques, out of the velocity analysis techniques using residual moveout in the CIP gather, are verified. Deregowski's method uses prestack depth migration records for velocity analysis to obtain velocities free of spatial inconsistency and not dependent on the velocity structure. This method is very similar to the conventional DMO velocity analysis method and is easy to understand intuitively. In this method, however, error tends to be amplified in the process of obtaining the interval velocity in depth from the time-RMS velocity. Al-Yahya's method formulates the moveout residual in the CIP gather. It assumes horizontal stratification and a small residual velocity, however, and fails to guarantee convergence in the case of a steep structure or a severe model error. In addition, when updating the velocity model, it has to maintain the required accuracy while incorporating smoothing in a way that does not degrade convergence. 2 refs., 5 figs.
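    The error amplification mentioned above occurs in the step that converts time-RMS velocities into interval velocities; the standard tool for that step is the Dix equation, sketched generically below with made-up picks. This is not the authors' implementation, and real workflows add smoothing and quality control precisely because small RMS errors are amplified.

```python
import numpy as np

def dix_interval_velocities(t_twt, v_rms):
    """Interval velocities between consecutive RMS-velocity picks (Dix equation).
    t_twt : two-way times of the picks (s);  v_rms : RMS velocities (m/s)."""
    t = np.asarray(t_twt, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    num = v[1:] ** 2 * t[1:] - v[:-1] ** 2 * t[:-1]
    return np.sqrt(num / (t[1:] - t[:-1]))

# hypothetical picks only; note how closely spaced picks amplify RMS-velocity errors
t_picks = [0.5, 1.0, 1.5, 2.0]              # s
v_picks = [1800.0, 2000.0, 2150.0, 2300.0]  # m/s
print(dix_interval_velocities(t_picks, v_picks))
```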

  13. Impulse excitation scanning acoustic microscopy for local quantification of Rayleigh surface wave velocity using B-scan analysis

    Science.gov (United States)

    Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.

    2018-01-01

    A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the sample. A new behavior was observed with the b-scan analysis technique, where the amplitude of the surface wave decayed dramatically for certain crystallographic orientations. The new technique was also compared with previous results and has been found to be much more reliable and to have higher contrast than previously possible with impulse excitation.

  14. Genetic Design of an Interval Type-2 Fuzzy Controller for Velocity Regulation in a DC Motor

    Directory of Open Access Journals (Sweden)

    Yazmin Maldonado

    2012-11-01

    Full Text Available This paper proposes the design of a Type-2 Fuzzy Logic Controller (T2-FLC) using Genetic Algorithms (GAs). The T2-FLC was tested with different levels of uncertainty to regulate velocity in a Direct Current (DC) motor. The T2-FLC was synthesized in VHSIC Hardware Description Language (VHDL) code for a Field-Programmable Gate Array (FPGA), using the Xilinx System Generator (XSG) of Xilinx ISE and Matlab-Simulink. Comparisons were made between the Type-1 Fuzzy Logic Controller, the T2-FLC in VHDL code and a Proportional-Integral-Derivative (PID) controller so as to regulate the velocity of a DC motor and evaluate the difference in performance of the three types of controllers, using Student's t-test.

  15. A Muscle Fibre Conduction Velocity Tracking ASIC for Local Fatigue Monitoring.

    Science.gov (United States)

    Koutsos, Ermis; Cretu, Vlad; Georgiou, Pantelis

    2016-12-01

    Electromyography analysis can provide information about a muscle's fatigue state by estimating Muscle Fibre Conduction Velocity (MFCV), a measure of the travelling speed of Motor Unit Action Potentials (MUAPs) in muscle tissue. MFCV better represents the physical manifestations of muscle fatigue, compared to the progressive compression of the myoelectric Power Spectral Density, hence it is more suitable for a muscle fatigue tracking system. This paper presents a novel algorithm for the estimation of MFCV using single-threshold bit-stream conversion and a dedicated application-specific integrated circuit (ASIC) for its implementation, suitable for a compact, wearable and easy-to-use muscle fatigue monitor. The presented ASIC is implemented in a commercially available AMS 0.35 μm CMOS technology and utilizes a bit-stream cross-correlator that estimates the conduction velocity of the myoelectric signal in real time. A test group of 20 subjects was used to evaluate the performance of the developed ASIC, achieving good accuracy with an error of only 3.2% compared to MATLAB.
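    At its core, the estimate is a cross-correlation delay between EMG channels recorded a known distance apart along the fibre direction, with MFCV = electrode spacing / delay. The floating-point sketch below illustrates that idea on synthetic signals; it is not the single-threshold bit-stream hardware of the ASIC, and the sampling rate, electrode spacing and waveform are assumed values.

```python
import numpy as np

def mfcv_from_delay(ch1, ch2, fs, spacing_m):
    """Estimate conduction velocity from the cross-correlation lag between two
    EMG channels separated by spacing_m along the fibre direction."""
    a = ch1 - ch1.mean()
    b = ch2 - ch2.mean()
    xc = np.correlate(b, a, mode="full")
    lag = np.argmax(xc) - (len(a) - 1)           # lag of ch2 relative to ch1, in samples
    return spacing_m / (lag / fs)

fs = 10_000.0                                    # assumed sampling rate, Hz
spacing = 0.01                                   # assumed inter-electrode distance, m
t = np.arange(0.0, 0.2, 1.0 / fs)
template = np.exp(-((t - 0.05) / 0.002) ** 2)    # synthetic MUAP-like burst
true_v = 4.0                                     # m/s
shift = int(round(spacing / true_v * fs))        # propagation delay in samples
ch1 = template
ch2 = np.roll(template, shift)
print("estimated MFCV:", mfcv_from_delay(ch1, ch2, fs, spacing), "m/s")
```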

  16. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy

    Directory of Open Access Journals (Sweden)

    Ilaria Pasciuto

    2015-09-01

    Full Text Available In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms.
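    The paper's full 3-D simulation framework is not reproduced here; as a much simpler single-axis illustration, the sketch below integrates a reference angular velocity corrupted by a constant bias and by white noise and compares the resulting orientation errors (linear growth versus random-walk growth). All magnitudes, rates and durations are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, t_end = 100.0, 60.0                         # assumed sampling rate (Hz) and duration (s)
t = np.arange(0.0, t_end, 1.0 / fs)
omega_true = np.deg2rad(30.0) * np.sin(2.0 * np.pi * 0.5 * t)    # reference rate, rad/s

bias = np.deg2rad(0.5)                          # constant bias, rad/s (arbitrary)
white = np.deg2rad(0.2) * rng.standard_normal(t.size)            # white noise, rad/s

def integrate(omega, fs):
    """Single-axis orientation by rectangular-rule numerical integration."""
    return np.cumsum(omega) / fs

angle_true = integrate(omega_true, fs)
err_bias = integrate(omega_true + bias, fs) - angle_true    # grows linearly with time
err_white = integrate(omega_true + white, fs) - angle_true  # random-walk growth

print("final error with constant bias:", np.rad2deg(err_bias[-1]), "deg")
print("final error with white noise:  ", np.rad2deg(err_white[-1]), "deg")
```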

  17. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy

    Science.gov (United States)

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-01-01

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms. PMID:26393606

  18. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy.

    Science.gov (United States)

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-09-18

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms.

  19. Regional three-dimensional seismic velocity model of the crust and uppermost mantle of northern California

    Science.gov (United States)

    Thurber, C.; Zhang, H.; Brocher, T.; Langenheim, V.

    2009-01-01

    We present a three-dimensional (3D) tomographic model of the P wave velocity (Vp) structure of northern California. We employed a regional-scale double-difference tomography algorithm that incorporates a finite-difference travel time calculator and spatial smoothing constraints. Arrival times from earthquakes and travel times from controlled-source explosions, recorded at network and/or temporary stations, were inverted for Vp on a 3D grid with horizontal node spacing of 10 to 20 km and vertical node spacing of 3 to 8 km. Our model provides an unprecedented, comprehensive view of the regional-scale structure of northern California, putting many previously identified features into a broader regional context and improving the resolution of a number of them and revealing a number of new features, especially in the middle and lower crust, that have never before been reported. Examples of the former include the complex subducting Gorda slab, a steep, deeply penetrating fault beneath the Sacramento River Delta, crustal low-velocity zones beneath Geysers-Clear Lake and Long Valley, and the high-velocity ophiolite body underlying the Great Valley. Examples of the latter include mid-crustal low-velocity zones beneath Mount Shasta and north of Lake Tahoe. Copyright 2009 by the American Geophysical Union.

  20. Velocity Segregation and Systematic Biases In Velocity Dispersion Estimates with the SPT-GMOS Spectroscopic Survey

    Science.gov (United States)

    Bayliss, Matthew. B.; Zengo, Kyle; Ruel, Jonathan; Benson, Bradford A.; Bleem, Lindsey E.; Bocquet, Sebastian; Bulbul, Esra; Brodwin, Mark; Capasso, Raffaella; Chiu, I.-non; McDonald, Michael; Rapetti, David; Saro, Alex; Stalder, Brian; Stark, Antony A.; Strazzullo, Veronica; Stubbs, Christopher W.; Zenteno, Alfredo

    2017-03-01

    The velocity distribution of galaxies in clusters is not universal; rather, galaxies are segregated according to their spectral type and relative luminosity. We examine the velocity distributions of different populations of galaxies within 89 Sunyaev Zel’dovich (SZ) selected galaxy clusters spanning redshifts from z = 0.28 upward, drawn from the SPT-GMOS spectroscopic survey and supplemented by additional published spectroscopy, resulting in a final spectroscopic sample of 4148 galaxy spectra—2868 cluster members. The velocity dispersion of star-forming cluster galaxies is 17 ± 4% greater than that of passive cluster galaxies, and the velocity dispersion of bright (m < m* − 0.5) cluster galaxies is 11 ± 4% lower than the velocity dispersion of our total member population. We find good agreement with simulations regarding the shape of the relationship between the measured velocity dispersion and the fraction of passive versus star-forming galaxies used to measure it, but we find a small offset between this relationship as measured in data and simulations, which suggests that our dispersions are systematically low by as much as 3% relative to simulations. We argue that this offset could be interpreted as a measurement of the effective velocity bias that describes the ratio of our observed velocity dispersions and the intrinsic velocity dispersion of dark matter particles in a published simulation result. Measuring velocity bias in this way suggests that large spectroscopic surveys can improve dispersion-based mass-observable scaling relations for cosmology even in the face of velocity biases, by quantifying and ultimately calibrating them out.

  1. Sodium Velocity Maps on Mercury

    Science.gov (United States)

    Potter, A. E.; Killen, R. M.

    2011-01-01

    The objective of the current work was to measure two-dimensional maps of sodium velocities on the Mercury surface and examine the maps for evidence of sources or sinks of sodium on the surface. The McMath-Pierce Solar Telescope and the Stellar Spectrograph were used to measure Mercury spectra that were sampled at 7 milliAngstrom intervals. Observations were made each day during the period October 5-9, 2010. The dawn terminator was in view during that time. The velocity shift of the centroid of the Mercury emission line was measured relative to the solar sodium Fraunhofer line corrected for radial velocity of the Earth. The difference between the observed and calculated velocity shift was taken to be the velocity vector of the sodium relative to Earth. For each position of the spectrograph slit, a line of velocities across the planet was measured. Then, the spectrograph slit was stepped over the surface of Mercury at 1 arc second intervals. The position of Mercury was stabilized by an adaptive optics system. The collection of lines was assembled into images of surface reflection, sodium emission intensity, and Earthward velocity over the surface of Mercury. The velocity map shows patches of higher velocity in the southern hemisphere, suggesting the existence of sodium sources there. The peak earthward velocity occurs in the equatorial region, and extends to the terminator. Since this was a dawn terminator, this might be an indication of dawn evaporation of sodium. Leblanc et al. (2008) have published a velocity map that is similar.

  2. Water velocity meter

    Science.gov (United States)

    Roberts, C. W.; Smith, D. L.

    1970-01-01

    Simple, inexpensive drag sphere velocity meter with a zero to 6 ft/sec range measures steady-state flow. When combined with appropriate data acquisition system, it is suited to applications where large numbers of simultaneous measurements are needed for current mapping or velocity profile determination.

  3. The Grover energy transfer algorithm for relativistic speeds

    International Nuclear Information System (INIS)

    Garcia-Escartin, Juan Carlos; Chamorro-Posada, Pedro

    2010-01-01

    Grover's algorithm for quantum search can also be applied to classical energy transfer. The procedure takes a system in which the total energy is equally distributed among N subsystems and transfers most of it to one marked subsystem. We show that in a relativistic setting the efficiency of this procedure can be improved. We will consider the transfer of relativistic kinetic energy in a series of elastic collisions. In this case, the number of steps of the energy transfer procedure approaches 1 as the initial velocities of the objects become closer to the speed of light. This is a consequence of introducing nonlinearities in the procedure. However, the maximum attainable transfer will depend on the particular combination of speed and number of objects. In the procedure, we will use N elements, as in the classical non-relativistic case, instead of the log2(N) states of the quantum algorithm.

  4. Estimation of S-wave Velocity Structures by Using Microtremor Array Measurements for Subsurface Modeling in Jakarta

    Directory of Open Access Journals (Sweden)

    Mohamad Ridwan

    2014-12-01

    Full Text Available Jakarta is located on a thick sedimentary layer that potentially has a very high seismic wave amplification. However, the available information concerning the subsurface model and bedrock depth is insufficient for a seismic hazard analysis. In this study, a microtremor array method was applied to estimate the geometry and S-wave velocity of the sedimentary layer. The spatial autocorrelation (SPAC) method was applied to estimate the dispersion curve, while the S-wave velocity was estimated using a genetic algorithm approach. The analysis of the 1D and 2D S-wave velocity profiles shows that along a north-south line, the sedimentary layer is thicker towards the north. It has a positive correlation with a geological cross section derived from a borehole down to a depth of about 300 m. The SPT data from the BMKG site were used to verify the 1D S-wave velocity profile. They show a good agreement. The microtremor analysis reached the engineering bedrock in a range from 359 to 608 m as depicted by a cross section in the north-south direction. The site class was also estimated at each site, based on the average S-wave velocity down to a depth of 30 m. The sites UI to ISTN belong to class D (medium soil), while BMKG and ANCL belong to class E (soft soil).

  5. Effects of Random Values for Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hou-Ping Dai

    2018-02-01

    Full Text Available The particle swarm optimization (PSO) algorithm is generally improved by adaptively adjusting the inertia weight or combining it with other evolutionary algorithms. However, in most modified PSO algorithms, the random values are always generated by a uniform distribution in the range [0, 1]. In this study, random values generated by uniform distributions in the ranges [0, 1] and [−1, 1], and by a Gaussian distribution with mean 0 and variance 1 (U[0, 1], U[−1, 1] and G(0, 1), respectively), are used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms. For comparison, the deterministic PSO algorithm, in which the random values are set to 0.5, is also investigated in this study. Some benchmark functions and the pressure vessel design problem are selected to test these algorithms with different types of random values in three space dimensions (10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[−1, 1] or G(0, 1) are more likely to avoid falling into local optima and quickly obtain the global optima. This is because the large-scale random values can expand the range of particle velocity, making a particle more likely to escape from local optima and reach the global optima. Although the random values generated by U[−1, 1] or G(0, 1) are beneficial to the global searching ability, the local searching ability for a low-dimensional practical optimization problem may be decreased due to the finite number of particles.

  6. Optimization of thermal performance of a smooth flat-plate solar air heater using teaching–learning-based optimization algorithm

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2015-12-01

    Full Text Available This paper presents the performance of the teaching–learning-based optimization (TLBO) algorithm in obtaining the optimum set of design and operating parameters for a smooth flat-plate solar air heater (SFPSAH). The TLBO algorithm is a recently proposed population-based algorithm, which simulates the teaching–learning process of the classroom. Maximization of thermal efficiency is considered as the objective function for the thermal performance of the SFPSAH. The number of glass plates, irradiance, and the Reynolds number are considered as the design parameters, and wind velocity, tilt angle, ambient temperature, and emissivity of the plate are considered as the operating parameters to obtain the thermal performance of the SFPSAH using the TLBO algorithm. The computational results have shown that the TLBO algorithm is better than, or competitive with, other optimization algorithms recently reported in the literature for the considered problem.

  7. Influence of lateral slab edge distance on plate velocity, trench velocity, and subduction partitioning

    NARCIS (Netherlands)

    Schellart, W. P.; Stegman, D. R.; Farrington, R. J.; Moresi, L.

    2011-01-01

    Subduction of oceanic lithosphere occurs through both trenchward subducting plate motion and trench retreat. We investigate how subducting plate velocity, trench velocity and the partitioning of these two velocity components vary for individual subduction zone segments as a function of proximity to

  8. Computer simulation of liquid cesium using embedded atom model

    International Nuclear Information System (INIS)

    Belashchenko, D K; Nikitin, N Yu

    2008-01-01

    A new method is presented for constructing an embedded atom model potential (EAM potential) for liquid metals. This method directly uses the pair correlation function (PCF) of the liquid metal near the melting temperature. Because of the specific analytic form of this EAM potential, the pair term of the potential can be calculated using the pair correlation function and, for example, the Schommers algorithm. The other parameters of the EAM potential may be found using the potential energy, the bulk modulus and the pressure at selected conditions, mainly near the melting temperature, at very high temperature or in a strongly compressed state. We used a simple exponential formula for the effective EAM electronic density and a polynomial series for the embedding energy. The molecular dynamics method was applied with the Verlet algorithm. A series of models with 1968 atoms in the basic cube was constructed in the temperature interval 323-1923 K. The thermodynamic properties, structure data and self-diffusion coefficients of liquid cesium are calculated. In general, agreement between the model data and the known experimental values is reasonable. An estimate is given for the critical temperature of cesium models with the EAM potential.

  9. A Localization Method for Underwater Wireless Sensor Networks Based on Mobility Prediction and Particle Swarm Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2016-02-01

    Full Text Available Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Currently, most of the localization algorithms in this field do not give enough consideration to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on a Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects’ mobility, and then its location can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; nevertheless, the number of beacon nodes is relatively small, so the calculation for the large number of unknown nodes is succinct, and this method can significantly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with some other widely used localization methods in this field.

  10. A Localization Method for Underwater Wireless Sensor Networks Based on Mobility Prediction and Particle Swarm Optimization Algorithms.

    Science.gov (United States)

    Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei

    2016-02-06

    Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Currently, most of the localization algorithms in this field do not give enough consideration to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on a Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and then its location can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; nevertheless, the number of beacon nodes is relatively small, so the calculation for the large number of unknown nodes is succinct, and this method can significantly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with some other widely used localization methods in this field.

  11. Characteristics of Offshore Hawai'i Island Seismicity and Velocity Structure, including Lo'ihi Submarine Volcano

    Science.gov (United States)

    Merz, D. K.; Caplan-Auerbach, J.; Thurber, C. H.

    2013-12-01

    The Island of Hawai'i is home to the most active volcanoes in the Hawaiian Islands. The island's isolated nature, combined with the lack of permanent offshore seismometers, creates difficulties in recording small magnitude earthquakes with accuracy. This background offshore seismicity is crucial in understanding the structure of the lithosphere around the island chain, the stresses on the lithosphere generated by the weight of the islands, and how the volcanoes interact with each other offshore. This study uses the data collected from a 9-month deployment of a temporary ocean bottom seismometer (OBS) network fully surrounding Lo'ihi volcano. This allowed us to widen the aperture of earthquake detection around the Big Island, lower the magnitude detection threshold, and better constrain the hypocentral depths of offshore seismicity that occurs between the OBS network and the Hawaii Volcano Observatory's land based network. Although this study occurred during a time of volcanic quiescence for Lo'ihi, it establishes a basis for background seismicity of the volcano. More than 480 earthquakes were located using the OBS network, incorporating data from the HVO network where possible. Here we present relocated hypocenters using the double-difference earthquake location algorithm HypoDD (Waldhauser & Ellsworth, 2000), as well as tomographic images for a 30 km square area around the summit of Lo'ihi. Illuminated by using the double-difference earthquake location algorithm HypoDD (Waldhauser & Ellsworth, 2000), offshore seismicity during this study is punctuated by events locating in the mantle fault zone 30-50 km deep. These events reflect rupture on preexisting faults in the lower lithosphere caused by stresses induced by volcano loading and flexure of the Pacific Plate (Wolfe et al., 2004; Pritchard et al., 2007). Tomography was performed using the double-difference seismic tomography method TomoDD (Zhang & Thurber, 2003) and showed overall velocities to be slower than

  12. An iterative algorithm for the finite element approximation to convection-diffusion problems

    International Nuclear Information System (INIS)

    Buscaglia, Gustavo; Basombrio, Fernando

    1988-01-01

    An iterative algorithm for steady convection-diffusion is presented, which avoids unsymmetric matrices by means of an equivalent mixed formulation. Upwinding is introduced by adding a balancing dissipation in the flow direction, but there is no dependence of the global matrix on the velocity field. Convergence is demonstrated on standard test cases. Advantages of its use in the coupled calculation of more complex problems are discussed. (Author)

  13. Visual guidance of forward flight in hummingbirds reveals control based on image features instead of pattern velocity.

    Science.gov (United States)

    Dakin, Roslyn; Fellows, Tyee K; Altshuler, Douglas L

    2016-08-02

    Information about self-motion and obstacles in the environment is encoded by optic flow, the movement of images on the eye. Decades of research have revealed that flying insects control speed, altitude, and trajectory by a simple strategy of maintaining or balancing the translational velocity of images on the eyes, known as pattern velocity. It has been proposed that birds may use a similar algorithm but this hypothesis has not been tested directly. We examined the influence of pattern velocity on avian flight by manipulating the motion of patterns on the walls of a tunnel traversed by Anna's hummingbirds. Contrary to prediction, we found that lateral course control is not based on regulating nasal-to-temporal pattern velocity. Instead, birds closely monitored feature height in the vertical axis, and steered away from taller features even in the absence of nasal-to-temporal pattern velocity cues. For vertical course control, we observed that birds adjusted their flight altitude in response to upward motion of the horizontal plane, which simulates vertical descent. Collectively, our results suggest that birds avoid collisions using visual cues in the vertical axis. Specifically, we propose that birds monitor the vertical extent of features in the lateral visual field to assess distances to the side, and vertical pattern velocity to avoid collisions with the ground. These distinct strategies may derive from greater need to avoid collisions in birds, compared with small insects.

  14. Velocity measurement by vortex shedding. Contribution to the mass-flow measurement

    International Nuclear Information System (INIS)

    Martinez Piquer, T.

    1988-01-01

    The phenomenon of vortex shedding has been known for centuries and has been the subject of scientific studies for about one hundred years. It is only in the last ten years that it has been applied to the measurement of fluid velocity. In 1878 Strouhal observed the vortex shedding phenomenon and showed that the shedding frequency of a wire vibrating in the wind was related to the wire diameter and the wind velocity. Rayleigh, who introduced the non-dimensional Strouhal number, von Karman and Roshko carried out extensive work on the subject which indicated that vortex shedding could form the basis for a new type of flowmeter. The thesis describes two parallel lines of investigation which study in depth the practical applications of vortex shedding. The first one deals with the measurement of velocity and presents the novelty of a bluff body with a cross-section that has not been used before. This body is a circular cylinder with a two-dimensional slit along the diameter, placed transverse to the fluid stream. It possesses excellent characteristics and is the most stable vortex shedder, which gives it a great advantage over the other shapes used so far. The detection of the vortices is performed by measuring the pressure changes generated by the vortices at two posts situated just beside the slit. To calculate the vortex shedding frequency, we take the difference of the two signals, which are identical and 180° out of phase. By finding the period of the autocorrelation function of this signal we can estimate the velocity of the fluid. A microprocessor-based logic unit has been designed for the calculation, using a zero-crossing time algorithm implemented in assembler language. The second line of research refers to a new method of measuring mass flow. The pressure signal generated by the vortices has an intensity that is proportional to the density and to the square of the velocity. Since we have already
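    The velocity estimate in such a scheme follows from the Strouhal relation f = St·V/d once the shedding period has been extracted from the autocorrelation of the differential pressure signal. The sketch below reproduces that post-processing step in Python rather than in assembler, with a synthetic signal, an assumed Strouhal number of 0.2 and made-up geometry; it is not the thesis's microprocessor implementation.

```python
import numpy as np

def shedding_velocity(signal, fs, diameter, strouhal=0.2):
    """Estimate flow velocity from a vortex-shedding pressure signal: find the
    shedding period from the autocorrelation, then apply V = f * d / St."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]        # autocorrelation, lags >= 0
    zc = np.flatnonzero((ac[:-1] > 0) & (ac[1:] <= 0))[0]    # end of the zero-lag lobe
    period = zc + np.argmax(ac[zc:])                         # lag of the first full-period peak
    return (fs / period) * diameter / strouhal

fs = 2000.0                                                  # assumed sampling rate, Hz
d = 0.02                                                     # bluff-body diameter, m (assumed)
true_velocity = 3.0                                          # m/s
f_shed = 0.2 * true_velocity / d                             # 30 Hz shedding frequency
t = np.arange(0.0, 2.0, 1.0 / fs)
p = np.sin(2.0 * np.pi * f_shed * t)
p += 0.2 * np.random.default_rng(2).standard_normal(t.size)  # measurement noise
print("estimated velocity:", shedding_velocity(p, fs, d), "m/s")
```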

  15. A fast iterative model for discrete velocity calculations on triangular grids

    International Nuclear Information System (INIS)

    Szalmas, Lajos; Valougeorgis, Dimitris

    2010-01-01

    A fast synthetic type iterative model is proposed to speed up the slow convergence of discrete velocity algorithms for solving linear kinetic equations on triangular lattices. The efficiency of the scheme is verified both theoretically by a discrete Fourier stability analysis and computationally by solving a rarefied gas flow problem. The stability analysis of the discrete kinetic equations yields the spectral radius of the typical and the proposed iterative algorithms and reveal the drastically improved performance of the latter one for any grid resolution. This is the first time that stability analysis of the full discrete kinetic equations related to rarefied gas theory is formulated, providing the detailed dependency of the iteration scheme on the discretization parameters in the phase space. The corresponding characteristics of the model deduced by solving numerically the rarefied gas flow through a duct with triangular cross section are in complete agreement with the theoretical findings. The proposed approach may open a way for fast computation of rarefied gas flows on complex geometries in the whole range of gas rarefaction including the hydrodynamic regime.

  16. Enhancing PIV image and fractal descriptor for velocity and shear stresses propagation around a circular pier

    Directory of Open Access Journals (Sweden)

    Alireza Keshavarzi

    2017-07-01

    Full Text Available In this study, the fractal dimensions of the velocity fluctuations and of the Reynolds shear stress propagation for flow around a circular bridge pier are presented. In the study reported herein, the fractal dimensions of the velocity fluctuations (u′, v′, w′) and of the Reynolds shear stresses (u′v′ and u′w′) of flow around a bridge pier were computed using a Fractal Interpolation Function (FIF) algorithm. The velocity fluctuations of flow along a horizontal plane above the bed were measured using an Acoustic Doppler Velocimeter (ADV) and Particle Image Velocimetry (PIV). PIV is a powerful technique which enables us to attain high-resolution spatial and temporal information on turbulent flow using instantaneous time snapshots. In this study, PIV was used for detection of high-resolution fractal scaling around a bridge pier. The results showed that the fractal dimension of the flow fluctuated significantly in the longitudinal and transverse directions in the vicinity of the pier. It was also found that the fractal dimension of the velocity fluctuations and shear stresses increased rapidly in the vicinity of the pier immediately downstream, whereas it remained approximately unchanged far downstream of the pier. The highest value of the fractal dimension was found at a distance equal to one pier diameter behind the pier. Furthermore, the average fractal dimension for the streamwise and transverse velocity fluctuations decreased from the centreline to the side wall of the flume. Finally, the results from the ADV measurements were consistent with those from PIV; therefore, the ADV is able to detect the turbulent characteristics of flow around a circular bridge pier.

  17. An Advanced Coupled Genetic Algorithm for Identifying Unknown Moving Loads on Bridge Decks

    Directory of Open Access Journals (Sweden)

    Sang-Youl Lee

    2014-01-01

    Full Text Available This study deals with an inverse method to identify moving loads on bridge decks using the finite element method (FEM) and a coupled genetic algorithm (c-GA). We developed the inverse technique using a coupled genetic algorithm that can make global solution searches possible as opposed to classical gradient-based optimization techniques. The technique described in this paper allows us to not only detect the weight of moving vehicles but also find their moving velocities. To demonstrate the feasibility of the method, the algorithm is applied to a bridge deck model with beam elements. In addition, 1D and 3D finite element models are simulated to study the influence of measurement errors and model uncertainty between numerical and real structures. The results demonstrate the excellence of the method from the standpoints of computation efficiency and avoidance of premature convergence.

  18. Genetic algorithm based on qubits and quantum gates

    International Nuclear Information System (INIS)

    Silva, Joao Batista Rosa; Ramos, Rubens Viana

    2003-01-01

    Full text: Genetic algorithm, a computational technique based on the evolution of species, in which a possible solution of the problem is coded in a binary string called a chromosome, has been used successfully in several kinds of problems where the search for a minimal or a maximal value is necessary, even when local minima are present. A natural generalization of a binary string is a qubit string. Hence, it is possible to use the structure of a genetic algorithm with a sequence of qubits as a chromosome, using quantum operations in the reproduction step, in order to find the best solution to some problems of quantum information. For example, given a unitary matrix U, what is the pair of qubits that, when applied at the input, provides the output state with maximal entanglement? In order to solve this problem, a population of two-qubit chromosomes was created. The crossover was performed by applying the quantum gates CNOT and SWAP to the pair of qubits, while the mutation was performed by applying the quantum gates Hadamard, Z and NOT to a single qubit. The result was compared with a classical genetic algorithm used to solve the same problem. A hundred simulations using the same U matrix were performed. Both algorithms, hereafter named CGA (classical) and QGA (using qubits), reached good results close to 1; however, the number of generations needed to find the best result was lower for the QGA. Another problem where the QGA can be useful is the calculation of the relative entropy of entanglement. We have tested our algorithm using 100 pure states chosen randomly. The stop criterion used was an error lower than 0.01. The main advantages of the QGA are its good precision, robustness and very easy implementation. The main disadvantage is its low velocity, as happens for all kinds of genetic algorithms. (author)

  19. A random-walk algorithm for modeling lithospheric density and the role of body forces in the evolution of the Midcontinent Rift

    Science.gov (United States)

    Levandowski, William Brower; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.

    2015-01-01

    This paper develops a Monte Carlo algorithm for extracting three-dimensional lithospheric density models from geophysical data. Empirical scaling relationships between velocity and density create a 3D starting density model, which is then iteratively refined until it reproduces observed gravity and topography. This approach permits deviations from uniform crustal velocity-density scaling, which provide insight into crustal lithology and prevent spurious mapping of crustal anomalies into the mantle.

  20. Automatic stair-climbing algorithm of the planetary wheel type mobile robot in nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Byung Soo; Kim, Seung Ho; Lee, Jong Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-10-01

    A mobile robot, named KAEROT, has been developed for inspection and maintenance operations in nuclear facilities. The main feature of the locomotion system is the planetary wheel assembly with small wheels. This mechanism has been designed to be able to go over stairs and obstacles with stability. This paper presents the inverse kinematic solution that is to be operated by remote control. An automatic stair-climbing algorithm is also proposed. The proposed algorithm determines the moving paths of the small wheels and calculates the angular velocities of the 3 actuation wheels. The results of simulations and experiments are given for KAEROT performed on irregular stairs in the laboratory. It is shown that the proposed algorithm provides a lower inclination angle of the robot body and increases its stability during navigation. 14 figs., 16 refs. (Author).

  1. Automatic stair-climbing algorithm of the planetary wheel type mobile robot in nuclear facilities

    International Nuclear Information System (INIS)

    Kim, Byung Soo; Kim, Seung Ho; Lee, Jong Min

    1995-01-01

    A mobile robot, named KAEROT, has been developed for inspection and maintenance operations in nuclear facilities. The main feature of the locomotion system is the planetary wheel assembly with small wheels. This mechanism has been designed to be able to go over stairs and obstacles with stability. This paper presents the inverse kinematic solution that is to be operated by remote control. An automatic stair-climbing algorithm is also proposed. The proposed algorithm determines the moving paths of the small wheels and calculates the angular velocities of the 3 actuation wheels. The results of simulations and experiments are given for KAEROT performed on irregular stairs in the laboratory. It is shown that the proposed algorithm provides a lower inclination angle of the robot body and increases its stability during navigation. 14 figs., 16 refs. (Author)

  2. The Grover energy transfer algorithm for relativistic speeds

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Escartin, Juan Carlos; Chamorro-Posada, Pedro, E-mail: juagar@yllera.tel.uva.e [Dpto. de TeorIa de la Senal y Comunicaciones e Ingenieria Telematica, Universidad de Valladolid, ETSI de Telecomunicacion, Campus Miguel Delibes, Paseo Belen 15, 47011 Valladolid (Spain)

    2010-11-12

    Grover's algorithm for quantum search can also be applied to classical energy transfer. The procedure takes a system in which the total energy is equally distributed among N subsystems and transfers most of it to one marked subsystem. We show that in a relativistic setting the efficiency of this procedure can be improved. We will consider the transfer of relativistic kinetic energy in a series of elastic collisions. In this case, the number of steps of the energy transfer procedure approaches 1 as the initial velocities of the objects become closer to the speed of light. This is a consequence of introducing nonlinearities in the procedure. However, the maximum attainable transfer will depend on the particular combination of speed and number of objects. In the procedure, we will use N elements, as in the classical non-relativistic case, instead of the log2(N) states of the quantum algorithm.

  3. Fast vector quantization using a Bat algorithm for image compression

    Directory of Open Access Journals (Sweden)

    Chiranjeevi Karri

    2016-06-01

    Full Text Available Linde–Buzo–Gray (LBG), a traditional method of vector quantization (VQ), generates a locally optimal codebook, which results in a lower PSNR value. The performance of VQ depends on an appropriate codebook, so researchers have proposed optimization techniques for global codebook generation. Particle swarm optimization (PSO) and the Firefly algorithm (FA) generate efficient codebooks, but they suffer, respectively, from convergence instability when the particle velocity is high and from the non-availability of brighter fireflies in the search space. In this paper, we propose a new algorithm called BA-LBG, which applies the Bat Algorithm to the initial solution of LBG. It produces an efficient codebook with less computational time and yields a very good PSNR due to its automatic zooming feature, which uses the adjustable pulse emission rate and loudness of bats. From the results, we observed that BA-LBG has a higher PSNR than LBG, PSO-LBG, Quantum PSO-LBG, HBMO-LBG and FA-LBG, and its average convergence speed is 1.841 times faster than HBMO-LBG and FA-LBG, with no significant difference from PSO-LBG.
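
    For context, the sketch below shows the basic LBG (generalized Lloyd) iteration that BA-LBG starts from, implemented in Python on stand-in training vectors; the bat-algorithm refinement of the codebook is not included.

        import numpy as np

        def lbg_codebook(vectors, codebook_size, n_iter=20, rng=None):
            """Generate a VQ codebook with the basic LBG / generalized Lloyd iteration."""
            rng = rng or np.random.default_rng(0)
            # start from randomly chosen training vectors
            codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
            for _ in range(n_iter):
                # nearest-codeword assignment
                d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                # centroid update (keep the old codeword if a cell is empty)
                for k in range(codebook_size):
                    members = vectors[labels == k]
                    if len(members):
                        codebook[k] = members.mean(axis=0)
            return codebook

        # Example: 4x4 image blocks flattened to 16-D training vectors (random stand-in data)
        blocks = np.random.default_rng(1).random((1000, 16))
        cb = lbg_codebook(blocks, codebook_size=32)
        print(cb.shape)   # (32, 16)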

  4. The critical ionization velocity

    International Nuclear Information System (INIS)

    Raadu, M.A.

    1980-06-01

    The critical ionization velocity effect was first proposed in the context of space plasmas. This effect occurs for a neutral gas moving through a magnetized plasma and leads to rapid ionization and braking of the relative motion when a marginal velocity, 'the critical velocity', is exceeded. Laboratory experiments have clearly established the significance of the critical velocity and have provided evidence for an underlying mechanism which relies on the combined action of electron impact ionization and a collective plasma interaction heating electrons. There is experimental support for such a mechanism based on the heating of electrons by the modified two-stream instability as part of a feedback process. Several applications to space plasmas have been proposed and the possibility of space experiments has been discussed. (author)

  5. Modeling of pedestrian evacuation based on the particle swarm optimization algorithm

    Science.gov (United States)

    Zheng, Yaochen; Chen, Jianqiao; Wei, Junhong; Guo, Xiwei

    2012-09-01

    By applying the evolutionary algorithm of Particle Swarm Optimization (PSO), we have developed a new pedestrian evacuation model. In the new model, we first introduce the concept of local pedestrian density, defined as the number of pedestrians distributed in a certain area divided by that area. Both the maximum velocity and the size of a particle (pedestrian) are taken to be functions of the local density. An attempt to account for the consequences of impacts between pedestrians is also made by introducing an injury threshold into the model. The updating rule of the model possesses heterogeneous spatial and temporal characteristics. Numerical examples demonstrate that the model is capable of simulating the typical evacuation features captured by CA (Cellular Automata) based models. In contrast to CA-based simulations, in which the velocity (via step size) of a pedestrian in each time step is a constant value limited to a few directions, the new model is more flexible in describing pedestrians' velocities, since under the new updating rule they are not limited to discrete values and directions.
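
    A minimal Python sketch of a PSO-style update with a density-dependent maximum velocity, in the spirit of the model described above, is given below; the density definition, coefficients, and exit attractor are illustrative assumptions, not the authors' exact rules.

        import numpy as np

        rng = np.random.default_rng(0)
        n, w, c1, c2 = 50, 0.7, 1.5, 1.5          # swarm size and standard PSO coefficients
        pos = rng.random((n, 2)) * 10.0           # pedestrian positions in a 10 m x 10 m room
        vel = np.zeros((n, 2))
        pbest = pos.copy()                        # personal best positions
        exit_pt = np.array([10.0, 5.0])           # global attractor: the exit

        def local_density(i, radius=1.0):
            # pedestrians within `radius` of pedestrian i, per unit area
            d = np.linalg.norm(pos - pos[i], axis=1)
            return np.count_nonzero(d < radius) / (np.pi * radius**2)

        for step in range(200):
            for i in range(n):
                r1, r2 = rng.random(2)
                vel[i] = (w * vel[i]
                          + c1 * r1 * (pbest[i] - pos[i])
                          + c2 * r2 * (exit_pt - pos[i]))
                v_max = 1.5 / (1.0 + local_density(i))       # max speed falls with crowding
                speed = np.linalg.norm(vel[i])
                if speed > v_max:
                    vel[i] *= v_max / speed
            pos += vel * 0.1                                 # dt = 0.1 s
            closer = (np.linalg.norm(pos - exit_pt, axis=1)
                      < np.linalg.norm(pbest - exit_pt, axis=1))
            pbest[closer] = pos[closer]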

  6. Mean Velocity Prediction Information Feedback Strategy in Two-Route Systems under ATIS

    Directory of Open Access Journals (Sweden)

    Jianqiang Wang

    2015-02-01

    Full Text Available The feedback content of previous information feedback strategies in advanced traveler information systems is almost always real-time traffic information. Compared with real-time information, predicted traffic information obtained by a reliable and effective prediction algorithm has many undisputed advantages. In a prediction-information environment, a traveler is more likely to make a rational route choice. For these reasons, a mean velocity prediction information feedback strategy (MVPFS) is presented. The approach adopts the autoregressive integrated moving average model (ARIMA) to forecast short-term traffic flow. Furthermore, the predicted mean velocities are taken as the feedback content and displayed on a variable message sign to guide travelers' route choice. Meanwhile, a discrete choice model (Logit model) is selected to imitate travelers' route-choice behavior more appropriately. In order to investigate the performance of MVPFS, a cellular automaton model with ARIMA is adopted to simulate a two-route scenario. The simulation shows that such an innovative prediction feedback strategy is feasible and efficient. Even more importantly, this study demonstrates the strength of the prediction feedback approach.
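
    A minimal Python sketch of the two ingredients, an ARIMA one-step-ahead mean-velocity forecast (here via statsmodels) fed into a Logit route-choice rule, is shown below; the velocity histories, ARIMA order, and the sensitivity parameter theta are illustrative assumptions.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        # Illustrative mean-velocity histories (km/h) for the two routes
        v_route1 = 60 + 5 * np.sin(np.arange(120) / 10.0) + rng.normal(0, 1, 120)
        v_route2 = 55 + 4 * np.cos(np.arange(120) / 12.0) + rng.normal(0, 1, 120)

        def predict_next(series, order=(1, 0, 1)):
            """One-step-ahead mean-velocity prediction with an ARIMA(p, d, q) model."""
            fit = ARIMA(series, order=order).fit()
            return float(fit.forecast(steps=1)[0])

        v1_pred, v2_pred = predict_next(v_route1), predict_next(v_route2)

        # Logit route choice: probability of choosing route 1 from the predicted velocities
        theta = 0.2
        p_route1 = np.exp(theta * v1_pred) / (np.exp(theta * v1_pred) + np.exp(theta * v2_pred))
        print(f"predicted velocities: {v1_pred:.1f}, {v2_pred:.1f} -> P(route 1) = {p_route1:.2f}")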

  7. Examples of Vector Velocity Imaging

    DEFF Research Database (Denmark)

    Hansen, Peter M.; Pedersen, Mads M.; Hansen, Kristoffer L.

    2011-01-01

    To measure blood flow velocity in vessels with conventional ultrasound, the velocity is estimated along the direction of the emitted ultrasound wave. It is therefore impossible to obtain accurate information on blood flow velocity and direction, when the angle between blood flow and ultrasound wa...

  8. Seasonal and inter-annual variability in velocity and frontal position of Siachen Glacier (Eastern Karakorum) using multi-satellite data

    Science.gov (United States)

    Usman, M.; Furuya, M.; Sakakibara, D.; Abe, T.

    2017-12-01

    The anomalous behavior of Karakorum glaciers is a hot topic of discussion in the scientific community. Siachen Glacier is one of the longest glaciers (~75 km) in the Karakorum Range. This glacier is supposed to be of surge type, but so far no studies have confirmed this claim. Detailed velocity mapping of this glacier can possibly provide some clues about intra- and inter-annual changes in velocity and the observed terminus. Using L-band SAR data of ALOS-1/2, we applied a feature-tracking technique (search patch of 128x128 pixels (range x azimuth), sampling interval of 12x36 pixels) to derive velocity changes; we used the GAMMA software. The velocity was calculated by following the parallel-flow assumption. To calculate the local topographic gradient unit vector, we used the ASTER GDEM. We also used optical images acquired by the Landsat 5 Thematic Mapper (TM) and the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) to derive surface velocity. The algorithm we used is Cross-Correlation in the Frequency domain on Orientation images (CCF-O). The velocity was finally calculated by setting a flow line and averaging over an area of 200x200 m2. The results indicate seasonal speed-up signals that modulate inter-annually from 1999 to 2011, with slight or no change in the observed frontal position. However, in the ALOS-2 data, the observed terminus seems to have been advancing.
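
    The core of such feature tracking is finding the patch offset that maximizes the cross-correlation between two acquisitions; a minimal Python sketch using FFT-based cross-correlation is given below. The pixel size and repeat interval are assumed values for illustration only, and the sketch is not the CCF-O or GAMMA implementation.

        import numpy as np

        def patch_offset(patch_a, patch_b):
            """Integer (row, col) shift of patch_a relative to patch_b that maximizes
            their circular cross-correlation (computed via FFT)."""
            a = patch_a - patch_a.mean()
            b = patch_b - patch_b.mean()
            corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # wrap shifts larger than half the patch to negative offsets
            return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

        # Velocity from the offset, assuming a pixel size and acquisition interval
        pixel_size_m = 15.0          # assumed pixel size (m)
        dt_days = 46.0               # assumed repeat interval (days)
        rng = np.random.default_rng(0)
        img1 = rng.random((128, 128))
        img2 = np.roll(img1, (3, -2), axis=(0, 1))       # synthetic 3-pixel / -2-pixel motion
        dy, dx = patch_offset(img2, img1)
        speed = np.hypot(dy, dx) * pixel_size_m / dt_days
        print(f"offset = ({dy}, {dx}) px, speed = {speed:.2f} m/day")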

  9. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    Science.gov (United States)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications

  10. Velocity Models of the Upper Mantle Beneath the MER, Somali Platform, and Ethiopian Highlands from Body Wave Tomography

    Science.gov (United States)

    Hariharan, A.; Keranen, K. M.; Alemayehu, S.; Ayele, A.; Bastow, I. D.; Eilon, Z.

    2016-12-01

    The Main Ethiopian Rift (MER) presents a unique opportunity to improve our understanding of an active continental rift. Here we use body wave tomography to generate compressional and shear wave velocity models of the region beneath the rift. The models help us understand the rifting process over the broader region around the MER, extending the geographic region beyond that captured in past studies. We use differential arrival times of body waves from teleseismic earthquakes and multi-channel cross correlation to generate travel time residuals relative to the global IASP91 1-D velocity model. The events used for the tomographic velocity model include 200 teleseismic earthquakes with moment magnitudes greater than 5.5 from our recent 2014-2016 deployment, in combination with 200 earthquakes from the earlier EBSE and EAGLE deployments (Bastow et al. 2008). We use the finite-frequency tomography analysis of Schmandt et al. (2010), which uses a first Fresnel zone paraxial approximation to the Born theoretical kernel with spatial smoothing and model norm damping in an iterative LSQR algorithm. Results show a broad, slow region beneath the rift with a distinct low-velocity anomaly beneath the northwest shoulder. This robust and well-resolved low-velocity anomaly is visible at a range of depths beneath the Ethiopian plateau, within the footprint of Oligocene flood basalts, and near surface expressions of diking. We interpret this anomaly as a possible plume conduit, or a low-velocity finger rising from a deeper, larger plume. Within the rift, results are consistent with previous work, exhibiting rift segmentation and low velocities beneath the rift valley.

  11. Optimization of an Accelerometer and Gyroscope-Based Fall Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Quoc T. Huynh

    2015-01-01

    Full Text Available Falling is a common and significant cause of injury in elderly adults (>65 yrs old), often leading to disability and death. In the USA, one in three of the elderly suffers fall injuries annually. This study's purpose is to develop, optimize, and assess the efficacy of a fall detection algorithm based upon a wireless, wearable sensor system (WSS) comprised of a 3-axis accelerometer and gyroscope. For this study, the WSS is placed at the center of the chest to collect real-time motion data of various simulated daily activities (i.e., walking, running, stepping, and falling). Tests were conducted on 36 human subjects, with a total of 702 different movements collected in a laboratory setting. Half of the dataset was used for development of the fall detection algorithm, including investigations of critical sensor thresholds, and the remaining dataset was used for assessment of algorithm sensitivity and specificity. Experimental results show that the algorithm detects falls, as compared to other daily movements, with a sensitivity and specificity of 96.3% and 96.2%, respectively. The addition of gyroscope information enhances sensitivity dramatically relative to results in the literature, as angular velocity changes provide further delineation of a fall event from other activities that may also exhibit high acceleration peaks.
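
    A minimal Python sketch of the kind of threshold logic described above (an acceleration-magnitude peak that must coincide with a large angular-velocity change) is given below; the thresholds and window length are illustrative assumptions, not the optimized values from the study.

        import numpy as np

        # Illustrative thresholds (the study optimizes these from training data)
        ACC_THRESHOLD = 2.5     # g : acceleration-magnitude peak suggesting an impact
        GYRO_THRESHOLD = 240.0  # deg/s : angular-velocity peak suggesting a body rotation
        WINDOW = 50             # samples (~0.5 s at 100 Hz) around the acceleration peak

        def detect_fall(acc_xyz, gyro_xyz):
            """Return True if a window contains both an acceleration peak and a
            large angular-velocity change (a minimal chest-worn-sensor heuristic)."""
            acc_mag = np.linalg.norm(acc_xyz, axis=1)     # |a| in g
            gyro_mag = np.linalg.norm(gyro_xyz, axis=1)   # |omega| in deg/s
            peaks = np.flatnonzero(acc_mag > ACC_THRESHOLD)
            for p in peaks:
                lo, hi = max(0, p - WINDOW), min(len(gyro_mag), p + WINDOW)
                if gyro_mag[lo:hi].max() > GYRO_THRESHOLD:
                    return True
            return False

        # Example with synthetic data: quiet standing plus an impact-like spike
        rng = np.random.default_rng(0)
        acc = rng.normal(0, 0.05, (1000, 3)); acc[:, 2] += 1.0     # gravity on z
        gyro = rng.normal(0, 5.0, (1000, 3))
        acc[600] += [0.0, 0.0, 3.0]; gyro[590:610] += 300.0        # simulated fall
        print(detect_fall(acc, gyro))    # True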

  12. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

    Full Text Available Recognizing generic balls for soccer robots is significant for the final goal of RoboCup. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed by combining modified Haar-like features and the AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the offline training phase, numerous sub-images, including generic balls, are acquired from various panoramic images; the modified Haar-like features are then extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the online recognition phase, according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is included in a window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance can be achieved using our algorithm, and that the generic ball can be recognized and tracked effectively.
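
    A minimal Python sketch of the offline-training/online-recognition split, using scikit-learn's AdaBoostClassifier on precomputed feature vectors, is given below; the Haar-like feature extraction and the rotary/radial window search over the panoramic image are omitted, and the data are random stand-ins.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(0)

        # Stand-in data: each row is a vector of (modified) Haar-like features extracted
        # from one sub-image; label 1 = contains a generic ball, 0 = background.
        X_train = rng.normal(size=(600, 40))
        y_train = rng.integers(0, 2, 600)

        # Offline training phase: boost shallow decision stumps on the feature vectors
        clf = AdaBoostClassifier(n_estimators=100, learning_rate=0.5)
        clf.fit(X_train, y_train)

        # Online recognition phase: slide candidate windows over the panoramic image,
        # extract their features (here random stand-ins) and ask the classifier.
        candidate_features = rng.normal(size=(20, 40))
        is_ball = clf.predict(candidate_features)
        print("windows flagged as ball:", np.flatnonzero(is_ball))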

  13. Relationship between throwing velocity, muscle power, and bar velocity during bench press in elite handball players.

    Science.gov (United States)

    Marques, Mario C; van den Tilaar, Roland; Vescovi, Jason D; Gonzalez-Badillo, Juan Jose

    2007-12-01

    The purpose of this study was to examine the relationship between ball-throwing velocity during a 3-step running throw and dynamic strength, power, and bar velocity during a concentric-only bench-press exercise in team-handball players. Fourteen elite senior male team-handball players volunteered to participate. Each volunteer had power and bar velocity measured during a concentric-only bench-press test with 26, 36, and 46 kg, as well as having 1-repetition-maximum (1-RMBP) strength determined. Ball-throwing velocity was evaluated with a standard 3-step running throw using a radar gun. Ball-throwing velocity was related to the absolute load lifted during the 1-RMBP (r = .637, P = .014), peak power using 36 kg (r = .586, P = .028) and 46 kg (r = .582, P = .029), and peak bar velocity using 26 kg (r = .563, P = .036) and 36 kg (r = .625, P = .017). The results indicate that throwing velocity of elite team-handball players is related to maximal dynamic strength, peak power, and peak bar velocity. Thus, a training regimen designed to improve ball-throwing velocity in elite male team-handball players should include exercises that are aimed at increasing both strength and power in the upper body.

  14. Development of Fast Error Compensation Algorithm for Integrated Inertial-Satellite Navigation System of Small-size Unmanned Aerial Vehicles in Complex Environment

    Directory of Open Access Journals (Sweden)

    A. V. Fomichev

    2015-01-01

    Full Text Available Taking into account the structural features of a small-size unmanned aerial vehicle (UAV), and considering the feasibility of the project, this article studies an integrated inertial-satellite navigation system (INS). The INS algorithm development is based on the method of indirect filtering and the principle of a loosely coupled combination of output data on UAV position and velocity. Data on position and velocity are provided by the strapdown inertial navigation system (SINS) and the satellite navigation system (GPS). The difference between the position and velocity outputs of the SINS and GPS is used to estimate the SINS errors by means of the basic Kalman filtering algorithm, and the SINS outputs are then corrected. The INS possesses the following advantages: a simpler mathematical model for Kalman filtering, high reliability, two independently operating navigation systems, and high redundancy of the available navigation information. With a loosely coupled scheme, however, the INS can deliver high-precision, reliable navigation only when the SINS and GPS operating conditions are normal at all times. The proposed INS is intended for a UAV moving in a complex environment with obstacles, severe climatic conditions, etc., in which the UAV frequently cannot receive reliable GPS signals. To solve this problem, an algorithm for rapid compensation of INS errors was developed, which effectively addresses the failure of the navigation system when GPS signals are unavailable. Since it is almost impossible to obtain real trajectory data in practice, a flight path generator is used to produce the flight path in the simulations, in accordance with the kinematic model of the UAV and the complex terrain environment. The position and velocity errors are taken as indicators of the INS effectiveness. The results
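
    A minimal Python sketch of the loosely coupled idea, a Kalman filter that estimates the SINS position/velocity errors from the SINS-minus-GPS differences so that the SINS output can be corrected, is given below; a 1-D error-state model is used for brevity and all noise parameters are assumptions.

        import numpy as np

        dt = 0.1
        # 1-D error-state model: x = [position error, velocity error]
        F = np.array([[1.0, dt],
                      [0.0, 1.0]])          # error propagation
        H = np.eye(2)                       # we "measure" SINS minus GPS (pos and vel)
        Q = np.diag([1e-4, 1e-3])           # process noise (assumed)
        R = np.diag([4.0, 0.25])            # GPS position/velocity noise (assumed)

        x = np.zeros(2)                     # estimated SINS errors
        P = np.eye(2)

        def kf_step(x, P, z):
            """One predict/update cycle of the error-state Kalman filter."""
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update with z = SINS output - GPS output
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P
            return x, P

        # Example: the SINS drifts linearly; GPS is noisy but unbiased
        rng = np.random.default_rng(0)
        for k in range(100):
            true_drift = np.array([0.5 * k * dt, 0.5])        # growing SINS error
            z = true_drift + rng.normal(0, [2.0, 0.5])        # SINS - GPS difference
            x, P = kf_step(x, P, z)
        print("estimated SINS error (pos, vel):", np.round(x, 2))   # subtract from SINS output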

  15. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  16. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
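
    A minimal Python sketch of the truncated-Taylor-series idea for a linear test model (a harmonic oscillator, where the exact solution is known) is given below; it is only an illustration of the Nth-order truncation, not the algebraic dynamics implementation used for the 12 test models.

        import numpy as np

        # Test model: harmonic oscillator dx/dt = A x with A = [[0, 1], [-1, 0]].
        A = np.array([[0.0, 1.0],
                      [-1.0, 0.0]])

        def taylor_step(x, h, order):
            """One step of an Nth-order truncated-Taylor integrator:
            x(t+h) ~ sum_{k=0..N} (h A)^k / k! applied to x(t)."""
            term = x.copy()
            result = x.copy()
            for k in range(1, order + 1):
                term = (h / k) * (A @ term)      # builds (hA)^k x / k! recursively
                result = result + term
            return result

        h, steps = 0.1, 1000
        x_taylor = np.array([1.0, 0.0])
        for _ in range(steps):
            x_taylor = taylor_step(x_taylor, h, order=6)

        # Exact solution of the oscillator after t = steps*h is a pure rotation
        t = steps * h
        x_exact = np.array([np.cos(t), -np.sin(t)])
        print("6th-order Taylor error:", np.linalg.norm(x_taylor - x_exact))
        # Energy (x1^2 + x2^2) is preserved to the accuracy of the truncation, which is
        # the sense in which such a scheme limits algorithm-induced dissipation/phase shift.
        print("energy drift:", x_taylor @ x_taylor - 1.0)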

  17. Estimating the Wet-Rock P-Wave Velocity from the Dry-Rock P-Wave Velocity for Pyroclastic Rocks

    Science.gov (United States)

    Kahraman, Sair; Fener, Mustafa; Kilic, Cumhur Ozcan

    2017-07-01

    Seismic methods are widely used for geotechnical investigations in volcanic areas and for the laboratory determination of the engineering properties of pyroclastic rocks. Therefore, developing a relation between the wet- and dry-rock P-wave velocities will be helpful for engineers when evaluating the formation characteristics of pyroclastic rocks. To investigate the predictability of the wet-rock P-wave velocity from the dry-rock P-wave velocity for pyroclastic rocks, P-wave velocity measurements were conducted on 27 different pyroclastic rocks. In addition, dry-rock S-wave velocity measurements were conducted. The test results were modeled using Gassmann's and Wood's theories, and it was seen that the saturated P-wave velocities estimated from the theories fit the measured data well. For samples with values less than and greater than 20%, practical equations were derived for reliably estimating the wet-rock P-wave velocity as a function of the dry-rock P-wave velocity.

  18. Solving k-Barrier Coverage Problem Using Modified Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Yanhua Zhang

    2017-01-01

    Full Text Available Coverage is a critical issue in wireless sensor networks for security applications. k-barrier coverage is an effective measure to ensure robustness. In this paper, we formulate the k-barrier coverage problem as a constrained optimization problem and introduce the energy constraint of the sensor nodes to prolong the lifetime of the k-barrier coverage. A novel hybrid particle swarm optimization and gravitational search algorithm (PGSA) is proposed to solve this problem. The proposed PGSA adopts a k-barrier coverage generation strategy based on probability, integrates the exploitation ability of particle swarm optimization to update the velocity and enhance the global search capability, and introduces a boundary mutation strategy for an agent to increase the population diversity and search accuracy. Extensive simulations are conducted to demonstrate the effectiveness of our proposed algorithm.

  19. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    Science.gov (United States)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well established, is based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  20. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete definition of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  1. Field Testing of an In-well Point Velocity Probe for the Rapid Characterization of Groundwater Velocity

    Science.gov (United States)

    Osorno, T.; Devlin, J. F.

    2017-12-01

    Reliable estimates of groundwater velocity are essential for the best implementation of in-situ monitoring and remediation technologies. The In-well Point Velocity Probe (IWPVP) is an inexpensive, reusable tool developed for rapid measurement of groundwater velocity at the centimeter scale in monitoring wells. IWPVP measurements of groundwater speed are based on a small-scale tracer test conducted as ambient groundwater passes through the well screen and the body of the probe. The horizontal flow direction can be determined from the difference in tracer mass passing detectors placed in four funnel-and-channel pathways through the probe, arranged in a cross pattern. The design viability of the IWPVP was confirmed using a two-dimensional numerical model in Comsol Multiphysics, followed by a series of laboratory tank experiments in which IWPVP measurements were calibrated to quantify seepage velocities in both fine and medium sand. Lab results showed that the IWPVP was capable of measuring the seepage velocity in less than 20 minutes per test when the seepage velocity was in the range of 0.5 to 4.0 m/d. Further, the IWPVP estimated the groundwater speed with a precision of ± 7% and an accuracy of ± 14%, on average. The horizontal flow direction was determined with an accuracy of ± 15°, on average. Recently, a pilot field test of the IWPVP was conducted in the Borden aquifer, C.F.B. Borden, Ontario, Canada. A total of approximately 44 IWPVP tests were conducted in two 2-inch groundwater monitoring wells, each comprising a 5 ft. section of #8 commercial well screen. Again, all tests were completed in under 20 minutes. The velocities estimated from IWPVP data were compared to 21 Point Velocity Probe (PVP) tests, as well as to Darcy-based estimates of groundwater velocity. Preliminary data analysis shows strong agreement between the IWPVP and PVP estimates of groundwater velocity. Further, both the IWPVP and PVP estimates of groundwater velocity appear to be reasonable when

  2. Gas-kinetic unified algorithm for hypersonic flows covering various flow regimes solving Boltzmann model equation in nonequilibrium effect

    International Nuclear Information System (INIS)

    Li, Zhihui; Ma, Qiang; Wu, Junlin; Jiang, Xinyu; Zhang, Hanxin

    2014-01-01

    Based on the Gas-Kinetic Unified Algorithm (GKUA), which directly solves the Boltzmann model equation, the effect of rotational non-equilibrium is investigated using the kinetic Rykov model with relaxation of the rotational degrees of freedom. The spin of the diatomic molecule is described by its moment of inertia, and the conservation of total angular momentum is taken as a new Boltzmann collision invariant. The molecular velocity distribution function is integrated with a weight factor over the internal energy, and a closed system of two kinetic governing equations is obtained with inelastic and elastic collisions. An optimized selection technique for the discrete velocity ordinate points and numerical quadrature rules for the macroscopic flow variables, with dynamically updated evolution, is developed to simulate hypersonic flows, and a gas-kinetic numerical scheme is constructed to capture the time evolution of the discretized velocity distribution functions. Gas-kinetic boundary conditions in thermodynamic non-equilibrium and the corresponding numerical procedures are studied and implemented by acting directly on the velocity distribution function, and the unified algorithm for the Boltzmann model equation involving non-equilibrium effects is then presented for the whole range of flow regimes. Hypersonic flows involving non-equilibrium effects are numerically simulated, including the inner flows of shock wave structures in nitrogen with Mach numbers of 1.5 ≤ Ma ≤ 25, the planar ramp flow over the whole range of Knudsen numbers 0.0009 ≤ Kn ≤ 10, and the three-dimensional re-entry flows around the double-cone body.

  3. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.

    Science.gov (United States)

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-08-21

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degrades rapidly. In this paper an extended Kalman filter-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system, with the state estimate providing the three-degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field of view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, a process called star mapping. Software simulation and a night-sky experiment are performed to validate the efficiency and reliability of the proposed method.

  4. An efficient and stable hybrid extended Lagrangian/self-consistent field scheme for solving classical mutual induction

    International Nuclear Information System (INIS)

    Albaugh, Alex; Demerdash, Omar; Head-Gordon, Teresa

    2015-01-01

    We have adapted a hybrid extended Lagrangian self-consistent field (EL/SCF) approach, developed for time-reversible Born-Oppenheimer molecular dynamics for quantum electronic degrees of freedom, to the problem of classical polarization. In this context, the initial guess for the mutual induction calculation is treated via auxiliary induced dipole variables evolved with a time-reversible velocity Verlet scheme. However, we find a numerical instability, manifested as an accumulation in the auxiliary velocity variables, that in turn results in an unacceptable increase in the number of SCF cycles needed to meet even loose convergence tolerances for the real induced dipoles over the course of a 1 ns trajectory of the AMOEBA14 water model. Diagnosing the numerical instability as a problem of resonances that corrupt the dynamics, we introduce a simple thermostating scheme, illustrated using Berendsen weak coupling and Nose-Hoover chain thermostats, applied to the auxiliary dipole velocities. We find that the inertial EL/SCF (iEL/SCF) method provides superior energy conservation with less stringent convergence thresholds and a correspondingly small number of SCF cycles, and reproduces all properties of the polarization model in the NVT and NVE ensembles accurately. Our iEL/SCF approach is a clear improvement over standard SCF approaches to classical mutual induction calculations and would be worth investigating for application to ab initio molecular dynamics as well
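
    A minimal Python sketch of the two ingredients, a velocity Verlet update for an auxiliary degree of freedom together with a Berendsen weak-coupling rescaling of its velocity, is given below; a 1-D harmonic toy variable stands in for the induced-dipole dynamics and all parameters are illustrative, so this is not the AMOEBA/iEL-SCF implementation.

        import numpy as np

        def force(q):
            return -q                      # harmonic restoring force, k = 1

        dt, mass = 0.01, 1.0
        kT_target = 0.5                    # target "temperature" of the auxiliary variable
        tau = 0.5                          # Berendsen coupling time

        q, v = 1.0, 0.0
        f = force(q)
        for step in range(5000):
            # velocity Verlet
            v_half = v + 0.5 * dt * f / mass
            q = q + dt * v_half
            f = force(q)
            v = v_half + 0.5 * dt * f / mass
            # Berendsen weak coupling: rescale the velocity toward the target temperature
            kT_inst = mass * v * v         # instantaneous "temperature" (1 DOF)
            if kT_inst > 0:
                lam = np.sqrt(1.0 + (dt / tau) * (kT_target / kT_inst - 1.0))
                lam = min(lam, 1.25)       # cap the rescaling to avoid blow-up when v ~ 0
                v *= lam
        print("final (q, v):", round(q, 3), round(v, 3))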

  5. Enhancement of tracking performance in electro-optical system based on servo control algorithm

    Science.gov (United States)

    Choi, WooJin; Kim, SungSu; Jung, DaeYoon; Seo, HyoungKyu

    2017-10-01

    Modern electro-optical surveillance and reconnaissance systems require a tracking capability to obtain exact images of a target, or to accurately direct the line of sight to a target, whether moving or still. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function, which must minimize overshoot in the tracking motion without losing the target. The scheme is to limit the acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques by creating a system model of a DIRCM, simulate the same environment, and validate the performance on the actual equipment.
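
    A minimal Python sketch of a rate-limited command shaper of this kind (a proportional rate command capped by velocity and acceleration limits so the line of sight settles on the target without overshoot) is given below; the limits and gain are illustrative assumptions.

        import numpy as np

        dt = 0.01            # 100 Hz servo loop
        v_max = 0.8          # rad/s  velocity limit (illustrative)
        a_max = 2.0          # rad/s^2 acceleration limit (illustrative)
        kp = 4.0             # proportional gain of the outer tracking loop

        def shaped_command(angle_cmd, angle, vel):
            """One servo tick: error-proportional rate command with velocity and
            acceleration limits (a simple trapezoidal-profile shaper)."""
            err = angle_cmd - angle
            # desired rate: proportional near the target, capped by the speed that can
            # still be braked with a_max, and by the absolute velocity limit
            v_des = np.sign(err) * min(kp * abs(err), np.sqrt(2.0 * a_max * abs(err)), v_max)
            dv = np.clip(v_des - vel, -a_max * dt, a_max * dt)
            return vel + dv

        # Track a target that steps by 0.5 rad: the limited profile ramps up and
        # decelerates into the setpoint instead of overshooting.
        angle, vel = 0.0, 0.0
        for k in range(300):
            vel = shaped_command(0.5, angle, vel)
            angle += vel * dt
        print("final angle:", round(angle, 4), "final rate:", round(vel, 4))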

  6. Does polar interaction influence medium viscosity? A computer ...

    Indian Academy of Sciences (India)

    ... special attention because of the possibility of accessing rich, often new, ... integrated by using the Verlet leapfrog integration scheme ... Even though no analytical theory or simulations exist for ... This feature makes the rotations in the model ...

  7. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    Science.gov (United States)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we describe a method for developing an improved multi-objective genetic algorithm (NSGA-SBX) and applying it to two numerical tests to verify the advantages of the algorithm. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in case of inconsistent discontinuities between these models, joint inversion can retain the advantages of individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low velocity layers and a high conductivity layer are observed in the middle lower crust, probably indicating the mid-crustal channel flow. Along the Longmenshan fault zone, a high conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.

  8. Hypocenter relocation of microseismic events using a 3-D velocity model of the shale-gas production site in the Horn River Basin

    Science.gov (United States)

    Woo, J. U.; Kim, J. H.; Rhie, J.; Kang, T. S.

    2016-12-01

    Microseismic monitoring is a crucial process for evaluating the efficiency of hydro-fracking and understanding the development of fracture networks. Consequently, it can provide valuable information for designing the post-hydro-fracking stages and estimating the stimulated rock volumes. The fundamental information is the set of source parameters of the microseismic events. The most important parameter is the hypocenter of an event, and thus accurate hypocenter determination is key to successful microseismic monitoring. The accuracy of hypocenters for a given dataset of seismic phase arrival times depends on the accuracy of the velocity model used in the seismic analysis. In this study, we evaluated how a 3-D model can affect the accuracy of hypocenters. We used auto-picked P- and S-wave travel-time data of about 8,000 events at a commercial shale gas production site in the Horn River Basin, Canada. The initial hypocenters of the events were determined using a single-difference linear inversion algorithm with a 1-D velocity model obtained from the well-logging data. We then iteratively inverted the travel times of the events for 3-D velocity perturbations and relocated their hypocenters using a double-difference algorithm. A significant reduction of the errors in the final hypocenters was obtained. This result indicates that the 3-D model is useful for improving the performance of microseismic monitoring.

  9. Velocity Dispersions Across Bulge Types

    International Nuclear Information System (INIS)

    Fabricius, Maximilian; Bender, Ralf; Hopp, Ulrich; Saglia, Roberto; Drory, Niv; Fisher, David

    2010-01-01

    We present first results from a long-slit spectroscopic survey of bulge kinematics in local spiral galaxies. Our optical spectra were obtained at the Hobby-Eberly Telescope with the LRS spectrograph and have a velocity resolution of 45 km/s (σ*), which allows us to resolve the velocity dispersions in the bulge regions of most objects in our sample. We find that, once strongly barred galaxies are discarded, the velocity dispersion profiles of morphologically classical bulges are always centrally peaked, while the velocity dispersion of morphologically disk-like bulges stays relatively flat towards the center.

  10. Estimation of vector velocity

    DEFF Research Database (Denmark)

    2000-01-01

    Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...

  11. The 1994 Northridge, California, earthquake: Investigation of rupture velocity, risetime, and high-frequency radiation

    Science.gov (United States)

    Hartzell, S.; Liu, P.; Mendoza, C.

    1996-01-01

    A hybrid global search algorithm is used to solve the nonlinear problem of calculating slip amplitude, rake, risetime, and rupture time on a finite fault. Thirty-five strong motion velocity records are inverted by this method over the frequency band from 0.1 to 1.0 Hz for the Northridge earthquake. Four regions of larger-amplitude slip are identified: one near the hypocenter at a depth of 17 km, a second west of the hypocenter at about the same depth, a third updip from the hypocenter at a depth of 10 km, and a fourth updip from the hypocenter and to the northwest. The results further show an initial fast rupture with a velocity of 2.8 to 3.0 km/s followed by a slow termination of the rupture with velocities of 2.0 to 2.5 km/s. The initial energetic rupture phase lasts for 3 s, extending out 10 km from the hypocenter. Slip near the hypocenter has a short risetime of 0.5 s, which increases to 1.5 s for the major slip areas removed from the hypocentral region. The energetic rupture phase is also shown to be the primary source of high-frequency radiation (1-15 Hz) by an inversion of acceleration envelopes. The same global search algorithm is used in the envelope inversion to calculate high-frequency radiation intensity on the fault and rupture time. The rupture timing from the low- and high-frequency inversions is similar, indicating that the high frequencies are produced primarily at the mainshock rupture front. Two major sources of high-frequency radiation are identified within the energetic rupture phase, one at the hypocenter and another deep source to the west of the hypocenter. The source at the hypocenter is associated with the initiation of rupture and the breaking of a high-stress-drop asperity and the second is associated with stopping of the rupture in a westerly direction.

  12. Analysis of photosynthate translocation velocity and measurement of weighted average velocity in transporting pathway of crops

    International Nuclear Information System (INIS)

    Ge Cailin; Luo Shishi; Gong Jian; Zhang Hao; Ma Fei

    1996-08-01

    The translocation profile pattern of 14C-photosynthate along the transporting pathway in crops was monitored by pulse-labelling a mature leaf with 14CO2. The progressive spreading of the translocation profile pattern along the sheath or stem indicates that the translocation of photosynthate along the sheath or stem proceeds with a range of velocities rather than with just a single velocity. A method for measuring the weighted average velocity of photosynthate translocation along the sheath or stem was established in living crops. The weighted average velocity and the maximum velocity of photosynthate translocation along the sheath were measured in rice and maize. (4 figs., 3 tabs.)

  13. Correlation of right atrial appendage velocity with left atrial appendage velocity and brain natriuretic Peptide.

    Science.gov (United States)

    Kim, Bu-Kyung; Heo, Jung-Ho; Lee, Jae-Woo; Kim, Hyun-Soo; Choi, Byung-Joo; Cha, Tae-Joon

    2012-03-01

    Left atrial appendage (LAA) anatomy and function have been well characterized in both healthy and diseased people, whereas relatively little attention has been focused on the right atrial appendage (RAA). We sought to evaluate RAA flow velocity, and to compare this parameter with LAA indices and with biomarkers such as brain natriuretic peptide (BNP), among patients with sinus rhythm (SR) and atrial fibrillation (AF). In a series of 79 consecutive patients referred for transesophageal echocardiography, 43 patients (23 with AF and 20 controls) were evaluated. AF was associated with a decrease in flow velocity for both the LAA and the RAA (LAA velocity, SR vs. AF: 61 ± 22 vs. 29 ± 18 m/sec; RAA velocity, SR vs. AF: 46 ± 20 vs. 19 ± 8 m/sec). AF was associated with decreased RAA and LAA flow velocities. RAA velocity was found to be positively correlated with LAA velocity and negatively correlated with BNP. The plasma BNP concentration may serve as a determinant of LAA and RAA functions.

  14. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    Science.gov (United States)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.

  15. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S., E-mail: s.wognum@gmail.com; Heethuis, S. E.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Rosario, T. [Department of Radiation Oncology, VU University Medical Center, De Boelelaan 1117, 1081 HZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Erasmus MC Cancer Institute, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2014-07-15

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure

  16. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers.

    Science.gov (United States)

    Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A

    2014-07-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on

  17. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    International Nuclear Information System (INIS)

    Wognum, S.; Heethuis, S. E.; Bel, A.; Rosario, T.; Hoogeman, M. S.

    2014-01-01

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure
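
    A minimal Python sketch of the marker-based spatial error calculation described above (apply the deformation vector field to the reference marker positions and measure the residual distance to the corresponding markers) is given below; the array layout, voxel size, and synthetic shift are assumptions for illustration.

        import numpy as np

        def marker_registration_error(markers_ref, markers_moving, dvf, voxel_size):
            """Spatial error of a DIR result: apply the deformation vector field (DVF)
            to the reference marker positions and measure the distance to the
            corresponding markers in the moving image. Positions in mm, DVF in mm
            on a voxel grid indexed as dvf[z, y, x, 3] (assumed layout)."""
            errors = []
            for p_ref, p_mov in zip(markers_ref, markers_moving):
                idx = np.round(p_ref / voxel_size).astype(int)      # nearest voxel
                mapped = p_ref + dvf[idx[2], idx[1], idx[0]]        # deformed position
                errors.append(np.linalg.norm(mapped - p_mov))
            return np.array(errors)

        # Illustrative use with a synthetic uniform 3 mm shift along x
        voxel_size = np.array([1.0, 1.0, 1.0])                      # mm
        dvf = np.zeros((60, 60, 60, 3)); dvf[..., 0] = 3.0          # DVF in (x, y, z) mm
        rng = np.random.default_rng(0)
        markers_ref = rng.uniform(5, 50, size=(30, 3))              # 30 fiducials (mm)
        markers_mov = markers_ref + [3.0, 0.0, 0.0]                 # truth: shifted by 3 mm
        err = marker_registration_error(markers_ref, markers_mov, dvf, voxel_size)
        print("mean error (mm):", err.mean())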

  18. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. It is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity estimates obtained from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, considering a fixed moving-horizon window as input to the wavelet filter. Because wavelet filters are used, the method can be implemented in a parallel procedure. In this way, the velocity is estimated numerically without the high noise of differentiators or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. The method allows velocity sensing with fewer mechanically moving parts, which makes it suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters in terms of stability and delay, and benchmarked them by integrating the estimated velocity over long times to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
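
    As an illustration only of the complementary idea described above (differentiated position trusted at low frequencies, integrated acceleration at high frequencies), the following sketch replaces the paper's wavelet filter banks and moving-horizon window with a plain first-order complementary filter; the function name, the value of alpha, and the synthetic signals are hypothetical.

```python
import numpy as np

def complementary_velocity(x, a, dt, alpha=0.98):
    """First-order complementary velocity estimate (illustrative only).

    x     : position samples (reliable at low frequencies, drift-free)
    a     : acceleration samples (reliable at high frequencies, may carry bias)
    dt    : sampling period
    alpha : close to 1 trusts the integrated acceleration at high frequencies;
            (1 - alpha) weights the differentiated position.
    """
    v = np.zeros(len(x))
    for k in range(1, len(x)):
        v_diff = (x[k] - x[k - 1]) / dt      # numerical differentiation
        v_int = v[k - 1] + a[k] * dt         # numerical integration
        v[k] = alpha * v_int + (1.0 - alpha) * v_diff
    return v

# hypothetical synthetic test: sinusoidal motion, noisy position, biased accelerometer
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
x = np.sin(2 * np.pi * 3 * t) + 1e-3 * np.random.randn(t.size)
a = -(2 * np.pi * 3) ** 2 * np.sin(2 * np.pi * 3 * t) + 0.05
v_est = complementary_velocity(x, a, dt)
```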

  19. Mapping Deep Low Velocity Zones in Alaskan Arctic Coastal Permafrost using Seismic Surface Waves

    Science.gov (United States)

    Dou, S.; Ajo Franklin, J. B.; Dreger, D. S.

    2012-12-01

    Multichannel Analysis of Surface Waves (MASW) suggests the existence of pronounced low shear wave velocity zones that span the depth range of 2 - 30 meters; this zone has shear velocity values comparable to partially thawed soils. Such features coincide with previous findings of very low electrical resistivity structure (as low as ~10 Ohm*m at some locations) from measurements obtained in the first NGEE-Arctic geophysical field campaign (conducted in the week of September 24 - October 1, 2011). These low shear velocity zones are likely representative of regions with high unfrozen water content and thus have important implications for the rate of microbial activity and the vulnerability of deep permafrost carbon pools. Analysis of this dataset required development of a novel inversion approach based on waveform inversion. The existence of multiple closely spaced Rayleigh wave modes made traditional inversion based on mode picking virtually impossible; as a result, we selected a direct misfit evaluation based on comparing dispersion images in the phase velocity/frequency domain. The misfit function was optimized using a global search algorithm, in this case Huyer and Neumaier's Multilevel Coordinate Search algorithm (MCS). This combination of MCS and waveform misfit allowed recovery of the low velocity region despite the existence of closely spaced modes.

  20. On whistler-mode group velocity

    International Nuclear Information System (INIS)

    Sazhin, S.S.

    1986-01-01

    An analytical study of the group velocity of whistler-mode waves propagating parallel to the magnetic field in a hot anisotropic plasma is presented. Some simple approximate formulae, which can be used for magnetospheric applications, are derived. These formulae can predict some properties of this group velocity which were not previously recognized or were obtained by numerical methods. In particular, it is pointed out that the anisotropy tends to compensate for the influence of the electron temperature on the value of the group velocity when the wave frequency is well below the electron gyrofrequency. It is predicted that, at frequencies near the electron gyrofrequency, this velocity tends towards zero.
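
    For orientation, the vanishing of the group velocity near the electron gyrofrequency is already visible in the textbook cold-plasma limit for parallel propagation; this is only the cold limit, not Sazhin's hot anisotropic result.

```latex
% Cold-plasma whistler dispersion, parallel propagation, dense-plasma limit:
%   c^2 k^2 / \omega^2 \simeq \omega_{pe}^2 / [\omega(\omega_{ce} - \omega)]
\[
v_g = \frac{\partial\omega}{\partial k}
    = \frac{2c}{\omega_{pe}\,\omega_{ce}}\,
      \sqrt{\omega}\,\left(\omega_{ce}-\omega\right)^{3/2}
\;\longrightarrow\; 0
\quad\text{as}\quad \omega \to \omega_{ce}.
\]
```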

  1. Genetic algorithm trajectory plan optimization for EAMA: EAST Articulated Maintenance Arm

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jing, E-mail: wujing@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd., Hefei, Anhui (China); Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland); Wu, Huapeng [Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland); Song, Yuntao; Cheng, Yong; Zhao, Wenglong [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd., Hefei, Anhui (China); Wang, Yongbo [Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland)

    2016-11-01

    Highlights: • A redundant 10-DOF serial-articulated robot for EAST assembly and maintenance is presented. • A trajectory optimization algorithm for the robot is developed. • A minimum-jerk objective is presented to suppress machining vibration of the robot. - Abstract: EAMA (EAST Articulated Maintenance Arm) is a serial manipulator with a 7-degree-of-freedom (DOF) articulated arm followed by a 3-DOF gripper; its total length is 8.867 m, and it works inside the Experimental Advanced Superconducting Tokamak (EAST) vacuum vessel (VV) to perform blanket inspection and remote maintenance tasks. This paper presents a trajectory optimization method that aims to give the 7-DOF articulated arm a stable movement which keeps the mounted inspection camera free of vibration. Based on a dynamics analysis, the trajectory optimization algorithm adopts multi-order polynomial interpolation in joint space and a high-order geometric Jacobian transform. The objective of the optimization algorithm is to suppress end-effector vibration by minimizing the jerk RMS (root mean square) value. The proposed solution satisfies the kinematic constraints of EAMA’s motion and keeps the absolute values of velocity, acceleration and jerk within their boundaries. A GA (genetic algorithm) is employed to find a global and robust solution to this problem.
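
    The sketch below, with hypothetical names, shows two of the building blocks mentioned: a quintic point-to-point joint trajectory (one instance of multi-order polynomial interpolation) and the jerk RMS value that could serve as the GA fitness; the Jacobian transform and the GA search itself are not reproduced.

```python
import numpy as np

def quintic_joint_trajectory(q0, qf, T, n=500):
    """Quintic point-to-point trajectory with zero boundary velocity and
    acceleration; returns sample times, joint positions and jerk."""
    t = np.linspace(0.0, T, n)
    s = t / T
    q = q0 + (qf - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    jerk = (qf - q0) * (60 - 360 * s + 360 * s**2) / T**3
    return t, q, jerk

def jerk_rms(jerk):
    """Root-mean-square jerk, a possible fitness value to be minimized."""
    return float(np.sqrt(np.mean(jerk**2)))

t, q, jerk = quintic_joint_trajectory(q0=0.0, qf=0.8, T=5.0)
print(jerk_rms(jerk))
```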

  2. Characteristic wave velocities in spherical electromagnetic cloaks

    International Nuclear Information System (INIS)

    Yaghjian, A D; Maci, S; Martini, E

    2009-01-01

    We investigate the characteristic wave velocities in spherical electromagnetic cloaks, namely, phase, ray, group and energy-transport velocities. After deriving explicit expressions for the phase and ray velocities (the latter defined as the phase velocity along the direction of the Poynting vector), special attention is given to the determination of group and energy-transport velocities, because a cursory application of conventional formulae for local group and energy-transport velocities can lead to a discrepancy between these velocities if the permittivity and permeability dyadics are not equal over a frequency range about the center frequency. In contrast, a general theorem can be proven from Maxwell's equations that the local group and energy-transport velocities are equal in linear, lossless, frequency dispersive, source-free bianisotropic material. This apparent paradox is explained by showing that the local fields of the spherical cloak uncouple into an E wave and an H wave, each with its own group and energy-transport velocities, and that the group and energy-transport velocities of either the E wave or the H wave are equal and thus satisfy the general theorem.

  3. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    Science.gov (United States)

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.

  4. An efficient particle Fokker–Planck algorithm for rarefied gas flows

    Energy Technology Data Exchange (ETDEWEB)

    Gorji, M. Hossein; Jenny, Patrick

    2014-04-01

    This paper is devoted to the algorithmic improvement and careful analysis of the Fokker–Planck kinetic model derived by Jenny et al. [1] and Gorji et al. [2]. The motivation behind the Fokker–Planck based particle methods is to gain efficiency in low Knudsen rarefied gas flow simulations, where conventional direct simulation Monte Carlo (DSMC) becomes expensive. This can be achieved due to the fact that the resulting model equations are continuous stochastic differential equations in velocity space. Accordingly, the computational particles evolve along independent stochastic paths and thus no collisions need to be calculated. Therefore the computational cost of the solution algorithm becomes independent of the Knudsen number. In the present study, different computational improvements were pursued in order to augment the method, including an accurate time integration scheme, local time stepping and noise reduction. For assessment of the performance, gas flow around a cylinder and lid driven cavity flow were studied. Convergence rates, accuracy and computational costs were compared with respect to DSMC for a range of Knudsen numbers (from the hydrodynamic regime up to Knudsen numbers above one). In all the considered cases, the model together with the proposed scheme gives rise to very efficient yet accurate solution algorithms.
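
    The "continuous stochastic differential equations in velocity space" can be illustrated with a generic Ornstein-Uhlenbeck velocity update, integrated exactly over a time step; the relaxation time and temperature below are placeholders, and the actual drift and diffusion coefficients of the cited Fokker–Planck model are not reproduced.

```python
import numpy as np

def ou_velocity_update(v, u_mean, tau, kT_over_m, dt, rng):
    """Exact one-step Ornstein-Uhlenbeck update of particle velocities.

    v         : (N, 3) particle velocities
    u_mean    : (3,) local mean velocity towards which particles relax
    tau       : velocity relaxation time
    kT_over_m : temperature in velocity-variance units
    dt        : time step (the exact update has no stability restriction)
    """
    decay = np.exp(-dt / tau)
    sigma = np.sqrt(kT_over_m * (1.0 - decay**2))
    return u_mean + (v - u_mean) * decay + sigma * rng.standard_normal(v.shape)

rng = np.random.default_rng(0)
v = rng.standard_normal((1000, 3))
for _ in range(100):      # particles evolve along independent stochastic paths
    v = ou_velocity_update(v, np.zeros(3), tau=0.1, kT_over_m=1.0, dt=0.05, rng=rng)
```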

  5. A general concurrent algorithm for plasma particle-in-cell simulation codes

    International Nuclear Information System (INIS)

    Liewer, P.C.; Decyk, V.K.

    1989-01-01

    We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 33-node JPL Mark III Hypercube parallel computer. To decompose a PIC code using the GCPIC algorithm, the physical domain of the particle simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. For problems with non-uniform particle densities, these sub-domains will be of unequal physical size. Each processor is assigned a sub-domain and is responsible for updating the particles in its sub-domain. This algorithm has led to a very efficient parallel implementation of a well-benchmarked 1-dimensional PIC code. The dominant portion of the code, updating the particle positions and velocities, is nearly 100% efficient when the number of particles is increased linearly with the number of hypercube processors used so that the number of particles per processor is constant. For example, the increase in time spent updating particles in going from a problem with 11,264 particles run on 1 processor to 360,448 particles on 32 processors was only 3% (parallel efficiency of 97%). Although implemented on a hypercube concurrent computer, this algorithm should also be efficient for PIC codes on other parallel architectures and for large PIC codes on sequential computers where part of the data must reside on external disks. copyright 1989 Academic Press, Inc
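
    A minimal 1D illustration of the particle-balanced decomposition described above (names and density are hypothetical): sub-domain boundaries are placed at particle-count quantiles, so each processor owns roughly the same number of particles even though the sub-domains have unequal physical size.

```python
import numpy as np

def balanced_subdomain_bounds(x, n_proc):
    """Interior boundaries that split 1D particle positions into n_proc
    contiguous sub-domains with (nearly) equal particle counts."""
    q = np.linspace(0.0, 1.0, n_proc + 1)[1:-1]
    return np.quantile(x, q)

rng = np.random.default_rng(1)
x = np.abs(rng.normal(0.0, 1.0, 360_448))          # non-uniform particle density
bounds = balanced_subdomain_bounds(x, n_proc=32)
owner = np.searchsorted(bounds, x)                 # processor index for each particle
counts = np.bincount(owner, minlength=32)
print(counts.min(), counts.max())                  # nearly equal loads per processor
```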

  6. Multi-component pre-stack time-imaging and migration-based velocity analysis in transversely isotropic media; Imagerie sismique multicomposante et analyse de vitesse de migration en milieu transverse isotrope

    Energy Technology Data Exchange (ETDEWEB)

    Gerea, C.V.

    2001-06-01

    Complementary to the recording of compressional (P-) waves, the observation of P-S converted waves has recently been receiving specific attention. This is mainly due to their tremendous potential as a tool for fracture and lithology characterization, imaging sediments in gas saturated rocks, and imaging shallow sediments with higher resolution than conventional P-P data. In a conventional marine seismic survey, we cannot record P-to-S converted-wave energy since the fluids cannot support shear-wave strain. Thus, to capture the converted-wave energy, we need to record it at the water bottom using an ocean-bottom cable (OBC). The S-waves recorded at the seabed are mainly converted from P to S (i.e., PS-waves or C-waves) at the subsurface reflectors. The most accurate way to image seismic data is pre-stack depth migration. In this thesis, I develop a numerically efficient 2.5-D true-amplitude elastic Kirchhoff pre-stack migration algorithm designed to handle OBC data gathered along a single line. All the kinematic and dynamic elastic Green's functions required in the computation of the true-amplitude weight term of the Kirchhoff summation are based on the non-hyperbolic explicit approximations of P- and SV-wave travel-times in layered transversely isotropic (VTI) media. Hence, this elastic imaging algorithm is very well-suited for migration-based velocity analysis techniques, for which fast, robust and iterative pre-stack migration is desired. In this thesis, I also approach the topic of anisotropic velocity model building for elastic pre-stack time-imaging, and propose an original methodology for joint PP-PS migration-based velocity analysis (MVA) in layered VTI anisotropic media. Tests on elastic synthetic and real OBC seismic data ascertain the validity of the pre-stack migration algorithm and velocity analysis methodology. (author)

  7. Southern high-velocity stars

    International Nuclear Information System (INIS)

    Augensen, H.J.; Buscombe, W.

    1978-01-01

    Using the model of the Galaxy presented by Eggen, Lynden-Bell and Sandage (1962), plane galactic orbits have been calculated for 800 southern high-velocity stars which possess parallax, proper motion, and radial velocity data. The stars with trigonometric parallaxes were selected from Buscombe and Morris (1958), supplemented by more recent spectroscopic data. Photometric parallaxes from infrared color indices were used for bright red giants studied by Eggen (1970), and for red dwarfs for which Rodgers and Eggen (1974) determined radial velocities. A color-color diagram based on published values of (U-B) and (B-V) for most of these stars is shown. (Auth.)

  8. Modified Feynman ratchet with velocity-dependent fluctuations

    Directory of Open Access Journals (Sweden)

    Jack Denur

    2004-03-01

    The randomness of Brownian motion at thermodynamic equilibrium can be spontaneously broken by velocity-dependence of fluctuations, i.e., by dependence of values or probability distributions of fluctuating properties on Brownian-motional velocity. Such randomness-breaking can spontaneously obtain via interaction between Brownian-motional Doppler effects --- which manifest the required velocity-dependence --- and system geometrical asymmetry. A non-random walk is thereby spontaneously superposed on Brownian motion, resulting in a systematic net drift velocity despite thermodynamic equilibrium. The time evolution of this systematic net drift velocity --- and of velocity probability density, force, and power output --- is derived for a velocity-dependent modification of Feynman's ratchet. We show that said spontaneous randomness-breaking, and the consequent systematic net drift velocity, imply: bias from the Maxwellian of the system's velocity probability density, the force that tends to accelerate it, and its power output. Maximization, especially of power output, is discussed. Uncompensated decreases in total entropy, challenging the second law of thermodynamics, are thereby implied.

  9. Derivation of site-specific relationships between hydraulic parameters and p-wave velocities based on hydraulic and seismic tomography

    Energy Technology Data Exchange (ETDEWEB)

    Brauchler, R.; Doetsch, J.; Dietrich, P.; Sauter, M.

    2012-01-10

    In this study, hydraulic and seismic tomographic measurements were used to derive a site-specific relationship between the geophysical parameter p-wave velocity and the hydraulic parameters, diffusivity and specific storage. Our field study includes diffusivity tomograms derived from hydraulic travel time tomography, specific storage tomograms derived from hydraulic attenuation tomography, and p-wave velocity tomograms derived from seismic tomography. The tomographic inversion was performed in all three cases with the SIRT (Simultaneous Iterative Reconstruction Technique) algorithm, using a ray tracing technique with curved trajectories. The experimental set-up was designed such that the p-wave velocity tomogram overlaps the hydraulic tomograms by half. The experiments were performed at a well-characterized sand and gravel aquifer, located in the Leine River valley near Göttingen, Germany. Access to the shallow subsurface was provided by direct-push technology. The high spatial resolution of hydraulic and seismic tomography was exploited to derive representative site-specific relationships between the hydraulic and geophysical parameters, based on the area where geophysical and hydraulic tests were performed. The transformation of the p-wave velocities into hydraulic properties was undertaken using a k-means cluster analysis. Results demonstrate that the combination of hydraulic and geophysical tomographic data is a promising approach to improve hydrogeophysical site characterization.
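
    The k-means step can be sketched as follows; the velocity and diffusivity values are invented, a plain 1D Lloyd iteration stands in for whatever clustering implementation the authors used, and the point is only the idea of mapping each velocity cluster to a mean hydraulic property.

```python
import numpy as np

def kmeans_1d(values, k, iters=100, seed=0):
    """Plain Lloyd k-means on 1D data; returns cluster centers and labels."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

# hypothetical co-located samples from the overlapping tomograms
vp = np.array([1600.0, 1650.0, 1700.0, 2100.0, 2150.0, 2200.0])   # m/s
diffusivity = np.array([2e-2, 3e-2, 2.5e-2, 0.8, 0.9, 0.7])        # m^2/s
centers, labels = kmeans_1d(vp, k=2)
for j in range(2):            # each velocity cluster -> mean hydraulic property
    print(centers[j], diffusivity[labels == j].mean())
```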

  10. Velocity distribution of fragments of catastrophic impacts

    Science.gov (United States)

    Takagi, Yasuhiko; Kato, Manabu; Mizutani, Hitoshi

    1992-01-01

    Three dimensional velocities of fragments produced by laboratory impact experiments were measured for basalts and pyrophyllites. The velocity distribution of fragments obtained shows that the velocity range of the major fragments is rather narrow, at most within a factor of 3 and that no clear dependence of velocity on the fragment mass is observed. The NonDimensional Impact Stress (NDIS) defined by Mizutani et al. (1990) is found to be an appropriate scaling parameter to describe the overall fragment velocity as well as the antipodal velocity.

  11. Potential, velocity, and density fields from redshift-distance samples: Application - Cosmography within 6000 kilometers per second

    International Nuclear Information System (INIS)

    Bertschinger, E.; Dekel, A.; Faber, S.M.; Dressler, A.; Burstein, D.

    1990-01-01

    A potential flow reconstruction algorithm has been applied to the real universe to reconstruct the three-dimensional potential, velocity, and mass density fields smoothed on large scales. The results are shown as maps of these fields, revealing the three-dimensional structure within 6000 km/s distance from the Local Group. The dominant structure is an extended deep potential well in the Hydra-Centaurus region, stretching across the Galactic plane toward Pavo, broadly confirming the Great Attractor (GA) model of Lynden-Bell et al. (1988). The Local Supercluster appears to be an extended ridge on the near flank of the GA, proceeding through the Virgo Southern Extension to the Virgo and Ursa Major clusters. The Virgo cluster and the Local Group are both falling toward the bottom of the GA potential well with peculiar velocities of 658 ± 121 km/s and 565 ± 125 km/s, respectively. 65 refs

  12. Potential, velocity, and density fields from redshift-distance samples: Application - Cosmography within 6000 kilometers per second

    Science.gov (United States)

    Bertschinger, Edmund; Dekel, Avishai; Faber, Sandra M.; Dressler, Alan; Burstein, David

    1990-12-01

    A potential flow reconstruction algorithm has been applied to the real universe to reconstruct the three-dimensional potential, velocity, and mass density fields smoothed on large scales. The results are shown as maps of these fields, revealing the three-dimensional structure within 6000 km/s distance from the Local Group. The dominant structure is an extended deep potential well in the Hydra-Centaurus region, stretching across the Galactic plane toward Pavo, broadly confirming the Great Attractor (GA) model of Lynden-Bell et al. (1988). The Local Supercluster appears to be an extended ridge on the near flank of the GA, proceeding through the Virgo Southern Extension to the Virgo and Ursa Major clusters. The Virgo cluster and the Local Group are both falling toward the bottom of the GA potential well with peculiar velocities of 658 ± 121 km/s and 565 ± 125 km/s, respectively.

  13. Development and evaluation of a micro-macro algorithm for the simulation of polymer flow

    International Nuclear Information System (INIS)

    Feigl, Kathleen; Tanner, Franz X.

    2006-01-01

    A micro-macro algorithm for the calculation of polymer flow is developed and numerically evaluated. The system being solved consists of the momentum and mass conservation equations from continuum mechanics coupled with a microscopic-based rheological model for polymer stress. Standard finite element techniques are used to solve the conservation equations for velocity and pressure, while stochastic simulation techniques are used to compute polymer stress from the simulated polymer dynamics in the rheological model. The rheological model considered combines aspects of reptation, network and continuum models. Two types of spatial approximation are considered for the configuration fields defining the dynamics in the model: piecewise constant and piecewise linear. The micro-macro algorithm is evaluated by simulating the abrupt planar die entry flow of a polyisobutylene solution described in the literature. The computed velocity and stress fields are found to be essentially independent of mesh size and ensemble size, while there is some dependence of the results on the order of spatial approximation to the configuration fields close to the die entry. Comparison with experimental data shows that the piecewise linear approximation leads to better predictions of the centerline first normal stress difference. Finally, the computational time associated with the piecewise constant spatial approximation is found to be about 2.5 times lower than that associated with the piecewise linear approximation. This is the result of the more efficient time integration scheme that is possible with the former type of approximation due to the pointwise incompressibility guaranteed by the choice of velocity-pressure finite element

  14. Evaluation Technique of Chloride Penetration Using Apparent Diffusion Coefficient and Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Yun-Yong Kim

    2014-01-01

    The diffusion coefficient from the chloride migration test is currently used; however, it cannot provide a conventional solution such as total chloride content, since it describes only the ion migration velocity in an electrical field. This paper proposes a simple analysis technique for chloride behavior using an apparent diffusion coefficient obtained from a neural network algorithm with time-dependent diffusion phenomena. For this work, thirty mix proportions of high performance concrete are prepared and their diffusion coefficients are obtained after a long-term NaCl submersion test. Considering a time-dependent diffusion coefficient based on Fick's 2nd Law and an NNA (neural network algorithm), an analysis technique for chloride penetration is proposed. The applicability of the proposed technique is verified through the results from accelerated tests, long-term submersion tests, and field investigation results.
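
    For reference, the constant-surface-concentration solution of Fick's 2nd Law that such analyses typically start from is given below, with D_a the apparent (possibly time-dependent) diffusion coefficient; the symbols are generic, not the paper's notation.

```latex
% Fick's second law with C(x,0) = 0 and constant surface concentration C_s:
\[
C(x,t) \;=\; C_s\left[ 1 - \operatorname{erf}\!\left(
        \frac{x}{2\sqrt{D_a\,t}} \right) \right].
\]
```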

  15. Analyses of large quasistatic deformations of inelastic bodies by a new hybrid-stress finite element algorithm

    Science.gov (United States)

    Reed, K. W.; Atluri, S. N.

    1983-01-01

    A new hybrid-stress finite element algorithm, suitable for analyses of large, quasistatic, inelastic deformations, is presented. The algorithm is based upon a generalization of de Veubeke's complementary energy principle. The principal variables in the formulation are the nominal stress rate and spin, and the resulting finite element equations are discrete versions of the equations of compatibility and angular momentum balance. The algorithm produces true rates, i.e., time derivatives, as opposed to 'increments'. The result is a complete separation of the boundary value problem (for stress rate and velocity) and the initial value problem (for total stress and deformation); hence, their numerical treatments are essentially independent. After a fairly comprehensive discussion of the numerical treatment of the boundary value problem, we launch into a detailed examination of the numerical treatment of the initial value problem, covering the topics of efficiency, stability and objectivity. The paper is closed with a set of examples, finite homogeneous deformation problems, which serve to bring out important aspects of the algorithm.

  16. Efficient Algorithm for a k-out-of-N System Reliability Modeling-Case Study: Pitot Sensors System for Aircraft Velocity

    Directory of Open Access Journals (Sweden)

    Wajih Ezzeddine

    2017-08-01

    The k-out-of-N system is widely applied in industrial systems. This structure is a part of fault-tolerant systems, for which both parallel and series systems are special cases. Because of the importance of determining industrial system reliability for production and maintenance management purposes, a number of techniques and methods have been developed to formulate and estimate its analytic expression. In this paper, an algorithm is put forward for a k-out-of-N system with identical components, taking into account information about the influence factors that affect the system efficiency. The developed approach is applied to the case of the Pitot sensor system. The algorithm could, however, be generalized to any device which during a mission is subject to environmental and operational factors that affect its degradation process.
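
    For identical, independent components the underlying reliability has the classic closed form sketched below (the paper's algorithm additionally conditions on influence factors, which this sketch ignores); the 2-out-of-3 example is hypothetical.

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n identical, independent components
    (each working with probability p) are working."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

# hypothetical 2-out-of-3 voting arrangement of sensors
print(k_out_of_n_reliability(2, 3, 0.95))   # 0.99275
```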

  17. An analysis of 3D particle path integration algorithms

    International Nuclear Information System (INIS)

    Darmofal, D.L.; Haimes, R.

    1996-01-01

    Several techniques for the numerical integration of particle paths in steady and unsteady vector (velocity) fields are analyzed. Most of the analysis applies to unsteady vector fields; however, some results apply to steady vector field integration. Multistep, multistage, and some hybrid schemes are considered. It is shown that, due to initialization errors, many unsteady particle path integration schemes are limited to third-order accuracy in time. Multistage schemes require at least three times more internal data storage than multistep schemes of equal order. However, for timesteps within the stability bounds, multistage schemes are generally more accurate. A linearized analysis shows that the stability of these integration algorithms is determined by the eigenvalues of the local velocity tensor. Thus, the accuracy and stability of the methods are interpreted with concepts typically used in critical point theory. This paper shows how integration schemes can lead to erroneous classification of critical points when the timestep is finite and fixed. For steady velocity fields, we demonstrate that timesteps outside of the relative stability region can lead to similar integration errors. From this analysis, guidelines for accurate timestep sizing are suggested for both steady and unsteady flows. In particular, using simulation data for the unsteady flow around a tapered cylinder, we show that accurate particle path integration requires timesteps which are at most on the order of the physical timescale of the flow.
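
    As a concrete instance of the multistage schemes analysed, a classical fourth-order Runge-Kutta particle-path step in an unsteady velocity field looks like the sketch below; the pulsating-vortex field and step size are made up for illustration.

```python
import numpy as np

def rk4_step(x, t, dt, velocity):
    """One classical RK4 step of the particle-path ODE dx/dt = velocity(x, t)."""
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def velocity(x, t):
    """Hypothetical unsteady 2D field: a vortex with pulsating strength."""
    w = 1.0 + 0.3 * np.sin(2.0 * np.pi * t)
    return w * np.array([-x[1], x[0]])

x, dt = np.array([1.0, 0.0]), 1e-2
for n in range(1000):
    x = rk4_step(x, n * dt, dt, velocity)
```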

  18. Velocity spectrum for the Iranian plateau

    Science.gov (United States)

    Bastami, Morteza; Soghrat, M. R.

    2018-01-01

    Peak ground acceleration (PGA) and spectral acceleration values have been proposed in most building codes/guidelines, unlike spectral velocity (SV) and peak ground velocity (PGV). Recent studies have demonstrated the importance of spectral velocity and peak ground velocity in the design of long period structures (e.g., pipelines, tunnels, tanks, and high-rise buildings) and the evaluation of seismic vulnerability in underground structures. The current study was undertaken to develop a velocity spectrum and to estimate PGV. In order to determine these parameters, 398 three-component accelerograms recorded by the Building and Housing Research Center (BHRC) were used. The moment magnitude (Mw) in the selected database was 4.1 to 7.3, and the events occurred after 1977. In the database, the average shear-wave velocity at 0 to 30 m in depth (Vs30) was available for only 217 records; thus, the site class for the remaining records was estimated using empirical methods. Because of the importance of the velocity spectrum at low frequencies, a signal-to-noise ratio of 2 was chosen for determining the low- and high-frequency limits, in order to include a wider range of frequency content. This value can produce conservative results. After estimation of the shape of the velocity design spectrum, the PGV was also estimated for the region under study by finding the correlation between PGV and spectral acceleration at the period of 1 s.

  19. Optimization of the Kinetic Activation-Relaxation Technique, an off-lattice and self-learning kinetic Monte-Carlo method

    International Nuclear Information System (INIS)

    Joly, Jean-François; Béland, Laurent Karim; Brommer, Peter; Mousseau, Normand; El-Mellouhi, Fedwa

    2012-01-01

    We present two major optimizations for the kinetic Activation-Relaxation Technique (k-ART), an off-lattice self-learning kinetic Monte Carlo (KMC) algorithm with on-the-fly event search that has been successfully applied to study a number of semiconducting and metallic systems. K-ART is parallelized in a non-trivial way: a master process uses several worker processes to perform independent searches for possible events, while all bookkeeping and the actual simulation are performed by the master process. Depending on the complexity of the system studied, the parallelization scales well for tens to more than one hundred processes. For dealing with large systems, we present a near order-1 implementation. Techniques such as Verlet lists, cell decomposition and partial force calculations are implemented, and the CPU time per time step scales sublinearly with the number of particles, providing an efficient use of computational resources.
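
    The cell decomposition behind the near order-1 scaling can be sketched as below (2D, hypothetical names, made-up cutoff and box size); neighbour pairs are found by scanning only adjacent cells instead of all particle pairs, which is also how Verlet lists are usually built. The k-ART bookkeeping itself is not reproduced.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def cell_neighbor_pairs(pos, box, rcut):
    """Neighbour pairs within rcut using a periodic 2D cell decomposition."""
    ncell = max(1, int(box // rcut))
    size = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // size).astype(int) % ncell)].append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            neigh = ((cx + dx) % ncell, (cy + dy) % ncell)
            for i in members:
                for j in cells.get(neigh, ()):
                    if j > i:
                        d = pos[i] - pos[j]
                        d -= box * np.round(d / box)        # minimum-image wrap
                        if float(d @ d) < rcut * rcut:
                            pairs.add((i, j))
    return pairs

pos = np.random.default_rng(2).uniform(0.0, 20.0, size=(500, 2))
pairs = cell_neighbor_pairs(pos, box=20.0, rcut=2.5)
```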

  20. Maximal intended velocity training induces greater gains in bench press performance than deliberately slower half-velocity training.

    Science.gov (United States)

    González-Badillo, Juan José; Rodríguez-Rosell, David; Sánchez-Medina, Luis; Gorostiaga, Esteban M; Pareja-Blanco, Fernando

    2014-01-01

    The purpose of this study was to compare the effect on strength gains of two isoinertial resistance training (RT) programmes that only differed in actual concentric velocity: maximal (MaxV) vs. half-maximal (HalfV) velocity. Twenty participants were assigned to a MaxV (n = 9) or HalfV (n = 11) group and trained 3 times per week during 6 weeks using the bench press (BP). Repetition velocity was controlled using a linear velocity transducer. A complementary study (n = 10) aimed to analyse whether the acute metabolic (blood lactate and ammonia) and mechanical response (velocity loss) was different between the MaxV and HalfV protocols used. Both groups improved strength performance from pre- to post-training, but MaxV resulted in significantly greater gains than HalfV in all variables analysed: one-repetition maximum (1RM) strength (18.2 vs. 9.7%), velocity developed against all (20.8 vs. 10.0%), light (11.5 vs. 4.5%) and heavy (36.2 vs. 17.3%) loads common to pre- and post-tests. Light and heavy loads were identified with those moved faster or slower than 0.80 m/s (∼60% 1RM in BP). Lactate tended to be significantly higher for MaxV vs. HalfV, with no differences observed for ammonia which was within resting values. Both groups obtained the greatest improvements at the training velocities (≤0.80 m/s). Movement velocity can be considered a fundamental component of RT intensity, since, for a given %1RM, the velocity at which loads are lifted largely determines the resulting training effect. BP strength gains can be maximised when repetitions are performed at maximal intended velocity.

  1. Accelerated radial Fourier-velocity encoding using compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    Hilbert, Fabian; Han, Dietbert [Wuerzburg Univ. (Germany). Inst. of Radiology; Wech, Tobias; Koestler, Herbert [Wuerzburg Univ. (Germany). Inst. of Radiology; Wuerzburg Univ. (Germany). Comprehensive Heart Failure Center (CHFC)

    2014-10-01

    Purpose: Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods: We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results: Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity

  2. Accelerated radial Fourier-velocity encoding using compressed sensing

    International Nuclear Information System (INIS)

    Hilbert, Fabian; Han, Dietbert

    2014-01-01

    Purpose: Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods: We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results: Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity

  3. Accelerated radial Fourier-velocity encoding using compressed sensing.

    Science.gov (United States)

    Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert

    2014-09-01

    Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels in the order of the voxel size. Thus

  4. A single frequency component-based re-estimated MUSIC algorithm for impact localization on complex composite structures

    International Nuclear Information System (INIS)

    Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng

    2015-01-01

    The growing use of composite materials on aircraft structures has attracted much attention for impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit a wide-band performance, giving rise to the difficulty in obtaining the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircraft further strengthens this effect, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impact on complex composite structures with an obviously improved accuracy. (paper)
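
    For background, a generic narrowband MUSIC pseudospectrum for a uniform linear array is sketched below; the array geometry, wavelength and noise level are invented, and the single-frequency-component extraction and re-estimation steps of SFCBR-MUSIC are not reproduced.

```python
import numpy as np

def music_pseudospectrum(X, n_sources, d_over_lambda, angles_deg):
    """Narrowband MUSIC for a uniform linear array.

    X              : (n_sensors, n_snapshots) complex narrowband snapshots
    n_sources      : assumed number of impinging wavefronts
    d_over_lambda  : sensor spacing divided by wavelength
    """
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    _, vecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]                # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(m) * np.cos(theta))
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)

# hypothetical data: one wavefront arriving from 60 degrees at an 8-element array
rng = np.random.default_rng(3)
m, snaps, theta0 = 8, 200, np.deg2rad(60.0)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.cos(theta0))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a0, s) + 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
spec = music_pseudospectrum(X, n_sources=1, d_over_lambda=0.5, angles_deg=np.arange(181))
print(int(np.argmax(spec)))                      # peaks near 60
```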

  5. Questions Students Ask: About Terminal Velocity.

    Science.gov (United States)

    Meyer, Earl R.; Nelson, Jim

    1984-01-01

    If a ball were given an initial velocity in excess of its terminal velocity, would the upward force of air resistance (a function of velocity) be greater than the downward force of gravity and thus push the ball back upwards? An answer to this question is provided. (JN)

  6. The radial velocity, velocity dispersion, and mass-to-light ratio of the Sculptor dwarf galaxy

    Science.gov (United States)

    Armandroff, T. E.; Da Costa, G. S.

    1986-01-01

    The radial velocity, velocity dispersion, and mass-to-light ratio for 16 K giants in the Sculptor dwarf galaxy are calculated. Spectra at the Ca II triplet are analyzed using cross-correlation techniques in order to obtain the mean velocity of +107.4 ± 2.0 km/s. The one-dimensional velocity dispersion, estimated as 6.3 (+1.1, -1.3) km/s, is combined with the calculated core radius and observed central surface brightness to produce a mass-to-light ratio of 6.0 in solar units. It is noted that the data indicate that Sculptor contains a large amount of mass not found in globular clusters, and that the mass is either in the form of remnant stars or low-mass dwarfs.

  7. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    Science.gov (United States)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The RFs inversion represents a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of interfaces and the characteristics of the velocity structure. The solution we envisage to manage the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate possible independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. Our first focus of application is the Central Alps, where a 20-year long dataset of

  8. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
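
    For completeness, a standard implementation of Euclid's algorithm mentioned in the fragment (not taken from the cited article):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return abs(a)

assert gcd(1071, 462) == 21
```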

  9. Application of Vectors to Relative Velocity

    Science.gov (United States)

    Tin-Lam, Toh

    2004-01-01

    The topic 'relative velocity' has recently been introduced into the Cambridge Ordinary Level Additional Mathematics syllabus under the application of Vectors. In this note, the results of relative velocity and the 'reduction to rest' technique of teaching relative velocity are derived mathematically from vector algebra, in the hope of providing…
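
    The vector-algebra identity behind the "reduction to rest" technique referred to is simply:

```latex
% Velocity of A relative to B; ``reduction to rest'' subtracts v_B from
% every velocity so that B becomes stationary:
\[
\mathbf{v}_{A/B} = \mathbf{v}_A - \mathbf{v}_B,
\qquad
\mathbf{v}_{B/A} = -\,\mathbf{v}_{A/B}.
\]
```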

  10. Machine-Learning Algorithms to Automate Morphological and Functional Assessments in 2D Echocardiography.

    Science.gov (United States)

    Narula, Sukrit; Shameer, Khader; Salem Omar, Alaa Mabrouk; Dudley, Joel T; Sengupta, Partho P

    2016-11-29

    Machine-learning models may aid cardiac phenotypic recognition by using features of cardiac tissue deformation. This study investigated the diagnostic value of a machine-learning framework that incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from the physiological hypertrophy seen in athletes (ATH). Expert-annotated speckle-tracking echocardiographic datasets obtained from 77 ATH and 62 HCM patients were used for developing an automated system. An ensemble machine-learning model with 3 different machine-learning algorithms (support vector machines, random forests, and artificial neural networks) was developed and a majority voting method was used for conclusive predictions with further K-fold cross-validation. Feature selection using an information gain (IG) algorithm revealed that volume was the best predictor for differentiating between HCM and ATH (IG = 0.24), followed by mid-left ventricular segmental (IG = 0.134) and average longitudinal strain (IG = 0.131). The ensemble machine-learning model showed increased sensitivity and specificity compared with the early-to-late diastolic transmitral velocity ratio, e', and strain; a subgroup analysis was performed in patients with a left ventricular wall thickness greater than 13 mm. In this subgroup analysis, the automated model continued to show equal sensitivity, but increased specificity relative to the early-to-late diastolic transmitral velocity ratio, e', and strain. Our results suggested that machine-learning algorithms can assist in the discrimination of physiological versus pathological patterns of hypertrophic remodeling. This effort represents a step toward the development of a real-time, machine-learning-based system for automated interpretation of echocardiographic images, which may help novice readers with limited experience. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
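
    The majority-voting step over the three classifiers can be sketched as follows; the label vectors are invented and no speckle-tracking feature extraction or cross-validation is shown.

```python
import numpy as np

def majority_vote(predictions):
    """Column-wise majority vote over an (n_models, n_samples) array of
    binary label predictions (e.g., 1 = HCM, 0 = ATH)."""
    predictions = np.asarray(predictions)
    return (2 * predictions.sum(axis=0) > predictions.shape[0]).astype(int)

# hypothetical predictions from an SVM, a random forest and a neural network
svm_pred = np.array([1, 0, 1, 1, 0])
rf_pred = np.array([1, 0, 0, 1, 0])
ann_pred = np.array([1, 1, 1, 0, 0])
print(majority_vote([svm_pred, rf_pred, ann_pred]))   # -> [1 0 1 1 0]
```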

  11. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
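
    The arithmetic of the method reduces to two ratings multiplied together, as in the sketch below; the linear index-velocity rating and the rectangular stage-area rating are hypothetical stand-ins for ratings that, as the report describes, are fitted by regression to measured discharges.

```python
def discharge(index_velocity, stage, a=0.05, b=0.92, width=25.0, z0=0.3):
    """Index velocity method: Q = V * A.

    V comes from a (hypothetical) linear index-velocity rating and A from a
    (hypothetical) rectangular-channel stage-area rating; SI units assumed.
    """
    v_mean = a + b * index_velocity          # mean channel velocity, m/s
    area = width * max(stage - z0, 0.0)      # cross-sectional area, m^2
    return v_mean * area                     # discharge, m^3/s

print(discharge(index_velocity=0.8, stage=1.5))   # ~23.6 m^3/s
```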

  12. Seismic velocity structure of the forearc in northern Cascadia from Bayesian inversion of teleseismic data

    Science.gov (United States)

    Gosselin, J.; Audet, P.; Schaeffer, A. J.

    2017-12-01

    The seismic velocity structure in the forearc of subduction zones provides important constraints on material properties, with implications for seismogenesis. In Cascadia, previous studies have imaged a downgoing low-velocity zone (LVZ) characterized by an elevated P-to-S velocity ratio (Vp/Vs) down to 45 km depth, near the intersection with the mantle wedge corner, beyond which the signature of the LVZ disappears. These results, combined with the absence of a "normal" continental Moho, indicate that the down-going oceanic crust likely carries large amounts of overpressured free fluids that are released downdip at the onset of crustal eclogitization, and are further stored in the mantle wedge as serpentinite. These overpressured free fluids affect the stability of the plate interface and facilitate slow slip. These results are based on the inversion and migration of scattered teleseismic data for individual layer properties; a methodology which suffers from regularization and smoothing, non-uniqueness, and does not consider model uncertainty. This study instead applies trans-dimensional Bayesian inversion of teleseismic data collected in the forearc of northern Cascadia (the CAFÉ experiment in northern Washington) to provide rigorous, quantitative estimates of local velocity structure, and associated uncertainties (particularly Vp/Vs structure and depth to the plate interface). Trans-dimensional inversion is a generalization of fixed-dimensional inversion that includes the number (and type) of parameters required to describe the velocity model (or data error model) as unknown in the problem. This allows model complexity to be inherently determined by data information content, not by subjective regularization. The inversion is implemented here using the reversible-jump Markov chain Monte Carlo algorithm. The result is an ensemble set of candidate velocity-structure models which approximate the posterior probability density (PPD) of the model parameters. The solution

  13. An Automated Algorithm for Identifying and Tracking Transverse Waves in Solar Images

    Science.gov (United States)

    Weberg, Micah J.; Morton, Richard J.; McLaughlin, James A.

    2018-01-01

    Recent instrumentation has demonstrated that the solar atmosphere supports omnipresent transverse waves, which could play a key role in energizing the solar corona. Large-scale studies are required in order to build up an understanding of the general properties of these transverse waves. To help facilitate this, we present an automated algorithm for identifying and tracking features in solar images and extracting the wave properties of any observed transverse oscillations. We test and calibrate our algorithm using a set of synthetic data, which includes noise and rotational effects. The results indicate an accuracy of 1%–2% for displacement amplitudes and 4%–10% for wave periods and velocity amplitudes. We also apply the algorithm to data from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory and find good agreement with previous studies. Of note, we find that 35%–41% of the observed plumes exhibit multiple wave signatures, which indicates either the superposition of waves or multiple independent wave packets observed at different times within a single structure. The automated methods described in this paper represent a significant improvement on the speed and quality of direct measurements of transverse waves within the solar atmosphere. This algorithm unlocks a wide range of statistical studies that were previously impractical.

  14. Demonstration of a Vector Velocity Technique

    DEFF Research Database (Denmark)

    Hansen, Peter Møller; Pedersen, Mads M.; Hansen, Kristoffer L.

    2011-01-01

    With conventional Doppler ultrasound it is not possible to estimate direction and velocity of blood flow when the angle of insonation exceeds 60–70°. Transverse oscillation is an angle-independent vector velocity technique which is now implemented on a conventional ultrasound scanner.

  15. On Newton-Raphson formulation and algorithm for displacement based structural dynamics problem with quadratic damping nonlinearity

    Directory of Open Access Journals (Sweden)

    Koh Kim Jie

    2017-01-01

    Quadratic damping nonlinearity is challenging for displacement-based structural dynamics problems, as the problem is nonlinear in the time derivative of the primitive variable. For such a nonlinearity, the formulation of the tangent stiffness matrix is not lucid in the literature. Consequently, ambiguity related to the kinematics update arises when implementing the time integration-iterative algorithm. In the present work, an Euler-Bernoulli beam vibration problem with quadratic damping nonlinearity is addressed; the main source of quadratic damping nonlinearity is drag force estimation, which is generally valid only for slender structures. Employing the Newton-Raphson formulation, the tangent stiffness components associated with the quadratic damping nonlinearity require velocity input for their evaluation. For this reason, two mathematically equivalent algorithm structures with different kinematics arrangements are tested. Both algorithm structures result in the same accuracy and convergence characteristics of the solution.

  16. A California statewide three-dimensional seismic velocity model from both absolute and differential times

    Science.gov (United States)

    Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.

    2010-01-01

    We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.

  17. Vector blood velocity estimation in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Gran, Fredrik; Udesen, Jesper

    2006-01-01

    Two methods for vector velocity estimation in medical ultrasound are presented. Both techniques can find the axial and transverse velocity in the image and can be used for displaying both the correct velocity magnitude and direction. The first method uses a transverse oscillation in the ultrasound field to find the transverse velocity. In-vivo examples from the carotid artery are shown, where complex turbulent flow is found in certain parts of the cardiac cycle. The second approach uses directional beamforming along the flow direction to estimate the velocity magnitude. A correlation search can also yield the direction, and the full velocity vector is thereby found. An example from a flow rig is shown.

  18. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  19. High-velocity frictional properties of gabbro

    Science.gov (United States)

    Tsutsumi, Akito; Shimamoto, Toshihiko

    High-velocity friction experiments have been performed on a pair of hollow-cylindrical specimens of gabbro initially at room temperature, at slip rates from 7.5 mm/s to 1.8 m/s, with total circumferential displacements of 125 to 174 m, and at normal stresses to 5 MPa, using a rotary-shear high-speed friction testing machine. Steady-state friction increases slightly with increasing slip rate at slip rates to about 100 mm/s (velocity strengthening) and it decreases markedly with increasing slip rate at higher velocities (velocity weakening). Steady-state friction in the velocity weakening regime is lower for the non-melting case than the frictional melting case, due perhaps to severe thermal fracturing. A very large peak friction is always recognized upon the initiation of visible frictional melting, presumably owing to the welding of fault surfaces upon the solidification of melt patches. Frictional properties thus change dramatically with increasing displacement at high velocities, and such a non-linear effect must be incorporated into the analysis of earthquake initiation processes.

  20. Statistical properties of the coarse-grained velocity gradient tensor in turbulence: Monte-Carlo simulations of the tetrad model

    International Nuclear Information System (INIS)

    Pumir, Alain; Naso, Aurore

    2010-01-01

    A proper description of the velocity gradient tensor is crucial for understanding the dynamics of turbulent flows, in particular the energy transfer from large to small scales. Insight into the statistical properties of the velocity gradient tensor and into its coarse-grained generalization can be obtained with the help of a stochastic 'tetrad model' that describes the coarse-grained velocity gradient tensor based on the evolution of four points. Although the solution of the stochastic model can be formally expressed in terms of path integrals, its numerical determination in terms of the Monte-Carlo method is very challenging, as very few configurations contribute effectively to the statistical weight. Here, we discuss a strategy that allows us to solve the tetrad model numerically. The algorithm is based on the importance sampling method, which consists here of identifying and sampling preferentially the configurations that are likely to correspond to a large statistical weight, and selectively rejecting configurations with a small statistical weight. The algorithm leads to an efficient numerical determination of the solutions of the model and allows us to determine their qualitative behavior as a function of scale. We find that the moments of order n≤4 of the solutions of the model scale with the coarse-graining scale and that the scaling exponents are very close to the predictions of the Kolmogorov theory. The model qualitatively reproduces quite well the statistics concerning the local structure of the flow. However, we find that the model generally tends to predict an excess of strain compared to vorticity. Thus, our results show that while some physical aspects are not fully captured by the model, our approach leads to a very good description of several important qualitative properties of real turbulent flows.

  1. Velocity distribution in snow avalanches

    Science.gov (United States)

    Nishimura, K.; Ito, Y.

    1997-12-01

    In order to investigate the detailed structure of snow avalanches, we have made snow flow experiments at the Miyanomori ski jump in Sapporo and systematic observations in the Shiai-dani, Kurobe Canyon. In the winter of 1995-1996, a new device to measure static pressures was used to estimate velocities in the snow cloud that develops above the flowing layer of avalanches. Measurements during a large avalanche in the Shiai-dani which damaged and destroyed some instruments indicate velocities increased rapidly to more than 50 m/s soon after the front. Velocities decreased gradually in the following 10 s. Velocities of the lower flowing layer were also calculated by differencing measurement of impact pressure. Both recordings in the snow cloud and in the flowing layer changed with a similar trend and suggest a close interaction between the two layers. In addition, the velocity showed a periodic change. Power spectrum analysis of the impact pressure and the static pressure depression showed a strong peak at a frequency between 4 and 6 Hz, which might imply the existence of either ordered structure or a series of surges in the flow.

  2. External force/velocity control for an autonomous rehabilitation robot

    Science.gov (United States)

    Saekow, Peerayuth; Neranon, Paramin; Smithmaitrie, Pruittikorn

    2018-01-01

    Stroke is a primary cause of death and the leading cause of permanent disability in adults. There are many stroke survivors who live with a variety of levels of disability and always need rehabilitation activities on a daily basis. Several studies have reported that the use of rehabilitation robotic devices shows better improvement outcomes in upper-limb stroke patients than conventional therapy, in which nurses or therapists actively help patients with exercise-based rehabilitation. This research focuses on the development of an autonomous robotic trainer designed to guide a stroke patient through an upper-limb rehabilitation task. The robotic device was designed and developed to automate the reaching exercise as mentioned. The designed robotic system is made up of a four-wheel omni-directional mobile robot, an ATI Gamma multi-axis force/torque sensor used to measure contact force, and a microcontroller real-time operating system. Proportional plus integral control was adopted to control the overall performance and stability of the autonomous assistive robot. External force control was successfully implemented to establish the behavioral control strategy for the robot force and velocity control scheme. In summary, the experimental results indicated that the performance and stability of the robot force and velocity control can be considered acceptable. The gains of the proportional integral (PI) velocity control algorithm were suitably estimated using the Ziegler-Nichols method, with optimized proportional and integral gains of 0.45 and 0.11, respectively. Additionally, the PI external force control gains were experimentally tuned using the trial and error method, based on a set of experiments in which a human participant moves the robot along the constrained circular path whilst attempting to minimize the radial force. The performance was analyzed based on the root mean square error (E_RMS) of the radial forces, in which the lower the variation in radial
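
    As a rough illustration of the PI velocity loop described above, the sketch below implements a discrete PI controller with the reported Ziegler-Nichols gains (Kp = 0.45, Ki = 0.11); the first-order plant, sampling time and setpoint are illustrative assumptions, not taken from the study.

```python
def pi_velocity_step(setpoint, measured, integral, kp=0.45, ki=0.11, dt=0.01):
    """One step of a discrete PI velocity controller.

    The gains default to the Ziegler-Nichols values quoted in the abstract;
    the sampling time and everything below it are illustrative assumptions.
    """
    error = setpoint - measured
    integral += error * dt
    command = kp * error + ki * integral
    return command, integral

# Toy closed loop around an assumed first-order plant: tau * dv/dt = u - v.
integral, v, tau, dt = 0.0, 0.0, 0.2, 0.01
for _ in range(5000):                      # 50 s of simulated time
    u, integral = pi_velocity_step(0.1, v, integral)
    v += (u - v) / tau * dt                # explicit Euler step of the plant
print(f"velocity after 50 s: {v:.3f} m/s (setpoint 0.1 m/s)")
```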

  3. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied using the time-difference method. It is found that the sound velocities increase with increasing bubble diameter and asymptotically approach the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering by the bubble walls is equivalently described as the effect of an additional length. This simple treatment reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly reduces the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.

  4. SEVEN NEW BINARIES DISCOVERED IN THE KEPLER LIGHT CURVES THROUGH THE BEER METHOD CONFIRMED BY RADIAL-VELOCITY OBSERVATIONS

    International Nuclear Information System (INIS)

    Faigler, S.; Mazeh, T.; Tal-Or, L.; Quinn, S. N.; Latham, D. W.

    2012-01-01

    We present seven newly discovered non-eclipsing short-period binary systems with low-mass companions, identified by the recently introduced BEER algorithm, applied to the publicly available 138-day photometric light curves obtained by the Kepler mission. The detection is based on the beaming effect (sometimes called Doppler boosting), which increases (decreases) the brightness of any light source approaching (receding from) the observer, enabling a prediction of the stellar Doppler radial-velocity (RV) modulation from its precise photometry. The BEER algorithm identifies the BEaming periodic modulation, with a combination of the well-known Ellipsoidal and Reflection/heating periodic effects, induced by short-period companions. The seven detections were confirmed by spectroscopic RV follow-up observations, indicating minimum secondary masses in the range 0.07-0.4 M☉. The binaries discovered establish for the first time the feasibility of the BEER algorithm as a new detection method for short-period non-eclipsing binaries, with the potential to detect in the near future non-transiting brown-dwarf secondaries, or even massive planets.
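
    For orientation, the beaming (Doppler boosting) signal scales with the radial-velocity semi-amplitude roughly as A_beam ≈ 4·α_beam·K/c, where α_beam is an order-unity, spectrum-dependent factor. The snippet below simply inverts this approximate relation; the choice α_beam = 1 and the example amplitude are placeholders, not values from the paper.

```python
C = 299_792_458.0  # speed of light in m/s

def rv_semi_amplitude_from_beaming(a_beam, alpha_beam=1.0):
    """Invert the approximate beaming relation A_beam ~ 4 * alpha_beam * K / c.

    alpha_beam is an order-unity, spectrum-dependent factor; 1.0 is only a
    placeholder assumption.
    """
    return a_beam * C / (4.0 * alpha_beam)

# Example: a 100 ppm photometric beaming modulation corresponds to K ~ 7.5 km/s.
print(rv_semi_amplitude_from_beaming(100e-6) / 1e3, "km/s")
```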

  5. Neural network fusion capabilities for efficient implementation of tracking algorithms

    Science.gov (United States)

    Sundareshan, Malur K.; Amoozegar, Farid

    1997-03-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target- tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.
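
    For readers unfamiliar with the tracking back end, a bare-bones constant-velocity Kalman filter of the kind such an architecture wraps around is sketched below (without the neural-network fusion front end, and with made-up noise levels).

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x = [position, velocity], P = state covariance, z = measured position.
    The process noise q and measurement noise r are illustrative values.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])               # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])

    x = F @ x                                # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                            # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at 1 unit/step from noisy position measurements.
rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2) * 10.0
for k in range(50):
    x, P = kalman_cv_step(x, P, z=k * 1.0 + rng.normal(scale=1.0))
print(x)  # estimated position ~ 49, estimated velocity ~ 1
```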

  6. THE HARPS-TERRA PROJECT. I. DESCRIPTION OF THE ALGORITHMS, PERFORMANCE, AND NEW MEASUREMENTS ON A FEW REMARKABLE STARS OBSERVED BY HARPS

    Energy Technology Data Exchange (ETDEWEB)

    Anglada-Escude, Guillem; Butler, R. Paul, E-mail: anglada@dtm.ciw.edu [Carnegie Institution of Washington, Department of Terrestrial Magnetism, 5241 Broad Branch Rd. NW, Washington, DC 20015 (United States)

    2012-06-01

    Doppler spectroscopy has uncovered or confirmed all the known planets orbiting nearby stars. Two main techniques are used to obtain precision Doppler measurements at optical wavelengths. The first approach is the gas cell method, which consists of least-squares matching of the spectrum of iodine imprinted on the spectrum of the star. The second method relies on the construction of a stabilized spectrograph externally calibrated in wavelength. The most precise stabilized spectrometer in operation is the High Accuracy Radial velocity Planet Searcher (HARPS), operated by the European Southern Observatory in La Silla Observatory, Chile. The Doppler measurements obtained with HARPS are typically obtained using the cross-correlation function (CCF) technique. This technique consists of multiplying the stellar spectrum by a weighted binary mask and finding the minimum of the product as a function of the Doppler shift. It is known that CCF is suboptimal in exploiting the Doppler information in the stellar spectrum. Here we describe an algorithm to obtain precision radial velocity measurements using least-squares matching of each observed spectrum to a high signal-to-noise ratio template derived from the same observations. This algorithm is implemented in our software HARPS-TERRA (Template-Enhanced Radial velocity Re-analysis Application). New radial velocity measurements on a representative sample of stars observed by HARPS are used to illustrate the benefits of the proposed method. We show that, compared with CCF, template matching provides a significant improvement in accuracy, especially when applied to M dwarfs.
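
    The core of template matching for radial velocities can be pictured as a one-parameter least-squares fit of a Doppler-shifted template to each observed spectrum. The sketch below is a deliberately simplified, noise-free version using a brute-force velocity grid and the non-relativistic shift; it is not the HARPS-TERRA implementation.

```python
import numpy as np

C = 299_792.458  # speed of light in km/s

def rv_by_template_matching(wave, flux, t_wave, t_flux, v_grid):
    """Return the velocity (km/s) minimizing chi^2 between the observed
    spectrum (wave, flux) and a Doppler-shifted template (t_wave, t_flux).

    Uses the non-relativistic shift lambda' = lambda * (1 + v/c) and linear
    interpolation; real pipelines weight by noise and refine the minimum.
    """
    chi2 = []
    for v in v_grid:
        shifted = np.interp(wave, t_wave * (1.0 + v / C), t_flux)
        chi2.append(np.sum((flux - shifted) ** 2))
    return v_grid[int(np.argmin(chi2))]

# Toy usage with a synthetic absorption line shifted by +3 km/s.
t_wave = np.linspace(5000.0, 5010.0, 2000)
t_flux = 1.0 - 0.5 * np.exp(-0.5 * ((t_wave - 5005.0) / 0.05) ** 2)
wave = t_wave
flux = np.interp(wave, t_wave * (1.0 + 3.0 / C), t_flux)
print(rv_by_template_matching(wave, flux, t_wave, t_flux, np.linspace(-10, 10, 401)))
```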

  7. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of and steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  8. Imaging water velocity and volume fraction distributions in water continuous multiphase flows using inductive flow tomography and electrical resistance tomography

    International Nuclear Information System (INIS)

    Meng, Yiqing; Lucas, Gary P

    2017-01-01

    This paper presents the design and implementation of an inductive flow tomography (IFT) system, employing a multi-electrode electromagnetic flow meter (EMFM) and novel reconstruction techniques, for measuring the local water velocity distribution in water continuous single and multiphase flows. A series of experiments were carried out in vertical-upward and upward-inclined single phase water flows and ‘water continuous’ gas–water and oil–gas–water flows in which the velocity profiles ranged from axisymmetric (single phase and vertical-upward multiphase flows) to highly asymmetric (upward-inclined multiphase flows). Using potential difference measurements obtained from the electrode array of the EMFM, local axial velocity distributions of the continuous water phase were reconstructed using two different IFT reconstruction algorithms denoted RT#1, which assumes that the overall water velocity profile comprises the sum of a series of polynomial velocity components, and RT#2, which is similar to RT#1 but which assumes that the zero’th order velocity component may be replaced by an axisymmetric ‘power law’ velocity distribution. During each experiment, measurement of the local water volume fraction distribution was also made using the well-established technique of electrical resistance tomography (ERT). By integrating the product of the local axial water velocity and the local water volume fraction in the cross section an estimate of the water volumetric flow rate was made which was compared with a reference measurement of the water volumetric flow rate. In vertical upward flows RT#2 was found to give rise to water velocity profiles which are consistent with the previous literature although the profiles obtained in the multiphase flows had relatively higher central velocity peaks than was observed for the single phase profiles. This observation was almost certainly a result of the transfer of axial momentum from the less dense dispersed phases to the

  9. Imaging water velocity and volume fraction distributions in water continuous multiphase flows using inductive flow tomography and electrical resistance tomography

    Science.gov (United States)

    Meng, Yiqing; Lucas, Gary P.

    2017-05-01

    This paper presents the design and implementation of an inductive flow tomography (IFT) system, employing a multi-electrode electromagnetic flow meter (EMFM) and novel reconstruction techniques, for measuring the local water velocity distribution in water continuous single and multiphase flows. A series of experiments were carried out in vertical-upward and upward-inclined single phase water flows and ‘water continuous’ gas-water and oil-gas-water flows in which the velocity profiles ranged from axisymmetric (single phase and vertical-upward multiphase flows) to highly asymmetric (upward-inclined multiphase flows). Using potential difference measurements obtained from the electrode array of the EMFM, local axial velocity distributions of the continuous water phase were reconstructed using two different IFT reconstruction algorithms denoted RT#1, which assumes that the overall water velocity profile comprises the sum of a series of polynomial velocity components, and RT#2, which is similar to RT#1 but which assumes that the zero’th order velocity component may be replaced by an axisymmetric ‘power law’ velocity distribution. During each experiment, measurement of the local water volume fraction distribution was also made using the well-established technique of electrical resistance tomography (ERT). By integrating the product of the local axial water velocity and the local water volume fraction in the cross section an estimate of the water volumetric flow rate was made which was compared with a reference measurement of the water volumetric flow rate. In vertical upward flows RT#2 was found to give rise to water velocity profiles which are consistent with the previous literature although the profiles obtained in the multiphase flows had relatively higher central velocity peaks than was observed for the single phase profiles. This observation was almost certainly a result of the transfer of axial momentum from the less dense dispersed phases to the water
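
    The flow-rate cross-check described above amounts to integrating the product of the local axial water velocity and the local water volume fraction over the pipe cross section. A minimal axisymmetric version of that integral, with made-up profiles rather than IFT/ERT reconstructions, is sketched below.

```python
import numpy as np

def water_flow_rate(radius, velocity, alpha_w):
    """Volumetric water flow rate Q = integral of v(r) * alpha_w(r) * 2*pi*r dr
    for axisymmetric profiles sampled at the radii in `radius`."""
    integrand = velocity * alpha_w * 2.0 * np.pi * radius
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(radius)))

# Toy example: 1/7th power-law velocity profile and a uniform 80% water
# fraction in a 40 mm radius pipe (illustrative numbers only).
R = 0.040
r = np.linspace(0.0, R, 200)
v = 1.5 * (1.0 - r / R) ** (1.0 / 7.0)   # m/s
alpha_w = np.full_like(r, 0.8)
print(f"Q = {water_flow_rate(r, v, alpha_w) * 1000:.2f} l/s")
```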

  10. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  11. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    Science.gov (United States)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding the pilots in detecting various obstacles on the runway during critical section of the flight such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in the world coordinate is estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.

  12. Distinguishing zero-group-velocity modes in photonic crystals

    International Nuclear Information System (INIS)

    Ghebrebrhan, M.; Ibanescu, M.; Johnson, Steven G.; Soljacic, M.; Joannopoulos, J. D.

    2007-01-01

    We examine differences between various zero-group-velocity modes in photonic crystals, including those that arise from Bragg diffraction, anticrossings, and band repulsion. Zero group velocity occurs at points where the group velocity changes sign, and is therefore conceptually related to 'left-handed' media, in which the group velocity is opposite to the phase velocity. We consider this relationship more quantitatively in terms of the Fourier decomposition of the modes, by defining a measure of how much the 'average' phase velocity is parallel to the group velocity; an anomalous region is one in which they are mostly antiparallel. We find that this quantity can be used to qualitatively distinguish different zero-group-velocity points. In one dimension, such anomalous regions are found never to occur. In higher dimensions, they are exhibited around certain zero-group-velocity points, and lead to unusual enhanced confinement behavior in microcavities.

  13. Settling velocities in batch sedimentation

    International Nuclear Information System (INIS)

    Fricke, A.M.; Thompson, B.E.

    1982-10-01

    The sedimentation of mixtures containing one and two sizes of spherical particles (44 and 62 μm in diameter) was studied. Radioactive tracing with 57Co was used to measure the settling velocities. The ratio of the settling velocity U of uniformly sized particles to the velocity U0 predicted by Stokes' law was correlated with an expression of the form U/U0 = ε^α, where ε is the liquid volume fraction and α is an empirical constant, determined experimentally to be 4.85. No effect of viscosity on the ratio U/U0 was observed as the viscosity of the liquid medium was varied from 1×10^-3 to 5×10^-3 Pa·s. The settling velocities of particles in a bimodal mixture were fit by the same correlation; the ratio U/U0 was independent of the concentrations of different-sized particles.
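
    The reported correlation is the familiar hindered-settling form; a worked example of applying it with the fitted exponent is given below. The Stokes velocity and liquid fraction used are illustrative, not values from the experiments.

```python
def hindered_settling_velocity(u_stokes, liquid_fraction, alpha=4.85):
    """U = U0 * epsilon**alpha with the empirically fitted alpha = 4.85."""
    return u_stokes * liquid_fraction ** alpha

# Example: a particle with a 2.0 mm/s Stokes velocity in a suspension with
# liquid volume fraction 0.9 settles at roughly 1.2 mm/s.
print(hindered_settling_velocity(2.0, 0.9))
```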

  14. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  15. Direct and precise measurement of displacement and velocity of flexible web in roll-to-roll manufacturing systems

    International Nuclear Information System (INIS)

    Kang, Dongwoo; Lee, Eonseok; Choi, Young-Man; Lee, Taik-Min; Kim, Duk Young; Kim, Dongmin

    2013-01-01

    Interest in the production of printed electronics using roll-to-roll systems has gradually increased due to their low mass-production costs and compatibility with flexible substrates. To improve the accuracy of roll-to-roll manufacturing systems, the movement of the web needs to be measured precisely in advance. In this paper, a novel measurement method is developed to measure the displacement and velocity of the web precisely and directly. The proposed algorithm is based on the traditional single-field encoder principle, with the scale grating replaced by a grating printed on the web. Because a printed grating cannot be as accurate as the scale grating in a traditional encoder, there will inevitably be variations in pitch and line-width, and the motion of the web should be measured even though there are variations in pitch and line-width in the printed grating patterns. For this reason, the developed algorithm includes a precise method of estimating the variations in pitch. In addition, a method of correcting the Lissajous curve is presented for precision phase interpolation, improving measurement accuracy by correcting the Lissajous circle to a unit circle. The performance of the developed method is evaluated by simulation and experiment. In the experiment, the displacement error was less than 2.5 μm and the 1σ velocity error was about 0.25% while the grating scale moved 30 mm

  16. Direct and precise measurement of displacement and velocity of flexible web in roll-to-roll manufacturing systems

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dongwoo; Lee, Eonseok; Choi, Young-Man; Lee, Taik-Min [Advanced Manufacturing Systems Research Division, Korea Institute of Machinery and Materials, 156 Gajeongbuk-Ro, Yuseong-Gu, Daejeon 305-343 (Korea, Republic of); Kim, Duk Young [Nano-Opto-Mechatronics Lab., Dept. of Mechanical Eng., KAIST, 335 Gwahangno, Yuseong-Gu, Daejeon 305-701 (Korea, Republic of); Kim, Dongmin [Korea Research Institute of Standards and Science, 267 Gajeong-Ro, Yuseong-Gu, Daejeon 305-340 (Korea, Republic of)

    2013-12-15

    Interest in the production of printed electronics using roll-to-roll systems has gradually increased due to their low mass-production costs and compatibility with flexible substrates. To improve the accuracy of roll-to-roll manufacturing systems, the movement of the web needs to be measured precisely in advance. In this paper, a novel measurement method is developed to measure the displacement and velocity of the web precisely and directly. The proposed algorithm is based on the traditional single-field encoder principle, with the scale grating replaced by a grating printed on the web. Because a printed grating cannot be as accurate as the scale grating in a traditional encoder, there will inevitably be variations in pitch and line-width, and the motion of the web should be measured even though there are variations in pitch and line-width in the printed grating patterns. For this reason, the developed algorithm includes a precise method of estimating the variations in pitch. In addition, a method of correcting the Lissajous curve is presented for precision phase interpolation, improving measurement accuracy by correcting the Lissajous circle to a unit circle. The performance of the developed method is evaluated by simulation and experiment. In the experiment, the displacement error was less than 2.5 μm and the 1σ velocity error was about 0.25% while the grating scale moved 30 mm.
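
    A much simplified picture of the phase-interpolation step is given below: two quadrature signals derived from the grating trace a Lissajous figure, which is corrected from an ellipse (offsets, unequal amplitudes) toward the unit circle before the phase is interpolated with atan2. The crude offset/gain normalization shown here is a stand-in assumption, not the authors' correction method.

```python
import numpy as np

def interpolated_phase(sig_a, sig_b):
    """Interpolated phase (radians) within the grating pitch.

    The two quadrature signals are corrected toward a unit circle by removing
    their offsets and normalizing their amplitudes (a crude stand-in for a
    full Lissajous ellipse fit), then the phase is taken with atan2.
    """
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    a = a / np.max(np.abs(a))
    b = b / np.max(np.abs(b))
    return np.unwrap(np.arctan2(b, a))

# Synthetic quadrature signals with an offset and a gain mismatch; the
# displacement follows from phase * pitch / (2*pi), here for a 100 um pitch.
t = np.linspace(0.0, 20.0, 500)
phase = interpolated_phase(np.cos(t) + 0.1, 0.9 * np.sin(t))
displacement_um = phase * 100.0 / (2.0 * np.pi)
print(displacement_um[-1])  # roughly (20 / 2*pi) * 100 um, i.e. about 318 um
```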

  17. FORMALIZATION OF DIESEL ENGINE OPERATION CONSIDERING THE EVALUATION OF VELOCITY DURING THE COMBUSTION PROCESSES

    Directory of Open Access Journals (Sweden)

    V. P. Litvinenko

    2015-10-01

    Full Text Available Purpose. Under modern conditions, the applied methods and design models, as well as the evaluation of the operational characteristics of diesel engines, do not completely take into consideration the specifics of the combustion processes. In part, this situation reflects the complexity of accounting for processes of varied nature that have not been completely investigated. In this context it is necessary to find new methods and models which would provide relatively simple solutions through the use of integrated factors based on the analysis of diesel engine parameters. Methodology. The proposed algorithm for estimating the combustion process in the form of volumetric and linear velocities is based on the well-known parameters of power and mean effective pressure and allows a comparison of their behavior in various versions of diesel engines. Findings. The author found that the volumetric/linear velocity ratio is characterized by a certain stability and depends on the geometric dimensions of the cylinder-piston group. Under these assumptions it becomes possible to consider the operation of a diesel engine as a system comprising: 1) the subsystem that provides the possibility of obtaining thermal energy; 2) the subsystem providing the thermal energy transformation; 3) the subsystem that provides the necessary diesel engine power depending on the conditions of combustion of the air-fuel mixture. Originality. The author proposes indices of the volumetric and linear combustion velocities of the air-fuel mixture in the engine cylinder, which allow comparative values to be obtained for different modifications, taking into account the possible choice of an optimum ratio. Practical value. The use of indices of the volumetric and linear velocities of the combustion processes in the engine cylinder, combined with a mathematical model, will simplify the method of calculating diesel engines. Parametric indices of the mentioned velocities

  18. Superhilac real-time velocity measurements

    International Nuclear Information System (INIS)

    Feinberg, B.; Meaney, D.; Thatcher, R.; Timossi, C.

    1987-03-01

    Phase probes have been placed in several external beam lines at the LBL heavy ion linear accelerator (SuperHILAC) to provide non-destructive velocity measurements independent of the ion being accelerated. The existing system has been improved to provide the following features: a display refresh rate better than twice per second, a sensitive pseudo-correlation technique to pick out the signal from the noise, simultaneous measurements of up to four ion velocities when more than one beam is being accelerated, and a touch-screen operator interface. These improvements allow the system to be used as a routine tuning aid and beam velocity monitor
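
    The underlying time-of-flight relation is simple: two probes a distance L apart see the bunched beam at the RF frequency f, so the measured phase difference fixes the transit time only up to an integer number of RF periods. The sketch below encodes that relation; the numbers and the way the integer ambiguity is resolved are illustrative assumptions, not SuperHILAC specifics.

```python
import math

def velocity_from_phase_probes(L, f_rf, dphi, n_periods):
    """Beam velocity from two phase probes a distance L (m) apart.

    The transit time is (n_periods + dphi / (2*pi)) / f_rf, where dphi is the
    measured RF phase difference in radians and n_periods is the separately
    resolved integer number of RF periods between the probes.
    """
    transit_time = (n_periods + dphi / (2.0 * math.pi)) / f_rf
    return L / transit_time

# Illustrative numbers only: probes 1.5 m apart, 70 MHz RF, 1.2 rad phase
# difference, 3 whole RF periods -> v of about 3.3e7 m/s (beta of about 0.11).
v = velocity_from_phase_probes(1.5, 70e6, 1.2, 3)
print(v, v / 2.998e8)
```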

  19. Fractals control in particle's velocity

    International Nuclear Information System (INIS)

    Zhang Yongping; Liu Shutang; Shen Shulan

    2009-01-01

    The Julia set, a fractal set from the literature of nonlinear physics, is significant for engineering applications. For example, the fractal structure characteristics of the generalized M-J set can visually reflect how a particle's velocity changes. According to real-world requirements, the system needs to exhibit various particle velocities in some cases. Thus, the control of this nonlinear behavior, i.e., of the Julia set, has attracted broad attention. In this work, an auxiliary feedback control is introduced to effectively control the Julia set that visually reflects the variation of the particle's velocity. It satisfies the performance requirements of real-world problems.

  20. 3-D crustal P-wave velocity tomography of the Italian region using local and regional seismicity data

    Directory of Open Access Journals (Sweden)

    F. M. Mele

    1995-06-01

    Full Text Available A tomographic experiment was performed in the Italian region using local and regional arrival times of P and S seismological phases selected from the Italian National Bulletin in the time interval 1984-1991. We determined a 3-D crustal P-wave velocity model using a simultaneous inversion method that iteratively relocates the hypocenters and computes the unknown model parameters. A fast two-point ray tracing algorithm was adopted to compute the ray paths and travel times of Pn, Sn, Pg, Sg phases with good accuracy. Synthetic tests were performed using the "true" hypocenter and station distribution to roughly evaluate the extension of the areas most densely spanned by the ray paths; the agreement between synthetic and computed models is more satisfactory at Moho depths than in the upper crust. The quality of the model resulting from inversion of real data is examined by the calculation of the Spread Function (Toomey and Foulger, 1989). The 3-D crustal P-wave velocity model of the Italian region shows remarkable trends at Moho depths: the areas east of the Apennines call for positive adjustments of the initial velocity value, while the west region shows negative adjustments. The correspondence among the main features of the velocity field, the map of Moho isobaths, and the map of the gravity anomalies is also outlined.

  1. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures

  2. Journal of Chemical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Atomistic details of the molecular recognition of DNA-RNA hybrid duplex by ... Long timescale molecular dynamics simulations have been performed on the ... theory with Lindemann criterion, inherent structure analysis and Hansen-Verlet rule ..... of temperature dependent dissociation mechanism of HF in HF(H2O)7 cluster.

  3. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  4. Clinical longitudinal standards for height, weight, height velocity, weight velocity, and stages of puberty.

    Science.gov (United States)

    Tanner, J M; Whitehouse, R H

    1976-01-01

    New charts for height, weight, height velocity, and weight velocity are presented for clinical (as opposed to population survey) use. They are based on longitudinal-type growth curves, using the same data as in the British 1965 growth standards. In the velocity standards centiles are given for children who are early- and late-maturing as well as for those who mature at the average age (thus extending the use of the previous charts). Limits of normality for the age of occurrence of the adolescent growth spurt are given and also for the successive stages of penis, testes, and pubic hair development in boys, and for stages of breast and pubic hair development in girls. PMID:952550

  5. A numerical scheme to calculate temperature and salinity dependent air-water transfer velocities for any gas

    Directory of Open Access Journals (Sweden)

    M. T. Johnson

    2010-10-01

    Full Text Available The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers either side of the interface with respect to the gas of interest). Traditionally the transfer velocity has been estimated from empirical relationships with wind speed, and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme which will allow the estimation of transfer velocity for any gas as a function of wind speed, temperature and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity-dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment which is available in the supplementary online material accompanying this paper; along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.

  6. A numerical scheme to calculate temperature and salinity dependent air-water transfer velocities for any gas

    Science.gov (United States)

    Johnson, M. T.

    2010-10-01

    The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers either side of the interface with respect to the gas of interest). Traditionally the transfer velocity has been estimated from empirical relationships with wind speed, and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme which will allow the estimation of transfer velocity for any gas as a function of wind speed, temperature and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity-dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment which is available in the supplementary online material accompanying this paper; along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
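
    The traditional wind-speed/Schmidt-number approach that this scheme generalizes can be written in a few lines. The sketch below uses a widely quoted quadratic wind-speed parameterization (k660 = 0.31 u10^2, in cm/h) with Schmidt-number scaling to the -1/2 power; these are common defaults assumed here, not the R scheme distributed with the paper.

```python
def transfer_velocity_cm_per_hr(u10, schmidt, a=0.31, n=0.5):
    """Gas transfer velocity scaled from a reference Schmidt number of 660
    (CO2 in seawater at 20 degC): k = a * u10**2 * (660 / Sc)**n.

    The coefficient a and exponent n are common defaults for this style of
    parameterization and are assumptions here, not values from the paper.
    """
    return a * u10 ** 2 * (660.0 / schmidt) ** n

# Example: 8 m/s wind and a gas with Sc = 1000 give k of roughly 16 cm/h.
print(transfer_velocity_cm_per_hr(8.0, 1000.0))
```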

  7. Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data

    Science.gov (United States)

    Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.

    2011-12-01

    We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP case, discretization is based on rectangular cells, where each cell has as unknown resistivity in the case of DC modelling, resistivity and chargeability in the time domain IP modelling, and complex resistivity in the spectral IP modelling. The governing partial-differential equations are solved with the finite element method, which can be applied to both real and complex variables that are solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions and inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple Graphical User Interface. Although the code is parallelized for multi

  8. Diverse Geological Applications For Basil: A 2d Finite-deformation Computational Algorithm

    Science.gov (United States)

    Houseman, Gregory A.; Barr, Terence D.; Evans, Lynn

    Geological processes are often characterised by large finite-deformation continuum strains, on the order of 100% or greater. Microstructural processes cause deformation that may be represented by a viscous constitutive mechanism, with viscosity that may depend on temperature, pressure, or strain-rate. We have developed an effective computational algorithm for the evaluation of 2D deformation fields produced by Newtonian or non-Newtonian viscous flow. With the implementation of this algorithm as a computer program, Basil, we have applied it to a range of diverse applications in Earth Sciences. Viscous flow fields in 2D may be defined for the thin-sheet case or, using a velocity-pressure formulation, for the plane-strain case. Flow fields are represented using 2D triangular elements with quadratic interpolation for velocity components and linear for pressure. The main matrix equation is solved by an efficient and compact conjugate gradient algorithm with iteration for non-Newtonian viscosity. Regular grids may be used, or grids based on a random distribution of points. Definition of the problem requires that velocities, tractions, or some combination of the two, are specified on all external boundary nodes. Compliant boundaries may also be defined, based on the idea that traction is opposed to and proportional to boundary displacement rate. Internal boundary segments, allowing fault-like displacements within a viscous medium have also been developed, and we find that the computed displacement field around the fault tip is accurately represented for Newtonian and non-Newtonian viscosities, in spite of the stress singularity at the fault tip. Basil has been applied by us and colleagues to problems that include: thin sheet calculations of continental collision, Rayleigh-Taylor instability of the continental mantle lithosphere, deformation fields around fault terminations at the outcrop scale, stress and deformation fields in and around porphyroblasts, and
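
    The abstract notes that the main matrix equation is solved with a compact conjugate gradient algorithm; for readers unfamiliar with it, a textbook CG iteration for a symmetric positive-definite system (a generic sketch, not the Basil code itself) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Quick check against a direct solve on a small random SPD system.
M = np.random.rand(5, 5)
A = M @ M.T + 5.0 * np.eye(5)
b = np.random.rand(5)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))
```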

  9. Low-velocity superconducting accelerating structures

    International Nuclear Information System (INIS)

    Delayen, J.R.

    1990-01-01

    The present paper reviews the status of RF superconductivity as applied to low-velocity accelerating structures. Heavy-ion accelerators must efficiently accelerate particles which travel at velocities much smaller than that of light, whose velocity changes along the accelerator, and also different particles which have different velocity profiles. Heavy-ion superconducting accelerators operate at lower frequencies than high-energy superconducting accelerators. The present paper first discusses the basic features of heavy-ion superconducting structures and linacs. Design choices are then addressed, focusing on structure geometry, materials, frequency, phase control, and focusing. The report also gives an outline of the status of superconducting booster projects currently under way at the Argonne National Laboratory, SUNY Stony Brook, Weizmann Institute, University of Washington, Florida State, Saclay, Kansas State, Daresbury, Japanese Atomic Energy Research Institute, Legnaro, Bombay, Sao Paulo, ANU (Canberra), and Munich. Recent developments and future prospects are also described. (N.K.) 68 refs

  10. Spatially-resolved velocities of thermally-produced spray droplets using a velocity-divided Abel inversion of photographed streaks

    Science.gov (United States)

    Kawaguchi, Y.; Kobayashi, N.; Yamagata, Y.; Miyazaki, F.; Yamasaki, M.; Muraoka, K.

    2017-10-01

    Droplet velocities in thermal spray are known to have profound effects on important coating qualities, such as adhesive strength, porosity, and hardness, for various applications. To obtain the droplet velocities, therefore, the TOF (time-of-flight) technique has been widely used, which relies on observations of emitted radiation from the droplets, where all droplets along the line of sight contribute to the signal. Because droplets at and near the flow axis contribute most to the coating layers, spatially resolved velocities have long been desired. For this purpose, a velocity-divided Abel inversion was devised from CMOS photographic data. It turns out that the central velocity is about 25% higher than that obtained from the TOF technique for the case studied (at the position 150 mm downstream of the plasma spray gun, where substrates for spray coatings are usually placed). Further implications of the obtained results are discussed.
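
    Abel inversion recovers a radial profile f(r) from line-of-sight projections P(y) of an axisymmetric quantity; a common discrete variant is onion peeling, sketched below for uniform rings. This is a generic illustration, not the velocity-divided scheme of the paper.

```python
import numpy as np

def onion_peel_abel(projection, dr):
    """Invert P(y_i) = 2 * sum_j f_j * (sqrt(r_{j+1}^2 - y_i^2) - sqrt(r_j^2 - y_i^2))
    for the ring values f_j, assuming chords at y_i = i * dr and uniform ring
    width dr (onion-peeling discretization of the Abel integral)."""
    n = len(projection)
    edges = np.arange(n + 1) * dr
    y = edges[:-1]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * (np.sqrt(edges[j + 1] ** 2 - y[i] ** 2)
                             - np.sqrt(max(edges[j] ** 2 - y[i] ** 2, 0.0)))
    return np.linalg.solve(A, projection)   # A is upper triangular

# Self-test: project a known parabolic profile with the same discretization
# and recover it.
dr, n = 0.01, 50
edges = np.arange(n + 1) * dr
f_true = 1.0 - ((np.arange(n) + 0.5) * dr / (n * dr)) ** 2
P = np.array([2.0 * sum(f_true[j] * (np.sqrt(edges[j + 1] ** 2 - (i * dr) ** 2)
                                     - np.sqrt(max(edges[j] ** 2 - (i * dr) ** 2, 0.0)))
                        for j in range(i, n)) for i in range(n)])
print(np.allclose(onion_peel_abel(P, dr), f_true))
```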

  11. Vector Control Algorithm for Electric Vehicle AC Induction Motor Based on Improved Variable Gain PID Controller

    Directory of Open Access Journals (Sweden)

    Gang Qin

    2015-01-01

    Full Text Available The acceleration performance of an EV, which affects many aspects of EV performance such as start-up, overtaking, driving safety, and ride comfort, has received increasing attention in recent research. An improved variable gain PID control algorithm to improve the acceleration performance is proposed in this paper. Simulation results with Matlab/Simulink demonstrate the effectiveness of the proposed algorithm through the control performance of motor velocity, motor torque, and the three-phase motor current. Moreover, the validity of the proposed controller is demonstrated by comparison with other PID controllers. Furthermore, an AC induction motor experimental setup is constructed to verify the effect of the proposed controller.

  12. The Limit Deposit Velocity model, a new approach

    Directory of Open Access Journals (Sweden)

    Miedema Sape A.

    2015-12-01

    Full Text Available In transport of settling slurries in Newtonian fluids, it is often stated that one should apply a line speed above a critical velocity, because below this critical velocity there is a danger of plugging the line. There are many definitions and names for this critical velocity. It is referred to as the velocity where a bed starts sliding, or the velocity above which there is no stationary or sliding bed. Others use the velocity where the hydraulic gradient is at a minimum, because of the minimum energy consumption. Most models in the literature are one-term, one-equation models, based on the idea that the critical velocity can be explained that way.

  13. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithm researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of the efficiency of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications, algorithms often use sorting as a key subroutine, many essential algorithm design techniques are represented in the body of sorting algorithms, and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the process of sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is considered an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results were promising.

  14. 3D velocity structure of upper crust beneath NW Bohemia/Vogtland

    Science.gov (United States)

    Javad Fallahi, Mohammad; Mousavi, Sima; Korn, Michael; Sens-Schönfelder, Christoph; Bauer, Klaus; Rößler, Dirk

    2013-04-01

    The 3D structure of the upper crust beneath the west Bohemia/Vogtland region is analyzed with travel time tomography and ambient noise surface wave tomography using existing data. This region is characterized by a series of phenomena such as the occurrence of repeated earthquake swarms, surface exhalations, CO2-enriched fluids, mofettes, mineral springs and enhanced heat flow, and has been proposed as an excellent location for an ICDP drilling project targeted at a better understanding of the crust in an active magmatic environment. We performed a 3D tomography using P- and S-wave travel times of local earthquakes and explosions. The data set was taken from permanent and temporary seismic networks in Germany and the Czech Republic from 2000 to 2010, as well as active seismic experiments like Celebration 2000 and quarry blasts. After picking P and S wave arrival times, 399 events recorded by 9 or more stations with an azimuthal gap <160° were selected for inversion. A simultaneous inversion of P and S wave 1D velocity models together with relocations of hypocenters and station corrections was performed. The obtained minimum 1D velocity model was used as the starting model for the 3D Vp and Vp/Vs velocity models. The P and S wave travel time tomography employs a damped least-squares method and ray tracing by a pseudo-bending algorithm. For model parametrization, different cell node spacings have been tested to evaluate the resolution at each node. Synthetic checkerboard tests have been done to check the structural resolution. Then Vp and Vp/Vs in the preferred 3D grid model have been determined. Earthquake locations change during the iteration process until the hypocenter adjustments and travel time residuals become smaller than the defined threshold criteria. Finally, the analysis of the resolution depicts the well-resolved features for interpretation. We observed a lower Vp/Vs ratio at depths of 5-10 km close to the foci of the earthquake swarms, and a higher Vp/Vs ratio is observed in the Saxothuringian zone and

  15. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer

    2015-12-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  16. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2015-01-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  17. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  18. A Simple Piece of Apparatus to Aid the Understanding of the Relationship between Angular Velocity and Linear Velocity

    Science.gov (United States)

    Unsal, Yasin

    2011-01-01

    One of the subjects that is confusing and difficult for students to fully comprehend is the concept of angular velocity and linear velocity. It is the relationship between linear and angular velocity that students find difficult; most students understand linear motion in isolation. In this article, we detail the design, construction and…

  19. Energy loss optimization of run-off-road wheels applying imperialist competitive algorithm

    Directory of Open Access Journals (Sweden)

    Hamid Taghavifar

    2014-08-01

    Full Text Available The novel imperialist competitive algorithm (ICA) has shown outstanding performance on various optimization problems. The application of meta-heuristics has been an active research interest in reliability optimization for determining idleness and reliability constituents. The application of a meta-heuristic evolutionary optimization method, the imperialist competitive algorithm (ICA), to the minimization of energy loss due to wheel rolling resistance in a soil bin facility equipped with a single-wheel tester is discussed. The required data were collected through various designed experiments in the controlled soil bin environment. Local and global searching of the search space indicated that the energy loss could be reduced to a minimum of 15.46 J at the optimized input configuration of a wheel load of 1.2 kN, a tire inflation pressure of 296 kPa and a velocity of 2 m/s. Meanwhile, genetic algorithm (GA), particle swarm optimization (PSO) and hybridized GA–PSO approaches were benchmarked against it from among the broad spectrum of meta-heuristics to find the best-performing approach. Based on the obtained results, it was concluded that ICA can achieve the optimum configuration with superior accuracy in less computational time.

  20. Peculiar velocity measurement in a clumpy universe

    Science.gov (United States)

    Habibi, Farhang; Baghram, Shant; Tavasoli, Saeed

    Aims: In this work, we address the issue of peculiar velocity measurement in a perturbed Friedmann universe using the deviations of measured luminosity distances of standard candles from the background FRW universe. We want to show and quantify the statement that at intermediate redshifts (z ≳ 0.5) deviations from the background FRW model are not uniquely governed by peculiar velocities; luminosity distances are also modified by gravitational lensing. We also want to indicate the importance of relativistic calculations for peculiar velocity measurement at all redshifts. Methods: For this task, we discuss the relativistic corrections to luminosity distance and redshift measurement and show the contribution of each of the corrections: the lensing term, the peculiar velocity of the source and the Sachs-Wolfe effect. Then, we use the Union 2 SNe Ia sample to investigate the relativistic effects we consider. Results: We show that using the conventional peculiar velocity method, which ignores the lensing effect, results in an overestimate of the measured peculiar velocities at intermediate redshifts. Here, we quantify this effect. We show that at low redshifts the lensing effect is negligible compared to the effect of peculiar velocity. From the observational point of view, we show that the uncertainties on luminosity of the present SNe Ia data prevent us from precisely measuring the peculiar velocities even at low redshifts (z < 0.2).

  1. Introduction to vector velocity imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Udesen, Jesper; Hansen, Kristoffer Lindskov

    Current ultrasound scanners can only estimate the velocity along the ultrasound beam, which gives rise to the cos(θ) factor on all velocity estimates. This is a major limitation, as most vessels are close to perpendicular to the beam. Also, the angle varies as a function of space and time, making ...

  2. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One of those methods is cryptography. Cryptography is a method of securing a file by encoding it so that the original content is hidden. Therefore, people who are not party to the cryptography cannot decrypt the hidden code to read the original file. Many methods are used in cryptography; one of them is the hybrid cryptosystem. A hybrid cryptosystem is a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and by using the LUC algorithm to encrypt and decrypt the TEA key. The results show that when a file is encrypted with the TEA algorithm, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
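
    As background for the symmetric half of such a hybrid scheme, the TEA block cipher itself is compact: it encrypts a 64-bit block, held as two 32-bit words, with a 128-bit key over 32 rounds using the constant 0x9E3779B9. The sketch below is a plain reference-style implementation; the key and plaintext values are illustrative, and the wrapping of the TEA key with LUC (the asymmetric half) is not shown.

```python
def tea_encrypt(block, key, rounds=32):
    """Encrypt a 64-bit block (two 32-bit words) with a 128-bit key (four 32-bit words)."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    delta, mask, s = 0x9E3779B9, 0xFFFFFFFF, 0
    for _ in range(rounds):
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

def tea_decrypt(block, key, rounds=32):
    """Invert tea_encrypt by running the rounds backwards."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = (delta * rounds) & mask
    for _ in range(rounds):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        s = (s - delta) & mask
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)   # illustrative 128-bit key
ct = tea_encrypt((0x12345678, 0x9ABCDEF0), key)
assert tea_decrypt(ct, key) == (0x12345678, 0x9ABCDEF0)
```

    In the hybrid arrangement described above, the four key words would themselves be encrypted with the asymmetric LUC algorithm before transmission, so that only the holder of the LUC private key can recover the TEA key and hence the file.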

  3. Noninvasive calculation of the aortic blood pressure waveform from the flow velocity waveform: a proof of concept.

    Science.gov (United States)

    Vennin, Samuel; Mayer, Alexia; Li, Ye; Fok, Henry; Clapp, Brian; Alastruey, Jordi; Chowienczyk, Phil

    2015-09-01

    Estimation of aortic and left ventricular (LV) pressure usually requires measurements that are difficult to acquire during the imaging required to obtain concurrent LV dimensions essential for determination of LV mechanical properties. We describe a novel method for deriving aortic pressure from the aortic flow velocity. The target pressure waveform is divided into an early systolic upstroke, determined by the water hammer equation, and a diastolic decay equal to that in the peripheral arterial tree, interposed by a late systolic portion described by a second-order polynomial constrained by conditions of continuity and conservation of mean arterial pressure. Pulse wave velocity (PWV, which can be obtained through imaging), mean arterial pressure, diastolic pressure, and diastolic decay are required inputs for the algorithm. The algorithm was tested using 1) pressure data derived theoretically from prespecified flow waveforms and properties of the arterial tree using a single-tube 1-D model of the arterial tree, and 2) experimental data acquired from a pressure/Doppler flow velocity transducer placed in the ascending aorta in 18 patients (mean ± SD: age 63 ± 11 yr, aortic BP 136 ± 23/73 ± 13 mmHg) at the time of cardiac catheterization. For experimental data, PWV was calculated from measured pressures/flows, and mean and diastolic pressures and diastolic decay were taken from measured pressure (i.e., were assumed to be known). Pressure reconstructed from measured flow agreed well with theoretical pressure: mean ± SD root mean square (RMS) error 0.7 ± 0.1 mmHg. Similarly, for experimental data, pressure reconstructed from measured flow agreed well with measured pressure (mean RMS error 2.4 ± 1.0 mmHg). First systolic shoulder and systolic peak pressures were also accurately rendered (mean ± SD difference 1.4 ± 2.0 mmHg for peak systolic pressure). This is the first noninvasive derivation of aortic pressure based on fluid dynamics (flow and wave speed) in the
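
    The early systolic part of the reconstruction rests on the water hammer relation, which in its simplest form says that the pressure rise above diastolic tracks the flow velocity scaled by blood density and pulse wave velocity, ΔP ≈ ρ·PWV·U. The sketch below shows only that single ingredient (not the late systolic polynomial or the diastolic decay), and the numbers in it are illustrative assumptions:

```python
import numpy as np

RHO_BLOOD = 1060.0           # blood density, kg/m^3
MMHG_PER_PA = 1.0 / 133.322  # unit conversion

def early_systolic_pressure(u, pwv, p_dias):
    """Water-hammer estimate of the early systolic upstroke:
    P(t) = P_diastolic + rho * PWV * U(t), with U in m/s and pressures in mmHg."""
    return p_dias + RHO_BLOOD * pwv * np.asarray(u) * MMHG_PER_PA

# Illustrative early-systolic velocity ramp (0 -> 1 m/s) with PWV = 8 m/s, P_dias = 75 mmHg.
u = np.linspace(0.0, 1.0, 50)
p = early_systolic_pressure(u, pwv=8.0, p_dias=75.0)
print(f"pressure at peak early-systolic flow ~ {p[-1]:.1f} mmHg")   # 75 plus roughly 64 mmHg
```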

  4. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2013-01-01

    This textbook presents basic and advanced computational physics in a very didactic style. It contains clear, simply presented mathematical descriptions of many of the most important algorithms and techniques used in computational physics. The first part of the book discusses the basic numerical methods. A large number of exercises and computer experiments allow the reader to study the properties of these methods. The second part concentrates on the simulation of classical and quantum systems. It uses a rather general concept for the equation of motion which can be applied to ordinary and partial differential equations. Several classes of integration methods are discussed, including not only the standard Euler and Runge-Kutta methods but also multistep methods and the class of Verlet methods, which is introduced by studying motion in Liouville space. Besides the classical methods, inverse interpolation is discussed, together with the p...
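
    The velocity form of the Verlet family mentioned above is short enough to state in full; the sketch below integrates a harmonic oscillator with it and is a generic textbook formulation, not code taken from the book.

```python
import numpy as np

def velocity_verlet(force, x0, v0, dt, n_steps, mass=1.0):
    """x_{n+1} = x_n + v_n*dt + a_n*dt^2/2;  v_{n+1} = v_n + (a_n + a_{n+1})*dt/2."""
    x, v = float(x0), float(v0)
    a = force(x) / mass
    traj = [(x, v)]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2
        a_new = force(x) / mass
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        traj.append((x, v))
    return traj

# Harmonic oscillator F = -k*x: the energy error stays bounded (symplectic behaviour).
k = 1.0
x_end, v_end = velocity_verlet(lambda x: -k * x, x0=1.0, v0=0.0, dt=0.05, n_steps=2000)[-1]
print(f"energy drift after 2000 steps: {0.5 * v_end**2 + 0.5 * k * x_end**2 - 0.5:.2e}")
```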

  5. Shale fabric and velocity anisotropy : a study from Pikes Peak Waseca Oil Pool, Saskatchewan

    Energy Technology Data Exchange (ETDEWEB)

    Newrick, R.T.; Lawton, D.C. [Calgary Univ., AB (Canada). Dept. of Geology and Geophysics

    2004-07-01

    The stratigraphic sequence of the Pikes Peak region in west-central Saskatchewan consists of a thick sequence of shale overlying interbedded sandstones, shale and coal of the Mannville Group. Hydrocarbons exist in the Waseca, Sparky and General Petroleum Formations in the Pikes Peak region. The primary objective of this study was to examine the layering of clay minerals in the shale and to find similarities or differences between samples that may be associated with velocity anisotropy. Anisotropy is of key concern in areas with thick shale sequences. Several processing algorithms include corrections for velocity anisotropy in order for seismic images to be well focused and laterally positioned. This study also estimated the Thomsen parameters of anisotropy through field studies. The relationship between the shale fabric and anisotropy was determined by imaging core samples from Pikes Peak with a scanning electron microscope. Shale from two wells in the Waseca Oil Pool demonstrated highly variable fabric over a limited vertical extent. No layering of clay minerals was noted at the sub-centimetre scale. The transverse isotropy of the stratigraphy was therefore considered to be mainly intrinsic. 7 refs., 3 tabs., 9 figs.

  6. Influence of Velocity on Variability in Gait Kinematics

    DEFF Research Database (Denmark)

    Yang, Sylvia X M; Larsen, Peter K; Alkjær, Tine

    2014-01-01

    the concurrence of joint angles throughout a gait cycle at three different velocities (3.0, 4.5, 6.0 km/h). Six datasets at each velocity were collected from 16 men. A variability range VR throughout the gait cycle at each velocity for each joint angle for each person was calculated. The joint angles at each...... velocity were compared pairwise, and whenever this showed values within the VR of this velocity, the case was positive. By adding the positives throughout the gait cycle, phases with high and low concurrences were located; peak concurrence was observed at mid-stance phase. Striving for the same velocity...

  7. Modified circular velocity law

    Science.gov (United States)

    Djeghloul, Nazim

    2018-05-01

    A modified circular velocity law is presented for a test body orbiting around a spherically symmetric mass. This law exhibits a distance scale parameter and allows one to recover both the usual Newtonian behaviour at smaller distances and a constant velocity limit at large scale. Application to the Galaxy predicts the known behaviour and also leads to a galactic mass in accordance with the measured visible stellar mass, so that additional dark matter inside the Galaxy can be avoided. It is also shown that this circular velocity law can be embedded in a geometrical description of spacetime within the standard general relativity framework upon relaxing the usual asymptotic flatness condition. This formulation allows one to redefine the introduced Newtonian scale limit in terms of the central mass exclusively. Moreover, a satisfactory answer to the galactic escape speed problem can be provided, indicating the possibility that one can also dispense with a dark matter halo outside the Galaxy.

  8. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  9. 14 CFR 29.87 - Height-velocity envelope.

    Science.gov (United States)

    2010-01-01

    ... Category A engine isolation requirements, the height-velocity envelope for complete power failure must be ... Title 14, Aeronautics and Space; AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT, Flight Performance, § 29.87 Height-velocity envelope. (a) ...

  10. Detonation velocity in poorly mixed gas mixtures

    Science.gov (United States)

    Prokhorov, E. S.

    2017-10-01

    A technique for computing the average velocity of a plane detonation wave front in a poorly mixed mixture of gaseous hydrocarbon fuel and oxygen is proposed. Here it is assumed that, along the direction of detonation propagation, the chemical composition of the mixture has periodic fluctuations caused, for example, by layered stratification of the gas charge. The technique is based on the analysis of the functional dependence of the ideal (Chapman-Jouguet) detonation velocity on the mole fraction (molar concentration) of the fuel. It is shown that the average velocity of detonation can be significantly (by more than 10%) less than the velocity of ideal detonation. A dependence is established that permits estimation of the degree of mixing of the gas mixture based on measurements of the average detonation velocity.
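
    One hedged way to see why compositional fluctuations lower the average front velocity (an illustration, not necessarily the paper's exact construction) is to note that the front spends more time in the slow segments: over one spatial period of the composition profile, the mean velocity is the harmonic mean of the local Chapman-Jouguet velocities, which falls below the ideal value at the mean composition. The D_CJ(c) curve below is a hypothetical placeholder for tabulated or thermochemically computed values.

```python
import numpy as np

def d_cj(c):
    # Hypothetical Chapman-Jouguet velocity (km/s) vs. fuel mole fraction c,
    # peaked near stoichiometry; a stand-in for thermochemical calculations.
    return 2.4 - 12.0 * (c - 0.33) ** 2

x = np.linspace(0.0, 1.0, 2001)               # one spatial period (arbitrary length units)
c = 0.33 + 0.12 * np.sin(2.0 * np.pi * x)     # periodic composition fluctuation

d_avg = 1.0 / np.mean(1.0 / d_cj(c))          # harmonic mean = distance / total transit time
d_ideal = d_cj(np.mean(c))                    # ideal CJ velocity at the mean composition
print(f"average {d_avg:.3f} km/s vs ideal {d_ideal:.3f} km/s "
      f"({100.0 * (1.0 - d_avg / d_ideal):.1f}% lower)")
```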

  11. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    Science.gov (United States)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to reduce the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and computational performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
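
    For orientation, the frog-leaping update itself is compact: sort the population, deal the frogs into memeplexes, repeatedly move each memeplex's worst frog toward its best frog (falling back to the global best, then to a random reset), and reshuffle. The sketch below is a generic minimization version with textbook parameter choices, not the authors' tuned dispersion-curve inversion setup.

```python
import numpy as np

def sfla(f, bounds, n_frogs=30, n_memeplexes=5, n_gen=50, n_local=10, seed=None):
    """Minimize f over box bounds with a basic shuffled frog-leaping scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    frogs = rng.uniform(lo, hi, size=(n_frogs, dim))
    fit = np.apply_along_axis(f, 1, frogs)
    for _ in range(n_gen):
        order = np.argsort(fit)                       # best frog first
        frogs, fit = frogs[order], fit[order]
        best_global = frogs[0].copy()
        for m in range(n_memeplexes):                 # interleaved partition into memeplexes
            idx = np.arange(m, n_frogs, n_memeplexes)
            for _ in range(n_local):
                sub = idx[np.argsort(fit[idx])]
                b, w = sub[0], sub[-1]                # best and worst frog of the memeplex
                for target in (frogs[b], best_global, None):
                    if target is None:                # last resort: random reset
                        cand = rng.uniform(lo, hi)
                    else:                             # jump toward a better frog
                        cand = np.clip(frogs[w] + rng.random(dim) * (target - frogs[w]), lo, hi)
                    fc = f(cand)
                    if fc < fit[w] or target is None:
                        frogs[w], fit[w] = cand, fc
                        break
    return frogs[np.argmin(fit)], fit.min()

# Quick check on a 4-D sphere function: the minimum at the origin should be approached.
best_x, best_f = sfla(lambda z: float(np.sum(z ** 2)), bounds=[(-5.0, 5.0)] * 4, seed=0)
print(best_x, best_f)
```

    In the actual inversion, f would be the misfit between observed and forward-modelled dispersion curves, and each frog would hold the layer shear-wave velocities.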

  12. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    Science.gov (United States)

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  13. Concept of AHRS Algorithm Designed for Platform Independent Imu Attitude Alignment

    Science.gov (United States)

    Tomaszewski, Dariusz; Rapiński, Jacek; Pelc-Mieczkowska, Renata

    2017-12-01

    Nowadays, along with the advancement of technology, one can observe the rapid development of various types of navigation systems. Satellite navigation, so far the most popular, is now supported by positioning results calculated with the use of other measurement systems. The method and manner of integration depend directly on the intended application of the system being developed. To increase the frequency of readings and improve the operation of outdoor navigation systems, satellite navigation systems (GPS, GLONASS, etc.) can be supported with inertial navigation. Such a method of navigation consists of several steps. The first stage is the determination of the initial orientation of the inertial measurement unit, called INS alignment. During this process, on the basis of acceleration and angular velocity readings, values of the Euler angles (pitch, roll, yaw) are calculated, allowing for unambiguous orientation of the sensor coordinate system relative to the external coordinate system. The following study presents the concept of an AHRS (attitude and heading reference system) algorithm that determines the Euler angles. The study was conducted with the use of readings from low-cost MEMS cell phone sensors. Subsequently, the results were analyzed to determine the accuracy of the featured algorithm. On the basis of the performed experiments, the validity of the developed algorithm was confirmed.
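
    For a stationary sensor, the alignment step reduces to recovering pitch and roll from the gravity vector seen by the accelerometer and heading from the tilt-compensated magnetometer; a full AHRS would then refine these with the gyroscope. The sketch below uses one common aerospace sign convention, and since axis conventions differ between phone sensor frames it should be read as illustrative only.

```python
import numpy as np

def coarse_alignment(acc, mag):
    """Estimate Euler angles (roll, pitch, yaw) in radians from one static
    accelerometer reading acc = (ax, ay, az) and magnetometer reading mag = (mx, my, mz)."""
    ax, ay, az = acc
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, mz = mag
    # Tilt-compensate the magnetometer, then take the horizontal heading.
    mxh = mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch) + mz * np.cos(roll) * np.sin(pitch)
    myh = my * np.cos(roll) - mz * np.sin(roll)
    yaw = np.arctan2(-myh, mxh)
    return roll, pitch, yaw

# Illustrative static reading: gravity mostly on +z, a small tilt, an arbitrary field vector.
angles = coarse_alignment(acc=(0.3, -0.2, 9.79), mag=(18.0, -4.0, 43.0))
print([f"{np.degrees(a):.1f} deg" for a in angles])
```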

  14. A solution algorithm for fluid-particle flows across all flow regimes

    Science.gov (United States)

    Kong, Bo; Fox, Rodney O.

    2017-09-01

    Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.

  15. Sampling-Based Motion Planning Algorithms for Replanning and Spatial Load Balancing

    Energy Technology Data Exchange (ETDEWEB)

    Boardman, Beth Leigh [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-12

    The common theme of this dissertation is sampling-based motion planning with the two key contributions being in the area of replanning and spatial load balancing for robotic systems. Here, we begin by recalling two sampling-based motion planners: the asymptotically optimal rapidly-exploring random tree (RRT*), and the asymptotically optimal probabilistic roadmap (PRM*). We also provide a brief background on collision cones and the Distributed Reactive Collision Avoidance (DRCA) algorithm. The next four chapters detail novel contributions for motion replanning in environments with unexpected static obstacles, for multi-agent collision avoidance, and spatial load balancing. First, we show improved performance of the RRT* when using the proposed Grandparent-Connection (GP) or Focused-Refinement (FR) algorithms. Next, the Goal Tree algorithm for replanning with unexpected static obstacles is detailed and proven to be asymptotically optimal. A multi-agent collision avoidance problem in obstacle environments is approached via the RRT*, leading to the novel Sampling-Based Collision Avoidance (SBCA) algorithm. The SBCA algorithm is proven to guarantee collision free trajectories for all of the agents, even when subject to uncertainties in the knowledge of the other agents’ positions and velocities. Given that a solution exists, we prove that livelocks and deadlock will lead to the cost to the goal being decreased. We introduce a new deconfliction maneuver that decreases the cost-to-come at each step. This new maneuver removes the possibility of livelocks and allows a result to be formed that proves convergence to the goal configurations. Finally, we present a limited range Graph-based Spatial Load Balancing (GSLB) algorithm which fairly divides a non-convex space among multiple agents that are subject to differential constraints and have a limited travel distance. The GSLB is proven to converge to a solution when maximizing the area covered by the agents. The analysis

  16. Velocity navigator for motion compensated thermometry.

    Science.gov (United States)

    Maier, Florian; Krafft, Axel J; Yung, Joshua P; Stafford, R Jason; Elliott, Andrew; Dillmann, Rüdiger; Semmler, Wolfhard; Bock, Michael

    2012-02-01

    Proton resonance frequency shift thermometry is sensitive to breathing motion, which leads to incorrect phase differences. In this work, a novel velocity-sensitive navigator technique for triggering MR thermometry image acquisition is presented. A segmented echo planar imaging pulse sequence was modified for velocity-triggered temperature mapping. Trigger events were generated when the estimated velocity value was less than 0.2 cm/s during the slowdown phase parallel to the velocity-encoding direction. To remove remaining high-frequency spikes from pulsation in real time, a Kalman filter was applied to the velocity navigator data. A phantom experiment with heating and an initial volunteer experiment without heating were performed to show the applicability of this technique. Additionally, a breath-hold experiment was conducted for comparison. A temperature rise of ΔT = +37.3°C was seen in the phantom experiment, and a root mean square error (RMSE) outside the heated region of 2.3°C was obtained for periodic motion. In the volunteer experiment, an RMSE of 2.7°C/2.9°C (triggered vs. breath hold) was measured. A novel velocity navigator with real-time Kalman filter postprocessing significantly improves the temperature accuracy over non-triggered acquisitions and appears to be comparable to a breath-held acquisition. The proposed technique might be clinically applied for monitoring of thermal ablations in abdominal organs.
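
    The de-spiking step can be as small as a scalar Kalman filter with a random-walk model for the navigator velocity, followed by the ±0.2 cm/s slowdown criterion for triggering. The noise variances and the example readings below are illustrative assumptions, not the sequence parameters from the paper.

```python
def kalman_despike(measurements, q=0.05, r=0.1, threshold=0.2):
    """Filter a stream of navigator velocities (cm/s) with a random-walk Kalman filter
    and flag samples where the filtered speed falls below the trigger threshold."""
    x, p = measurements[0], 1.0          # state estimate and its variance
    filtered, triggers = [], []
    for z in measurements:
        p += q                           # predict: random-walk process noise
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update with the new measurement
        p *= (1.0 - k)
        filtered.append(x)
        triggers.append(abs(x) < threshold)
    return filtered, triggers

vel = [1.2, 1.0, 0.9, 2.0, 0.6, 0.4, 0.25, 0.15, 0.1, 0.05, 0.0, -0.05]  # 2.0 is a pulsation spike
filt, trig = kalman_despike(vel)
print([round(v, 2) for v in filt])
print(trig)   # the last few flags are True once the filtered speed settles below 0.2 cm/s
```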

  17. ROTATIONAL VELOCITIES FOR M DWARFS

    International Nuclear Information System (INIS)

    Jenkins, J. S.; Ramsey, L. W.; Jones, H. R. A.; Pavlenko, Y.; Barnes, J. R.; Pinfield, D. J.; Gallardo, J.

    2009-01-01

    We present spectroscopic rotation velocities (v sin i) for 56 M dwarf stars using high-resolution Hobby-Eberly Telescope High Resolution Spectrograph red spectroscopy. In addition, we have also determined photometric effective temperatures, masses, and metallicities ([Fe/H]) for some stars observed here and in the literature where we could acquire accurate parallax measurements and relevant photometry. We have increased the number of known v sin i values for mid-M stars by around 80% and can confirm a weakly increasing rotation velocity with decreasing effective temperature. Our sample of v sin i values peaks at low velocities (∼3 km s-1). We find a change in the rotational velocity distribution between early M and late M stars, which is likely due to the changing field topology between partially and fully convective stars. There is also a possible further change in the rotational distribution toward the late M dwarfs, where dust begins to play a role in the stellar atmospheres. We also link v sin i to age and show how it can be used to provide mid-M star age limits. When all literature velocities for M dwarfs are added to our sample, there are 198 with v sin i ≤ 10 km s-1 and 124 in the mid-to-late M star regime (M3.0-M9.5) where measuring precision optical radial velocities is difficult. In addition, we also searched the spectra for any significant Hα emission or absorption. Forty-three percent were found to exhibit such emission and could represent young, active objects with high levels of radial-velocity noise. We acquired two epochs of spectra for the star GJ1253, spread by almost one month, and the Hα profile changed from showing no clear signs of emission to exhibiting a clear emission peak. Four stars in our sample appear to be low-mass binaries (GJ1080, GJ3129, Gl802, and LHS3080), with both GJ3129 and Gl802 exhibiting double Hα emission features. The tables presented here will aid any future M star planet search target selection to extract stars with low v

  18. Propagation Velocity of Solid Earth Tides

    Science.gov (United States)

    Pathak, S.

    2017-12-01

    One of the significant considerations in most geodetic investigations is to take into account the effect of solid Earth tides on station locations and the consequent impact on coordinate time series. In this research work, the propagation velocity of the solid Earth tides between Indian stations is computed. Mean daily coordinates for the stations have been computed by applying the static precise point positioning technique over a day. The computed coordinates are used as input for computing the tidal displacements at the stations by the gravity method, along three directions at a 1-minute interval for 24 hours. Further, the baseline distances are computed between four Indian stations. The propagation velocity of solid Earth tides can be computed by studying their concurrent effect at stations separated by a known baseline distance, together with the time taken by the tides to travel from one station to another. The propagation velocity helps in estimating the impact at any station if the effect at a known station for a specific time period is known. Thus, with knowledge of the propagation velocity, the spatial and temporal effects of solid Earth tides can be estimated with respect to a known station. As theoretically explained, the tides are generated by the positions of the celestial bodies revolving about the Earth. Hence there is a need to examine the correlation of the propagation velocity with the rotation speed of the Earth. The propagation velocity of solid Earth tides comes out to be in the range of 440-470 m/s, in good agreement with the Earth's rotation speed.

  19. On using the Multiple Signal Classification algorithm to study microbaroms

    Science.gov (United States)

    Marcillo, O. E.; Blom, P. S.; Euler, G. G.

    2016-12-01

    Multiple Signal Classification (MUSIC) (Schmidt, 1986) is a well-known high-resolution algorithm used in array processing for parameter estimation. We report on the application of MUSIC to infrasonic array data in a study of the structure of microbaroms. Microbaroms can be observed globally and display energy centered around 0.2 Hz. Microbaroms are an infrasonic signal generated by the non-linear interaction of ocean surface waves, which radiates into the ocean and atmosphere as well as into the solid earth in the form of microseisms. Microbarom sources are dynamic and, in many cases, distributed in space and moving in time. We assume that the microbarom energy detected by an infrasonic array is the result of multiple sources (with different back-azimuths) in the same bandwidth and apply the MUSIC algorithm accordingly to recover the back-azimuth and trace velocity of the individual components. Preliminary results show that the multiple-component assumption in MUSIC allows one to resolve fine structure in the microbarom band that can be related to multiple ocean surface phenomena.
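
    For readers unfamiliar with MUSIC, its core is an eigendecomposition of the array covariance matrix: steering vectors nearly orthogonal to the noise subspace produce peaks in the pseudo-spectrum, and keeping more than one signal eigenvector is what allows several simultaneous back-azimuths to be resolved in the same band. The sketch below is the textbook narrowband form for a uniform linear array; the infrasound application scans back-azimuth and trace velocity over the real 2-D array geometry instead, so the geometry and parameters here are purely illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90.0, 90.0, 721)):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex data; d: sensor spacing in wavelengths."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    _, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = V[:, : n_sensors - n_sources]           # noise-subspace eigenvectors
    idx = np.arange(n_sensors)
    p = np.empty(angles.size)
    for i, th in enumerate(np.deg2rad(angles)):
        a = np.exp(-2j * np.pi * d * idx * np.sin(th))        # steering vector
        p[i] = 1.0 / np.real(a.conj() @ (En @ En.conj().T) @ a)
    return angles, p

# Two plane waves at -20 and +35 degrees on an 8-sensor, half-wavelength-spaced array.
rng = np.random.default_rng(1)
sensors, n_snap = np.arange(8), 400
doas = np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(sensors, np.sin(doas)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((8, n_snap)) + 1j * rng.standard_normal((8, n_snap)))
angles, p = music_spectrum(A @ S + noise, n_sources=2)
locmax = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1      # local maxima
print(np.sort(angles[locmax[np.argsort(p[locmax])[-2:]]]))            # approximately -20 and 35
```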

  20. The species velocity of trees in Alaska

    Science.gov (United States)

    Morrison, B. D.; Napier, J.; de Lafontaine, G.; Heath, K.; Li, B.; Hu, F.; Greenberg, J. A.

    2017-12-01

    Anthropogenic climate change has motivated interest in the paleo record to enhance our knowledge about past vegetation responses to climate change and help understand potential responses in the future. Additionally, polar regions currently experience the most rapid rates of climate change globally, prompting concern over changes in the ecological composition of high-latitude ecosystems. Recent analyses have attempted to construct methods to estimate a species' ability to track climate change by computing climate velocity, a measure of the rate of climate displacement across a landscape which may indicate the speed an organism must migrate to keep pace with climate change. However, a challenge to using climate velocity in understanding range shifts is a lack of species-specificity in the velocity calculations: climate velocity does not actually use any species data in its analysis. To address the shortcomings of climate velocity in estimating species displacement rates, we computed the "species velocity" of white spruce, green alder and grey alder populations across the state of Alaska from the Last Glacial Maximum (LGM) to today. Species velocity represents the rate and direction a species is required to migrate to keep pace with a changing climate following the LGM. We used a species distribution model to determine past and present white spruce and alder distributions using statistically downscaled climate data at 60 m resolution. Species velocity (km/yr) was then derived by dividing the change in species distribution per year by the change in distribution across Alaska. High velocities indicate locations where the species' environmental envelope is changing drastically and the species must disperse rapidly to survive climate change. As a result, high-velocity regions are more vulnerable to distribution shifts and at higher risk of local extinction. Conversely, low species velocities indicate locations where the local climate envelope is shifting relatively slowly, reducing the stress to disperse quickly

  1. Middle cerebral artery blood velocity during running

    DEFF Research Database (Denmark)

    Lyngeraa, Tobias; Pedersen, Lars Møller; Mantoni, T

    2013-01-01

    for eight subjects, respectively, were excluded from analysis because of insufficient signal quality. Running increased mean arterial pressure and mean MCA velocity and induced rhythmic oscillations in BP and in MCA velocity corresponding to the difference between step rate and heart rate (HR) frequencies....... During running, rhythmic oscillations in arterial BP induced by interference between HR and step frequency impact on cerebral blood velocity. For the exercise as a whole, average MCA velocity becomes elevated. These results suggest that running not only induces an increase in regional cerebral blood flow...

  2. Realization of a neural algorithm by means of front-propagation in a thyristor-based hybrid system

    CERN Document Server

    Niedernostheide, F J; Freyd, O; Bode, M; Gorbatyuk, A V

    2003-01-01

    Propagating fronts are generic structures in a bistable diffusion-driven system and can be used to realize neural algorithms, as e.g., the Kohonen or the neural-gas algorithm. We present an analog-digital hybrid system based on a thyristor-like structure with several gate terminals. This structure represents the continuous part in which a propagating front, separating a region of high current density from a region of low current density, is used to control the learning process of the neural algorithm. With a system containing five neurons and five gates in a quasi one-dimensional arrangement it is demonstrated that an efficient parallel operating learning process can be realized by using the winner-take-all principle and the front propagation, i.e. exploiting the intrinsic dynamics of the semiconductor device. Finally, numerical and analytical investigations of the dependency of the front velocity and its width on the load current have been performed since these are essential parameters for improving the netw...

  3. Realization of a neural algorithm by means of front-propagation in a thyristor-based hybrid system

    International Nuclear Information System (INIS)

    Niedernostheide, F.-J.; Schulze, H.-J.; Freyd, O.; Bode, M.; Gorbatyuk, A.V.

    2003-01-01

    Propagating fronts are generic structures in a bistable diffusion-driven system and can be used to realize neural algorithms, as e.g., the Kohonen or the neural-gas algorithm. We present an analog-digital hybrid system based on a thyristor-like structure with several gate terminals. This structure represents the continuous part in which a propagating front, separating a region of high current density from a region of low current density, is used to control the learning process of the neural algorithm. With a system containing five neurons and five gates in a quasi one-dimensional arrangement it is demonstrated that an efficient parallel operating learning process can be realized by using the winner-take-all principle and the front propagation, i.e. exploiting the intrinsic dynamics of the semiconductor device. Finally, numerical and analytical investigations of the dependency of the front velocity and its width on the load current have been performed since these are essential parameters for improving the network performance

  4. Predicting vertical jump height from bar velocity.

    Science.gov (United States)

    García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén

    2015-06-01

    The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration was lower than gravity (-9.81 m·s(-2)). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the National Slovenian swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) was able to explain 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) was able to explain 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity in the differences between FPV and Vtake-off was observed (r(2) = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r(2) = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. The relationship between the point at which bar acceleration is less than -9.81 m·s(-2) and the real take-off is affected by the velocity of movement. Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal to monitor ballistic exercise performance.
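
    The two estimators involved are simple enough to state directly: jump height follows from take-off velocity through ballistics, h = v²/(2g), and the study's regression maps peak bar velocity to height as h = 16.577·Vmax − 16.384 (height in cm, velocity in m/s). A quick sketch of both, for illustration only (in practice Vmax and the take-off velocity are different quantities):

```python
G = 9.81  # m/s^2

def jump_height_from_takeoff(v_takeoff):
    """Ballistic flight: height (cm) reached for a given vertical take-off velocity (m/s)."""
    return 100.0 * v_takeoff ** 2 / (2.0 * G)

def jump_height_from_vmax(v_max):
    """Regression reported in the study: height (cm) from maximum bar velocity (m/s)."""
    return 16.577 * v_max - 16.384

v = 2.2   # illustrative velocity value; Vmax (bar) and take-off velocity generally differ
print(f"ballistic estimate: {jump_height_from_takeoff(v):.1f} cm, "
      f"regression estimate: {jump_height_from_vmax(v):.1f} cm")
```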

  5. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  6. Reality Check Algorithm for Complex Sources in Early Warning

    Science.gov (United States)

    Karakus, G.; Heaton, T. H.

    2013-12-01

    In almost all currently operating earthquake early warning (EEW) systems, presently available seismic data are used to predict future shaking. In most cases, location and magnitude are estimated. We are developing an algorithm to test the goodness of that prediction in real time. We monitor envelopes of acceleration, velocity, and displacement; if they deviate significantly from the envelopes predicted by Cua's envelope GMPEs, then we declare an overfit (perhaps a false alarm) or an underfit (possibly a larger event has just occurred). This algorithm is designed to provide a robust measure and to work as quickly as possible in real time. We monitor the logarithm of the ratio between the envelopes of the ongoing observed event and the envelopes of the ground-motion channels predicted by the Virtual Seismologist (VS) (Cua, G. and Heaton, T.). Then, we recursively filter this result with a simple running median (a de-spiking operator) to minimize the effect of a single high value. Depending on the filtered value, we make a decision: if the value is large enough (e.g., >1), we declare that a larger event is in progress; similarly, if the value is small enough (e.g., <-1), we declare a false alarm. We design the algorithm to work over a wide range of amplitude scales; that is, it should work for both small and large events.
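
    The decision rule therefore has three ingredients: a log ratio of observed to predicted envelopes, a running-median de-spiking filter, and the two thresholds. A minimal sketch is given below; the ±1 thresholds come from the description above, while the window length and the plain-numpy running median are illustrative choices.

```python
import numpy as np

def reality_check(obs_env, pred_env, window=9, hi=1.0, lo=-1.0):
    """Compare observed and predicted ground-motion envelopes in real time.
    Returns 'underfit' (possibly a larger event), 'overfit' (possible false alarm),
    or 'consistent', based on the running-median-filtered log ratio."""
    r = np.log10(np.asarray(obs_env) / np.asarray(pred_env))
    half = window // 2
    padded = np.pad(r, half, mode="edge")
    r_med = np.array([np.median(padded[i:i + window]) for i in range(len(r))])
    latest = r_med[-1]
    if latest > hi:
        return "underfit: a larger event may be in progress"
    if latest < lo:
        return "overfit: possible false alarm"
    return "consistent with the predicted envelope"

# Observed envelope about ten times the prediction -> log ratio near 1 -> underfit flag.
pred = np.full(50, 0.02)
obs = np.full(50, 0.02)
obs[25:] = 0.25
print(reality_check(obs, pred))
```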

  7. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated KV-CBCT Images for Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    Cline, K; Narayanasamy, G; Obediat, M; Stanley, D; Stathakis, S; Kirby, N [University of Texas Health Science Center at San Antonio, Cancer Therapy and Research Center, San Antonio, TX (United States); Kim, H [University of California San Francisco, San Francisco, CA (United States)

    2015-06-15

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar values of accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT

  8. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated KV-CBCT Images for Quality Assurance

    International Nuclear Information System (INIS)

    Cline, K; Narayanasamy, G; Obediat, M; Stanley, D; Stathakis, S; Kirby, N; Kim, H

    2015-01-01

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar values of accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT

  9. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  10. The effect of fog on radionuclide deposition velocities

    International Nuclear Information System (INIS)

    Gibb, R.; Carson, P.; Thompson, W.

    1997-01-01

    Current nuclear power station release models do not evaluate deposition under foggy atmospheric conditions. Deposition velocities and scavenging coefficients of radioactive particles entrained in fog are presented for the Point Lepreau area of the Bay of Fundy coast. It is recommended to calculate deposition based on fog deposition velocities. The deposition velocities can be calculated from common meteorological data. The range of deposition velocities is approximately 1 - 100 cm/s. Fog deposition is surface roughness dependent with forests having larger deposition and deposition velocities than soil or grasses. (author)

  11. Balance velocities of the Greenland ice sheet

    DEFF Research Database (Denmark)

    Joughin, I.; Fahnestock, M.; Ekholm, Simon

    1997-01-01

    We present a map of balance velocities for the Greenland ice sheet. The resolution of the underlying DEM, which was derived primarily from radar altimetry data, yields far greater detail than earlier balance velocity estimates for Greenland. The velocity contours reveal in striking detail......, the balance map is useful for ice-sheet modelling, mass balance studies, and field planning....

  12. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  13. Optimal velocity difference model for a car-following theory

    International Nuclear Information System (INIS)

    Peng, G.H.; Cai, X.H.; Liu, C.Q.; Cao, B.F.; Tuo, M.X.

    2011-01-01

    In this Letter, we present a new optimal velocity difference model for car-following theory based on the full velocity difference model. The linear stability condition of the new model is obtained by using linear stability theory. Unrealistically high decelerations do not appear in the OVDM. Numerical simulation of traffic dynamics shows that the new model can avoid the disadvantage of negative velocities, which occur at a small sensitivity coefficient λ in the full velocity difference model, by adjusting the coefficient of the optimal velocity difference; this shows that collisions can disappear in the improved model. -- Highlights: → A new optimal velocity difference car-following model is proposed. → The effects of the optimal velocity difference on the stability of traffic flow have been explored. → The starting and braking processes were examined through simulation. → The optimal velocity difference term can avoid the disadvantage of negative velocity.
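
    For orientation, the base full velocity difference model updates each vehicle's acceleration from an optimal velocity function of its headway plus a term proportional to the velocity difference to its leader. The sketch below simulates that base model with the common Helbing-Tilch optimal velocity function on a ring road; the paper's OVDM adds an optimal velocity difference term with an adjustable coefficient, which is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def optimal_velocity(headway, v1=6.75, v2=7.91, c1=0.13, c2=1.57, l_c=5.0):
    # Helbing-Tilch calibration of the optimal velocity function (m/s); headway in m.
    return v1 + v2 * np.tanh(c1 * (headway - l_c) - c2)

def fvdm_step(x, v, a=0.41, lam=0.5, dt=0.1, road_length=400.0):
    """One explicit Euler step of the full velocity difference model on a ring road.
    Vehicle i follows vehicle i+1 (wrapping around the ring)."""
    leader_x, leader_v = np.roll(x, -1), np.roll(v, -1)
    headway = (leader_x - x) % road_length
    dvdt = a * (optimal_velocity(headway) - v) + lam * (leader_v - v)
    return (x + v * dt) % road_length, v + dvdt * dt

# Ten vehicles, uniform 40 m spacing, with a 1 m perturbation on the first vehicle.
n = 10
x = np.linspace(0.0, 400.0, n, endpoint=False)
x[0] += 1.0
v = np.full(n, optimal_velocity(400.0 / n))
for _ in range(5000):
    x, v = fvdm_step(x, v)
print(f"velocity spread after 500 s: {v.max() - v.min():.4f} m/s")  # decays in this stable configuration
```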

  14. Three-dimensional Upper Crustal Velocity and Attenuation Structures of the Central Tibetan Plateau from Local Earthquake Tomography

    Science.gov (United States)

    Zhou, B.; Liang, X.; Lin, G.; Tian, X.; Zhu, G.; Mechie, J.; Teng, J.

    2017-12-01

    A series of V-shaped conjugate strike-slip faults are the most spectacular geologic features in the central Tibetan plateau. A previous study suggested that this conjugate strike-slip fault system accommodates the east-west extension and coeval north-south contraction. Another previous study suggested that the continuous convergence between the Indian and Eurasian continents and the eastward asthenospheric flow generated lithospheric paired general-shear (PGS) deformation, which then caused the development of conjugate strike-slip faults in central Tibet. Local seismic tomography can image three dimensional upper-crustal velocity and attenuation structures in central Tibet, which will provide us with more information about the spatial distribution of physical properties and compositional variations around the conjugate strike-slip fault zone. Ultimately, this information could improve our understanding of the development mechanism of the conjugate strike-slip fault system. In this study, we collected 6,809 Pg and 2,929 Sg arrival times from 414 earthquakes recorded by the temporary SANDWICH and permanent CNSN networks from November 2013 to November 2015. We also included 300 P and 17 S arrival times from 12 shots recorded by the INDEPTH III project during the summer of 1998 in the velocity tomography. We inverted for preliminary Vp and Vp/Vs models using the SIMUL2000 tomography algorithm, and then relocated the earthquakes with these preliminary velocity models. After that, we inverted for the final velocity models with these improved source locations and origin times. After the velocity inversion, we performed local attenuation tomography using t* measurements from the same dataset with an already existing approach. There are correlated features in the velocity and attenuation structures. From the surface to 10 km depth, the study area is dominated by high Vp and Qp anomalies. However, from 10 km to 20 km depth, there is a low Vp and Qp zone distributed along the

  15. Use of Genetic Algorithms to solve Inverse Problems in Relativistic Hydrodynamics

    Science.gov (United States)

    Guzmán, F. S.; González, J. A.

    2018-04-01

    We present the use of Genetic Algorithms (GAs) as a strategy to solve inverse problems associated with models of relativistic hydrodynamics. The signal we consider, to emulate an observation, is the density of a relativistic gas measured at a point where a shock is traveling. This shock is generated numerically from a Riemann problem with mildly relativistic conditions. The inverse problem we propose is the prediction of the initial conditions of density, velocity and pressure of the Riemann problem that gave origin to that signal. For this we use the density, velocity and pressure of the gas on both sides of the discontinuity as the six genes of an organism, initially with random values within a tolerance. We then prepare an initial population of N of these organisms and evolve them using methods based on GAs. In the end, the organism with the best fitness of each generation is compared to the signal, and the process ends when the set of initial conditions of the organisms of a later generation fits the signal within a tolerance.
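
    The search loop itself is a standard real-coded genetic algorithm: each organism is a vector of six genes (density, velocity and pressure on each side of the discontinuity), fitness is the misfit between the forward-modelled density trace and the target signal, and selection, crossover and mutation produce the next generation. The sketch below shows that loop with a placeholder forward model, since a relativistic Riemann solver is far beyond a few lines; forward_model, the parameter bounds and the GA settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
t_grid = np.linspace(0.0, 1.0, 64)

def forward_model(genes):
    # Placeholder for the relativistic hydrodynamics solver: maps the six Riemann
    # initial-condition genes to a synthetic "density at a probe" trace.
    rho_l, v_l, p_l, rho_r, v_r, p_r = genes
    front = 1.0 / (1.0 + np.exp(-20.0 * (t_grid - 0.4 - 0.2 * (v_l - v_r))))
    return rho_l + (rho_r - rho_l) * front + 0.05 * (p_l - p_r) * t_grid

def fitness(genes, signal):
    return -np.mean((forward_model(genes) - signal) ** 2)   # higher is better

def evolve(signal, bounds, pop_size=60, n_gen=200, p_mut=0.1):
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_gen):
        fit = np.array([fitness(ind, signal) for ind in pop])
        idx = [max(rng.integers(pop_size, size=2), key=lambda i: fit[i]) for _ in range(pop_size)]
        parents = pop[idx]                                   # binary tournament selection
        alpha = rng.random((pop_size, 1))                    # arithmetic (blend) crossover
        children = alpha * parents + (1.0 - alpha) * np.roll(parents, 1, axis=0)
        mask = rng.random(children.shape) < p_mut            # Gaussian mutation, clipped to bounds
        children = np.clip(children + mask * rng.normal(0.0, 0.05 * (hi - lo), children.shape), lo, hi)
        children[0] = pop[np.argmax(fit)]                    # elitism: keep the best organism
        pop = children
    fit = np.array([fitness(ind, signal) for ind in pop])
    return pop[np.argmax(fit)]

bounds = [(0.1, 10.0), (-0.9, 0.9), (0.1, 10.0)] * 2         # (rho, v, p) on each side
signal = forward_model(np.array([5.0, 0.3, 2.0, 1.0, -0.1, 0.5]))
# Best-fit genes; the placeholder model is not uniquely invertible, so they need
# not equal the values used to generate the signal, only reproduce it.
print(np.round(evolve(signal, bounds), 2))
```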

  16. Velocity measurement of conductor using electromagnetic induction

    International Nuclear Information System (INIS)

    Kim, Gu Hwa; Kim, Ho Young; Park, Joon Po; Jeong, Hee Tae; Lee, Eui Wan

    2002-01-01

    A basic technology was investigated to measure the speed of a conductor by a non-contact electromagnetic method. The principle of the velocity sensor is electromagnetic induction. To design the electromagnet for the velocity sensor, a 2D electromagnetic analysis was performed using FEM software. The sensor output was analyzed according to the parameters of the velocity sensor, such as the type of magnetizing current and the lift-off. The output of the magnetic sensor depended linearly on the conductor speed and the magnetizing current. To compensate for lift-off changes during velocity measurement, another magnetic sensor was placed at the pole of the electromagnet.

  17. Start-up flow in a three-dimensional lid-driven cavity by means of a massively parallel direction splitting algorithm

    KAUST Repository

    Guermond, J. L.

    2011-05-04

    The purpose of this paper is to validate a new highly parallelizable direction splitting algorithm. The parallelization capabilities of this algorithm are illustrated by providing a highly accurate solution for the start-up flow in a three-dimensional impulsively started lid-driven cavity of aspect ratio 1×1×2 at Reynolds numbers 1000 and 5000. The computations are done in parallel (up to 1024 processors) on adapted grids of up to 2 billion nodes in three space dimensions. Velocity profiles are given at dimensionless times t=4, 8, and 12; at least four digits are expected to be correct at Re=1000. © 2011 John Wiley & Sons, Ltd.

  18. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    Science.gov (United States)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions such that all of the interpolation information needed for each particle is available either locally on its host process or neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster relative to a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on

  19. Cognitive regulation of saccadic velocity by reward prospect.

    Science.gov (United States)

    Chen, Lewis L; Hung, Leroy Y; Quinet, Julie; Kosek, Kevin

    2013-08-01

    It is known that expectation of reward speeds up saccades. Past studies have also shown the presence of a saccadic velocity bias in the orbit, resulting from a biomechanical regulation over varying eccentricities. Nevertheless, whether and how reward expectation interacts with the biomechanical regulation of saccadic velocities over varying eccentricities remains unknown. We addressed this question by conducting a visually guided double-step saccade task. The role of reward expectation was tested in monkeys performing two consecutive horizontal saccades, one associated with reward prospect and the other not. To adequately assess saccadic velocity and avoid adaptation, we systematically varied initial eye positions, saccadic directions and amplitudes. Our results confirmed the existence of a velocity bias in the orbit, i.e., saccadic peak velocity decreased linearly as the initial eye position deviated in the direction of the saccade. The slope of this bias increased as saccadic amplitudes increased. Nevertheless, reward prospect facilitated velocity to a greater extent for saccades away from than for saccades toward the orbital centre, rendering an overall reduction in the velocity bias. The rate (slope) and magnitude (intercept) of reward modulation over this velocity bias were linearly correlated with amplitudes, similar to the amplitude-modulated velocity bias without reward prospect, which presumably resulted from a biomechanical regulation. Small-amplitude (≤ 5°) saccades received little modulation. These findings together suggest that reward expectation modulated saccadic velocity not as an additive signal but as a facilitating mechanism that interacted with the biomechanical regulation. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Global Plate Velocities from the Global Positioning System

    Science.gov (United States)

    Larson, Kristine M.; Freymueller, Jeffrey T.; Philipsen, Steven

    1997-01-01

    We have analyzed 204 days of Global Positioning System (GPS) data from the global GPS network spanning January 1991 through March 1996. On the basis of these GPS coordinate solutions, we have estimated velocities for 38 sites, mostly located on the interiors of the Africa, Antarctica, Australia, Eurasia, Nazca, North America, Pacific, and South America plates. The uncertainties of the horizontal velocity components range from 1.2 to 5.0 mm/yr. With the exception of sites on the Pacific and Nazca plates, the GPS velocities agree with absolute plate model predictions within 95% confidence. For most of the sites in North America, Antarctica, and Eurasia, the agreement is better than 2 mm/yr. We find no persuasive evidence for significant vertical motions (less than 3 standard deviations), except at four sites. Three of these four were sites constrained to geodetic reference frame velocities. The GPS velocities were then used to estimate angular velocities for eight tectonic plates. Absolute angular velocities derived from the GPS data agree with the no net rotation (NNR) NUVEL-1A model within 95% confidence except for the Pacific plate. Our pole of rotation for the Pacific plate lies 11.5 deg west of the NNR NUVEL-1A pole, with an angular speed 10% faster. Our relative angular velocities agree with NUVEL-1A except for some involving the Pacific plate. While our Pacific-North America angular velocity differs significantly from NUVEL-1A, our model and NUVEL-1A predict very small differences in relative motion along the Pacific-North America plate boundary itself. Our Pacific-Australia and Pacific-Eurasia angular velocities are significantly faster than NUVEL-1A, predicting more rapid convergence at these two plate boundaries. Along the East Pacific Rise, our Pacific-Nazca angular velocity agrees in both rate and azimuth with NUVEL-1A.
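
    The estimation described above rests on the rigid-plate relation v = ω × r between a site velocity and the plate's angular velocity. The following sketch, with entirely synthetic site coordinates and velocities (not the GPS solutions of the paper), shows how ω can be recovered from such data by linear least squares.

```python
# Recover a rigid plate's angular velocity from site velocities, assuming
# v = omega x r on a spherical Earth.  All inputs are synthetic.
import numpy as np

R_E = 6.371e6                                          # Earth radius, m
omega_true = np.array([1.0e-15, -2.0e-15, 3.0e-15])    # rad/s, made-up plate rotation

def site_position(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return R_E * np.array([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])

sites = [site_position(la, lo) for la, lo in [(40, -105), (52, 13), (-33, 151), (64, -22)]]
v_obs = [np.cross(omega_true, r) for r in sites]       # noise-free synthetic observations

def cross_matrix(r):
    """Matrix A(r) such that A(r) @ omega = omega x r."""
    x, y, z = r
    return np.array([[0.0,  z, -y],
                     [-z, 0.0,  x],
                     [ y, -x, 0.0]])

A = np.vstack([cross_matrix(r) for r in sites])
b = np.concatenate(v_obs)
omega_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(omega_est)   # reproduces omega_true up to numerical error
```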

  1. Operationality Improvement Control of Electric Power Assisted Wheelchair by Fuzzy Algorithm Considering Posture Angle

    Science.gov (United States)

    Murakami, Hiroki; Seki, Hirokazu; Minakata, Hideaki; Tadakuma, Susumu

    This paper describes a novel operationality improvement control for electric power assisted wheelchairs. The “Electric power assisted wheelchair”, which assists the driving force by electric motors, is expected to be widely used as a mobility support system for elderly and disabled people; however, the performance of straight and circular road driving must be further improved because the two wheels are driven independently. This paper proposes a novel operationality improvement control by fuzzy algorithm to realize stable driving on straight and circular roads. The suitable assist torque of the right and left wheels is determined by a fuzzy algorithm based on the posture angular velocity, the posture angle of the wheelchair, the human input torque proportion and the total human torque of the right and left wheels. Experiments on practical roads show the effectiveness of the proposed control system.

  2. Three dimensional reflection velocity analysis based on velocity model scan; Model scan ni yoru sanjigen hanshaha sokudo kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Minegishi, M; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)

    1996-05-01

    This paper introduces a reflection-wave velocity analysis method that uses model scanning to estimate velocities across a section, which is useful for constructing a velocity structure model in seismic exploration. In this method, a stripping-type analysis is carried out, wherein optimum structure parameters are determined for reflection waves one after another, beginning with those from the shallower parts. During this process, the velocity structures previously determined for the shallower parts are fixed, and only the lowest of the layers under analysis is subjected to model scanning. To account for the bending of ray paths at each velocity boundary in the shallower parts, ray-path tracing is used to calculate the reflection traveltime curve for the reflection surface being analyzed. Out of the reflection-wave traveltime curves calculated with various velocity structure models, the one that best fits the actual reflection traveltimes is selected. The degree of matching between the calculated and actual results is measured by the semblance of the data in a time window centred on the calculated reflected-wave traveltime. The structure parameters are estimated from the condition of maximum semblance. 1 ref., 4 figs.
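
    A minimal sketch of the semblance measure described above, assuming a gather stored as a trace-by-sample array and one candidate traveltime curve per velocity model; the data, window length and candidate curves are placeholders.

```python
# Semblance of a gather inside a time window centred on a predicted
# traveltime curve: stacked energy divided by total energy.
import numpy as np

def semblance(gather, t_pred, dt, half_win):
    """gather: (n_traces, n_samples); t_pred: predicted traveltime (s) per trace."""
    n_traces, n_samples = gather.shape
    offsets = np.arange(-half_win, half_win + 1)
    idx = np.round(t_pred / dt).astype(int)[:, None] + offsets[None, :]
    idx = np.clip(idx, 0, n_samples - 1)
    win = np.take_along_axis(gather, idx, axis=1)      # samples inside the window
    num = np.sum(np.sum(win, axis=0) ** 2)             # energy of the stack
    den = n_traces * np.sum(win ** 2)                  # total energy
    return num / den if den > 0 else 0.0

# Scan a set of candidate traveltime curves and keep the best-fitting one.
rng = np.random.default_rng(0)
gather = rng.standard_normal((24, 1000))
candidates = [0.4 + 1e-4 * np.arange(24) * k for k in range(1, 6)]   # fake t(x) curves
best = max(candidates, key=lambda t: semblance(gather, t, dt=1e-3, half_win=10))
```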

  3. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the total variation algorithm has attracted considerable attention because of its ability to handle large piecewise and discontinuous conductivity distributions. In industrial process tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with less than 1% error, and a 4D image for 3D velocity profiling shows an error of 4%.
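
    The cross-correlation step that converts two time series from axially separated cross-sections into a transit time, and hence a velocity, can be sketched as follows. The frame rate, plane spacing and signals are assumed values for illustration, not parameters of the ERT system in the paper.

```python
# Velocity from the lag of the cross-correlation peak between an upstream and
# a downstream signal separated by a known axial distance.  Values are synthetic.
import numpy as np

fs = 100.0            # frames per second (assumed)
spacing = 0.05        # axial distance between the two sensing planes, m (assumed)
t = np.arange(0, 5, 1 / fs)
upstream = np.exp(-((t - 2.0) / 0.1) ** 2)        # a passing disturbance
downstream = np.exp(-((t - 2.3) / 0.1) ** 2)      # same disturbance 0.3 s later

lags = np.arange(-len(t) + 1, len(t))
xcorr = np.correlate(downstream - downstream.mean(),
                     upstream - upstream.mean(), mode='full')
delay = lags[np.argmax(xcorr)] / fs               # transit time, s
velocity = spacing / delay                        # m/s; ~0.167 m/s for this example
```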

  4. Cosmic string induced peculiar velocities

    International Nuclear Information System (INIS)

    van Dalen, A.; Schramm, D.N.

    1987-02-01

    We calculate analytically the probability distribution for peculiar velocities on scales from 10h⁻¹ to 60h⁻¹ Mpc with cosmic string loops as the dominant source of primordial gravitational perturbations. We consider a range of parameters βGμ appropriate for both hot (HDM) and cold (CDM) dark matter scenarios. An Ω = 1 CDM universe is assumed, with the loops randomly placed on a smooth background. It is shown how the effects of loops breaking up and being born with a spectrum of sizes can be estimated. It is found that to obtain large-scale streaming velocities of at least 400 km/s, either a large value of βGμ or a considerable effect from loop fissioning and production details is necessary. Specifically, for optimal CDM string parameters Gμ = 10⁻⁶, β = 9, h = 0.5, and scales of 60h⁻¹ Mpc, the parent size spectrum must be 36 times larger than the evolved daughter spectrum to achieve peculiar velocities of at least 400 km/s with a probability of 63%. With this scenario the microwave background dipole will be less than 800 km/s with only a 10% probability. The string-induced velocity spectrum is relatively flat out to scales of about 2t_eq/a_eq and then drops off rather quickly. The flatness is a signature of string models of galaxy formation. With HDM a larger value of βGμ is necessary for galaxy formation since accretion on small scales starts later. Hence, with HDM, the peculiar velocity spectrum will be larger on large scales and the flat region will extend to larger scales. If large-scale peculiar velocities greater than 400 km/s are real, then it is concluded that strings plus CDM have difficulties. The advantages of strings plus HDM in this regard will be explored in greater detail in a later paper. 27 refs., 4 figs., 1 tab

  5. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  6. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  7. Radial velocities of RR Lyrae stars

    International Nuclear Information System (INIS)

    Hawley, S.L.; Barnes, T.G. III

    1985-01-01

    283 spectra of 57 RR Lyrae stars have been obtained using the 2.1-m telescope at McDonald Observatory. Radial velocities were determined using a software cross-correlation technique. New mean radial velocities were determined for 46 of the stars. 11 references

  8. The Reliability of Individualized Load-Velocity Profiles.

    Science.gov (United States)

    Banyard, Harry G; Nosaka, K; Vernon, Alex D; Haff, G Gregory

    2017-11-15

    This study examined the reliability of peak velocity (PV), mean propulsive velocity (MPV), and mean velocity (MV) in the development of load-velocity profiles (LVP) in the full-depth free-weight back squat performed with maximal concentric effort. Eighteen resistance-trained men performed a baseline one-repetition maximum (1RM) back squat trial and three subsequent 1RM trials used for reliability analyses, with a 48-hour interval between trials. 1RM trials comprised lifts from six relative loads including 20, 40, 60, 80, 90, and 100% 1RM. Individualized LVPs for PV, MPV, or MV were derived from loads that were highly reliable based on the following criteria: intra-class correlation coefficient (ICC) >0.70, coefficient of variation (CV) ≤10%, and Cohen's d effect size (ES). No significant differences (P>0.05) were found between trials, movement velocities, or between linear regression and second-order polynomial fits. PV20-100%, MPV20-90%, and MV20-90% are reliable and can be utilized to develop LVPs using linear regression. Conceptually, LVPs can be used to monitor changes in movement velocity and employed as a method for adjusting sessional training loads according to daily readiness.
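
    A hedged sketch of an individualized linear load-velocity profile of the kind discussed above: mean velocity is regressed on relative load, and the fit is inverted to prescribe a load for a target velocity. The velocity values are invented, not data from the study.

```python
# Linear load-velocity profile (LVP) on synthetic data, then inverted to
# prescribe a relative load for a target mean velocity.
import numpy as np

loads = np.array([20, 40, 60, 80, 90, 100])                       # % 1RM
mean_velocity = np.array([1.30, 1.05, 0.82, 0.58, 0.45, 0.32])    # m/s (synthetic)

slope, intercept = np.polyfit(loads, mean_velocity, 1)            # first-order (linear) LVP

def load_for_velocity(v_target):
    """Invert the LVP to get the %1RM expected to produce v_target."""
    return (v_target - intercept) / slope

print(f"v = {slope:.4f} * load + {intercept:.3f}")
print(f"Prescribe ~{load_for_velocity(0.70):.0f}% 1RM for a 0.70 m/s target")
```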

  9. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    ... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...

  10. Global catalog of earthquake rupture velocities shows anticorrelation between stress drop and rupture velocity

    Science.gov (United States)

    Chounet, Agnès; Vallée, Martin; Causse, Mathieu; Courboulex, Françoise

    2018-05-01

    Application of the SCARDEC method provides the apparent source time functions together with seismic moment, depth, and focal mechanism for most of the recent earthquakes with magnitude larger than 5.6-6. Using this large dataset, we have developed a method to systematically invert for the rupture direction and average rupture velocity Vr, when unilateral rupture propagation dominates. The approach is applied to all the shallow earthquakes of the catalog over the 1992-2015 time period. After a careful validation process, rupture properties for a catalog of 96 earthquakes are obtained. The subsequent analysis of this catalog provides several insights about the seismic rupture process. We first report that up-dip ruptures are more abundant than down-dip ruptures for shallow subduction interface earthquakes, which can be understood as a consequence of the material contrast between the slab and the overriding crust. Rupture velocities, which are searched without any a priori up to the maximal P wave velocity (6000-8000 m/s), are found between 1200 m/s and 4500 m/s. This observation indicates that no earthquakes propagate over long distances with rupture velocity approaching the P wave velocity. Among the 23 ruptures faster than 3100 m/s, we observe both documented supershear ruptures (e.g. the 2001 Kunlun earthquake) and undocumented ruptures that very likely include a supershear phase. We also find that the correlation of Vr with the source duration scaled to the seismic moment (Ts) is very weak. This directly implies that both Ts and Vr are anticorrelated with the stress drop Δσ. This result has implications for the assessment of the peak ground acceleration (PGA) variability. As shown by Causse and Song (2015), an anticorrelation between Δσ and Vr significantly reduces the predicted PGA variability, and brings it closer to the observed variability.

  11. Cerenkov detector for heavy-ion velocity measurements

    International Nuclear Information System (INIS)

    Olson, D.L.; Baumgartner, M.; Dufour, J.P.; Girard, J.G.; Greiner, D.E.; Lindstrom, P.J.; Symons, T.J.M.; Crawford, H.J.

    1984-08-01

    We have developed a highly sensitive velocity measuring detector using total-internal-reflection Cerenkov counters of a type mentioned by Jelley in 1958. If the velocity of the particle is above the threshold for total internal reflection, these counters have a charge resolution of sigma = 0.18e for a 3 mm thick glass radiator. For the velocity measurement we use a fused silica radiator so that the velocities of the particles are near the threshold for total internal reflection. For momentum-analyzed projectile fragments of 1.6 GeV/nucleon ⁴⁰Ar, we have measured a mass resolution of sigma = 0.1u for isotope identification

  12. Conduction velocity of antigravity muscle action potentials.

    Science.gov (United States)

    Christova, L; Kosarov, D; Christova, P

    1992-01-01

    The conduction velocity of the impulses along the muscle fibers is one of the parameters of the extraterritorial potentials of the motor units allowing for the evaluation of the functional state of the muscles. There are no data on the conduction velocities of antigravity muscle action potentials. In this paper we offer a method for measuring the conduction velocity of potentials of single MUs and the averaged potentials of the interference electromyogram (IEMG) led off by surface electrodes from mm. sternocleidomastoideus, trapezius, deltoideus (caput laterale) and vastus medialis. The measured mean values of the conduction velocity of antigravity muscle potentials can be used for testing the functional state of the muscles.

  13. Double path-integral migration velocity analysis: a real data example

    International Nuclear Information System (INIS)

    Costa, Jessé C; Schleicher, Jörg

    2011-01-01

    Path-integral imaging forms an image with no knowledge of the velocity model by summing over the migrated images obtained for a set of migration velocity models. Double path-integral imaging migration extracts the stationary velocities, i.e. those velocities at which common-image gathers align horizontally, as a byproduct. An application of the technique to a real data set demonstrates that quantitative information about the time migration velocity model can be determined by double path-integral migration velocity analysis. Migrated images using interpolations with different regularizations of the extracted velocities prove the high quality of the resulting time-migration velocity information. The so-obtained velocity model can then be used as a starting model for subsequent velocity analysis tools like migration tomography or other tomographic methods

  14. Determination of Critical Conditions for Puncturing Almonds Using Coupled Response Surface Methodology and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mahmood Mahmoodi-Eshkaftaki

    2013-01-01

    In this study, the effect of seed moisture content, probe diameter and loading velocity (puncture conditions) on some mechanical properties of almond kernel and peeled almond kernel is considered to model a relationship between the puncture conditions and rupture energy. Furthermore, the distribution of the mechanical properties is determined. The main objective is to determine the critical values of mechanical properties significant for peeling machines. The response surface methodology was used to find the relationship between the input parameters and the output responses, and the fitness function was applied to measure the optimal values using the genetic algorithm. The two-parameter Weibull function was used to describe the distribution of mechanical properties. Based on the Weibull parameter values, i.e. the shape parameter (β) and scale parameter (η) calculated for each property, the mechanical distribution variations were completely described and it was confirmed that the mechanical properties are rule governed, which makes the Weibull function suitable for estimating their distributions. The energy model estimated using response surface methodology shows that the mechanical properties relate exponentially to the moisture, and polynomially to the loading velocity and probe diameter, which enabled successful estimation of the rupture energy (R²=0.94). The genetic algorithm calculated the critical values of seed moisture, probe diameter, and loading velocity to be 18.11% on dry mass basis, 0.79 mm, and 0.15 mm/min, respectively, and an optimum rupture energy of 1.97·10⁻³ J. These conditions were used for comparison with new samples, where the rupture energy was experimentally measured to be 2.68·10⁻³ and 2.21·10⁻³ J for kernel and peeled kernel, respectively, which was nearly in agreement with our model results.
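
    As a small illustration of the distribution-fitting step mentioned above, the sketch below fits the two-parameter Weibull distribution (shape β, scale η) to a synthetic sample of rupture energies using SciPy; it is not the authors' procedure or data.

```python
# Fit a two-parameter Weibull distribution to a synthetic rupture-energy sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rupture_energy = rng.weibull(2.5, size=200) * 2.0e-3     # J, synthetic sample

# floc=0 pins the location parameter so only shape and scale are estimated.
beta, _, eta = stats.weibull_min.fit(rupture_energy, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.2e} J")
```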

  15. Migration velocity analysis using pre-stack wave fields

    KAUST Repository

    Alkhalifah, Tariq Ali; Wu, Zedong

    2016-01-01

    Using both image and data domains to perform velocity inversion can help us resolve the long and short wavelength components of the velocity model, usually in that order. This translates to integrating migration velocity analysis into full waveform

  16. Phase velocity enhancement of linear explosive shock tubes

    Science.gov (United States)

    Loiseau, Jason; Serge, Matthew; Szirti, Daniel; Higgins, Andrew; Tanguay, Vincent

    2011-06-01

    Strong, high density shocks can be generated by sequentially detonating a hollow cylinder of explosives surrounding a thin-walled, pressurized tube. Implosion of the tube results in a pinch that travels at the detonation velocity of the explosive and acts like a piston to drive a shock into the gas ahead of it. In order to increase the maximum shock velocities that can be obtained, a phase velocity generator can be used to drag an oblique detonation wave along the gas tube at a velocity much higher than the base detonation velocity of the explosive. Since yielding and failure of the gas tube is the primary limitation of these devices, it is desirable to retain the dynamic confinement effects of a heavy-walled tamper without interfering with operation of the phase velocity generator. This was accomplished by cutting a slit into the tamper and introducing a phased detonation wave such that it asymmetrically wraps around the gas tube. This type of configuration has been previously experimentally verified to produce very strong shocks but the post-shock pressure and shock velocity limits have not been investigated. This study measured the shock trajectory for various fill pressures and phase velocities to ascertain the limiting effects of tube yield, detonation obliquity and pinch aspect ratio.

  17. Critical Landau Velocity in Helium Nanodroplets

    NARCIS (Netherlands)

    Brauer, N.B.; Smolarek, S.; Loginov, E.; Mateo, D.; Hernando, A.; Pi, M.; Barranco, M.; Buma, W.J.; Drabbels, M.

    2013-01-01

    The best-known property of superfluid helium is the vanishing viscosity that objects experience while moving through the liquid with speeds below the so-called critical Landau velocity. This critical velocity is generally considered a macroscopic property as it is related to the collective

  18. A new method for measurement of granular velocities

    International Nuclear Information System (INIS)

    Nyborg Andersen, B.

    1984-01-01

    A new, supplementary method to measure granular velocities is presented. The method utilizes the Doppler shift caused by the line-of-sight component of the solar rotation, which produces a wavelength shift through the spectral lines as a function of heliocentric angle. By measuring the center-to-limb variation of the granular intensity fluctuations at different wavelength positions in the lines, the velocities are found. To do this, assumptions regarding the geometrical structure of the velocity and intensity fields have to be made. Preliminary application of the method results in a steep velocity gradient, suggesting zero velocity at a height of 200 km above τ₅₀₀ = 1. Possible causes are discussed

  19. A new algorithm for three-dimensional joint inversion of body wave and surface wave data and its application to the Southern California plate boundary region

    Science.gov (United States)

    Fang, Hongjian; Zhang, Haijiang; Yao, Huajian; Allam, Amir; Zigone, Dimitri; Ben-Zion, Yehuda; Thurber, Clifford; van der Hilst, Robert D.

    2016-05-01

    We introduce a new algorithm for joint inversion of body wave and surface wave data to get better 3-D P wave (Vp) and S wave (Vs) velocity models by taking advantage of the complementary strengths of each data set. Our joint inversion algorithm uses a one-step inversion of surface wave traveltime measurements at different periods for 3-D Vs and Vp models without constructing the intermediate phase or group velocity maps. This allows a more straightforward modeling of surface wave traveltime data with the body wave arrival times. We take into consideration the sensitivity of surface wave data with respect to Vp in addition to its large sensitivity to Vs, which means both models are constrained by two different data types. The method is applied to determine 3-D crustal Vp and Vs models using body wave and Rayleigh wave data in the Southern California plate boundary region, which has previously been studied with both double-difference tomography method using body wave arrival times and ambient noise tomography method with Rayleigh and Love wave group velocity dispersion measurements. Our approach creates self-consistent and unique models with no prominent gaps, with Rayleigh wave data resolving shallow and large-scale features and body wave data constraining relatively deeper structures where their ray coverage is good. The velocity model from the joint inversion is consistent with local geological structures and produces better fits to observed seismic waveforms than the current Southern California Earthquake Center (SCEC) model.
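
    The joint-inversion idea of combining two data sets that constrain the same model can be illustrated, very schematically, by stacking weighted sensitivity matrices into one damped least-squares system. The matrices below are random stand-ins, not traveltime kernels from the study.

```python
# Toy joint inversion: stack body-wave and surface-wave kernels with relative
# weights plus a damping block, then solve one least-squares system.
import numpy as np

rng = np.random.default_rng(2)
n_model = 50                                     # combined Vp/Vs model parameters
G_body = rng.standard_normal((120, n_model))     # body-wave kernel (stand-in)
G_surf = rng.standard_normal((80, n_model))      # surface-wave kernel (stand-in)
m_true = rng.standard_normal(n_model)
d_body = G_body @ m_true
d_surf = G_surf @ m_true

w_body, w_surf, damping = 1.0, 2.0, 0.1          # relative data weights, regularization
G = np.vstack([w_body * G_body, w_surf * G_surf, damping * np.eye(n_model)])
d = np.concatenate([w_body * d_body, w_surf * d_surf, np.zeros(n_model)])
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.max(np.abs(m_est - m_true)))            # small residual for this noise-free toy
```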

  20. Sensitivity of ground motion parameters to local site effects for areas characterised by a thick buried low-velocity layer.

    Science.gov (United States)

    Farrugia, Daniela; Galea, Pauline; D'Amico, Sebastiano; Paolucci, Enrico

    2016-04-01

    It is well known that earthquake damage at a particular site depends on the source, the path that the waves travel through and the local geology. The latter is capable of amplifying and changing the frequency content of the incoming seismic waves. In regions of sparse or no strong ground motion records, like Malta (Central Mediterranean), ground motion simulations are used to obtain parameters for purposes of seismic design and analysis. As an input to ground motion simulations, amplification functions related to the shallow subsurface are required. Shear-wave velocity profiles of several sites on the Maltese islands were obtained using the Horizontal-to-Vertical Spectral Ratio (H/V), the Extended Spatial Auto-Correlation (ESAC) technique and the Genetic Algorithm. The sites chosen were all characterised by a layer of Blue Clay, which can be up to 75 m thick, underlying the Upper Coralline Limestone, a fossiliferous coarse grained limestone. This situation gives rise to a velocity inversion. Available borehole data generally extend down to the top of the Blue Clay layer; therefore, the only way to check the validity of the modelled shear-wave velocity profile is through the thickness of the topmost layer. Surface wave methods are characterised by uncertainties related to the measurements and the model used for interpretation. Moreover, the inversion procedure is highly non-unique. Such uncertainties are not commonly included in site response analysis. Yet, the propagation of uncertainties from the extracted dispersion curves to inversion solutions can lead to significant differences in the simulations (Boaga et al., 2011). In this study, a series of sensitivity analyses will be presented with the aim of better identifying those stratigraphic properties which can perturb the ground motion simulation results. The stochastic one-dimensional site response analysis algorithm, Extended Source Simulation (EXSIM; Motazedian and Atkinson, 2005), was used to perform

  1. Performance of a vector velocity estimator

    DEFF Research Database (Denmark)

    Munk, Peter; Jensen, Jørgen Arendt

    1998-01-01

    tracking can be found in the literature, but no method with a satisfactory performance has been found that can be used in a commercial implementation. A method for estimation of the velocity vector is presented. Here an oscillation transverse to the ultrasound beam is generated, so that a transverse motion...... in an autocorrelation approach that yields both the axial and the lateral velocity, and thus the velocity vector. The method has the advantage that a standard array transducer and a modified digital beamformer, like those used in modern ultrasound scanners, is sufficient to obtain the information needed. The signal...

  2. Artificial Intelligence Estimation of Carotid-Femoral Pulse Wave Velocity using Carotid Waveform.

    Science.gov (United States)

    Tavallali, Peyman; Razavi, Marianne; Pahlevan, Niema M

    2018-01-17

    In this article, we offer an artificial intelligence method to estimate the carotid-femoral Pulse Wave Velocity (PWV) non-invasively from one uncalibrated carotid waveform measured by tonometry and few routine clinical variables. Since the signal processing inputs to this machine learning algorithm are sensor agnostic, the presented method can accompany any medical instrument that provides a calibrated or uncalibrated carotid pressure waveform. Our results show that, for an unseen hold back test set population in the age range of 20 to 69, our model can estimate PWV with a Root-Mean-Square Error (RMSE) of 1.12 m/sec compared to the reference method. The results convey the fact that this model is a reliable surrogate of PWV. Our study also showed that estimated PWV was significantly associated with an increased risk of CVDs.
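
    A schematic stand-in for the regression idea described above (the paper's actual model is a more elaborate machine-learning pipeline): a few waveform-derived features and routine clinical variables are regressed against PWV with closed-form ridge regression on synthetic data.

```python
# Ridge regression of PWV on waveform features plus clinical variables.
# Features, coefficients and data are invented for illustration.
import numpy as np

def waveform_features(wave, fs):
    """Simple uncalibrated-waveform features: time to peak and late-cycle mean."""
    wave = (wave - wave.min()) / (wave.max() - wave.min())
    t_peak = np.argmax(wave) / fs
    late_mean = wave[len(wave) // 2:].mean()
    return np.array([t_peak, late_mean])

rng = np.random.default_rng(3)
fs, n = 200.0, 100
X = np.column_stack([
    np.vstack([waveform_features(np.sin(np.linspace(0, np.pi, 200)) ** (1 + rng.random()), fs)
               for _ in range(n)]),
    rng.uniform(20, 69, n),            # age (routine clinical variable)
    rng.uniform(90, 160, n),           # systolic blood pressure
])
y = 4 + 0.08 * X[:, 2] + 0.01 * X[:, 3] + rng.normal(0, 0.5, n)   # synthetic PWV, m/s

# Closed-form ridge: w = (X'X + lambda I)^-1 X'y, with a bias column.
Xb = np.column_stack([np.ones(n), X])
lam = 1.0
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
rmse = np.sqrt(np.mean((Xb @ w - y) ** 2))
print(f"training RMSE ≈ {rmse:.2f} m/s on synthetic data")
```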

  3. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations.

    Science.gov (United States)

    Bylaska, Eric J; Weare, Jonathan Q; Weare, John H

    2013-08-21

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,…,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a
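
    Because this record bears directly on the Verlet propagator, a serial sketch of the root-finding formulation may help: the whole trajectory is treated as the root of F(X) = [x_i − f(x_{i−1})], with f one velocity Verlet step, here for a 1-D harmonic oscillator and solved with SciPy's Newton-Krylov method. In the paper the columns of F are evaluated by separate processors, which this serial sketch does not attempt.

```python
# The trajectory X = (x_1, ..., x_M) as the root of F(X) = [x_i - f(x_{i-1})],
# where f is one velocity Verlet step for a harmonic oscillator.
import numpy as np
from scipy.optimize import newton_krylov

omega, dt, M = 1.0, 0.1, 50          # oscillator frequency, time step, number of steps
x0 = np.array([1.0, 0.0])            # initial (position, velocity)

def force(r):
    return -omega ** 2 * r

def verlet_step(x):
    """One velocity Verlet step for state x = (r, v)."""
    r, v = x
    a = force(r)
    r_new = r + v * dt + 0.5 * a * dt ** 2
    a_new = force(r_new)
    v_new = v + 0.5 * (a + a_new) * dt
    return np.array([r_new, v_new])

def residual(X):
    """F(X)_i = x_i - f(x_{i-1}), with x_0 fixed to the initial condition."""
    X = X.reshape(M, 2)
    prev = np.vstack([x0, X[:-1]])
    return (X - np.array([verlet_step(x) for x in prev])).ravel()

X_guess = np.tile(x0, (M, 1)).ravel()                  # cheap initial guess
X_sol = newton_krylov(residual, X_guess, f_tol=1e-9).reshape(M, 2)

# The root reproduces the sequential Verlet trajectory.
x_seq = x0.copy()
for _ in range(M):
    x_seq = verlet_step(x_seq)
print(np.allclose(X_sol[-1], x_seq, atol=1e-6))
```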

  4. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    International Nuclear Information System (INIS)

    Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.

    2013-01-01

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,…,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a

  5. Burning velocity measurements of nitrogen-containing compounds.

    Science.gov (United States)

    Takizawa, Kenji; Takahashi, Akifumi; Tokuhashi, Kazuaki; Kondo, Shigeo; Sekiya, Akira

    2008-06-30

    Burning velocity measurements of nitrogen-containing compounds, i.e., ammonia (NH3), methylamine (CH3NH2), ethylamine (C2H5NH2), and propylamine (C3H7NH2), were carried out to assess the flammability of potential natural refrigerants. The spherical-vessel (SV) method was used to measure the burning velocity over a wide range of sample and air concentrations. In addition, flame propagation was directly observed by the schlieren photography method, which showed that the spherical flame model was applicable to flames with a burning velocity higher than approximately 5 cm s⁻¹. For CH3NH2, the nozzle burner method was also used to confirm the validity of the results obtained by closed vessel methods. We obtained maximum burning velocities (Su0,max) of 7.2, 24.7, 26.9, and 28.3 cm s⁻¹ for NH3, CH3NH2, C2H5NH2, and C3H7NH2, respectively. It was noted that the burning velocities of NH3 and CH3NH2 were as high as those of the typical hydrofluorocarbon refrigerants difluoromethane (HFC-32, Su0,max=6.7 cm s⁻¹) and 1,1-difluoroethane (HFC-152a, Su0,max=23.6 cm s⁻¹), respectively. The burning velocities were compared with those of the parent alkanes, and it was found that introducing an NH2 group into hydrocarbon molecules decreases their burning velocity.

  6. Navigation Algorithm Using Fuzzy Control Method in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Cviklovič Vladimír

    2016-03-01

    Navigation methods are an area of continuous development worldwide. The aim of this article is to test a fuzzy control algorithm for track finding in mobile robotics. The concept of an autonomous mobile robot, EN20, has been designed to test its behaviour. The odometry navigation method was used. The benefits of fuzzy control are evident in the mobile robot’s behaviour; they are obtained when several physical variables are controlled at the same time on the basis of several input variables. In our case, there are two input variables - heading angle and distance - and two output variables - the angular velocities of the left and right wheels. The autonomous mobile robot moves with human-like logic.
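
    A minimal Sugeno-style sketch of the two-input/two-output fuzzy mapping described above: heading-angle error and distance are fuzzified with triangular membership functions, each rule contributes a crisp (forward speed, turn rate) pair, and wheel angular velocities follow from differential-drive kinematics. The rule base and robot geometry are illustrative, not those of the EN20 robot.

```python
# Tiny zero-order Sugeno fuzzy controller for a differential-drive robot.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_wheel_speeds(heading_err, distance, wheel_radius=0.05, axle=0.30):
    # Membership degrees for the two inputs.
    err_neg = tri(heading_err, -np.pi, -np.pi / 2, 0.0)
    err_zero = tri(heading_err, -np.pi / 2, 0.0, np.pi / 2)
    err_pos = tri(heading_err, 0.0, np.pi / 2, np.pi)
    d_near, d_far = tri(distance, -0.5, 0.0, 1.0), tri(distance, 0.0, 1.0, 2.0)

    # Rules: (firing strength, forward speed m/s, turn rate rad/s).
    rules = [
        (err_zero * d_far,  0.40,  0.0),   # aligned and far: go fast, no turn
        (err_zero * d_near, 0.10,  0.0),   # aligned and near: slow down
        (err_pos * d_far,   0.20, -1.0),   # positive error: turn one way
        (err_neg * d_far,   0.20,  1.0),   # negative error: turn the other way
        (err_pos * d_near,  0.05, -1.5),
        (err_neg * d_near,  0.05,  1.5),
    ]
    w = sum(r[0] for r in rules) + 1e-9
    v = sum(r[0] * r[1] for r in rules) / w            # weighted-average defuzzification
    omega = sum(r[0] * r[2] for r in rules) / w

    # Differential-drive kinematics -> left/right wheel angular velocities.
    w_left = (v - omega * axle / 2) / wheel_radius
    w_right = (v + omega * axle / 2) / wheel_radius
    return w_left, w_right

print(fuzzy_wheel_speeds(heading_err=0.3, distance=1.5))
```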

  7. Multiple joint muscle function with ageing: the force-velocity and power-velocity relationships in young and older men.

    Science.gov (United States)

    Allison, Sarah J; Brooke-Wavell, Katherine; Folland, Jonathan P

    2013-05-01

    Whilst extensive research has detailed the loss of muscle strength with ageing for isolated single-joint actions, there has been little attention to power production during more functionally relevant multiple-joint movements. The extent to which force or velocity is responsible for the loss in power with ageing is also equivocal. The aim of this study was to evaluate the contribution of force and velocity to the differences in power with age by comparing the force-velocity and power-velocity relationships in young and older men during a multiple-joint leg press movement. Twenty-one older men (66 ± 3 years) and twenty-three young men (24 ± 2 years) completed a series of isometric (maximum and explosive) and dynamic contractions on a leg press dynamometer instrumented to record force and displacement. The force-velocity relationship was lower for the older men, as reflected by their 19% lower maximum isometric strength. The decrement in force was greater and therefore the major explanation for the attenuation of power during a functionally relevant multiple-joint movement.

  8. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back-propagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we used GA-BPN for image noise filtering. Firstly, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  9. Fuzzy algorithms to generate level controllers for nuclear power plant steam generators

    International Nuclear Information System (INIS)

    Moon, Byung Soo; Park, Jae Chang; Kim, Dong Hwa; Kim, Byung Koo

    1993-01-01

    In this paper, we present two sets of fuzzy algorithms for steam generator level control: one for high-power operations, where the flow error is available, and the other for low-power operations, where the flow error is not available. These are converted to a PID-type controller for the high-power case and to a quadratic-function controller for the low-power case. These controllers are implemented on the Compact Nuclear Simulator at the Korea Atomic Energy Research Institute and tested by a set of four simulation experiments for each. For both cases, the results show that the total variation of the level error and of the flow error is about 50% of that obtained with the PI controllers, with about one half of the control action. For the high-power case, this is mainly because a combination of two PD-type controllers in the velocity algorithm form is used rather than a combination of two PI-type controllers in the position algorithm form. For the low-power case, the controller is essentially a PID type with a very small integral component, where the average values of the derivative component input and of the controller output are used. (Author)
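
    The velocity (incremental) algorithm form mentioned above outputs a change in the control signal at each step rather than its absolute value. A generic sketch follows, with illustrative gains and a toy first-order level response, not the simulator's steam generator model.

```python
# Generic PID controller in velocity (incremental) form plus a toy closed loop.
class VelocityPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.e1 = 0.0   # e[k-1]
        self.e2 = 0.0   # e[k-2]

    def update(self, error):
        """Return the increment du to add to the previous control output."""
        du = (self.kp * (error - self.e1)
              + self.ki * self.dt * error
              + self.kd * (error - 2 * self.e1 + self.e2) / self.dt)
        self.e2, self.e1 = self.e1, error
        return du

pid = VelocityPID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
level, setpoint, u = 0.0, 1.0, 0.0
for _ in range(50):
    u += pid.update(setpoint - level)     # integrate the increments
    level += 0.1 * (u - level)            # illustrative first-order "level" dynamics
print(round(level, 3))                    # approaches the setpoint
```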

  10. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  11. Comparison of high group velocity accelerating structures

    International Nuclear Information System (INIS)

    Farkas, Z.D.; Wilson, P.B.

    1987-02-01

    It is well known that waveguides with no perturbations have phase velocities greater than the velocity of light c. If the waveguide dimensions are chosen so that the phase velocity is only moderately greater than c, only small perturbations are required to reduce the phase velocity to be synchronous with a high energy particle bunch. Such a lightly loaded accelerator structure will have smaller longitudinal and transverse wake potentials and hence will lead to lower emittance growth in an accelerated beam. Since these structures are lightly loaded, their group velocities are only slightly less than c and not in the order of 0.01c, as is the case for the standard disk-loaded structures. To ascertain that the peak and average power requirements for these structures are not prohibitive, we examine the elastance and the Q for several traveling wave structures: phase slip structures, bellows-like structures, and lightly loaded disk-loaded structures

  12. A glance at velocity structure of Izmir

    Energy Technology Data Exchange (ETDEWEB)

    Özer, Çağlar, E-mail: caglar.ozer@deu.edu.tr [Dokuz Eylul University, Faculty of Engineering, Geophysical Engineering Department, Izmir (Turkey); Dokuz Eylul University, The Graduate School of Natural and Applied Sciences, Department of Geophysical Engineering, Izmir (Turkey); Polat, Orhan, E-mail: orhan.polat@deu.edu.tr [Dokuz Eylul University, Faculty of Engineering, Geophysical Engineering Department, Izmir (Turkey)

    2016-04-18

    In this study, we investigated the velocity structure of Izmir and its surroundings. We used local earthquake data recorded by different types of instruments and obtained high-resolution 3D sections. We selected more than 400 earthquakes that occurred between 2010 and 2013. The examined tomographic sections, especially along the coastal areas of Izmir (Mavisehir-Inciraltı), revealed a low-velocity zone; along this zone, the results are consistent with the stratigraphic section and the surface geology. While low-velocity zones are associated with faults and water content, high velocities are related to magmatic or compact rocks. Low P velocities were observed along the Karsıyaka, Seferihisar, Orhanlı and Izmir fault zones. The higher elevations of the topography, which are composed of magmatic material, are dominated by high P velocities. In all horizontal sections, resolution decreases with increasing depth; the reason for this is that the number of earthquakes decreases, which causes ray-tracing problems.

  13. Coding of Velocity Storage in the Vestibular Nuclei

    Directory of Open Access Journals (Sweden)

    Sergei B. Yakushin

    2017-08-01

    Semicircular canal afferents sense angular acceleration and output angular velocity with a short time constant of ≈4.5 s. This output is prolonged by a central integrative network, velocity storage, that lengthens the time constants of eye velocity. This mechanism utilizes canal, otolith, and visual (optokinetic) information to align the axis of eye velocity toward the spatial vertical when head orientation is off-vertical axis. Previous studies indicated that vestibular-only (VO) and vestibular-pause-saccade (VPS) neurons located in the medial and superior vestibular nucleus could code all aspects of velocity storage. A recently developed technique enabled prolonged recording while animals were rotated and received optokinetic stimulation about a spatial vertical axis while upright, side-down, prone, and supine. Firing rates of 33 VO and 8 VPS neurons were studied in alert cynomolgus monkeys. The majority of VO neurons were closely correlated with the horizontal component of velocity storage in head coordinates, regardless of head orientation in space. Approximately half of all tested neurons (46%) code the horizontal component of velocity in head coordinates, while the other half (54%) changed their firing rates as the head was oriented relative to the spatial vertical, coding the horizontal component of eye velocity in spatial coordinates. Some VO neurons only coded the cross-coupled pitch or roll components that move the axis of eye rotation toward the spatial vertical. Sixty-five percent of these VO and VPS neurons were more sensitive to rotation in one direction (predominantly contralateral), providing directional orientation for the subset of VO neurons on either side of the brainstem. This indicates that the three-dimensional velocity storage integrator is composed of directional subsets of neurons that are likely to be the bases for the spatial characteristics of velocity storage. Most VPS neurons ceased firing during drowsiness, but the firing

  14. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  15. A modified CoRoT detrend algorithm and the discovery of a new planetary companion

    Science.gov (United States)

    Boufleur, Rodrigo C.; Emilio, Marcelo; Janot-Pacheco, Eduardo; Andrade, Laerte; Ferraz-Mello, Sylvio; do Nascimento, José-Dias, Jr.; de La Reza, Ramiro

    2018-01-01

    We present MCDA, a modification of the COnvection ROtation and planetary Transits (CoRoT) detrend algorithm (CDA) suitable to detrend chromatic light curves. By means of robust statistics and better handling of short-term variability, the implementation decreases the systematic light-curve variations and improves the detection of exoplanets when compared with the original algorithm. All CoRoT chromatic light curves (a total of 65 655) were analysed with our algorithm. Dozens of new transit candidates and all previously known CoRoT exoplanets were rediscovered in those light curves using a box-fitting algorithm. For three of the new cases, spectroscopic measurements of the candidates' host stars were retrieved from the ESO Science Archive Facility and used to calculate stellar parameters and, in the best cases, radial velocities. In addition to our improved detrend technique, we announce the discovery of a planet that orbits a 0.79_{-0.09}^{+0.08} R⊙ star with a period of 6.718 37 ± 0.000 01 d and has a radius of 0.57_{-0.05}^{+0.06} R_J and a mass of 0.15 ± 0.10 M_J. We also present the analysis of two cases in which the parameters found suggest the existence of possible planetary companions.

  16. Superluminal Velocities in the Synchronized Space-Time

    Directory of Open Access Journals (Sweden)

    Medvedev S. Yu.

    2014-07-01

    Within the framework of the non-gravitational generalization of the special relativity, the problem of possible superluminal motion of particles and signals is considered. It has been proven that for particles with non-zero mass, the existence of an anisotropic light barrier, with a shape dependent on the reference frame velocity, results from the Tangherlini transformations. The maximal possible excess of the neutrino velocity over the absolute velocity of light related to the Earth (using the clock with instantaneous synchronization) has been estimated. The illusoriness of the acausality problem has been illustrated and a conclusion is made on the lack of an upper limit on the velocities of signals of informational nature.

  17. Imaging chemical reactions - 3D velocity mapping

    Science.gov (United States)

    Chichinin, A. I.; Gericke, K.-H.; Kauczok, S.; Maul, C.

    Visualising a collision between an atom or a molecule, or a photodissociation (half-collision) of a molecule, on a single-particle and single-quantum level is like watching the collision of billiard balls on a pool table: molecular beams or monoenergetic photodissociation products provide the colliding reactants at controlled velocity before the reaction products' velocity is imaged directly with an elaborate camera system, where one should keep in mind that velocity is, in general, a three-dimensional (3D) vectorial property which combines scattering angles and speed. If the processes under study have no cylindrical symmetry, then only this 3D product velocity vector contains the full information of the elementary process under study.

  18. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  19. Estimation of blood velocities using ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    imaging, and, finally, some of the more recent experimental techniques. The author shows that the Doppler shift, usually considered the way velocity is detected, actually plays a minor role in pulsed systems. Rather, it is the shift of position of signals between pulses that is used in velocity

  20. PSpectRe: a pseudo-spectral code for (P)reheating

    International Nuclear Information System (INIS)

    Easther, Richard; Finkel, Hal; Roth, Nathaniel

    2010-01-01

    PSpectRe is a C++ program that uses Fourier-space pseudo-spectral methods to evolve interacting scalar fields in an expanding universe. PSpectRe is optimized for the analysis of parametric resonance in the post-inflationary universe and provides an alternative to finite differencing codes, such as Defrost and LatticeEasy. PSpectRe has both second- (Velocity-Verlet) and fourth-order (Runge-Kutta) time integrators. Given the same number of spatial points and/or momentum modes, PSpectRe is not significantly slower than finite differencing codes, despite the need for multiple Fourier transforms at each timestep, and exhibits excellent energy conservation. Further, by computing the post-resonance equation of state, we show that in some circumstances PSpectRe obtains reliable results while using substantially fewer points than a finite differencing code. PSpectRe is designed to be easily extended to other problems in early-universe cosmology, including the generation of gravitational waves during phase transitions and pre-inflationary bubble collisions. Specific applications of this code will be described in future work
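
    A minimal sketch of the second-order (velocity Verlet) update offered by integrators like the one in PSpectRe, applied to a single Fourier mode of a free scalar field; cosmic expansion and field interactions are omitted, so this only illustrates the time integrator, not the code itself.

```python
# Velocity Verlet for one Fourier mode of a free scalar field,
# phi_k'' = -(k^2 + m^2) phi_k.
import numpy as np

k2, m2, dt, steps = 4.0, 1.0, 0.01, 1000
omega2 = k2 + m2

phi, phi_dot = 1.0, 0.0                     # mode amplitude and its time derivative
energy0 = 0.5 * phi_dot ** 2 + 0.5 * omega2 * phi ** 2

for _ in range(steps):
    acc = -omega2 * phi                     # "force" on this mode
    phi += phi_dot * dt + 0.5 * acc * dt ** 2
    acc_new = -omega2 * phi
    phi_dot += 0.5 * (acc + acc_new) * dt

energy = 0.5 * phi_dot ** 2 + 0.5 * omega2 * phi ** 2
print(abs(energy - energy0) / energy0)      # small: velocity Verlet conserves energy well
```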

  1. Temperature and center-limb variations of transition region velocities

    International Nuclear Information System (INIS)

    Athay, R.G.; Dere, K.P.

    1989-01-01

    HRTS data from the Spacelab 2 mission are used to derive the center-limb and temperature variations of the mean velocity and the velocity variance in the solar chromosphere and transition zone. The mean velocity is found to vary much more rapidly from center to limb and with temperature than does the velocity variance. Also, the mean velocity shows a characteristic signature at some magnetic neutral lines in accordance with the findings of Klimchuk (1987) from Solar Maximum Mission (SMM) data. The velocity variance does not show a characteristic signature at the neutral lines but shows an inverse correlation with intensity. The latter is interpreted as reduced velocity variance in strong field regions. The results are discussed in terms of downflow along lines of force in magnetic arcades. 23 refs

  2. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  3. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  4. The Velocity Distribution of Isolated Radio Pulsars

    Science.gov (United States)

    Arzoumanian, Z.; Chernoff, D. F.; Cordes, J. M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We infer the velocity distribution of radio pulsars based on large-scale 0.4 GHz pulsar surveys. We do so by modelling evolution of the locations, velocities, spins, and radio luminosities of pulsars; calculating pulsed flux according to a beaming model and random orientation angles of spin and beam; applying selection effects of pulsar surveys; and comparing model distributions of measurable pulsar properties with survey data using a likelihood function. The surveys analyzed have well-defined characteristics and cover approx. 95% of the sky. We maximize the likelihood in a 6-dimensional space of observables P, Ṗ, DM, |b|, μ, F (period, period derivative, dispersion measure, Galactic latitude, proper motion, and flux density). The models we test are described by 12 parameters that characterize a population's birth rate, luminosity, shutoff of radio emission, birth locations, and birth velocities. We infer that the radio beam luminosity (i) is comparable to the energy flux of relativistic particles in models for spin-driven magnetospheres, signifying that radio emission losses reach nearly 100% for the oldest pulsars; and (ii) scales approximately as Ė^(1/2), which, in magnetosphere models, is proportional to the voltage drop available for acceleration of particles. We find that a two-component velocity distribution with characteristic velocities of 90 km/s and 500 km/s is greatly preferred to any one-component distribution; this preference is largely immune to variations in other population parameters, such as the luminosity or distance scale, or the assumed spin-down law. We explore some consequences of the preferred birth velocity distribution: (1) roughly 50% of pulsars in the solar neighborhood will escape the Galaxy, while approx. 15% have velocities greater than 1000 km/s; (2) observational bias against high velocity pulsars is relatively unimportant for surveys that reach high Galactic |z| distances, but is severe for

  5. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergent speed and searching precision. Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  6. Neutron stars velocities and magnetic fields

    Science.gov (United States)

    Paret, Daryel Manreza; Martinez, A. Perez; Ayala, Alejandro.; Piccinelli, G.; Sanchez, A.

    2018-01-01

    We study a model that explains neutron star velocities due to the anisotropic emission of neutrinos. Strong magnetic fields present in neutron stars are the source of the anisotropy in the system. To compute the velocity of the neutron star we model its core as composed of strange quark matter and analyze the properties of a magnetized quark gas at finite temperature and density. Specifically we have obtained the electron polarization and the specific heat of magnetized fermions as functions of the temperature, chemical potential and magnetic field, which allows us to study the velocity of the neutron star as a function of these parameters.

  7. Examples of in-vivo blood vector velocity estimation

    DEFF Research Database (Denmark)

    Udesen, Jesper; Nielsen, Michael Bachmann; Nielsen, Kristian R.

    2007-01-01

    In this paper examples of in-vivo blood vector velocity images of the carotid artery are presented. The transverse oscillation (TO) method for blood vector velocity estimation has been used to estimate the vector velocities and the method is first evaluated in a circulating flow rig where...

  8. Evaluation of force-velocity and power-velocity relationship of arm muscles.

    Science.gov (United States)

    Sreckovic, Sreten; Cuk, Ivan; Djuric, Sasa; Nedeljkovic, Aleksandar; Mirkov, Dragan; Jaric, Slobodan

    2015-08-01

    A number of recent studies have revealed an approximately linear force-velocity (F-V) and, consequently, a parabolic power-velocity (P-V) relationship of multi-joint tasks. However, the measurement characteristics of their parameters have been neglected, particularly those regarding arm muscles, which could be a problem for using the linear F-V model in both research and routine testing. Therefore, the aims of the present study were to evaluate the strength, shape, reliability, and concurrent validity of the F-V relationship of arm muscles. Twelve healthy participants performed maximum bench press throws against loads ranging from 20 to 70 % of their maximum strength, and linear regression model was applied on the obtained range of F and V data. One-repetition maximum bench press and medicine ball throw tests were also conducted. The observed individual F-V relationships were exceptionally strong (r = 0.96-0.99; all P stronger relationships. The reliability of parameters obtained from the linear F-V regressions proved to be mainly high (ICC > 0.80), while their concurrent validity regarding directly measured F, P, and V ranged from high (for maximum F) to medium-to-low (for maximum P and V). The findings add to the evidence that the linear F-V and, consequently, parabolic P-V models could be used to study the mechanical properties of muscular systems, as well as to design a relatively simple, reliable, and ecologically valid routine test of the muscle ability of force, power, and velocity production.
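
    Since the analysis rests on fitting a straight line to force-velocity data, a minimal sketch of that step may help. The load-by-load force and velocity values below are invented for illustration; the linear fit then yields the force intercept F0, the velocity intercept V0 and the maximum power F0*V0/4 of the parabolic P-V curve.

        import numpy as np

        # Hypothetical mean force (N) and velocity (m/s) from bench-press throws
        # against increasing loads -- illustrative numbers, not the study's data.
        force    = np.array([420.0, 520.0, 610.0, 700.0, 780.0])
        velocity = np.array([1.60, 1.30, 1.05, 0.80, 0.55])

        # Linear F-V model: F = F0 - a*V, fitted by least squares.
        slope, intercept = np.polyfit(velocity, force, 1)   # slope < 0
        F0 = intercept                  # force-axis intercept (maximum force)
        V0 = -intercept / slope         # velocity-axis intercept (maximum velocity)
        P_max = F0 * V0 / 4.0           # apex of the parabolic P-V relation
        r = abs(np.corrcoef(velocity, force)[0, 1])   # strength of the linear F-V relation

        print(f"F0 = {F0:.0f} N, V0 = {V0:.2f} m/s, Pmax = {P_max:.0f} W, r = {r:.3f}")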

  9. The solidification velocity of nickel and titanium alloys

    Science.gov (United States)

    Altgilbers, Alex Sho

    2002-09-01

    The solidification velocity of several Ni-Ti, Ni-Sn, Ni-Si, Ti-Al and Ti-Ni alloys was measured as a function of undercooling. From these results, a model for alloy solidification was developed that can be used to predict the solidification velocity as a function of undercooling more accurately. During this investigation a phenomenon was observed in the solidification velocity that is a direct result of the addition of the various alloying elements to nickel and titanium. The additions of the alloying elements resulted in an additional solidification velocity plateau at intermediate undercoolings. Past work has shown that a solidification velocity plateau at high undercoolings can be attributed to residual oxygen. It is shown that a logistic growth model is a more accurate model for predicting the solidification of alloys. Additionally, a numerical model is developed from a simple description of the effect of solute on the solidification velocity, which utilizes a Boltzmann logistic function to predict the plateaus that occur at intermediate undercoolings.
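
    A Boltzmann logistic (sigmoid) function of undercooling is the key ingredient of the model described above, so a hedged sketch of fitting one plateau transition is shown below; the functional form chosen here and the velocity-undercooling numbers are illustrative assumptions, not the study's model or measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def boltzmann(dT, v_lo, v_hi, dT0, w):
            """Boltzmann logistic step between a low and a high velocity plateau."""
            return v_lo + (v_hi - v_lo) / (1.0 + np.exp((dT0 - dT) / w))

        # Illustrative undercooling (K) and growth velocity (m/s) data -- not measurements.
        dT = np.array([20, 40, 60, 80, 100, 120, 140, 160, 180, 200], float)
        v  = np.array([0.5, 0.8, 1.5, 3.5, 8.0, 14.0, 18.0, 19.5, 19.8, 20.0])

        popt, _ = curve_fit(boltzmann, dT, v, p0=[0.5, 20.0, 100.0, 15.0])
        print("fitted plateaus: {:.1f} and {:.1f} m/s, midpoint {:.0f} K".format(popt[0], popt[1], popt[2]))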

  10. A new car-following model considering velocity anticipation

    International Nuclear Information System (INIS)

    Jun-Fang, Tian; Bin, Jia; Xin-Gang, Li; Zi-You, Gao

    2010-01-01

    The full velocity difference model proposed by Jiang et al. [2001 Phys. Rev. E 64 017101] has been improved by introducing velocity anticipation. Velocity anticipation means the follower estimates the future velocity of the leader. The stability condition of the new model is obtained by using the linear stability theory. Theoretical results show that the stability region increases when we increase the anticipation time interval. The mKdV equation is derived to describe the kink–antikink soliton wave and obtain the coexisting stability line. The delay time of car motion and kinematic wave speed at jam density are obtained in this model. Numerical simulations exhibit that when we increase the anticipation time interval enough, the new model could avoid accidents under urgent braking cases. Also, the traffic jam could be suppressed by considering the anticipation velocity. All results demonstrate that this model is an improvement on the full velocity difference model. (general)
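
    A rough sketch of a full-velocity-difference-type car-following model with velocity anticipation is given below; the optimal-velocity function, the parameter values and the specific anticipation form (leader velocity extrapolated by its current acceleration over an interval tau) are assumptions for illustration, not the calibrated model of the paper.

        import numpy as np

        def V_opt(dx):
            """Illustrative optimal-velocity function (m/s) of the headway dx (m)."""
            return 15.0 * (np.tanh(0.1 * (dx - 25.0)) + np.tanh(2.5))

        def step(x, v, L, kappa=0.4, lam=0.5, tau=1.0, dt=0.1):
            """One explicit Euler step for n cars on a ring of length L.

            acceleration = kappa*[V_opt(headway) - v] + lam*(anticipated leader velocity - v),
            where the leader's future velocity is estimated as v_leader + tau*a_leader.
            Parameter values are illustrative, not those calibrated in the paper.
            """
            dx = (np.roll(x, -1) - x) % L                                # headway to the car ahead
            a0 = kappa * (V_opt(dx) - v) + lam * (np.roll(v, -1) - v)    # without anticipation
            v_lead_future = np.roll(v, -1) + tau * np.roll(a0, -1)       # anticipated leader speed
            a = kappa * (V_opt(dx) - v) + lam * (v_lead_future - v)
            return (x + v * dt) % L, v + a * dt

        L, n = 1000.0, 20
        x = np.linspace(0.0, L, n, endpoint=False)
        v = np.full(n, V_opt(L / n))
        v[0] -= 2.0                                  # small perturbation of one car
        for _ in range(2000):
            x, v = step(x, v, L)
        print(f"speed spread after 200 s: {v.max() - v.min():.2f} m/s")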

  11. On the origin of high-velocity runaway stars

    Science.gov (United States)

    Gvaramadze, Vasilii V.; Gualandris, Alessia; Portegies Zwart, Simon

    2009-06-01

    We explore the hypothesis that some high-velocity runaway stars attain their peculiar velocities in the course of exchange encounters between hard massive binaries and a very massive star (either an ordinary 50-100 Msolar star or a more massive one, formed through runaway mergers of ordinary stars in the core of a young massive star cluster). In this process, one of the binary components becomes gravitationally bound to the very massive star, while the second one is ejected, sometimes with a high speed. We performed three-body scattering experiments and found that early B-type stars (the progenitors of the majority of neutron stars) can be ejected with velocities of >~200-400 km s-1 (typical of pulsars), while 3-4 Msolar stars can attain velocities of >~300-400 km s-1 (typical of the bound population of halo late B-type stars). We also found that the ejected stars can occasionally attain velocities exceeding the Milky Way's escape velocity.

  12. A filtered backprojection algorithm with characteristics of the iterative landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
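
    For readers unfamiliar with the reference point, a minimal sketch of the iterative Landweber algorithm applied to a toy linear system is shown below; the matrix simply stands in for a tomographic projection operator, and the FBP window function developed in the note is not reproduced.

        import numpy as np

        def landweber(A, b, n_iter=200, relax=None):
            """Iterative Landweber reconstruction: x_{k+1} = x_k + relax * A.T @ (b - A @ x_k)."""
            if relax is None:
                relax = 1.0 / np.linalg.norm(A, 2) ** 2    # step size ensuring convergence
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + relax * A.T @ (b - A @ x)
            return x

        # Toy over-determined system standing in for a tomographic projection operator.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(40, 10))
        x_true = rng.normal(size=10)
        b = A @ x_true
        print("max error after 200 iterations:", np.abs(landweber(A, b) - x_true).max())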

  13. Study on velocity field in a wire wrapped fuel pin bundle of sodium cooled reactor. Detailed velocity distribution in a subchannel

    International Nuclear Information System (INIS)

    Sato, Hiroyuki; Kobayashi, Jun; Miyakoshi, Hiroyuki; Kamide, Hideki

    2009-01-01

    A sodium cooled fast reactor is designed to attain a high burn-up core in a feasibility study on commercialized fast reactor cycle systems. In high burn-up fuel subassemblies, deformation of fuel pin due to the swelling and thermal bowing may decrease local flow velocity via change of flow area in the subassembly and influence the heat removal capability. Therefore, it is of importance to obtain the flow velocity distribution in a wire wrapped pin bundle. A 2.5 times enlarged 7-pin bundle water model was applied to investigate the detailed velocity distribution in an inner subchannel surrounded by 3 pins with wrapping wire. The test section consisted of a hexagonal acrylic duct tube and fluorinated resin pins which had nearly the same refractive index with that of water and a high light transmission rate. The velocity distribution in an inner subchannel with the wrapping wire was measured by PIV (Particle Image Velocimetry) through the front and lateral sides of the duct tube. In the vertical velocity distribution in a narrow space between the pins, the wrapping wire decreased the velocity downstream of the wire and asymmetric flow distribution was formed between the pin and wire. In the horizontal velocity distribution, swirl flow around the wrapping wire was obviously observed. The measured velocity data are useful for code validation of pin bundle thermalhydraulics. (author)

  14. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    Science.gov (United States)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GNC software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault-detection and measurement down selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
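
    As a generic illustration of median-based fault disqualification and measurement down-selection among redundant rate sensors (not the actual SLS SDQ flight logic, whose thresholds and voting rules are not given here), consider the following sketch.

        import numpy as np

        def down_select(rates, healthy, threshold=0.02):
            """Disqualify redundant rate measurements that deviate from the consensus.

            rates    : angular-rate measurements (rad/s) from redundant sensors
            healthy  : boolean mask of sensors not already disqualified
            threshold: maximum allowed deviation from the median of healthy sensors

            Returns the updated health mask and the down-selected rate (median of the
            surviving sensors). This is a generic illustration, not the SLS SDQ logic.
            """
            consensus = np.median(rates[healthy])
            healthy = healthy & (np.abs(rates - consensus) < threshold)
            return healthy, np.median(rates[healthy])

        rates = np.array([0.101, 0.099, 0.100, 0.153])   # last sensor has drifted
        healthy = np.ones_like(rates, dtype=bool)
        healthy, selected = down_select(rates, healthy)
        print("healthy sensors:", healthy, "selected rate:", selected)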

  15. Seismotectonic Implications Of Clustered Regional GPS Velocities In The San Francisco Bay Region, California

    Science.gov (United States)

    Graymer, R. W.; Simpson, R.

    2012-12-01

    We have used a hierarchical agglomerative clustering algorithm with Euclidean distance and centroid linkage, applied to continuous GPS observations for the Bay region available from the U.S. Geological Survey website. This analysis reveals 4 robust, spatially coherent clusters that coincide with 4 first-order structural blocks separated by 3 major fault systems: San Andreas (SA), Southern/Central Calaveras-Hayward-Rodgers Creek-Maacama (HAY), and Northern Calaveras-Concord-Green Valley-Berryessa-Bartlett Springs (NCAL). Because observations seaward of the San Gregorio (SG) fault are few in number, the cluster to the west of SA may actually contain 2 major structural blocks not adequately resolved: the Pacific plate to the west of the northern SA and a Peninsula block between the Peninsula SA and the SG fault. The average inter-block velocities are 11, 10, and 9 mm/yr across SA, HAY, and NCAL respectively. There appears to be a significant component of fault-normal compression across NCAL, whereas SA and HAY faults appear to be, on regional average, purely strike-slip. The velocities for the Sierra Nevada - Great Valley (SNGV) block to the west of NCAL are impressive in their similarity. The cluster of these velocities in a velocity plot forms a tighter grouping compared with the groupings for the other cluster blocks, suggesting a more rigid behavior for this block than the others. We note that for 4 clusters, none of the 3 cluster boundaries illuminate geologic structures other than north-northwest trending dominantly strike-slip faults, so plate motion is not accommodated by large-scale fault-parallel compression or extension in the region or by significant plastic deformation , at least over the time span of the GPS observations. Complexities of interseismic deformation of the upper crust do not allow simple application of inter-block velocities as long-term slip rates on bounding faults. However, 2D dislocation models using inter-block velocities and typical
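
    The clustering step can be sketched with standard tools: hierarchical agglomerative clustering with Euclidean distance and centroid linkage, cut at four clusters. The east/north station velocities below are synthetic stand-ins for the USGS GPS data.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Synthetic east/north GPS velocities (mm/yr) for a handful of stations,
        # loosely mimicking blocks separated by strike-slip faults -- not real data.
        rng = np.random.default_rng(1)
        block_means = np.array([[0.0, 0.0], [11.0, 2.0], [21.0, 4.0], [30.0, 6.0]])
        velocities = np.vstack([m + rng.normal(0.0, 0.8, size=(25, 2)) for m in block_means])

        # Hierarchical agglomerative clustering, Euclidean distance, centroid linkage.
        Z = linkage(velocities, method="centroid", metric="euclidean")
        labels = fcluster(Z, t=4, criterion="maxclust")

        for k in range(1, 5):
            v_mean = velocities[labels == k].mean(axis=0)
            print(f"cluster {k}: mean velocity ({v_mean[0]:.1f}, {v_mean[1]:.1f}) mm/yr")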

  16. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    Science.gov (United States)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) that were designed by the conventional method (ACI) and densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using the tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC performed good fresh property and further showed better performance in terms of strength and durability as compared to the OPC.

  17. Reliability of force-velocity relationships during deadlift high pull.

    Science.gov (United States)

    Lu, Wei; Boyas, Sébastien; Jubeau, Marc; Rahmani, Abderrahmane

    2017-11-13

    This study aimed to evaluate the within- and between-session reliability of force, velocity and power performances and to assess the force-velocity relationship during the deadlift high pull (DHP). Nine participants performed two identical sessions of DHP with loads ranging from 30 to 70% of body mass. The force was measured by a force plate under the participants' feet. The velocity of the 'body + lifted mass' system was calculated by integrating the acceleration and the power was calculated as the product of force and velocity. The force-velocity relationships were obtained from linear regression of both mean and peak values of force and velocity. The within- and between-session reliability was evaluated by using coefficients of variation (CV) and intraclass correlation coefficients (ICC). Results showed that DHP force-velocity relationships were significantly linear (R² > 0.90, p  0.94), mean and peak velocities showed a good agreement (CV reliable and can therefore be utilised as a tool to characterise individuals' muscular profiles.

  18. Design of artificial neural networks using a genetic algorithm to predict collection efficiency in venturi scrubbers.

    Science.gov (United States)

    Taheri, Mahboobeh; Mohebbi, Ali

    2008-08-30

    In this study, a new approach for the auto-design of neural networks, based on a genetic algorithm (GA), has been used to predict collection efficiency in venturi scrubbers. The experimental input data, including particle diameter, throat gas velocity, liquid to gas flow rate ratio, throat hydraulic diameter, pressure drop across the venturi scrubber and collection efficiency as an output, have been used to create a GA-artificial neural network (ANN) model. The testing results from the model are in good agreement with the experimental data. Comparison of the results of the GA optimized ANN model with the results from the trial-and-error calibrated ANN model indicates that the GA-ANN model is more efficient. Finally, the effects of operating parameters such as liquid to gas flow rate ratio, throat gas velocity, and particle diameter on collection efficiency were determined.

  19. High-velocity runaway stars from three-body encounters

    Science.gov (United States)

    Gvaramadze, V. V.; Gualandris, A.; Portegies Zwart, S.

    2010-01-01

    We performed numerical simulations of dynamical encounters between hard, massive binaries and a very massive star (VMS; formed through runaway mergers of ordinary stars in the dense core of a young massive star cluster) to explore the hypothesis that this dynamical process could be responsible for the origin of high-velocity (≥ 200 - 400 km s-1) early or late B-type stars. We estimated the typical velocities produced in encounters between very tight massive binaries and VMSs (of mass of ≥ 200 M⊙) and found that about 3 - 4% of all encounters produce velocities ≥ 400 km s-1, while in about 2% of encounters the escapers attain velocities exceeding the Milky Way's escape velocity. We therefore argue that the origin of high-velocity (≥ 200 - 400 km s-1) runaway stars and at least some so-called hypervelocity stars could be associated with dynamical encounters between the tightest massive binaries and VMSs formed in the cores of star clusters. We also simulated dynamical encounters between tight massive binaries and single ordinary 50 - 100 M⊙ stars. We found that from 1 to ≃ 4% of these encounters can produce runaway stars with velocities of ≥ 300 - 400 km s-1 (typical of the bound population of high-velocity halo B-type stars) and occasionally (in less than 1% of encounters) produce hypervelocity (≥ 700 km s-1) late B-type escapers.

  20. Determination of viscosity through terminal velocity: use of the drag force with a quadratic term in velocity

    DEFF Research Database (Denmark)

    Vertchenko, Lev; Vertchenko, Larissa

    2017-01-01

    A correction to the term with quadratic dependency of the velocity in Oseen's drag force by a dimensionless factor is proposed in order to determine the viscosity of glycerin through the measurement of the terminal velocity of spheres falling inside the fluid. This factor incorporates the eff
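
    A hedged sketch of the underlying idea: at terminal velocity the buoyant weight balances Oseen's drag (Stokes drag with the leading quadratic-in-velocity correction, here scaled by a dimensionless factor k standing in for the proposed correction), and the force balance is solved numerically for the viscosity. The sphere, fluid and velocity numbers, and k = 1, are illustrative placeholders.

        import numpy as np
        from scipy.optimize import brentq

        g = 9.81
        rho_s, rho_f = 7800.0, 1260.0   # steel sphere in glycerin (kg/m^3) -- illustrative
        R, v_t = 1.0e-3, 0.010          # sphere radius (m) and measured terminal velocity (m/s)
        k = 1.0                         # dimensionless correction factor (assumed 1 here)

        def force_balance(eta):
            """Weight minus buoyancy minus Oseen drag at terminal velocity (zero at the root)."""
            Re = rho_f * v_t * (2.0 * R) / eta
            drag = 6.0 * np.pi * eta * R * v_t * (1.0 + k * 3.0 * Re / 16.0)
            return (rho_s - rho_f) * (4.0 / 3.0) * np.pi * R**3 * g - drag

        eta = brentq(force_balance, 1e-3, 10.0)
        print(f"inferred viscosity: {eta:.2f} Pa*s")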

  1. A meteor head echo analysis algorithm for the lower VHF band

    Directory of Open Access Journals (Sweden)

    J. Kero

    2012-04-01

    Full Text Available We have developed an automated analysis scheme for meteor head echo observations by the 46.5 MHz Middle and Upper atmosphere (MU radar near Shigaraki, Japan (34.85° N, 136.10° E. The analysis procedure computes meteoroid range, velocity and deceleration as functions of time with unprecedented accuracy and precision. This is crucial for estimations of meteoroid mass and orbital parameters as well as investigations of the meteoroid-atmosphere interaction processes. In this paper we present this analysis procedure in detail. The algorithms use a combination of single-pulse-Doppler, time-of-flight and pulse-to-pulse phase correlation measurements to determine the radial velocity to within a few tens of metres per second with 3.12 ms time resolution. Equivalently, the precision improvement is at least a factor of 20 compared to previous single-pulse measurements. Such a precision reveals that the deceleration increases significantly during the intense part of a meteoroid's ablation process in the atmosphere. From each received pulse, the target range is determined to within a few tens of meters, or the order of a few hundredths of the 900 m long range gates. This is achieved by transmitting a 13-bit Barker code oversampled by a factor of two at reception and using a novel range interpolation technique. The meteoroid velocity vector is determined from the estimated radial velocity by carefully taking the location of the meteor target and the angle from its trajectory to the radar beam into account. The latter is determined from target range and bore axis offset. We have identified and solved the signal processing issue giving rise to the peculiar signature in signal to noise ratio plots reported by Galindo et al. (2011, and show how to use the range interpolation technique to differentiate the effect of signal processing from physical processes.
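
    The pulse-to-pulse phase-correlation ingredient can be illustrated with a minimal pulse-pair estimator that converts the mean inter-pulse phase progression of complex echo samples into a radial velocity. The inter-pulse period and the sign convention are placeholders, and the full MU-radar chain (Barker decoding, range interpolation, and the time-of-flight combination that resolves the velocity ambiguity for real head-echo speeds) is not reproduced.

        import numpy as np

        C = 299_792_458.0          # speed of light (m/s)
        F_RADAR = 46.5e6           # MU radar frequency (Hz)
        T_IPP = 1.0e-4             # inter-pulse period (s) -- placeholder value
        WAVELENGTH = C / F_RADAR

        def pulse_pair_velocity(z, t_ipp=T_IPP):
            """Radial velocity from pulse-to-pulse phase correlation of complex echoes z[n].

            A positive Doppler phase progression maps to a negative (approaching) radial
            velocity; the sign convention is an assumption of this sketch.  Speeds beyond
            the pulse-pair ambiguity limit would need the time-of-flight measurement.
            """
            acf1 = np.sum(z[1:] * np.conj(z[:-1]))            # lag-1 autocorrelation
            doppler = np.angle(acf1) / (2.0 * np.pi * t_ipp)  # mean Doppler frequency (Hz)
            return -doppler * WAVELENGTH / 2.0

        # Synthetic echo from a target approaching at 5 km/s, with a little noise.
        rng = np.random.default_rng(3)
        n = np.arange(64)
        v_true = -5000.0
        phase = -4.0 * np.pi * v_true * n * T_IPP / WAVELENGTH
        z = np.exp(1j * phase) + 0.1 * (rng.normal(size=64) + 1j * rng.normal(size=64))
        print(f"estimated radial velocity: {pulse_pair_velocity(z) / 1000:.1f} km/s")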

  2. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is a science and art to maintain data secrecy. Encryption is a cryptographic algorithm in which data is transformed into ciphertext, which is unreadable and meaningless so it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make it more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logic operation XOR. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so the data integrity is still ensured.
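
    A minimal sketch of the described super-encryption, namely a keyword-based monoalphabetic substitution followed by a repeating-key XOR, is shown below; the way the keyword builds the substitution alphabet and the key values are assumptions, since the abstract does not spell them out.

        import string

        def mono_table(keyword):
            """Build a substitution alphabet: unique keyword letters first, then the rest."""
            seen, order = set(), []
            for ch in (keyword.upper() + string.ascii_uppercase):
                if ch.isalpha() and ch not in seen:
                    seen.add(ch)
                    order.append(ch)
            return str.maketrans(string.ascii_uppercase, "".join(order))

        def mono_encrypt(plaintext, keyword):
            return plaintext.upper().translate(mono_table(keyword))

        def mono_decrypt(ciphertext, keyword):
            inverse = {v: k for k, v in mono_table(keyword).items()}
            return ciphertext.translate(inverse)

        def xor_bytes(data: bytes, key: bytes) -> bytes:
            """Repeating-key XOR; applying it twice restores the original bytes."""
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        def super_encrypt(plaintext, keyword, xor_key: bytes) -> bytes:
            return xor_bytes(mono_encrypt(plaintext, keyword).encode(), xor_key)

        def super_decrypt(ciphertext: bytes, keyword, xor_key: bytes) -> str:
            return mono_decrypt(xor_bytes(ciphertext, xor_key).decode(), keyword)

        msg = "VELOCITY VERLET"
        ct = super_encrypt(msg, keyword="ZEBRAS", xor_key=b"\x5a\xa5")
        print(ct.hex(), super_decrypt(ct, "ZEBRAS", b"\x5a\xa5"))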

  3. Paintball velocity as a function of distance traveled

    Directory of Open Access Journals (Sweden)

    Pat Chiarawongse

    2008-06-01

    Full Text Available The relationship between the distance a paintball travels through air and its velocity is investigated by firing a paintball into a ballistic pendulum from a range of distances. The motion of the pendulum was filmed and analyzed by using video analysis software. The velocity of the paintball on impact was calculated from the maximum horizontal displacement of the pendulum. It is shown that the velocity of a paintball decreases exponentially with distance traveled, as expected. The average muzzle velocity of the paint balls is found with an estimate of the drag coefficient.
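
    The exponential decay follows from a drag force quadratic in speed: with m dv/dx = -c v^2, the speed obeys v(x) = v0 exp(-(c/m) x), so a log-linear fit of impact speed against distance recovers the muzzle velocity and the decay constant. The numbers below are illustrative, not the article's data.

        import numpy as np

        # Illustrative impact speeds (m/s) at several distances (m) -- not the article's data.
        distance = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 25.0])
        speed    = np.array([88.0, 83.0, 74.0, 66.5, 59.5, 53.0])

        # Quadratic drag, m*dv/dx = -c*v**2  =>  v(x) = v0*exp(-k*x) with k = c/m,
        # so ln(v) is linear in x and a straight-line fit gives k and the muzzle speed v0.
        slope, intercept = np.polyfit(distance, np.log(speed), 1)   # ln v = ln v0 - k*x
        k, v0 = -slope, np.exp(intercept)
        print(f"muzzle velocity ~ {v0:.1f} m/s, decay constant k ~ {k:.4f} 1/m")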

  5. Ultrasonic device for real-time sewage velocity and suspended particles concentration measurements.

    Science.gov (United States)

    Abda, F; Azbaid, A; Ensminger, D; Fischer, S; François, P; Schmitt, P; Pallarès, A

    2009-01-01

    In the frame of a technological research and innovation network in water and environment technologies (RITEAU, Réseau de Recherche et d'Innovation Technologique Eau et Environnement), our research group, in collaboration with industrial partners and other research institutions, has been in charge of the development of a suitable flowmeter: an ultrasonic device measuring simultaneously the water flow and the concentration of size classes of suspended particles. Working on the pulsed ultrasound principle, our multi-frequency device (1 to 14 MHz) allows flow velocity and water height measurement and estimation of suspended solids concentration. Velocity measurements rely on the coherent Doppler principle. A self developed frequency estimator, so called Spectral Identification method, was used and compared to the classical Pulse-Pair method. Several measurements campaigns on one wastewater collector of the French city of Strasbourg gave very satisfactory results and showed smaller standard deviation values for the Doppler frequency extracted by the Spectral Identification method. A specific algorithm was also developed for the water height measurements. It relies on the water surface acoustic impedance rupture and its peak localisation and behaviour in the collected backscattering data. This algorithm was positively tested on long time measurements on the same wastewater collector. A large part of the article is devoted to the measurements of the suspended solids concentrations. Our data analysis consists in the adaptation of the well described acoustic behaviour of sand to the behaviour of wastewater particles. Both acoustic attenuation and acoustic backscattering data over multiple frequencies are analyzed for the extrapolation of size classes and respective concentrations. Under dry weather conditions, the massic backscattering coefficient and the overall size distribution showed similar evolution whatever the measurement site was and were suggesting a global

  6. Diffraction imaging and velocity analysis using oriented velocity continuation

    KAUST Repository

    Decker, Luke; Fomel, Sergey

    2014-01-01

    -space-slope coordinates. The extrapolation is described by a convection-type partial differential equation and implemented efficiently in the Fourier domain. Synthetic and field data experiments show that the proposed algorithm is able to detect accurate time

  7. Streaming Velocities and the Baryon Acoustic Oscillation Scale.

    Science.gov (United States)

    Blazek, Jonathan A; McEwen, Joseph E; Hirata, Christopher M

    2016-03-25

    At the epoch of decoupling, cosmic baryons had supersonic velocities relative to the dark matter that were coherent on large scales. These velocities subsequently slow the growth of small-scale structure and, via feedback processes, can influence the formation of larger galaxies. We examine the effect of streaming velocities on the galaxy correlation function, including all leading-order contributions for the first time. We find that the impact on the baryon acoustic oscillation (BAO) peak is dramatically enhanced (by a factor of ∼5) over the results of previous investigations, with the primary new effect due to advection: if a galaxy retains memory of the primordial streaming velocity, it does so at its Lagrangian, rather than Eulerian, position. Since correlations in the streaming velocity change rapidly at the BAO scale, this advection term can cause a significant shift in the observed BAO position. If streaming velocities impact tracer density at the 1% level, compared to the linear bias, the recovered BAO scale is shifted by approximately 0.5%. This new effect, which is required to preserve Galilean invariance, greatly increases the importance of including streaming velocities in the analysis of upcoming BAO measurements and opens a new window to the astrophysics of galaxy formation.

  8. A new estimator for vector velocity estimation [medical ultrasonics

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...

  9. GALAXIES IN ΛCDM WITH HALO ABUNDANCE MATCHING: LUMINOSITY-VELOCITY RELATION, BARYONIC MASS-VELOCITY RELATION, VELOCITY FUNCTION, AND CLUSTERING

    International Nuclear Information System (INIS)

    Trujillo-Gomez, Sebastian; Klypin, Anatoly; Primack, Joel; Romanowsky, Aaron J.

    2011-01-01

    It has long been regarded as difficult if not impossible for a cosmological model to account simultaneously for the galaxy luminosity, mass, and velocity distributions. We revisit this issue using a modern compilation of observational data along with the best available large-scale cosmological simulation of dark matter (DM). We find that the standard cosmological model, used in conjunction with halo abundance matching (HAM) and simple dynamical corrections, fits—at least on average—all basic statistics of galaxies with circular velocities V_circ > 80 km s^-1 calculated at a radius of ∼10 kpc. Our primary observational constraint is the luminosity-velocity (LV) relation—which generalizes the Tully-Fisher and Faber-Jackson relations in allowing all types of galaxies to be included, and provides a fundamental benchmark to be reproduced by any theory of galaxy formation. We have compiled data for a variety of galaxies ranging from dwarf irregulars to giant ellipticals. The data present a clear monotonic LV relation from ∼50 km s^-1 to ∼500 km s^-1, with a bend below ∼80 km s^-1 and a systematic offset between late- and early-type galaxies. For comparison to theory, we employ our new ΛCDM 'Bolshoi' simulation of DM, which has unprecedented mass and force resolution over a large cosmological volume, while using an up-to-date set of cosmological parameters. We use HAM to assign rank-ordered galaxy luminosities to the DM halos, a procedure that automatically fits the empirical luminosity function and provides a predicted LV relation that can be checked against observations. The adiabatic contraction of DM halos in response to the infall of the baryons is included as an optional model ingredient. The resulting predictions for the LV relation are in excellent agreement with the available data on both early-type and late-type galaxies for the luminosity range from M_r = –14 to M_r = –22. We also compare our predictions for the 'cold' baryon mass (i
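
    The HAM step itself is simple to sketch: halos ranked by circular velocity are assigned luminosities drawn from the luminosity function in the same rank order, which fits the luminosity function by construction and yields a monotonic LV relation. The toy halo catalogue and luminosity function below are made up for illustration.

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy halo catalogue: circular velocities (km/s) from a simulation.
        v_circ = rng.lognormal(mean=np.log(150.0), sigma=0.45, size=10_000)

        # Toy galaxy luminosities (Lsun) drawn from an assumed luminosity function.
        luminosity = rng.lognormal(mean=np.log(1.0e10), sigma=0.9, size=v_circ.size)

        # Abundance matching: the i-th fastest-rotating halo hosts the i-th most luminous galaxy.
        halo_rank = np.argsort(v_circ)[::-1]
        assigned_L = np.empty_like(luminosity)
        assigned_L[halo_rank] = np.sort(luminosity)[::-1]

        # The resulting luminosity-velocity relation is monotonic by construction.
        i80 = np.argmin(np.abs(v_circ - 80.0))
        print(f"luminosity assigned near V_circ = 80 km/s: {assigned_L[i80]:.2e} Lsun")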

  10. Doppler Lidar Vertical Velocity Statistics Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Newsom, R. K. [DOE ARM Climate Research Facility, Washington, DC (United States); Sivaraman, C. [DOE ARM Climate Research Facility, Washington, DC (United States); Shippert, T. R. [DOE ARM Climate Research Facility, Washington, DC (United States); Riihimaki, L. D. [DOE ARM Climate Research Facility, Washington, DC (United States)

    2015-07-01

    Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
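
    The core of such a product is the computation of per-height higher-order moments from vertical-velocity time series, sketched below on synthetic data at the nominal 1 s / 30 m resolution; the actual DLWSTATS processing (noise filtering, cloud screening, CBH retrieval) is not reproduced.

        import numpy as np
        from scipy import stats

        # Synthetic vertical-velocity time series: (time, height) at 1 s / 30 m resolution.
        rng = np.random.default_rng(0)
        n_time, n_height = 3600, 100                     # one hour, 3 km of range gates
        w = rng.normal(0.0, 0.8, size=(n_time, n_height))
        w[:, :40] += rng.gamma(2.0, 0.3, size=(n_time, 40)) - 0.6   # skewed boundary-layer updrafts

        heights = 30.0 * np.arange(1, n_height + 1)
        variance = w.var(axis=0, ddof=1)
        skewness = stats.skew(w, axis=0)
        kurtosis = stats.kurtosis(w, axis=0)             # excess kurtosis

        for z in (300, 1500, 2700):
            i = np.argmin(np.abs(heights - z))
            print(f"{z:5.0f} m: var={variance[i]:.2f} m2/s2, skew={skewness[i]:+.2f}, kurt={kurtosis[i]:+.2f}")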

  11. Evolution of velocity dispersion along cold collisionless flows

    International Nuclear Information System (INIS)

    Banik, Nilanjan; Sikivie, Pierre

    2016-01-01

    We found that the infall of cold dark matter onto a galaxy produces cold collisionless flows and caustics in its halo. If a signal is found in the cavity detector of dark matter axions, the flows will be readily apparent as peaks in the energy spectrum of photons from axion conversion, allowing the densities, velocity vectors and velocity dispersions of the flows to be determined. We also discuss the evolution of velocity dispersion along cold collisionless flows in one and two dimensions. A technique is presented for obtaining the leading behaviour of the velocity dispersion near caustics. The results are used to derive an upper limit on the energy dispersion of the Big Flow from the sharpness of its nearby caustic, and a prediction for the dispersions in its velocity components

  12. Ultrasonic velocity measurements in expanded liquid mercury

    International Nuclear Information System (INIS)

    Suzuki, K.; Inutake, M.; Fujiwaka, S.

    1977-10-01

    In this paper we present the first results of the sound velocity measurements in expanded liquid mercury. The measurements were made at temperatures up to 1600 °C and pressures up to 1700 kg/cm² by means of an ultrasonic pulse transmission/echo technique which was newly developed for such high temperature/pressure conditions. When the density is larger than 9 g/cm³, the observed sound velocity decreases linearly with decreasing density. At densities smaller than 9 g/cm³, the linear dependence on the density is no longer observed. The observed sound velocity approaches a minimum near the liquid-gas critical point (ρ_cr ≈ 5.5 g/cm³). The existing theories for sound velocity in liquid metals fail to explain the observed results. (auth.)

  13. Measurement of glacier velocity at Pik Lenin, Tajikistan, by feature tracking

    Science.gov (United States)

    Kumari, S.; Ghosh, S. K.; Buchroithner, M. F.

    2014-11-01

    Glaciers, especially in mountain areas, are sensitive indicators of climate fluctuations and also contribute to present rates of sea level rise. In Central Asia, these glaciers are the primary resource for fresh water. Understanding the seasonal behavior of these glaciers would help to make efficient use of the available water reservoir. Different methods have been employed to study glacier displacements in the past. Conventional survey techniques are cost-intensive and depend heavily on accessibility to high mountain glaciers, which directs us to look for new ways to study these areas. Here remote sensing is useful, offering freely available data and good coverage at high spatial and temporal resolution. Optical satellite imagery, available free of charge, can be used effectively for research purposes. The glacier in this region feeds Lake Karakul (380 km²), the largest lake in Tajikistan. The objective is to study the displacement tendency of the glacier in the Pik Lenin area using a Landsat 7 dataset. A normalized cross-correlation algorithm was implemented via CIAS to estimate the motion of the glacier surface. A number of combinations of reference block and search area sizes were tested for the 30 m resolution dataset. A reference block size of 15 pixels and a search area size of 10 pixels were found to be the best set of parameters and were used for further processing. The study derives a reliable set of data depicting the velocities in the glacier which, after post-processing, shows a peak velocity of 121 m/yr.
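
    The normalized cross-correlation matching that CIAS performs can be sketched as follows: a reference block from the first image is compared against shifted candidate blocks in the second image, and the displacement of the correlation peak, scaled by the pixel size and the time separation, gives a surface velocity. The block and search sizes echo those reported above; the image pair is synthetic.

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation of two equally sized blocks."""
            a = a - a.mean()
            b = b - b.mean()
            return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

        def track_block(img1, img2, row, col, block=15, search=10):
            """Displacement (drow, dcol) of the block at (row, col) maximizing the NCC."""
            h = block // 2
            ref = img1[row - h:row + h + 1, col - h:col + h + 1]
            best, best_shift = -2.0, (0, 0)
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    cand = img2[row + dr - h:row + dr + h + 1, col + dc - h:col + dc + h + 1]
                    score = ncc(ref, cand)
                    if score > best:
                        best, best_shift = score, (dr, dc)
            return best_shift

        # Synthetic pair: the second image is the first shifted by (3, 1) pixels plus noise.
        rng = np.random.default_rng(5)
        img1 = rng.normal(size=(200, 200))
        img2 = np.roll(img1, shift=(3, 1), axis=(0, 1)) + 0.05 * rng.normal(size=(200, 200))

        drow, dcol = track_block(img1, img2, row=100, col=100)
        pixel_size, dt_years = 30.0, 1.0                 # Landsat 30 m pixels, one year apart
        speed = np.hypot(drow, dcol) * pixel_size / dt_years
        print(f"displacement ({drow}, {dcol}) px  ->  {speed:.0f} m/yr")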

  14. Simulation of High Velocity Impact on Composite Structures - Model Implementation and Validation

    Science.gov (United States)

    Schueler, Dominik; Toso-Pentecôte, Nathalie; Voggenreiter, Heinz

    2016-08-01

    High velocity impact on composite aircraft structures leads to the formation of flexural waves that can cause severe damage to the structure. Damage and failure can occur within the plies and/or in the resin rich interface layers between adjacent plies. In the present paper a modelling methodology is documented that captures intra- and inter-laminar damage and their interrelations by use of shell element layers representing sub-laminates that are connected with cohesive interface layers to simulate delamination. This approach allows the simulation of large structures while still capturing the governing damage mechanisms and their interactions. The paper describes numerical algorithms for the implementation of a Ladevèze continuum damage model for the ply and methods to derive input parameters for the cohesive zone model. By comparison with experimental results from gas gun impact tests the potential and limitations of the modelling approach are discussed.

  15. Evaluation of arterial stiffness by finger-toe pulse wave velocity: optimization of signal processing and clinical validation.

    Science.gov (United States)

    Obeid, Hasan; Khettab, Hakim; Marais, Louise; Hallab, Magid; Laurent, Stéphane; Boutouyrie, Pierre

    2017-08-01

    Carotid-femoral pulse wave velocity (PWV) (cf-PWV) is the gold standard for measuring aortic stiffness. Finger-toe PWV (ft-PWV) is a simpler noninvasive method for measuring arterial stiffness. Although the validity of the method has been previously assessed, its accuracy can be improved. ft-PWV is determined on the basis of a patented height chart for the distance and the pulse transit time (PTT) between the finger and the toe pulpar arteries signals (ft-PTT). The objective of the first study, performed in 66 patients, was to compare different algorithms (intersecting tangents, maximum of the second derivative, 10% threshold and cross-correlation) for determining the foot of the arterial pulse wave, thus the ft-PTT. The objective of the second study, performed in 101 patients, was to investigate different signal processing chains to improve the concordance of ft-PWV with the gold-standard cf-PWV. Finger-toe PWV (ft-PWV) was calculated using the four algorithms. The best correlations relating ft-PWV and cf-PWV, and relating ft-PTT and carotid-femoral PTT were obtained with the maximum of the second derivative algorithm [PWV: r = 0.56, P < 0.0001, root mean square error (RMSE) = 0.9 m/s; PTT: r = 0.61, P < 0.001, RMSE = 12 ms]. The three other algorithms showed lower correlations. The correlation between ft-PTT and carotid-femoral PTT further improved (r = 0.81, P < 0.0001, RMSE = 5.4 ms) when the maximum of the second derivative algorithm was combined with an optimized signal processing chain. Selecting the maximum of the second derivative algorithm for detecting the foot of the pressure waveform, and combining it with an optimized signal processing chain, improved the accuracy of ft-PWV measurement in the current population sample. Thus, it makes ft-PWV very promising for the simple noninvasive determination of aortic stiffness in clinical practice.
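
    The selected foot-detection rule, the maximum of the second derivative, can be sketched on synthetic finger and toe waveforms as below; the waveform shape, the sampling rate and the finger-toe path length are placeholders, and the optimized signal-processing chain and patented height chart are not reproduced.

        import numpy as np

        FS = 1000.0                                     # sampling rate (Hz) -- placeholder

        def pulse_wave(t, onset, width=0.12):
            """Crude synthetic pulse waveform whose foot is at `onset` seconds."""
            rise = np.clip((t - onset) / width, 0.0, None)
            return rise**2 * np.exp(-rise)

        def foot_index(signal):
            """Foot of the wave as the maximum of the second derivative (the chosen algorithm)."""
            d2 = np.gradient(np.gradient(signal))
            return int(np.argmax(d2))

        t = np.arange(0.0, 1.0, 1.0 / FS)
        finger = pulse_wave(t, onset=0.200)
        toe = pulse_wave(t, onset=0.325)

        ptt = (foot_index(toe) - foot_index(finger)) / FS   # finger-toe transit time (s)
        distance = 1.10                                     # finger-toe path length (m) -- placeholder
        print(f"ft-PTT = {ptt * 1000:.0f} ms, ft-PWV = {distance / ptt:.1f} m/s")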

  16. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
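
    The line-detection core of such a pipeline, a Hough transform over (rho, theta) space, can be sketched as follows on a synthetic star-subtracted image; the preprocessing that removes stars and galaxies and the rectangle-based false-positive filter described above are not reproduced.

        import numpy as np

        def hough_line(binary_img, n_theta=180):
            """Accumulate votes in (rho, theta) space for the bright pixels of a binary image."""
            rows, cols = np.nonzero(binary_img)
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            diag = int(np.ceil(np.hypot(*binary_img.shape)))
            accumulator = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
            rho = np.round(np.outer(cols, np.cos(thetas)) + np.outer(rows, np.sin(thetas))).astype(int)
            for j in range(n_theta):
                np.add.at(accumulator[:, j], rho[:, j] + diag, 1)
            return accumulator, thetas, diag

        # Synthetic star-subtracted image containing one trail plus sparse noise.
        rng = np.random.default_rng(2)
        img = rng.random((200, 200)) < 0.002
        rr = np.arange(200)
        img[rr, np.clip((0.5 * rr + 30).astype(int), 0, 199)] = True   # the linear feature

        acc, thetas, diag = hough_line(img)
        i, j = np.unravel_index(np.argmax(acc), acc.shape)
        print(f"strongest line: rho = {i - diag} px, theta = {np.degrees(thetas[j]):.1f} deg, votes = {acc[i, j]}")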

  17. End-systolic stress-velocity relation and circumferential fiber velocity shortening for analysing left ventricular function in mice

    Energy Technology Data Exchange (ETDEWEB)

    Fayssoil, A. [Cardiologie, Hopital europeen Georges Pompidou, 20, rue le blanc, Paris (France)], E-mail: fayssoil2000@yahoo.fr; Renault, G. [CNRS UMR 8104, Inserm, U567, Institut Cochin, Universite Paris Descartes, Paris (France); Fougerousse, F. [Genethon, RD, Evry (France)

    2009-08-15

    Traditionally, analysing left ventricular (LV) performance in mice relies on echocardiography, evaluating the shortening fraction (SF). SF is influenced by load conditions. The end-systolic stress-velocity (ESSV) relation and the velocity of circumferential fiber shortening (VcF) are more relevant parameters for evaluating systolic function regardless of load conditions, particularly in mouse models of heart failure.

  18. Results of verification and investigation of wind velocity field forecast. Verification of wind velocity field forecast model

    International Nuclear Information System (INIS)

    Ogawa, Takeshi; Kayano, Mitsunaga; Kikuchi, Hideo; Abe, Takeo; Saga, Kyoji

    1995-01-01

    At the Environmental Radioactivity Research Institute, verification and investigation of the wind velocity field forecast model 'EXPRESS-1' have been carried out since 1991. In fiscal year 1994, as a general analysis, the validity of the weather observation data, the local features of the wind field, and the validity of the positions of the monitoring stations were investigated. EXPRESS, which had so far used a 500 m mesh, was refined to a 250 m mesh; the resulting improvement in forecast accuracy was examined, and a comparison with another wind velocity field forecast model, 'SPEEDI', was carried out. As a result, some locations correlate well with other measurement points while others do not, and it was found that the accuracy of the wind velocity field forecast improves when data from points with low correlation are excluded or when simplified observation stations are installed to supply additional data. The outline of the investigation, the general analysis of the weather observation data, and the improvements to the wind velocity field forecast model and its forecast accuracy are reported. (K.I.)

  19. Measurement of vortex velocities over a wide range of vortex age, downstream distance and free stream velocity

    Science.gov (United States)

    Rorke, J. B.; Moffett, R. C.

    1977-01-01

    A wind tunnel test was conducted to obtain vortex velocity signatures over a wide parameter range encompassing the data conditions of several previous researchers while maintaining a common instrumentation and test facility. The generating wing panel was configured with both a revolved airfoil tip shape and a square tip shape and had a semispan aspect ratio of 4.05:1 with a 121.9 cm span. Free stream velocity was varied from 6.1 m/sec to 76.2 m/sec and the vortex core velocities were measured at locations 3, 6, 12, 24 and 48 chord lengths downstream of the wing trailing edge, yielding vortex ages up to 2.0 seconds. Wing pitch angles of 6, 8, 9 and 12 deg were investigated. Detailed surface pressure distributions and wing force measurements were obtained for each wing tip configuration. Correlation with vortex velocity data taken in previous experiments is good. During the rollup process, vortex core parameters appear to be dependent primarily on vortex age. Trending in the plateau and decay regions is more complex and the mechanisms appear to be more unstable.

  20. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine, a trigonometric function. In the algorithm, as many random individuals as the number of search agents are created with a uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section so that only the areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA gives better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, and it provides faster convergence.
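
    The update rule below is only a loose illustration of the ingredients named in the abstract, namely sine-driven random moves and coefficients taken from a golden-section split of an interval, applied around the best solution found so far; it is not the published Gold-SA update rule, and the test function and parameters are arbitrary.

        import numpy as np

        def sphere(x):
            return float(np.sum(x * x))

        def golden_sine_search(obj, dim=10, n_agents=30, n_iter=500, lo=-10.0, hi=10.0, seed=0):
            """Loose sketch of a sine-driven, golden-section-scaled population search.

            Not the published Gold-SA update rule: coefficients x1, x2 come from a
            golden-section split of [-pi, pi], and each agent is resampled at a
            sine-weighted offset from the best solution found so far.
            """
            rng = np.random.default_rng(seed)
            tau = (np.sqrt(5.0) - 1.0) / 2.0                 # golden-ratio conjugate
            a, b = -np.pi, np.pi
            x1 = a + (1.0 - tau) * (b - a)                   # golden-section points of [a, b]
            x2 = a + tau * (b - a)

            pop = rng.uniform(lo, hi, size=(n_agents, dim))
            best = min(pop, key=obj).copy()
            for _ in range(n_iter):
                r1 = rng.uniform(0.0, 2.0 * np.pi, size=(n_agents, 1))
                # Move each agent to a sine-weighted point around the current best.
                pop = best + np.sin(r1) * np.abs(x1 * pop - x2 * best)
                pop = np.clip(pop, lo, hi)
                cand = min(pop, key=obj)
                if obj(cand) < obj(best):
                    best = cand.copy()
            return best, obj(best)

        best_x, best_f = golden_sine_search(sphere)
        print(f"best sphere value after 500 iterations: {best_f:.3e}")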