Algorithmic Issues in Modeling Motion
DEFF Research Database (Denmark)
Agarwal, P. K; Guibas, L. J; Edelsbrunner, H.
2003-01-01
This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory of mot...
Motion Model Employment using interacting Motion Model Algorithm
DEFF Research Database (Denmark)
Hussain, Dil Muhammad Akbar
2006-01-01
The paper presents a simulation study on tracking a maneuvering target using a selective approach to choosing the Interacting Multiple Model (IMM) algorithm, providing wider coverage to track such targets. Initially, there are two motion models in the system to track a target. Probability of each m...
Models and Algorithms for Tracking Target with Coordinated Turn Motion
Directory of Open Access Journals (Sweden)
Xianghui Yuan
2014-01-01
Full Text Available Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple models (MM) framework, the algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
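The coordinated-turn models compared above share a common core: a state transition that rotates the velocity vector at a known turn rate. A minimal sketch of the known-turn-rate CT model (assuming a state ordering of [x, vx, y, vy]; illustrative, not the paper's code):

```python
import numpy as np

def ct_transition(omega, T):
    """State transition matrix of the coordinated-turn (CT) model with a
    known turn rate omega (rad/s) and sampling period T (s).
    State ordering: [x, vx, y, vy]."""
    if abs(omega) < 1e-9:  # near-zero turn rate: straight-line motion
        return np.array([[1.0, T, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0, T],
                         [0.0, 0.0, 0.0, 1.0]])
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1.0, s / omega,         0.0, -(1.0 - c) / omega],
                     [0.0, c,                 0.0, -s],
                     [0.0, (1.0 - c) / omega, 1.0, s / omega],
                     [0.0, s,                 0.0, c]])

# One step of a target turning at 0.1 rad/s, sampled at 1 s:
x = np.array([0.0, 10.0, 0.0, 0.0])   # heading +x at 10 m/s
x_next = ct_transition(0.1, 1.0) @ x  # speed is preserved, heading turns
```

A useful sanity check of the model is that the transition preserves speed while rotating the heading, which is exactly what distinguishes CT models from constant-velocity ones.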
Energy Technology Data Exchange (ETDEWEB)
Dhou, S; Williams, C [Brigham and Women’s Hospital / Harvard Medical School, Boston, MA (United States); Ionascu, D [William Beaumont Hospital, Royal Oak, MI (United States); Lewis, J [University of California at Los Angeles, Los Angeles, CA (United States)
2016-06-15
Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors; 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space; and 3) the Euclidean Model Norm (EMN), which is calculated by summing in quadrature the dot products of an eigenvector with the first three eigenvectors from the reference motion model. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported
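The two-step model-building recipe described in the abstract (DIR to get DVFs, then PCA) can be sketched as follows; the array shapes and mode count are assumptions for illustration, and the DIR step itself is replaced by synthetic data:

```python
import numpy as np

def motion_model_from_dvfs(dvfs, n_modes=3):
    """PCA motion model from displacement vector fields (DVFs).
    dvfs: (n_phases, n_dof) array, one flattened DVF per 4DCT phase.
    Returns the mean DVF, the leading eigenvectors, and per-phase weights."""
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    # SVD of the centered data: rows of Vt are the PCA eigenvectors
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigvecs = Vt[:n_modes]
    weights = centered @ eigvecs.T   # per-phase scores on each mode
    return mean, eigvecs, weights

# Synthetic stand-in for DIR output: 10 phases, 50 voxels x 3 components
rng = np.random.default_rng(0)
dvfs = rng.normal(size=(10, 150))
mean, eigvecs, w = motion_model_from_dvfs(dvfs)
recon = mean + w @ eigvecs           # rank-3 approximation of each phase
```

The comparison criteria in the abstract (RMS difference, dot products, EMN) would then be computed between the `eigvecs` arrays produced from two different DIR algorithms.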
International Nuclear Information System (INIS)
Ali, I; Ahmad, S; Alsbou, N
2015-01-01
Purpose: To develop a 4D cone-beam CT (CBCT) algorithm based on motion modeling that extracts the actual length, CT number level and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile targets. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level and gradient for a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual target length and motion amplitude. The gradient of the CT number distribution of the mobile target is dependent on the stationary CT number level, the actual target length and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases
An Improved Perturb and Observe Algorithm for Photovoltaic Motion Carriers
Peng, Lele; Xu, Wei; Li, Liming; Zheng, Shubin
2018-03-01
An improved perturbation and observation algorithm for photovoltaic motion carriers is proposed in this paper. The model of the proposed algorithm is given by using the Lambert W function and the tangent error method. Moreover, the tracking performance of the proposed algorithm is tested using MATLAB simulations and experiments on a photovoltaic system. The results demonstrate that the improved algorithm has fast tracking speed and high efficiency. Furthermore, the energy conversion efficiency of the improved method has increased by nearly 8.2%.
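For reference, the baseline perturb-and-observe logic that the paper improves upon can be sketched as follows (a generic textbook version on a hypothetical PV power curve, not the Lambert-W-based variant proposed in the paper):

```python
def perturb_and_observe(measure_power, v0, dv=0.5, steps=100):
    """Classical perturb-and-observe MPPT: perturb the operating voltage
    and keep the perturbation direction while the measured power rises."""
    v, direction = v0, 1.0
    p_prev = measure_power(v)
    for _ in range(steps):
        v += direction * dv
        p = measure_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy PV power curve with a maximum at v = 17 (hypothetical numbers)
pv_power = lambda v: -(v - 17.0) ** 2 + 120.0
v_mpp = perturb_and_observe(pv_power, v0=10.0)
```

The well-known weakness of this baseline, oscillation around the maximum power point with step size `dv`, is what motivates refinements such as the one in the paper.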
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input
Modeling and Engineering Algorithms for Mobile Data
DEFF Research Database (Denmark)
Blunck, Henrik; Hinrichs, Klaus; Sondern, Joëlle
2006-01-01
In this paper, we present an object-oriented approach to modeling mobile data and algorithms operating on such data. Our model is general enough to capture any kind of continuous motion while at the same time allowing for encompassing algorithms optimized for specific types of motion. Such motion...
Directory of Open Access Journals (Sweden)
G-A. Tselentis
2010-12-01
Full Text Available Complex application domains involve difficult pattern classification problems. This paper introduces a model of MMI attenuation and its dependence on engineering ground motion parameters based on artificial neural networks (ANNs) and genetic algorithms (GAs). The ultimate goal of this investigation is to evaluate the target-region applicability of ground-motion attenuation relations developed for a host region, based on training an ANN using the seismic patterns of the host region. This ANN learning is based on supervised learning using existing data from past earthquakes. The combination of these two learning procedures (that is, GA and ANN) allows us to introduce a new method for pattern recognition in the context of seismological applications. The performance of this new GA-ANN regression method has been evaluated using a Greek seismological database with satisfactory results.
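The GA-ANN combination can be illustrated with a toy version: a genetic algorithm evolving the weights of a small neural network on a synthetic regression target (all sizes and hyperparameters here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def ann(w, x):
    """Tiny 1-hidden-layer network (3 tanh units), weights packed in w."""
    w1, b1, w2, b2 = w[:3], w[3:6], w[6:9], w[9]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def ga_fit(x, y, pop=60, gens=120, sigma=0.3):
    """Fit the network with a bare-bones GA: elitist selection plus
    Gaussian mutation (a toy stand-in for the paper's GA-ANN scheme)."""
    population = rng.normal(size=(pop, 10))
    for _ in range(gens):
        fitness = [-np.mean((ann(w, x) - y) ** 2) for w in population]
        elite = population[np.argsort(fitness)[-pop // 4 :]]  # best 25% kept
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = children + rng.normal(scale=sigma, size=children.shape)
        population = np.vstack([elite, children])
    return max(population, key=lambda w: -np.mean((ann(w, x) - y) ** 2))

x = np.linspace(-1, 1, 40)
y = 0.5 * x + 0.2                  # toy "attenuation" relation
w_best = ga_fit(x, y)
mse = np.mean((ann(w_best, x) - y) ** 2)
```

In the paper's setting the fitness would be the regression error of the attenuation model on the host-region earthquake records rather than on a synthetic line.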
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
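The washout behavior described here, sustaining small cues while quickly attenuating large sustained ones, is conventionally obtained with high-pass filtering. A minimal first-order classical-washout sketch (for intuition only; the algorithms above use optimal-control filters with vestibular models):

```python
import numpy as np

def washout_highpass(accel, fc=0.5, fs=100.0):
    """First-order high-pass 'washout' of an acceleration command: passes
    onset cues and washes out sustained acceleration so the platform can
    drift back toward neutral. fc is the cutoff (Hz), fs the sample rate."""
    alpha = 1.0 / (1.0 + 2 * np.pi * fc / fs)
    out = np.zeros_like(accel)
    for i in range(1, len(accel)):
        out[i] = alpha * (out[i - 1] + accel[i] - accel[i - 1])
    return out

t = np.arange(0, 10, 0.01)
step = np.where(t >= 1.0, 1.0, 0.0)   # sustained 1 m/s^2 command
cue = washout_highpass(step)           # transient onset cue, then decay
```

A sustained step input produces a brief onset cue that decays to zero, which is the basic washout property the optimal and nonlinear algorithms shape more carefully.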
A Motion Estimation Algorithm Using DTCWT and ARPS
Directory of Open Access Journals (Sweden)
Unan Y. Oktiawati
2013-09-01
Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as a reference input and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inversed frame n and motion vector. The results show that the PSNR can be improved for mobile devices without degrading visual quality. The proposed algorithm also requires less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system as used in section 6.
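Block-matching motion estimation, of which ARPS is a fast variant, can be sketched with an exhaustive-search baseline (illustrative; the paper's algorithm searches an adaptive rood pattern instead of the full window):

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block-matching motion estimation with a SAD criterion.
    Returns a dict mapping each block's top-left corner to its (dy, dx)
    motion vector into the reference frame."""
    h, w = ref.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - target).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors

# A frame shifted by (2, 1) should yield a motion vector of (2, 1)
rng = np.random.default_rng(0)
ref = rng.random((16, 16))
cur = np.roll(ref, shift=(-2, -1), axis=(0, 1))
vectors = block_match(ref, cur)
```

ARPS reaches essentially the same vectors while evaluating only a small rood-shaped subset of the candidate positions, which is where its speed advantage comes from.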
Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad
2016-05-17
A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted from the CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile targets. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient for a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which are dependent on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target is dependent on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. A motion algorithm has been developed to extract three parameters, the length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes and high contrast relative to surrounding tissues and that move in a nearly regular motion pattern
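The dependence of apparent length on true length and motion amplitude can be illustrated with a toy 1D forward model: time-averaging a moving rectangular CT-number profile over a sinusoidal cycle (a hypothetical illustration of the blurring effect, not the authors' inversion algorithm):

```python
import numpy as np

def blurred_profile(length, amplitude, ct_number,
                    n_phase=400, grid=np.linspace(-40, 40, 801)):
    """Time-averaged 1D profile of a target of the given length (mm)
    moving sinusoidally with the given amplitude (mm): a toy forward
    model of the motion blurring seen in CBCT images."""
    profile = np.zeros_like(grid)
    for t in np.linspace(0, 2 * np.pi, n_phase, endpoint=False):
        center = amplitude * np.sin(t)
        profile += ct_number * (np.abs(grid - center) <= length / 2)
    return profile / n_phase

p = blurred_profile(length=20.0, amplitude=10.0, ct_number=100.0)
apparent = (p > 1e-6).sum() * 0.1   # grid spacing is 0.1 mm
# apparent length approaches length + 2*amplitude as the target sweeps,
# while the plateau keeps the stationary CT number and the edges blur
```

Inverting relations like this one, apparent length and edge gradient as functions of true length, CT number and amplitude, is the core idea of the extraction algorithm described above.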
Articulated Human Motion Tracking Using Sequential Immune Genetic Algorithm
Directory of Open Access Journals (Sweden)
Yi Li
2013-01-01
Full Text Available We formulate human motion tracking as a high-dimensional constrained optimization problem. A novel generative method is proposed for human motion tracking in the framework of evolutionary computation. The main contribution is that we introduce the immune genetic algorithm (IGA) for pose optimization in the latent space of human motion. Firstly, we perform human motion analysis in the learnt latent space of human motion. As the latent space is low dimensional and contains the prior knowledge of human motion, it makes pose analysis more efficient and accurate. Then, in the search strategy, we apply IGA for pose optimization. Compared with the genetic algorithm and other evolutionary methods, its main advantage is the ability to use the prior knowledge of human motion. We design an IGA-based method to estimate human pose from static images for initialization of motion tracking. We also propose a sequential IGA (S-IGA) algorithm for motion tracking by incorporating temporal continuity information into the traditional IGA. Experimental results on videos of different motion types show that our IGA-based pose estimation method can be used for initialization of motion tracking. The S-IGA-based motion tracking method can achieve accurate and stable tracking of 3D human motion.
Benchmarking motion planning algorithms for bin-picking applications
DEFF Research Database (Denmark)
Iversen, Thomas Fridolin; Ellekilde, Lars-Peter
2017-01-01
Purpose For robot motion planning there exists a large number of different algorithms, each appropriate for a certain domain, and the right choice of planner depends on the specific use case. The purpose of this paper is to consider the application of bin picking and benchmark a set of motion...... planning algorithms to identify which are most suited in the given context. Design/methodology/approach The paper presents a selection of motion planning algorithms and defines benchmarks based on three different bin-picking scenarios. The evaluation is done based on a fixed set of tasks, which are planned...... and executed on a real and a simulated robot. Findings The benchmarking shows a clear difference between the planners and generally indicates that algorithms integrating optimization, despite longer planning time, perform better due to a faster execution. Originality/value The originality of this work lies...
Dikbas, Salih; Altunbasak, Yucel
2013-08-01
In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) to reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality-interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation using forward and backward MVs is applied by mixing both elegantly. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and smoothness constraint optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
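The bidirectional compensation idea, shifting the previous frame forward along half the motion vector and the next frame backward, can be sketched for a single global motion vector (the paper uses dense per-block fields and a more elaborate blend):

```python
import numpy as np

def interpolate_frame(prev, nxt, mv):
    """Bidirectional motion-compensated interpolation of the mid-frame:
    shift the previous frame half-way along the motion vector, shift the
    next frame half-way back, and average the two predictions."""
    dy, dx = mv
    fwd = np.roll(prev, shift=(dy // 2, dx // 2), axis=(0, 1))
    bwd = np.roll(nxt, shift=(-dy // 2, -dx // 2), axis=(0, 1))
    return 0.5 * (fwd + bwd)

rng = np.random.default_rng(0)
prev = rng.random((16, 16))
nxt = np.roll(prev, shift=(4, 2), axis=(0, 1))  # global motion of (4, 2)
mid = interpolate_frame(prev, nxt, (4, 2))
true_mid = np.roll(prev, shift=(2, 1), axis=(0, 1))
```

With a correct motion vector the forward and backward predictions agree, which is why true-motion estimation (rather than coding-oriented motion estimation) matters for interpolation quality.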
Evaluating and comparing algorithms for respiratory motion prediction
International Nuclear Information System (INIS)
Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A
2013-01-01
In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm—which is one of the algorithms currently used in the CyberKnife—is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient
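The nLMS baseline evaluated here is straightforward to sketch: a normalized least-mean-squares filter adapted to predict a breathing-like trace a fixed horizon ahead (tap count, step size, and horizon below are illustrative assumptions):

```python
import numpy as np

def nlms_predict(signal, horizon=5, taps=10, mu=0.5, eps=1e-6):
    """Normalized LMS prediction of a motion trace `horizon` samples
    ahead. Shown in batch form for brevity: the weight update at time n
    uses the target sample once it has arrived."""
    w = np.zeros(taps)
    pred = np.zeros_like(signal)
    for n in range(taps, len(signal) - horizon):
        x = signal[n - taps:n][::-1]       # most recent samples first
        pred[n + horizon] = w @ x
        e = signal[n + horizon] - pred[n + horizon]
        w += mu * e * x / (x @ x + eps)    # normalized gradient step
    return pred

t = np.arange(0, 60, 0.1)
trace = np.sin(2 * np.pi * 0.25 * t)       # 0.25 Hz breathing-like trace
pred = nlms_predict(trace)
rms_err = np.sqrt(np.mean((pred[300:] - trace[300:]) ** 2))
rms_raw = np.sqrt(np.mean(trace[300:] ** 2))
```

On a clean sinusoid even this simple predictor does well; the study's point is that on real, irregular patient traces it is outperformed by the wLMS, SVRpred, and MULIN methods.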
Hand based visual intent recognition algorithm for wheelchair motion
CSIR Research Space (South Africa)
Luhandjula, T
2010-05-01
Full Text Available This paper describes an algorithm for a visual human-machine interface that infers a person’s intention from the motion of the hand. Work in progress shows a proof of concept tested on static images. The context for which this solution is intended...
New inverse synthetic aperture radar algorithm for translational motion compensation
Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.
1991-10-01
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
Evaluation of feature detection algorithms for structure from motion
CSIR Research Space (South Africa)
Govender, N
2009-11-01
Full Text Available Structure from motion is a widely-used technique in computer vision to perform 3D reconstruction. The 3D...
A batch Algorithm for Implicit Non-Rigid Shape and Motion Recovery
DEFF Research Database (Denmark)
Bartoli, Adrien; Olsen, Søren Ingvor
2005-01-01
The recovery of 3D shape and camera motion for non-rigid scenes from single-camera video footage is a very important problem in computer vision. The low-rank shape model consists in regarding the deformations as linear combinations of basis shapes. Most algorithms for reconstructing the parameters...... of this model along with camera motion are based on three main steps. Given point tracks and the rank, or equivalently the number of basis shapes, they factorize a measurement matrix containing all point tracks, from which the camera motion and basis shapes are extracted and refined in a bundle adjustment...
Canonical algorithms for numerical integration of charged particle motion equations
Efimov, I. N.; Morozov, E. A.; Morozova, A. R.
2017-02-01
A technique for numerically integrating the equations of charged particle motion in a magnetic field is considered. It is based on canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting-error accumulation. The integration algorithms contain the minimum possible amount of arithmetic and can be used to design accelerators and devices of electron and ion optics.
Preliminary study on helical CT algorithms for patient motion estimation and compensation
International Nuclear Information System (INIS)
Wang, G.; Vannier, M.W.
1995-01-01
Helical computed tomography (helical/spiral CT) has replaced conventional CT in many clinical applications. In current helical CT, a patient is assumed to be rigid and motionless during scanning and planar projection sets are produced from raw data via longitudinal interpolation. However, rigid patient motion is a problem in some cases (such as in skull base and temporal bone imaging). Motion artifacts thus generated in reconstructed images can prevent accurate diagnosis. Modeling a uniform translational movement, the authors address how patient motion is ascertained and how it may be compensated. First, mismatch between adjacent fan-beam projections of the same orientation is determined via classical correlation, which is approximately proportional to the patient displacement projected onto an axis orthogonal to the central ray of the involved fan-beam. Then, the patient motion vector (the patient displacement per gantry rotation) is estimated from its projections using a least-square-root method. To suppress motion artifacts, adaptive interpolation algorithms are developed that synthesize full-scan and half-scan planar projection data sets, respectively. In the adaptive scheme, the interpolation is performed along inclined paths dependent upon the patient motion vector. The simulation results show that the patient motion vector can be accurately and reliably estimated using their correlation and least-square-root algorithm, patient motion artifacts can be effectively suppressed via adaptive interpolation, and adaptive half-scan interpolation is advantageous compared with its full-scan counterpart in terms of high contrast image resolution
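The correlation step, estimating the mismatch between adjacent fan-beam projections of the same orientation, can be sketched in 1D (toy profiles; the paper works with actual fan-beam projection data):

```python
import numpy as np

def estimate_shift(p1, p2):
    """Estimate the displacement between two 1D projections of the same
    orientation by locating the peak of their cross-correlation."""
    n = len(p1)
    corr = np.correlate(p2, p1, mode="full")
    return np.argmax(corr) - (n - 1)   # lag of the correlation peak

x = np.linspace(-10, 10, 201)
proj = np.exp(-x ** 2)                 # toy projection profile
shifted = np.roll(proj, 7)             # "patient" moved by 7 samples
```

Repeating this for projections at several orientations yields the per-axis displacement projections from which the least-square-root step recovers the full motion vector.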
Algorithm for generating a Brownian motion on a sphere
International Nuclear Information System (INIS)
Carlsson, Tobias; Elvingson, Christer; Ekholm, Tobias
2010-01-01
We present a new algorithm for generation of a random walk on a two-dimensional sphere. The algorithm is obtained by viewing the 2-sphere as the equator in the 3-sphere surrounded by an infinitesimally thin band with boundary which reflects Brownian particles and then applying known effective methods for generating Brownian motion on the 3-sphere. To test the method, the diffusion coefficient was calculated in computer simulations using the new algorithm and, for comparison, also using a commonly used method in which the particle takes a Brownian step in the tangent plane to the 2-sphere and is then projected back to the spherical surface. The two methods are in good agreement for short time steps, while the method presented in this paper continues to give good results also for larger time steps, when the alternative method becomes unstable.
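The comparison method described here, a Gaussian step in the tangent plane followed by projection back to the sphere, can be sketched directly (the paper's own 3-sphere band construction is more involved):

```python
import numpy as np

def brownian_step_sphere(p, dt, rng):
    """One Brownian step on the unit 2-sphere by the tangent-plane
    method: take a Gaussian step in the tangent plane at p, then
    project the result back onto the spherical surface."""
    # two orthonormal tangent vectors at p (reference axis chosen to
    # avoid degeneracy when p is near the x-axis)
    a = np.array([1.0, 0.0, 0.0]) if abs(p[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(p, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(p, t1)
    step = np.sqrt(dt) * rng.normal(size=2)
    q = p + step[0] * t1 + step[1] * t2
    return q / np.linalg.norm(q)       # project back to the sphere

rng = np.random.default_rng(0)
p = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    p = brownian_step_sphere(p, 1e-4, rng)
```

As the abstract notes, this projection scheme is accurate for short time steps but degrades for larger ones, which is the regime where the paper's reflecting-band construction keeps working.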
A Fast Algorithm to Simulate Droplet Motions in Oil/Water Two Phase Flow
Zhang, Tao
2017-06-09
To improve research methods in the petroleum industry, we develop a fast algorithm to simulate droplet motions in oil and water two-phase flow, using a phase field model to describe the phase distribution in the flow process. An efficient partial differential equation solver, the Shift-Matrix method, is applied here to speed up the calculation coded in high-level languages, i.e., Matlab and R. An analytical solution of the order parameter is derived to define the initial condition of the phase distribution. The upwind scheme is applied in our algorithm to make it energy-decay stable, which results in a fast calculation speed. To make it clearer and more understandable, we provide the specific code for forming the coefficient matrix used in the Shift-Matrix method. Our algorithm is compared with other methods at different scales, including the Front Tracking and VOSET methods at the macroscopic scale and the LBM method using the RK model at the mesoscopic scale. In addition, we compare the result of droplet motion under gravity using our algorithm with an empirical formula commonly used in industry. The results prove the high efficiency and robustness of our algorithm, and it is then used to simulate the motions of multiple droplets under gravity and cross-direction forces, which is more practical in industry and can be extended to wider applications.
Watson, Robert A
2014-08-01
To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
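The Lempel-Ziv complexity metric used for thresholding can be sketched with the classic LZ76 phrase-counting parse (the study's preprocessing of inertial data into a symbolic time series is not reproduced here):

```python
def lz_complexity(s):
    """Lempel-Ziv (LZ76) complexity: the number of phrases produced by
    the classic parsing, where each new phrase is the shortest substring
    not already seen in the preceding text. Higher values indicate less
    predictable symbol sequences."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while s[i:i+l] already occurs earlier
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

print(lz_complexity("0001101001000101"))  # → 6 (classic textbook example)
```

An expert/novice split then thresholds this value over each symbolized hand-motion series; the optimal threshold itself must be fitted to labeled data, as in the study.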
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need of video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered as first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.
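The fast three-step search treated in the book can be sketched in software before considering its VLSI mapping. A toy numpy version with a smooth synthetic frame (block size, frame content, and the test displacement are illustrative assumptions):

```python
import numpy as np

def sad(ref, cur, bx, by, dx, dy, B):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy); inf if out of frame."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > w or y + B > h:
        return np.inf
    return np.abs(cur[by:by+B, bx:bx+B] - ref[y:y+B, x:x+B]).sum()

def three_step_search(ref, cur, bx, by, B=8):
    """Classic three-step search: evaluate a 3x3 candidate pattern whose
    stride halves each round (4, 2, 1), re-centering on the best match."""
    mx = my = 0
    for step in (4, 2, 1):
        cands = [(mx + dx, my + dy) for dy in (-step, 0, step)
                                    for dx in (-step, 0, step)]
        mx, my = min(cands, key=lambda d: sad(ref, cur, bx, by, d[0], d[1], B))
    return mx, my

# smooth test frame and a copy displaced by (3, -2)
yy, xx = np.mgrid[0:32, 0:32]
ref = np.exp(-((xx - 16.0)**2 + (yy - 16.0)**2) / 50.0)
cur = np.roll(np.roll(ref, 2, axis=0), -3, axis=1)   # cur[y, x] = ref[y-2, x+3]
print(three_step_search(ref, cur, 12, 12))  # → (3, -2)
```

The hardware variants in the book (successive elimination, one-bit transforms, pixel truncation) reduce the cost of exactly this SAD evaluation loop.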
Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim
2011-01-01
Recently, radiosurgical treatment of cardiac arrhythmia, especially atrial fibrillation, has been proposed. Using the CyberKnife, focussed radiation will be used to create ablation lines on the beating heart to block unwanted electrical activity. Since this procedure requires high accuracy, the inevitable latency of the system (i.e., the robotic manipulator following the motion of the heart) has to be compensated for. We examine the applicability of prediction algorithms developed for respiratory motion prediction to the prediction of pulsatory motion. We evaluated the MULIN, nLMS, wLMS, SVRpred and EKF algorithms. The test data used has been recorded using external infrared position sensors, 3D ultrasound and the NavX catheter systems. With this data, we have shown that the error from latency can be reduced by at least 10% and as much as 75% (44% average), depending on the type of signal. It has also been shown that, although the SVRpred algorithm was successful in most cases, it was outperformed by the simple nLMS algorithm, the EKF or the wLMS algorithm in a number of cases. We have shown that prediction of cardiac motion is possible and that the algorithms known from respiratory motion prediction are applicable. Since pulsation is more regular than respiration, more research will have to be done to improve frequency-tracking algorithms, like the EKF method, which performed better than expected from their behaviour on respiratory motion traces.
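The nLMS predictor, the simple baseline that often won, can be sketched for a periodic, respiration-like trace. Tap count, step size, horizon, and the synthetic signal below are illustrative assumptions, not the study's settings:

```python
import numpy as np

def nlms_predict(x, horizon, M=10, mu=0.5, eps=1e-6):
    """Normalized LMS predictor: estimate x[t + horizon] as a linear
    combination of the last M samples. Weights are adapted using the
    error of the prediction made `horizon` steps earlier, so only
    already-observed samples are used."""
    w = np.zeros(M)
    pred = np.zeros(len(x))
    for t in range(M + horizon, len(x)):
        # adapt on the now-verifiable past prediction
        u_old = x[t - horizon - M:t - horizon][::-1]
        e = x[t] - w @ u_old
        w += mu * e * u_old / (eps + u_old @ u_old)
        # predict `horizon` steps ahead from the newest window
        u = x[t - M:t][::-1]
        if t + horizon < len(x):
            pred[t + horizon] = w @ u
    return pred

t = np.arange(2000)
sig = np.sin(2 * np.pi * t / 40.0)          # periodic, pulsation-like trace
p = nlms_predict(sig, horizon=5)
err = np.sqrt(np.mean((p[1000:] - sig[1000:])**2))
print(err < 0.05)  # → True after convergence
```

On clean periodic signals a short adaptive linear filter like this is hard to beat, which is consistent with the study's finding that nLMS was competitive with SVRpred.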
Cohesive Motion Control Algorithm for Formation of Multiple Autonomous Agents
Directory of Open Access Journals (Sweden)
Debabrata Atta
2010-01-01
Full Text Available This paper presents a motion control strategy for a rigid and constraint-consistent formation that can be modeled by a directed graph in which each vertex represents an individual agent's kinematics and each directed edge represents a distance constraint maintained by an agent, called the follower, to its neighbouring agent. A rigid and constraint-consistent graph is called a persistent graph. A persistent graph is minimally persistent if it is persistent and no edge can be removed without losing its persistence. An acyclic (free of cycles in its sensing pattern) minimally persistent graph of Leader-Follower structure has been considered here, which can be constructed from an initial Leader-Follower seed (an initial graph with two vertices, the Leader and the First Follower, and one edge between them directed towards the Leader) by a Henneberg sequence (a procedure of growing a graph containing only vertex additions). A set of nonlinear optimization-based decentralized control laws for mobile autonomous point agents in the two-dimensional plane has been proposed. An infinitesimal deviation in formation shape created by continuous motion of the Leader is compensated by corresponding continuous motion of the other agents, fulfilling the shortest-path criteria.
Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging
Energy Technology Data Exchange (ETDEWEB)
Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.
2011-07-01
We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as a foundation for reconstruction to reduce noise and cone-beam artifact, (b) it uses a view weight in the back-projection process to reduce motion artifact. To confirm the improvement of our proposed algorithm over other existing algorithms, such as the Feldkamp-Davis-Kress (FDK) or SPS algorithm, we compared motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm, and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)
Joint model of motion and anatomy for PET image reconstruction
International Nuclear Information System (INIS)
Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama
2007-01-01
Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem
Ground Motion Models for Future Linear Colliders
International Nuclear Information System (INIS)
Seryi, Andrei
2000-01-01
Optimization of the parameters of a future linear collider requires comprehensive models of ground motion. Both general models of ground motion and specific models of the particular site and local conditions are essential. Existing models are not completely adequate, either because they are too general, or because they omit important peculiarities of ground motion. The model considered in this paper is based on recent ground motion measurements performed at SLAC and at other accelerator laboratories, as well as on historical data. The issues to be studied for the models to become more predictive are also discussed
Multiagent scheduling models and algorithms
Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur
2014-01-01
This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.
Hardware Implementation of Diamond Search Algorithm for Motion Estimation and Object Tracking
International Nuclear Information System (INIS)
Hashimaa, S.M.; Mahmoud, I.I.; Elazm, A.A.
2009-01-01
Object tracking is a very important task in computer vision. Fast search algorithms have emerged as an important technique to achieve real-time tracking results. To enhance the performance of these algorithms, we advocate their hardware implementation. Diamond search block matching motion estimation has been proposed recently to reduce the complexity of motion estimation. In this paper we selected the diamond search (DS) algorithm for implementation using an FPGA, owing to its fundamental role in all fast search patterns. The proposed architecture is simulated and synthesized using the Xilinx and ModelSim software tools. The results agree with the algorithm's implementation in the Matlab environment.
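The control flow that such an architecture implements, the large diamond pattern iterated until its best point is the center, followed by one small diamond refinement, can be sketched in software. Here a toy quadratic stands in for the block-matching SAD cost (an illustrative sketch, not the paper's hardware design):

```python
# large (LDSP) and small (SDSP) diamond search patterns, center first
LDSP = [(0, 0), (0, -2), (1, -1), (2, 0), (1, 1), (0, 2), (-1, 1), (-2, 0), (-1, -1)]
SDSP = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]

def diamond_search(cost, max_iter=32):
    """Diamond search over a cost function cost(dx, dy): repeat the
    large diamond until its best point is the center, then refine once
    with the small diamond. In a real encoder, cost is the SAD of the
    candidate block."""
    cx = cy = 0
    for _ in range(max_iter):
        best = min(LDSP, key=lambda d: cost(cx + d[0], cy + d[1]))
        if best == (0, 0):
            break
        cx, cy = cx + best[0], cy + best[1]
    best = min(SDSP, key=lambda d: cost(cx + d[0], cy + d[1]))
    return cx + best[0], cy + best[1]

# toy cost surface with its minimum at motion vector (5, -3)
print(diamond_search(lambda x, y: (x - 5)**2 + (y + 3)**2))  # → (5, -3)
```

The fixed, data-independent candidate patterns are what make the algorithm attractive for an FPGA pipeline.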
A Finite State Machine Approach to Algorithmic Lateral Inhibition for Real-Time Motion Detection †
Directory of Open Access Journals (Sweden)
María T. López
2018-05-01
Full Text Available Many researchers have explored the relationship between recurrent neural networks and finite state machines. Finite state machines constitute the best-characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The neurally-inspired lateral inhibition method, and its application to motion detection tasks, have been successfully implemented in recent years. In this paper, control knowledge of the algorithmic lateral inhibition (ALI) method is described and applied by means of finite state machines, in which the state space is constituted from the set of distinguishable cases of accumulated charge in a local memory. The article describes an ALI implementation for a motion detection task. For the implementation, we have chosen to use one of the members of the 16-nm Kintex UltraScale+ family of Xilinx FPGAs. FPGAs provide the necessary accuracy, resolution, and precision to run neural algorithms alongside current sensor technologies. The results offered in this paper demonstrate that this implementation provides accurate object tracking performance on several datasets, obtaining a high F-score value (0.86) for the most complex sequence used. Moreover, it outperforms implementations of a complete ALI algorithm and a simplified version of the ALI algorithm, named "accumulative computation", which was run about ten years ago, now reaching real-time processing times that were simply not achievable at that time for ALI.
Parallel Algorithms for Model Checking
van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri
2017-01-01
Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph
Motion sickness: a negative reinforcement model.
Bowins, Brad
2010-01-15
Theories pertaining to the "why" of motion sickness are in short supply relative to those detailing the "how." Considering the profoundly disturbing and dysfunctional symptoms of motion sickness, it is difficult to conceive of why this condition is so strongly biologically based in humans and most other mammalian and primate species. It is posited that motion sickness evolved as a potent negative reinforcement system designed to terminate motion involving sensory conflict or postural instability. During our evolution and that of many other species, motion of this type would have impaired evolutionary fitness via injury and/or signaling weakness and vulnerability to predators. The symptoms of motion sickness strongly motivate the individual to terminate the offending motion by early avoidance, cessation of movement, or removal of oneself from the source. The motion sickness negative reinforcement mechanism functions much like pain to strongly motivate evolutionary fitness preserving behavior. Alternative why theories focusing on the elimination of neurotoxins and the discouragement of motion programs yielding vestibular conflict suffer from several problems, foremost that neither can account for the rarity of motion sickness in infants and toddlers. The negative reinforcement model proposed here readily accounts for the absence of motion sickness in infants and toddlers, in that providing strong motivation to terminate aberrant motion does not make sense until a child is old enough to act on this motivation.
Energy Technology Data Exchange (ETDEWEB)
Ali, I; Algan, O; Ahmad, S [University of Oklahoma Health Sciences, Oklahoma City, OK (United States); Alsbou, N [University of Central Oklahoma, Edmond, OK (United States)
2016-06-15
Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that consider motion artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation is developed in which patient motion is considered at the stage of treatment planning. First, optimal dose distributions are calculated for the stationary target volume, where the dose distributions are optimized considering intensity-modulated radiation therapy (IMRT). Second, a convolution kernel is produced from the best-fitting curve which matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four dimensions. This algorithm is tested with measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions are tested with the gamma index, with a passing rate of >95% considering 3% dose-difference and 3 mm distance-to-agreement. If the dose delivery per beam takes place over several respiratory cycles, then the spread-out of the dose distributions depends only on the motion amplitude and is not affected by motion frequency and phase. This algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm is developed to optimize dose in 4D. Besides IMRT, which provides optimal dose coverage for a stationary target, it extends dose optimization to 4D by considering target motion. This algorithm provides an alternative to motion management
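The forward half of this model, blurring a static dose profile by convolution with the motion kernel (the position histogram of the sinusoidal trajectory), can be sketched in 1D; it also reproduces the reported amplitude-only dependence. Grid spacing, amplitudes, and the flat dose profile are illustrative assumptions:

```python
import numpy as np

def motion_kernel(amplitude, freq, phase, grid):
    """Position probability density of sinusoidal motion sampled over
    many whole cycles; this is the convolution kernel that blurs a
    static dose profile (sketch of the paper's forward model)."""
    t = np.linspace(0.0, 100.0, 200001)          # 100 s covers whole cycles
    z = amplitude * np.sin(2 * np.pi * freq * t + phase)
    k, _ = np.histogram(z, bins=len(grid), range=(grid[0] - 0.5, grid[-1] + 0.5))
    return k / k.sum()

grid = np.arange(-30, 31)                        # 1 mm spacing
static = np.where(np.abs(grid) <= 10, 1.0, 0.0)  # flat dose over a 21 mm target
k1 = motion_kernel(5.0, 0.25, 0.0, grid)
k2 = motion_kernel(5.0, 0.40, 1.3, grid)         # different frequency and phase
blur1 = np.convolve(static, k1, mode="same")
blur2 = np.convolve(static, k2, mode="same")
# spread depends on amplitude only, not on frequency or phase:
print(np.max(np.abs(blur1 - blur2)) < 0.01)  # → True
```

The planning step in the abstract runs this relation in reverse, deconvolving the kernel from the desired static-target dose to obtain the 4D-optimized distribution.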
Earthquake source model using strong motion displacement
Indian Academy of Sciences (India)
The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...
Conditional shape models for cardiac motion estimation
DEFF Research Database (Denmark)
Metz, Coert; Baka, Nora; Kirisli, Hortense
2010-01-01
We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...
Model-Based Motion Tracking of Infants
DEFF Research Database (Denmark)
Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo
2014-01-01
Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants, using computer vision. Most of these studies … are based on 2D images, but few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants and we show that the extension improves the tracking results. A 3D model is constructed …
Modeling repetitive motions using structured light.
Xu, Yi; Aliaga, Daniel G
2010-01-01
Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of the dynamic object repeat often during a sequence (but not necessarily periodically), we call it a repetitive motion. There are many objects, such as toys, machines, and humans, that undergo repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but using a different set of parameters, thus providing more information about each motion state. This enables robust acquisition of dense 3D information that would otherwise be difficult to obtain for objects with repetitive motions, using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted into a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.
Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences
Directory of Open Access Journals (Sweden)
Guo Bao-long
2004-09-01
Full Text Available Motion estimation and compensation techniques are widely used for video coding applications, but real-time motion estimation is not easily achieved due to its enormous computational load. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computational complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, accurate search is achieved because the small square search pattern is used. It has a best-case scenario of only 9 search points, which is 4 search points fewer than the diamond search algorithm. Simulation results show that, compared with previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, and also produces close performance in terms of motion compensation errors.
Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue
Directory of Open Access Journals (Sweden)
Chih-Feng Chao
2015-01-01
Full Text Available Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio, which renders traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.
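A generic firefly optimizer of the kind the paper adapts to block matching can be sketched as follows; here a toy quadratic stands in for the matching cost, and all parameter values, the continuous search space, and the cooling schedule are illustrative assumptions, not the authors' exact variant:

```python
import numpy as np

def firefly_minimize(f, bounds, n=15, iters=60, beta0=1.0, gamma=0.01,
                     alpha=0.3, seed=0):
    """Minimal firefly algorithm: every firefly moves toward each
    brighter (lower-cost) one with attractiveness beta0*exp(-gamma*r^2),
    plus a shrinking random step."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    for it in range(iters):
        cost = np.array([f(x) for x in X])   # brightness ranking (held fixed per sweep)
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:        # j is brighter: i moves toward j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] = X[i] + beta * (X[j] - X[i]) \
                         + alpha * 0.97 ** it * rng.normal(size=len(lo))
        X = np.clip(X, lo, hi)
    cost = np.array([f(x) for x in X])
    return X[np.argmin(cost)]

best = firefly_minimize(lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2,
                        bounds=[(-8, 8), (-8, 8)])
print(best)  # close to [2, -1]
```

In the motion estimation setting, `f` would be the block dissimilarity at a candidate displacement, which is why far fewer cost evaluations are needed than in a full search.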
State Generation Method for Humanoid Motion Planning Based on Genetic Algorithm
Directory of Open Access Journals (Sweden)
Xuyang Wang
2012-05-01
Full Text Available A new approach to generating the original motion data for humanoid motion planning is presented in this paper, and a state generator is developed based on a genetic algorithm, which enables users to generate various motion states without using any reference motion data. By specifying various types of constraints, such as configuration constraints and contact constraints, the state generator can generate stable states that satisfy the constraint conditions for humanoid robots. To deal with the multiple constraints and inverse kinematics, state generation is finally simplified into a problem of optimization and search. In our method, we introduce a convenient mathematical representation for the constraints involved in the state generator, and solve the optimization problem with the genetic algorithm to acquire a desired state. To demonstrate the effectiveness and advantages of the method, a number of motion states are generated according to the requirements of the motion.
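The optimization-and-search formulation can be illustrated with a toy genetic algorithm that evolves the joint angles of a planar 2-link arm until an end-effector position constraint is met. The 2-link model and all GA parameters are illustrative assumptions; the paper's humanoid constraints are far richer:

```python
import numpy as np

def end_effector(Q, L1=1.0, L2=1.0):
    """Forward kinematics of a planar 2-link arm for joint angles Q, shape (n, 2)."""
    x = L1 * np.cos(Q[:, 0]) + L2 * np.cos(Q[:, 0] + Q[:, 1])
    y = L1 * np.sin(Q[:, 0]) + L2 * np.sin(Q[:, 0] + Q[:, 1])
    return np.stack([x, y], axis=1)

def ga_state(target, pop=60, gens=120, seed=1):
    """Evolve joint angles until the end effector satisfies a position
    constraint: tournament selection, arithmetic crossover, Gaussian
    mutation, and elitism."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-np.pi, np.pi, (pop, 2))
    for _ in range(gens):
        err = np.linalg.norm(end_effector(P) - target, axis=1)   # constraint violation
        elite = P[np.argmin(err)].copy()
        a, b = rng.integers(0, pop, (2, pop))
        parents = np.where((err[a] < err[b])[:, None], P[a], P[b])  # tournament
        mates = parents[rng.permutation(pop)]
        w = rng.uniform(size=(pop, 1))
        P = w * parents + (1 - w) * mates + rng.normal(0.0, 0.05, (pop, 2))
        P[0] = elite                                             # keep the best state
    err = np.linalg.norm(end_effector(P) - target, axis=1)
    return P[np.argmin(err)], err.min()

q, e = ga_state(np.array([1.2, 0.6]))
print(q, e)   # a joint configuration whose residual constraint error e is small
```

As in the paper, the fitness is simply the constraint violation, so no reference motion data is needed.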
A human motion model based on maps for navigation systems
Directory of Open Access Journals (Sweden)
Kaiser Susanna
2011-01-01
Full Text Available Abstract Foot-mounted indoor positioning systems work remarkably well when the localization algorithm additionally uses knowledge of floor-plans. Walls and other structures naturally restrict the motion of pedestrians. No pedestrian can walk through walls or jump from one floor to another when considering a building with different floor levels. By incorporating known floor-plans in sequential Bayesian estimation processes such as particle filters (PFs), long-term error stability can be achieved as long as the map is sufficiently accurate and the environment sufficiently constrains the pedestrians' motion. In this article, a new motion model based on maps and floor-plans is introduced that is capable of weighting the possible headings of the pedestrian as a function of the local environment. The motion model is derived from a diffusion algorithm that makes use of the principle of a source effusing gas, and it is used in the weighting step of a PF implementation. The diffusion algorithm is capable of including floor-plans as well as maps with areas of different degrees of accessibility. The motion model represents the probability density function of possible headings restricted by maps and floor-plans more effectively than a simple binary weighting of particles (i.e., eliminating those that crossed walls and keeping the rest). We will show that the motion model helps obtain better performance in critical navigation scenarios where two or more modes may be competing for some of the time (multi-modal scenarios).
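The binary weighting baseline that the article improves on, eliminating particles whose step crossed a wall and keeping the rest, can be sketched as follows. The toy grid floor-plan and segment sampling are illustrative assumptions:

```python
import numpy as np

# 0 = free space, 1 = wall, on a 1 m grid (toy floor-plan)
floor_plan = np.zeros((10, 10), dtype=int)
floor_plan[5, :8] = 1                      # a wall at y = 5 with a doorway at x >= 8

def crosses_wall(p0, p1, n=20):
    """Check whether the straight step p0 -> p1 passes through a wall
    cell by sampling points along the segment."""
    for s in np.linspace(0.0, 1.0, n):
        x, y = p0 + s * (p1 - p0)
        if floor_plan[int(y), int(x)] == 1:
            return True
    return False

def weight_particles(particles, steps):
    """Binary map weighting: zero weight for particles whose step
    crossed a wall, uniform weight for the rest (assumes at least
    one particle survives)."""
    w = np.array([0.0 if crosses_wall(p, p + d) else 1.0
                  for p, d in zip(particles, steps)])
    return w / w.sum()

particles = np.array([[2.0, 4.5], [2.0, 4.5]])
steps = np.array([[0.0, 1.0],     # walks through the wall at y = 5
                  [0.0, -1.0]])   # stays in free space
print(weight_particles(particles, steps))  # → [0. 1.]
```

The article's diffusion-based model replaces this hard 0/1 decision with a graded heading likelihood derived from the map, which keeps more particle diversity in multi-modal scenarios.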
Analytical guide wire motion algorithm for simulation of endovascular interventions
Konings, M. K.; van de Kraats, E. B.; Alderliesten, T.; Niessen, W. J.
2003-01-01
Performing minimally invasive vascular interventions requires proper training, as a guide wire needs to be manipulated, by the tail, under fluoroscopic guidance. To provide a training environment, the motion of the guide wire inside the human vasculature can be simulated by computer. Such a
Modeling and identification for robot motion control
Kostic, D.; Jager, de A.G.; Steinbuch, M.; Kurfess, T.R.
2004-01-01
This chapter deals with the problems of robot modelling and identification for high-performance model-based motion control. The derivation of robot kinematic and dynamic models is explained. Modelling of friction effects is also discussed. Use of a writing task to establish the correctness of the models
International Nuclear Information System (INIS)
Guo, Li; Li, Pei; Pan, Cong; Cheng, Yuxuan; Ding, Zhihua; Li, Peng; Liao, Rujia; Hu, Weiwei; Chen, Zhong
2016-01-01
The complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both the intensity and phase information. However, due to involuntary bulk tissue motions, complex-valued OCT raw data are processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs) and extracting flow signals. Such a complicated procedure results in massive computational load. To mitigate such a problem, in this work, we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superiority in motion contrast. The feasibility and performance of the proposed CC algorithm is demonstrated using both flow phantom and live animal experiments. (paper)
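The key property of the inter-frame complex correlation, that a global phase offset between frames cancels in its magnitude (so no separate global-phase-fluctuation compensation is needed), can be shown with a toy example. The simulated speckle fields are illustrative; the paper computes the correlation in local windows across B-frames:

```python
import numpy as np

def complex_corr(a, b):
    """Magnitude of the normalized inter-frame complex correlation.
    A global phase factor between the frames cancels in the magnitude.
    Motion contrast is then ~ 1 - |cc|: static tissue stays near 1,
    decorrelated flow drops toward 0."""
    num = np.abs(np.sum(a * np.conj(b)))
    den = np.sqrt(np.sum(np.abs(a)**2) * np.sum(np.abs(b)**2))
    return num / den

rng = np.random.default_rng(0)
static1 = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
static2 = static1 * np.exp(1j * 0.8)   # same tissue, bulk global phase shift
flow1 = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
flow2 = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))  # decorrelated

print(round(complex_corr(static1, static2), 3))   # → 1.0 (static region)
print(complex_corr(flow1, flow2) < 0.2)           # → True (flow region)
```

Because the same quantity serves flow extraction and bulk-shift correction, both can run in one parallel pass, which is the source of the reported efficiency gain.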
Modeling and synthesis of strong ground motion
Indian Academy of Sciences (India)
There have been many developments in modeling techniques, and ... damage life and property in a city or region. ... quake of 26 January 2001 as a case study. ...
Parametric modelling of nonstationary platform deck motions
Digital Repository Service at National Institute of Oceanography (India)
Mandal, S.
with fast Fourier transform spectra and show good agreement. However, the higher order maximum entropy model can be used for better representation of nonstationary motions. This method also reduces long time series of nonstationary offshore data into a few...
Energy Technology Data Exchange (ETDEWEB)
Ali, I; Ahmad, S [University of Oklahoma Health Sciences, Oklahoma City, OK (United States); Alsbou, N [Department of Electrical and Computer Engineering, Ada, OH (United States)
2014-06-01
Purpose: A motion algorithm was developed to extract the actual length, CT numbers and motion amplitude of a mobile target imaged with cone-beam CT (CBCT), retrospectively to image reconstruction. Methods: The motion model considered a mobile target moving with a sinusoidal motion and employed three measurable parameters, the apparent length, CT-number level and gradient of a mobile target obtained from CBCT images, to extract information about the actual length and CT-number value of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup that has three targets of different sizes manufactured from homogeneous tissue-equivalent gel material embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0-20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: This motion algorithm extracted three unknown parameters (length of the target, CT-number level, and motion amplitude) for a mobile target retrospectively to CBCT image reconstruction. The algorithm relates the three unknown parameters to the measurable apparent length, CT-number level and gradient for well-defined mobile targets obtained from CBCT images. The motion model agreed with measured apparent lengths, which were dependent on the actual length of the target and the motion amplitude. The cumulative CT number for a mobile target was dependent on the CT-number level of the stationary target and the motion amplitude. The gradient of the CT distribution of a mobile target is dependent on the stationary CT-number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT-number distributions of mobile targets when the imaging time included several motion cycles. Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy to extract
Algorithm for motion control of an exoskeleton during verticalization
Directory of Open Access Journals (Sweden)
Jatsun Sergey
2016-01-01
Full Text Available This paper considers a lower limb exoskeleton that performs sit-to-stand motion. The work is focused on the control system design. An application of null space projection methods for solving the inverse kinematics problem is discussed. An adaptive multi-input multi-output regulator for the system is presented together with the motivation for that choice. Results of the simulation for different versions of the regulator are shown.
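The null space projection idea above can be sketched for a generic redundant arm: the primary task (end-effector position) is served by the Jacobian pseudoinverse, while a secondary posture task is projected through I − J⁺J so it cannot disturb the primary task. The planar three-link chain, gains, and rest posture below are illustrative assumptions, not the paper's exoskeleton model.

```python
import numpy as np

def ik_step(q, target, link_lengths, q_rest, alpha=0.5, beta=0.1):
    """One damped inverse-kinematics step with a null-space projection term.

    The primary task drives the end effector toward `target`; the secondary
    task (pulling joints toward `q_rest`) is projected onto the Jacobian's
    null space so it cannot disturb the primary task.
    """
    angles = np.cumsum(q)
    # Forward kinematics of a planar serial chain.
    x = np.sum(link_lengths * np.cos(angles))
    y = np.sum(link_lengths * np.sin(angles))
    # Analytic Jacobian of end-effector position w.r.t. joint angles.
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(angles[i:]))
    err = np.asarray(target) - np.array([x, y])
    J_pinv = np.linalg.pinv(J)
    N = np.eye(len(q)) - J_pinv @ J          # null-space projector I - J^+ J
    dq = alpha * J_pinv @ err + beta * N @ (q_rest - q)
    return q + dq

q = np.array([0.3, 0.4, 0.2])                # initial joint angles (rad)
q_rest = np.array([0.0, 0.5, 0.5])           # preferred posture (secondary task)
L = np.array([0.4, 0.4, 0.3])                # link lengths (m)
for _ in range(50):
    q = ik_step(q, (0.5, 0.6), L, q_rest)
```

The projected secondary task lets the regulator shape posture without degrading end-effector tracking, which is the usual motivation for null-space methods in redundant exoskeleton chains.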
Directory of Open Access Journals (Sweden)
Jiaying Du
2018-04-01
Full Text Available Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
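Of the four groups above, the Kalman-filter-based algorithms are the most common; a minimal sketch for a MEMS gyroscope is a two-state filter that integrates the bias-corrected rate and corrects it with a noisy absolute angle (e.g. derived from an accelerometer). The signal model and all noise parameters below are illustrative assumptions, not taken from any of the reviewed papers.

```python
import numpy as np

def kalman_tilt(gyro_rates, accel_angles, dt=0.01,
                q_angle=1e-4, q_bias=1e-6, r=1e-2):
    """Minimal Kalman filter fusing a drifting gyro rate with a noisy
    accelerometer tilt angle. State: [angle, gyro_bias]."""
    x = np.zeros(2)                          # angle, bias estimates
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle -= dt*bias; bias is a random walk
    Q = np.diag([q_angle, q_bias])
    H = np.array([[1.0, 0.0]])               # we only measure the angle
    out = []
    for w, z in zip(gyro_rates, accel_angles):
        # Predict: integrate the bias-corrected gyro rate.
        x = F @ x + np.array([dt * w, 0.0])
        P = F @ P @ F.T + Q
        # Update with the accelerometer-derived angle.
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
true_angle = np.sin(t)
gyro = np.cos(t) + 0.2 + 0.05 * rng.standard_normal(t.size)  # 0.2 rad/s bias
accel = true_angle + 0.1 * rng.standard_normal(t.size)       # noisy reference
est = kalman_tilt(gyro, accel)
```

The filter estimates the gyro bias on the fly, which is why this family handles the drift and temperature effects the review highlights.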
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
SPATIAL MOTION OF THE MAGELLANIC CLOUDS: TIDAL MODELS RULED OUT?
International Nuclear Information System (INIS)
Ruzicka, Adam; Palous, Jan; Theis, Christian
2009-01-01
Recently, Kallivayalil et al. derived new values of the proper motion for the Large and Small Magellanic Clouds (LMC and SMC, respectively). The spatial velocities of both Clouds are unexpectedly higher than their previous values resulting from agreement between the available theoretical models of the Magellanic System and the observations of neutral hydrogen (H I) associated with the LMC and the SMC. Such proper motion estimates are likely to be at odds with the scenarios for creation of the large-scale structures in the Magellanic System suggested so far. We investigated this hypothesis for the pure tidal models, as they were the first ones devised to explain the evolution of the Magellanic System, and tidal stripping is intrinsically involved in every model assuming gravitational interaction. The parameter space for the Milky Way (MW)-LMC-SMC interaction was analyzed by a robust search algorithm (a genetic algorithm) combined with a fast, restricted N-body model of the interaction. Our method extended the known variety of evolutionary scenarios satisfying the observed kinematics and morphology of the Magellanic large-scale structures. Nevertheless, assuming tidal interaction, no satisfactory reproduction of the H I data available for the Magellanic Clouds was achieved with the new proper motions. We conclude that for the proper motion data by Kallivayalil et al., within their 1σ errors, the dynamical evolution of the Magellanic System with the currently accepted total mass of the MW cannot be explained in the framework of pure tidal models. The optimal value for the western component of the LMC proper motion was found to be μ_W(LMC) ≳ -1.3 mas yr⁻¹ in the case of tidal models. This corresponds to a reduction of the Kallivayalil et al. value of μ_W(LMC) by ~40% in magnitude.
Complex fluids modeling and algorithms
Saramito, Pierre
2016-01-01
This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.
Realistic Modeling of Seismic Wave Ground Motion in Beijing City
Ding, Z.; Romanelli, F.; Chen, Y. T.; Panza, G. F.
Algorithms for the calculation of synthetic seismograms in laterally heterogeneous anelastic media have been applied to model the ground motion in Beijing City. The synthetic signals are compared with the few available seismic recordings (1998, Zhangbei earthquake) and with the distribution of observed macroseismic intensity (1976, Tangshan earthquake). The synthetic three-component seismograms have been computed for the Xiji area and Beijing City. The numerical results show that the thick Tertiary and Quaternary sediments are responsible for the severe amplification of the seismic ground motion. Such a result is well correlated with the abnormally high macroseismic intensity zone in the Xiji area associated with the 1976 Tangshan earthquake as well as with the ground motion recorded in Beijing city in the wake of the 1998 Zhangbei earthquake.
Yao, Jianchu; Warren, Steve
2004-01-01
Pulse oximeters are mainstays for acquiring blood oxygen saturation in static environments such as hospital rooms. However, motion artifacts prevent their broad use in wearable, ambulatory environments. To this end, we present a novel algorithm to separate motion artifacts from plethysmographic data gathered by pulse oximeters. This algorithm, based on the Beer-Lambert law, requires photoplethysmographic data acquired at three excitation wavelengths. The algorithm can calculate venous blood oxygen saturation (SvO2) as well as arterial blood oxygen saturation (SaO2). Preliminary results indicate that the extraction of the venous signal, which is assumed to be the most affected by motion, is successful with data acquired from a reflectance-mode sensor.
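The core of such a multi-wavelength approach is that the Beer-Lambert law makes absorbance linear in the chromophore concentrations, so measurements at an extra wavelength overdetermine the system. A minimal sketch of that linear-algebra step, with purely illustrative (non-physiological) extinction coefficients:

```python
import numpy as np

# Hypothetical extinction coefficients for [Hb, HbO2] at three wavelengths
# (illustrative numbers only, not physiological values).
E = np.array([[1.8, 0.3],
              [0.8, 1.0],
              [0.2, 1.5]])

true_c = np.array([0.4, 0.6])        # deoxy / oxy "concentrations" (a.u.)
absorbance = E @ true_c              # Beer-Lambert: A = E c for unit path length

# Three equations, two unknowns: least squares recovers the concentrations;
# the redundancy is what makes artifact separation possible in principle.
c_hat, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
spo2 = c_hat[1] / c_hat.sum()        # oxygen saturation from the estimates
```

The paper's algorithm additionally models the motion-modulated venous compartment; this sketch shows only the underlying overdetermined Beer-Lambert solve.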
Internal Motion Estimation by Internal-external Motion Modeling for Lung Cancer Radiotherapy.
Chen, Haibin; Zhong, Zichun; Yang, Yiwei; Chen, Jiawei; Zhou, Linghong; Zhen, Xin; Gu, Xuejun
2018-02-27
The aim of this study is to develop an internal-external correlation model for internal motion estimation for lung cancer radiotherapy. Deformation vector fields (DVFs) that characterize the internal-external motion are obtained by respectively registering the internal organ meshes and external surface meshes from the 4DCT images via a recently developed local topology preserved non-rigid point matching algorithm. A composite matrix is constructed by combining the estimated internal phasic DVFs with external phasic and directional DVFs. Principal component analysis is then applied to the composite matrix to extract principal motion characteristics and generate model parameters to correlate the internal-external motion. The proposed model is evaluated on a 4D NURBS-based cardiac-torso (NCAT) synthetic phantom and 4DCT images from five lung cancer patients. For tumor tracking, the center of mass errors of the tracked tumor are 0.8(±0.5)mm/0.8(±0.4)mm for synthetic data, and 1.3(±1.0)mm/1.2(±1.2)mm for patient data in intra-fraction/inter-fraction tracking, respectively. For lung tracking, the percent errors of the tracked contours are 0.06(±0.02)/0.07(±0.03) for synthetic data, and 0.06(±0.02)/0.06(±0.02) for patient data in intra-fraction/inter-fraction tracking, respectively. These extensive validations demonstrate the effectiveness and reliability of the proposed model in motion tracking of both the tumor and the lung in lung cancer radiotherapy.
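The principal component step can be sketched with a toy composite matrix: rows are respiratory phases, columns stack internal and external displacement samples, and PCA via SVD extracts the dominant motion mode. The synthetic sinusoidal data below is an illustrative stand-in for real DVFs, not the paper's registration output.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy composite matrix: 10 respiratory phases (rows); columns stack
# internal (50 samples) and external (20 samples) displacements.
phase = np.linspace(0, 2 * np.pi, 10, endpoint=False)
internal = np.outer(np.sin(phase), rng.random(50))
external = np.outer(np.sin(phase + 0.2), rng.random(20))  # slight phase lag
X = np.hstack([internal, external])

# PCA via SVD of the mean-centered composite matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance fraction per motion mode
scores = U * s                    # per-phase coordinates along the modes
```

Because internal and external motion are nearly in phase, one principal mode captures almost all the variance; the model parameters correlating internal to external motion live in the corresponding rows of `Vt`.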
Directory of Open Access Journals (Sweden)
Apurva Samdurkar
2018-06-01
Full Text Available Object tracking is one of the main fields within computer vision. Among the various approaches to object detection and tracking, the background subtraction approach makes detection of the object easier; the proposed block matching algorithm is then applied to the detected object to generate motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard and user-defined video data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search (MDS) algorithm is proposed that uses a small diamond shape search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond shape pattern, based on the point with the minimum cost function; the algorithm ends with the small shape pattern. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach and, finally, the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out using different video data sets containing a single object, and the results are evaluated and compared using evaluation parameters such as average search points per frame and average computation time per frame. The experimental results show that MDS performs better than DS and CDS on average search points and average computation time.
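A simplified sketch of the small-then-large diamond search idea over SAD block matching follows; it illustrates the pattern-growing strategy rather than reproducing the authors' exact MDS implementation, and the synthetic frames are an assumption for the demo.

```python
import numpy as np

SMALL = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
LARGE = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
         (1, 1), (1, -1), (-1, 1), (-1, -1)]

def sad(ref, cur, y, x, dy, dx, b):
    """Sum of absolute differences between the current block and the
    reference block displaced by (dy, dx)."""
    h, w = ref.shape
    if not (0 <= y + dy <= h - b and 0 <= x + dx <= w - b):
        return np.inf
    return float(np.abs(cur[y:y+b, x:x+b]
                        - ref[y+dy:y+dy+b, x+dx:x+dx+b]).sum())

def diamond_search(ref, cur, y, x, b=8, max_iter=32):
    """Start with the small diamond, grow to the large diamond while the
    best match keeps moving, then refine with the small pattern again."""
    dy = dx = 0
    for pattern in (SMALL, LARGE):
        for _ in range(max_iter):
            _, dy2, dx2 = min((sad(ref, cur, y, x, dy+py, dx+px, b),
                               dy+py, dx+px) for py, px in pattern)
            if (dy2, dx2) == (dy, dx):
                break
            dy, dx = dy2, dx2
    _, dy, dx = min((sad(ref, cur, y, x, dy+py, dx+px, b),
                     dy+py, dx+px) for py, px in SMALL)
    return dy, dx

yy, xx = np.mgrid[0:64, 0:64]
ref = 128 + 60 * np.sin(yy / 5.0) * np.cos(xx / 7.0)  # smooth synthetic frame
cur = np.roll(ref, (3, -2), axis=(0, 1))  # content shifted by (+3, -2), so the
                                          # matching ref block is at (-3, +2)
mv = diamond_search(ref, cur, 24, 24)
```

The greedy search visits far fewer candidates than an exhaustive window scan, which is the efficiency argument behind DS, CDS, and the proposed MDS.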
MOSHFIT: algorithms for occlusion-tolerant mean shape and rigid motion from 3D movement data.
Mitchelson, Joel R
2013-09-03
This work addresses the use of 3D point data to measure rigid motions, in the presence of occlusion and without reference to a prior model of relative point locations. This is a problem where cluster-based measurement techniques are used (e.g. for measuring limb movements) and no static calibration trial is available. The same problem arises when performing the task known as roving capture, in which a mobile 3D movement analysis system is moved through a volume with static markers in unknown locations and the ego-motion of the system is required in order to understand biomechanical activity in the environment. To provide a solution for both of these applications, the new concept of a visibility graph is introduced and combined with a generalised Procrustes method adapted from ones used by the biological shape statistics and computer graphics communities. Recent results on shape space manifolds are applied to show sufficient conditions for convergence to a unique solution. Algorithm source code is available and referenced here. Processing speed and rate of convergence are demonstrated using simulated data. Positional and angular accuracy are shown to be equivalent to approaches which require full calibration, to within a small fraction of input resolution. Typical processing times for sub-micron convergence are found to be fractions of a second, so the method is suitable for workflows where there may be time pressure, such as in sports science and clinical analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm
Directory of Open Access Journals (Sweden)
Man Zhang
2017-10-01
Full Text Available Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from declining robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part must be extended, inevitably degrading efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA handles the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By abandoning the image domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
Locust Collective Motion and Its Modeling.
Directory of Open Access Journals (Sweden)
Gil Ariel
2015-12-01
Full Text Available Over the past decade, technological advances in experimental and animal tracking techniques have motivated a renewed theoretical interest in animal collective motion and, in particular, locust swarming. This review offers a comprehensive biological background followed by comparative analysis of recent models of locust collective motion, in particular locust marching, their settings, and underlying assumptions. We describe a wide range of recent modeling and simulation approaches, from discrete agent-based models of self-propelled particles to continuous models of integro-differential equations, aimed at describing and analyzing the fascinating phenomenon of locust collective motion. These modeling efforts have a dual role: The first views locusts as a quintessential example of animal collective motion. As such, they aim at abstraction and coarse-graining, often utilizing the tools of statistical physics. The second, which originates from a more biological perspective, views locust swarming as a scientific problem of its own exceptional merit. The main goal should, thus, be the analysis and prediction of natural swarm dynamics. We discuss the properties of swarm dynamics using the tools of statistical physics, as well as the implications for laboratory experiments and natural swarms. Finally, we stress the importance of a combined-interdisciplinary, biological-theoretical effort in successfully confronting the challenges that locusts pose at both the theoretical and practical levels.
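The discrete self-propelled-particle models mentioned above are typified by the Vicsek model, in which each agent adopts the mean heading of its neighbors plus noise and an ordered marching state emerges at low noise and sufficient density. A minimal sketch (all parameters are illustrative, not fitted to locust data):

```python
import numpy as np

def vicsek_step(pos, theta, rng, L=10.0, r=1.0, v=0.3, eta=0.1):
    """One update of the Vicsek self-propelled-particle model in a
    periodic box of side L: align with neighbors within radius r,
    add noise, then move at constant speed v."""
    # Pairwise separations with periodic (minimum-image) boundaries.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = (d**2).sum(-1) <= r**2            # neighbor mask (includes self)
    # Circular mean of neighbor headings via sin/cos sums.
    sin_m = (neigh * np.sin(theta)[None, :]).sum(1)
    cos_m = (neigh * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(sin_m, cos_m) + eta * rng.uniform(-np.pi, np.pi, len(theta))
    pos = (pos + v * np.stack([np.cos(theta), np.sin(theta)], 1)) % L
    return pos, theta

rng = np.random.default_rng(3)
n = 100
pos = rng.uniform(0, 10, (n, 2))
theta = rng.uniform(-np.pi, np.pi, n)
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, rng)
# Polar order parameter: 1 = perfectly aligned marching, 0 = disordered.
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

The polar order parameter is exactly the coarse-grained statistic the review discusses when connecting agent-based simulations to marching-band experiments.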
Automatic differentiation algorithms in model analysis
Huiskes, M.J.
2002-01-01
Title: Automatic differentiation algorithms in model analysis
Author: M.J. Huiskes
Date: 19 March, 2002
In this thesis automatic differentiation algorithms and derivative-based methods
Modelling the motion of meteors in the Earth's atmosphere
International Nuclear Information System (INIS)
Rodrigues, Hilário
2013-01-01
This work discusses the motion of meteors in the Earth's atmosphere. The equations of motion of the projectile are presented and a simplified numerical approach to solve them is discussed. An algorithm for solving the equations of motion is constructed, and implemented in a very simple way using Excel software. The paper is intended as an example of the application of Newton's laws of motion at undergraduate level. (paper)
Ground Motion Prediction Models for Caucasus Region
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network started in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
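The classical regression step can be sketched on synthetic data: assume a functional form ln PGA = c0 + c1·M + c2·ln(R + R0) + c3·S and recover the coefficients by least squares. The functional form and coefficients below are illustrative assumptions, not the Caucasus model of the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic "recorded" motions from a simple attenuation law:
#   ln PGA = c0 + c1*M + c2*ln(R + 10) + c3*S + eps
c_true = np.array([-2.0, 1.1, -1.4, 0.4])   # hypothetical coefficients
n = 500
M = rng.uniform(4.0, 7.5, n)                # magnitude
R = rng.uniform(5.0, 200.0, n)              # source distance, km
S = rng.integers(0, 2, n).astype(float)     # site class: 0 rock, 1 soil
X = np.column_stack([np.ones(n), M, np.log(R + 10.0), S])
ln_pga = X @ c_true + 0.3 * rng.standard_normal(n)  # 0.3 = aleatory scatter

# Classical regression estimate of the GMPE coefficients.
c_hat, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
```

The site-class column plays the role of the ground-condition term the abstract emphasizes: the same magnitude and distance yield systematically different predicted motion on rock versus soil.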
Directory of Open Access Journals (Sweden)
V. Jayaraj
2010-08-01
Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise, and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used to replace each corrupted pixel based on an estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise densities. It performs both spatial and temporal filtering for removal of the noises in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from the visual point of view and in terms of Peak Signal to Noise Ratio, Mean Square Error, and Image Enhancement Factor.
Directory of Open Access Journals (Sweden)
Ravil’ Kudermetov
2018-02-01
Full Text Available Nowadays multi-core processors are installed in almost every modern workstation, but the effective utilization of these computational resources remains a topical question. In this paper the four-point block one-step integration method is considered, a parallel algorithm for this method is proposed, and a Java implementation of this algorithm is discussed. The effectiveness of the proposed algorithm is demonstrated by way of spacecraft attitude motion simulation. The results of this work can be used for practical simulation of dynamic systems that are described by ordinary differential equations. The results are also applicable to the development and debugging of computer programs that integrate the dynamic and kinematic equations of the angular motion of a rigid body.
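The paper's four-point block method is not reproduced here, but the kind of problem it targets can be sketched with a classical one-step integrator (RK4) applied to the torque-free Euler equations of rigid-body attitude motion; the inertia values and initial spin below are illustrative assumptions.

```python
import numpy as np

I = np.array([3.0, 2.0, 1.0])   # principal moments of inertia (illustrative)

def euler_eqs(w):
    """Torque-free Euler equations for the body-frame angular velocity."""
    return np.array([(I[1] - I[2]) * w[1] * w[2] / I[0],
                     (I[2] - I[0]) * w[2] * w[0] / I[1],
                     (I[0] - I[1]) * w[0] * w[1] / I[2]])

def rk4_step(w, dt):
    """Classical one-step Runge-Kutta integrator; the paper parallelizes a
    four-point block variant of such one-step methods."""
    k1 = euler_eqs(w)
    k2 = euler_eqs(w + 0.5 * dt * k1)
    k3 = euler_eqs(w + 0.5 * dt * k2)
    k4 = euler_eqs(w + dt * k3)
    return w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

w = np.array([0.1, 2.0, 0.1])   # spin near the unstable intermediate axis
L0 = np.linalg.norm(I * w)      # angular momentum magnitude, conserved quantity
for _ in range(10000):
    w = rk4_step(w, 1e-3)
L1 = np.linalg.norm(I * w)
```

Conservation of the angular momentum magnitude over the tumbling trajectory is a convenient correctness check for any such integrator, serial or parallel.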
A priori motion models for four-dimensional reconstruction in gated cardiac SPECT
International Nuclear Information System (INIS)
Lalush, D.S.; Tsui, B.M.W.; Cui, Lin
1996-01-01
We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.
Two-component wind fields over ocean waves using atmospheric lidar and motion estimation algorithms
Mayor, S. D.
2016-02-01
Numerical models, such as large eddy simulations, are capable of providing stunning visualizations of the air-sea interface. One reason for this is the inherent spatial nature of such models. As compute power grows, models are able to provide higher-resolution visualizations over larger domains, revealing intricate details of the interactions of ocean waves and the airflow over them. Spatial observations, on the other hand, which are necessary to validate the simulations, appear to lag behind models. The rough ocean environment of the real world is an additional challenge. One method of providing spatial observations of fluid flow is particle image velocimetry (PIV). PIV has been successfully applied to many problems in engineering and the geosciences. This presentation will show recent research results that demonstrate that a PIV-style approach using pulsed-fiber atmospheric elastic backscatter lidar hardware and wavelet-based optical flow motion estimation software can reveal two-component wind fields over rough ocean surfaces. Namely, a recently-developed compact lidar was deployed for 10 days in March of 2015 in the Eureka, California area. It scanned over the ocean. Imagery reveals that breaking ocean waves provide copious amounts of particulate matter for the lidar to detect and for the motion estimation algorithms to retrieve wind vectors from. The image below shows two examples of results from the experiment. The left panel shows the elastic backscatter intensity (copper shades) under a field of vectors that was retrieved by the wavelet-based optical flow algorithm from two scans that took about 15 s each to acquire. The vectors, which reveal offshore flow toward the NW, were decimated for clarity. The bright aerosol features along the right edge of the sector scan were caused by ocean waves breaking on the beach. The right panel is the result of scanning over the ocean on a day when wave amplitudes ranged from 8-12 feet and whitecaps offshore beyond the
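The abstract's estimator is wavelet-based optical flow; the simpler classic PIV-style technique, phase correlation between two consecutive scans, already illustrates how a pair of aerosol backscatter images yields a displacement vector. The synthetic frames below are an illustrative stand-in for real lidar scans.

```python
import numpy as np

def correlation_shift(a, b):
    """Estimate the integer displacement of frame b relative to frame a by
    locating the peak of their phase correlation surface."""
    f = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)    # cross-power spectrum
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold peaks in the upper half of each axis to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(5)
frame1 = rng.random((128, 128))                  # first backscatter scan
frame2 = np.roll(frame1, (4, -7), axis=(0, 1))   # aerosol field advected by wind
mv = correlation_shift(frame1, frame2)
```

Dividing the scan pair into interrogation windows and applying this per window yields the two-component vector field; wavelet-based optical flow replaces this block-wise estimate with a dense, multiscale one.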
Directory of Open Access Journals (Sweden)
Simon Fong
2015-01-01
Full Text Available Human motion sensing technology has gained tremendous popularity, with practical applications such as video surveillance for security, hand signing, smart homes, and gaming. These applications capture human motions in real time from video sensors, and the data patterns are nonstationary and ever changing. While the hardware technology of such motion sensing devices, as well as their data collection process, has become relatively mature, the computational challenge lies in the real-time analysis of these live feeds. In this paper we argue that traditional data mining methods fall short of accurately analyzing human activity patterns from the sensor data stream. The shortcoming is due to an algorithmic design that is not adaptive to the dynamic changes in gesture motions. The successor of these algorithms, known as data stream mining, is evaluated against traditional data mining through a case of gesture recognition over motion data using Microsoft Kinect sensors. Three different subjects were asked to read three comic strips and to tell the stories in front of the sensor. The data stream contains coordinates of articulation points and various positions of the parts of the human body corresponding to the actions that the user performs. In particular, a novel technique of feature selection using swarm search and accelerated PSO is proposed for enabling fast preprocessing for inducing an improved classification model in real time. Superior results are shown in the experiment that runs on this empirical data stream. The contribution of this paper is a comparative study between traditional and data stream mining algorithms and the incorporation of the novel improved feature selection technique in a scenario where different gesture patterns are to be recognized from streaming sensor data.
Sampling-Based Motion Planning Algorithms for Replanning and Spatial Load Balancing
Energy Technology Data Exchange (ETDEWEB)
Boardman, Beth Leigh [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-12
The common theme of this dissertation is sampling-based motion planning with the two key contributions being in the area of replanning and spatial load balancing for robotic systems. Here, we begin by recalling two sampling-based motion planners: the asymptotically optimal rapidly-exploring random tree (RRT*), and the asymptotically optimal probabilistic roadmap (PRM*). We also provide a brief background on collision cones and the Distributed Reactive Collision Avoidance (DRCA) algorithm. The next four chapters detail novel contributions for motion replanning in environments with unexpected static obstacles, for multi-agent collision avoidance, and spatial load balancing. First, we show improved performance of the RRT* when using the proposed Grandparent-Connection (GP) or Focused-Refinement (FR) algorithms. Next, the Goal Tree algorithm for replanning with unexpected static obstacles is detailed and proven to be asymptotically optimal. A multi-agent collision avoidance problem in obstacle environments is approached via the RRT*, leading to the novel Sampling-Based Collision Avoidance (SBCA) algorithm. The SBCA algorithm is proven to guarantee collision free trajectories for all of the agents, even when subject to uncertainties in the knowledge of the other agents’ positions and velocities. Given that a solution exists, we prove that livelocks and deadlock will lead to the cost to the goal being decreased. We introduce a new deconfliction maneuver that decreases the cost-to-come at each step. This new maneuver removes the possibility of livelocks and allows a result to be formed that proves convergence to the goal configurations. Finally, we present a limited range Graph-based Spatial Load Balancing (GSLB) algorithm which fairly divides a non-convex space among multiple agents that are subject to differential constraints and have a limited travel distance. The GSLB is proven to converge to a solution when maximizing the area covered by the agents. The analysis
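The dissertation builds on the asymptotically optimal RRT*; the basic tree-growing loop that all these sampling-based planners share can be sketched as follows. The rewiring step that makes RRT* optimal is omitted, and the workspace, obstacle, and parameters are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, iters=2000, goal_tol=0.5, seed=0):
    """Minimal 2D RRT sketch in a 10x10 workspace.
    `obstacles` are (cx, cy, r) circles; returns a start-to-goal path
    as a list of points, or None if no path is found."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}

    def free(p):
        return all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)

    for _ in range(iters):
        # Sample the goal 10% of the time (goal bias), else a random point.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        # Extend the nearest node one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        if not free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Walk parents back to the start to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0), [(5.0, 5.0, 1.5)])
```

RRT* adds two things to this loop: connecting the new node to the lowest-cost nearby parent and rewiring neighbors through it, which is what the Grandparent-Connection and Focused-Refinement ideas above improve upon.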
A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain
Directory of Open Access Journals (Sweden)
Ibn-Elhaj E
2009-01-01
Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques will fail to work because they will also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulation, we used a database freely available on the web.
A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain
Directory of Open Access Journals (Sweden)
E. M. Ismaili Aalaoui
2009-02-01
Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms
Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence
2014-01-01
Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.
Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms
Fidel, Adam
2014-05-01
Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.
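The sampling-based planners discussed above can be illustrated with a deliberately simplified serial probabilistic roadmap (PRM) in 2-D. This is a toy setup with a single circular obstacle of our own invention, not the parallel implementation described in the paper:

```python
import math
import random
from collections import defaultdict, deque

random.seed(0)

def collides(p, obstacle=((0.5, 0.5), 0.2)):
    """Point-in-disc test against a single hypothetical circular obstacle."""
    (cx, cy), r = obstacle
    return math.hypot(p[0] - cx, p[1] - cy) < r

def segment_free(p, q, steps=20):
    # Check evenly spaced samples along the straight segment p-q.
    return all(not collides((p[0] + t * (q[0] - p[0]),
                             p[1] + t * (q[1] - p[1])))
               for t in (i / steps for i in range(steps + 1)))

def build_prm(n_samples=150, radius=0.25, start=(0.05, 0.05), goal=(0.95, 0.95)):
    """Sample collision-free configurations and connect nearby pairs."""
    nodes = [start, goal]
    while len(nodes) < n_samples:
        p = (random.random(), random.random())
        if not collides(p):
            nodes.append(p)
    edges = defaultdict(list)
    for i, p in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            q = nodes[j]
            if math.dist(p, q) < radius and segment_free(p, q):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def find_path(nodes, edges, src=0, dst=1):
    # Breadth-first search over the roadmap graph.
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return [nodes[i] for i in reversed(path)]
        for v in edges[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

nodes, edges = build_prm()
path = find_path(nodes, edges)
print("path found:", path is not None)
```

The expensive steps here, sampling and neighbor connection, are exactly the ones that spatial subdivision parallelizes, and the uneven cost of `segment_free` near obstacles is the source of the load imbalance the paper addresses.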
Realistic modeling of seismic wave ground motion in Beijing City
International Nuclear Information System (INIS)
Ding, Z.; Chen, Y.T.; Romanelli, F.; Panza, G.F.
2002-05-01
Advanced algorithms for the calculation of synthetic seismograms in laterally heterogeneous anelastic media have been applied to model the ground motion in Beijing City. The synthetic signals are compared with the few available seismic recordings (1998 Zhangbei earthquake) and with the distribution of the observed macroseismic intensity (1976 Tangshan earthquake). The synthetic 3-component seismograms have been computed in the Xiji area and in Beijing town. The numerical results show that the thick Tertiary and Quaternary sediments are responsible for the severe amplification of the seismic ground motion. This result correlates well with the abnormally high macroseismic intensity zone (Xiji area) associated with the 1976 Tangshan earthquake and with the records in Beijing town associated with the 1998 Zhangbei earthquake. (author)
The use of vestibular models for design and evaluation of flight simulator motion
Bussolari, Steven R.; Young, Laurence R.; Lee, Alfred T.
1989-01-01
Quantitative models for the dynamics of the human vestibular system are applied to the design and evaluation of flight simulator platform motion. An optimal simulator motion control algorithm is generated to minimize the vector difference between perceived spatial orientation estimated in flight and in simulation. The motion controller has been implemented on the Vertical Motion Simulator at NASA Ames Research Center and evaluated experimentally through measurement of pilot performance and subjective rating during VTOL aircraft simulation. In general, pilot performance in a longitudinal tracking task (formation flight) did not appear to be sensitive to variations in platform motion condition as long as motion was present. However, pilot assessments of motion fidelity, made by means of a rating scale designed for this purpose, were sensitive to motion controller design. Platform motion generated with the optimal motion controller was found to be generally equivalent to that generated by conventional linear crossfeed washout. The vestibular models are used to evaluate the motion fidelity of transport category aircraft (Boeing 727) simulation in a pilot performance and simulator acceptability study at the Man-Vehicle Systems Research Facility at NASA Ames Research Center. Eighteen airline pilots, currently flying B-727, were given a series of flight scenarios in the simulator under various conditions of simulator motion. The scenarios were chosen to reflect the flight maneuvers that these pilots might expect to be given during a routine pilot proficiency check. Pilot performance and subjective ratings of simulator fidelity were relatively insensitive to the motion condition, despite large differences in the amplitude of motion provided. This lack of sensitivity may be explained by means of the vestibular models, which predict little difference in the modeled motion sensations of the pilots when different motion conditions are imposed.
Multi-model approach to characterize human handwriting motion.
Chihi, I; Abdelkrim, A; Benrejeb, M
2016-02-01
This paper deals with the characterization and modelling of human handwriting motion from two forearm muscle activity signals, known as electromyography (EMG) signals. In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model that characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares (RLS) algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
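A generic recursive least squares update of the kind named above can be sketched as follows. This is a textbook single-model RLS step for a linear model y ≈ θ·x, not the authors' multi-model formulation; the data and true parameters are invented for illustration:

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least squares step for the linear model y ≈ theta·x."""
    n = len(x)
    # Gain vector k = P x / (lam + xᵀ P x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]
    # Update the parameter estimate with the prediction error.
    err = y - sum(t * xi for t, xi in zip(theta, x))
    theta = [t + ki * err for t, ki in zip(theta, k)]
    # Covariance update: P ← (P − k xᵀ P) / lam
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Identify the parameters of y = 2*x1 - 3*x2 from noise-free samples.
theta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]   # large initial covariance: weak prior
data = [((1.0, 0.5), 0.5), ((0.2, 1.0), -2.6),
        ((1.5, -0.3), 3.9), ((0.7, 0.9), -1.3)]
for x, y in data:
    theta, P = rls_update(theta, P, x, y)
print([round(t, 3) for t in theta])   # close to [2, -3]
```

With a forgetting factor `lam` below 1, older samples are discounted, which is the usual way such an estimator tracks slowly varying parameters.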
FPSoC-Based Architecture for a Fast Motion Estimation Algorithm in H.264/AVC
Directory of Open Access Journals (Sweden)
Obianuju Ndili
2009-01-01
Full Text Available There is an increasing need for high quality video on low power, portable devices. Possible target applications range from entertainment and personal communications to security and health care. While H.264/AVC answers the need for high quality video at lower bit rates, it is significantly more complex than previous coding standards and thus results in greater power consumption in practical implementations. In particular, motion estimation (ME) consumes the most power in an H.264/AVC encoder. It is therefore critical to speed up integer ME in H.264/AVC via fast motion estimation (FME) algorithms and hardware acceleration. In this paper, we present our hardware oriented modifications to a hybrid FME algorithm, our architecture based on the modified algorithm, and our implementation and prototype on a PowerPC-based Field Programmable System on Chip (FPSoC). Our results show that the modified hybrid FME algorithm, on average, outperforms previous state-of-the-art FME algorithms, while its losses when compared with FSME, in terms of PSNR performance and computation time, are insignificant. We show that although our implementation platform is FPGA-based, our implementation results compare favourably with previous architectures implemented on ASICs. Finally we also show an improvement over some existing architectures implemented on FPGAs.
A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation
Li, Yue-e.; Wang, Qiang
2011-11-01
This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight different direction search patterns are designed for various situations. A local sampling search strategy is employed for searching for large motion vectors. In order to further speed up the search, an early termination strategy is adopted in the DASS procedure. Compared to conventional fast algorithms, the proposed method has the most satisfactory PSNR values for all test sequences.
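The block-matching framework that fast algorithms like DASS accelerate can be sketched as a plain full search with early termination in the distortion computation. This is our own illustrative sketch (exhaustive search with SAD), not the DASS algorithm itself; the frames and patch values are invented:

```python
def sad(block_a, block_b, best_so_far=float("inf")):
    """Sum of absolute differences; stops early once the current best is exceeded."""
    total = 0
    for row_a, row_b in zip(block_a, block_b):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
        if total >= best_so_far:   # early termination: this candidate cannot win
            break
    return total

def block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def full_search(ref, cur, y, x, size=4, rng=3):
    """Motion vector (dy, dx) of the current block at (y, x), exhaustive search."""
    target = block(cur, y, x, size)
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            if 0 <= y + dy <= len(ref) - size and 0 <= x + dx <= len(ref[0]) - size:
                cost = sad(block(ref, y + dy, x + dx, size), target, best)
                if cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv, best

# A 4x4 textured patch at (4, 4) in the reference moves down 1, right 2.
ref = [[0] * 12 for _ in range(12)]
for i in range(4):
    for j in range(4):
        ref[4 + i][4 + j] = 100 + 10 * i + j
cur = [[0] * 12 for _ in range(12)]
for i in range(12):
    for j in range(12):
        if 0 <= i - 1 < 12 and 0 <= j - 2 < 12:
            cur[i][j] = ref[i - 1][j - 2]
mv, cost = full_search(ref, cur, 5, 6)
print(mv, cost)   # motion vector points back to the reference position
```

Pattern-based methods such as DASS replace the exhaustive `(dy, dx)` loop with a small set of direction-dependent candidate positions, which is where the speed-up comes from.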
Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.
Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof
2014-05-01
A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSM) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is compulsory to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SFM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e. nadir view and ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracies at the sub-meter and sub-decimeter level, respectively. The airphoto datasets were processed with the SFM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
Multiscale sampling model for motion integration.
Sherbakov, Lena; Yazdanbakhsh, Arash
2013-09-30
Biologically plausible strategies for visual scene integration across spatial and temporal domains continue to be a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections that allow feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback explicitly, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without the need for calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how it is possible to have a solution to the aperture problem in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields to solve problems in the motion domain, we show that one can reframe motion integration as an emergent property of multiscale sampling achieved concurrently within lamina and across multiple visual areas.
A Single Image Deblurring Algorithm for Nonuniform Motion Blur Using Uniform Defocus Map Estimation
Directory of Open Access Journals (Sweden)
Chia-Feng Chang
2017-01-01
Full Text Available One of the most common artifacts in digital photography is motion blur. When capturing an image under dim light by using a handheld camera, the tendency of the photographer's hand to shake causes the image to blur. In response to this problem, image deblurring has become an active topic in computational photography and image processing in recent years. From the view of signal processing, image deblurring can be reduced to a deconvolution problem if the kernel function of the motion blur is assumed to be shift invariant. However, the kernel function is not always shift invariant in real cases; for example, in-plane rotation of a camera or a moving object can blur different parts of an image according to different kernel functions. An image that is degraded by multiple blur kernels is called a nonuniform blur image. In this paper, we propose a novel single image deblurring algorithm for nonuniform motion blur images blurred by a moving object. First, a uniform defocus map method is proposed for measuring the amounts and directions of motion blur. These blurred regions are then used to estimate point spread functions simultaneously. Finally, a fast deconvolution algorithm is used to restore the nonuniform blur image. We expect that the proposed method can achieve satisfactory deblurring of a single nonuniform blur image.
Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing
2008-02-01
Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.
A Review of Point-Wise Motion Tracking Algorithms for Fetal Magnetic Resonance Imaging.
Chikop, Shivaprasad; Koulagi, Girish; Kumbara, Ankita; Geethanath, Sairam
2016-01-01
We review recent feature-based tracking algorithms as applied to fetal magnetic resonance imaging (MRI). Motion in fetal MRI is an active and challenging area of research, but the challenge can be mitigated by strategies related to patient setup, acquisition, reconstruction, and image processing. We focus on fetal motion correction through methods based on tracking algorithms for registration of slices with similar anatomy in multiple volumes. We describe five motion detection algorithms based on corner detection and region-based methods through pseudocodes, illustrating the results of their application to fetal MRI. We compare the performance of these methods on the basis of error in registration and minimum number of feature points required for registration. Harris, a corner detection method, provides similar error when compared to the other methods and has the lowest number of feature points required at that error level. We do not discuss group-wise methods here. Finally, we attempt to communicate the application of available feature extraction methods to fetal MRI.
Clapuyt, Francois; Vanacker, Veerle; Van Oost, Kristof
2016-05-01
Combination of UAV-based aerial pictures and Structure-from-Motion (SfM) algorithm provides an efficient, low-cost and rapid framework for remote sensing and monitoring of dynamic natural environments. This methodology is particularly suitable for repeated topographic surveys in remote or poorly accessible areas. However, temporal analysis of landform topography requires high accuracy of measurements and reproducibility of the methodology as differencing of digital surface models leads to error propagation. In order to assess the repeatability of the SfM technique, we surveyed a study area characterized by gentle topography with an UAV platform equipped with a standard reflex camera, and varied the focal length of the camera and location of georeferencing targets between flights. Comparison of different SfM-derived topography datasets shows that precision of measurements is in the order of centimetres for identical replications which highlights the excellent performance of the SfM workflow, all parameters being equal. The precision is one order of magnitude higher for 3D topographic reconstructions involving independent sets of ground control points, which results from the fact that the accuracy of the localisation of ground control points strongly propagates into final results.
Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie
2018-01-01
As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced in this paper on the basis of the global positioning system (GPS) principle. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, the analytical algorithm for base station calibration and measuring point determination is deduced without selecting an initial iterative value in the calculation. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. In order to overcome the limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and some iterative algorithms, such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the base station calibration result, which depends on the condition number of the coefficient matrix, is analyzed.
Facial motion parameter estimation and error criteria in model-based image coding
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-node outside the blocks of eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of contour transition-turn rate are used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
Ground Motion Prediction Model Using Artificial Neural Network
Dhanya, J.; Raghukanth, S. T. G.
2018-03-01
This article focuses on developing a ground motion prediction equation based on artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining genetic algorithm and Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity, and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to rupture plane (Rrup), shear wave velocity in the region (Vs30) and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by Pacific Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns including weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with the existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city located in Himalayan region.
International Nuclear Information System (INIS)
De Agostini, A.; Moretti, R.; Belletti, S.; Maira, G.; Magri, G.C.; Bestagno, M.
1992-01-01
The correction of organ movements in sequential radionuclide renography was done using an iterative algorithm that, by means of a set of rectangular regions of interest (ROIs), does not require any anatomical marker or manual elaboration of frames. The realignment programme proposed here is quite independent of the spatial and temporal distribution of activity and analyses the rotational movement in a simplified but reliable way. The position of the object inside a frame is evaluated by choosing the best ROI in a set of ROIs shifted 1 pixel around the central one. Statistical tests have to be fulfilled by the algorithm in order to activate the realignment procedure. Validation of the algorithm was done for different acquisition set-ups and organ movements. Results, summarized in Table 1, show that in about 90% of the simulated experiments the algorithm is able to correct the movements of the object with a maximum error of less than or equal to the 1 pixel limit. The usefulness of the realignment programme was demonstrated with sequential radionuclide renography as a typical clinical application. The algorithm-corrected curves of a 1-year-old patient were completely different from those obtained without a motion correction procedure. The algorithm may also be applicable to other types of scintigraphic examinations, besides functional imaging in which the realignment of frames of the dynamic sequence is an intrinsic demand. (orig.)
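The 1-pixel ROI search described above can be sketched roughly as follows. This is a simplified illustration that uses a sum-of-absolute-differences match instead of the paper's statistical tests, and the frame data and ROI geometry are invented:

```python
def roi(frame, top, left, h, w):
    return [row[left:left + w] for row in frame[top:top + h]]

def sad(a, b):
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def best_shift(ref_roi, frame, top, left, h, w):
    """Among ROIs shifted by at most 1 pixel, pick the best match to the reference."""
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return min(shifts, key=lambda s: sad(roi(frame, top + s[0], left + s[1], h, w),
                                         ref_roi))

def estimate_motion(ref_frame, frame, top, left, h, w, max_iter=10):
    """Accumulate 1-pixel shifts until the central ROI is the best match."""
    ref_roi = roi(ref_frame, top, left, h, w)
    ty, tx = 0, 0
    for _ in range(max_iter):
        dy, dx = best_shift(ref_roi, frame, top + ty, left + tx, h, w)
        if (dy, dx) == (0, 0):
            break
        ty, tx = ty + dy, tx + dx
    return ty, tx

# A 3x3 blob of counts moves down 2, right 1 between frames.
ref = [[0] * 10 for _ in range(10)]
for i in range(3):
    for j in range(3):
        ref[4 + i][4 + j] = 100
frm = [[0] * 10 for _ in range(10)]
for i in range(10):
    for j in range(10):
        if 0 <= i - 2 < 10 and 0 <= j - 1 < 10:
            frm[i][j] = ref[i - 2][j - 1]
motion = estimate_motion(ref, frm, 3, 3, 5, 5)
print(motion)   # recovered (down, right) displacement
```

Iterating single-pixel steps in this way finds multi-pixel displacements without ever searching a large window, which matches the cost constraints of frame-by-frame dynamic studies.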
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as prior work on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
Zaychik, Kirill; Cardullo, Frank; George, Gary; Kelly, Lon C.
2009-01-01
In order to use the Hess Structural Model to predict the need for certain cueing systems, George and Cardullo significantly expanded it by adding motion feedback to the model and incorporating models of the motion system dynamics, motion cueing algorithm and a vestibular system. This paper proposes a methodology to evaluate the effectiveness of these innovations by performing a comparison analysis of the model performance with and without the expanded motion feedback. The proposed methodology is composed of two stages. The first stage involves fine-tuning parameters of the original Hess structural model in order to match the actual control behavior recorded during the experiments at the NASA Visual Motion Simulator (VMS) facility. The parameter tuning procedure utilizes a new automated parameter identification technique, which was developed at the Man-Machine Systems Lab at SUNY Binghamton. In the second stage of the proposed methodology, an expanded motion feedback is added to the structural model. The resulting performance of the model is then compared to that of the original one. As proposed by Hess, metrics to evaluate the performance of the models include comparison against the crossover model standards imposed on the crossover frequency and phase margin of the overall man-machine system. Preliminary results indicate the advantage of having the model of the motion system and motion cueing incorporated into the model of the human operator. It is also demonstrated that the crossover frequency and the phase margin of the expanded model are well within the limits imposed by the crossover model.
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results showed that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
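The Kalman filtering stage of such a pipeline can be sketched in one dimension. This is a scalar random-walk filter for illustration only; the paper's filter also models scaling and rotation, and all parameter values below are invented:

```python
import random

def kalman_smooth(path, q=0.05, r=4.0):
    """Scalar random-walk Kalman filter used to smooth a jittery camera path."""
    x, p = path[0], 1.0
    out = [x]
    for z in path[1:]:
        p += q                    # predict: process noise grows the uncertainty
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with the measured position
        p *= (1 - k)
        out.append(x)
    return out

random.seed(1)
true_path = [0.5 * t for t in range(40)]                    # intended slow pan
measured = [v + random.uniform(-2.0, 2.0) for v in true_path]  # pan + hand jitter
smoothed = kalman_smooth(measured)
# Per-frame stabilizing correction: shift each frame by (smoothed - measured).
jitter = sum((b - a) ** 2 for a, b in zip(measured, measured[1:]))
calm = sum((b - a) ** 2 for a, b in zip(smoothed, smoothed[1:]))
print(calm < jitter)   # smoothed trajectory has far smaller frame-to-frame motion
```

The ratio q/r controls the trade-off: a small q keeps intentional camera motion while suppressing high-frequency shake.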
Energy Technology Data Exchange (ETDEWEB)
Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)
2016-03-15
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
Comparison of Nonlinear Model Results Using Modified Recorded and Synthetic Ground Motions
International Nuclear Information System (INIS)
Spears, Robert E.; Wilkins, J. Kevin
2011-01-01
A study has been performed that compares results of nonlinear model runs using two sets of earthquake ground motion time histories that have been modified to fit the same design response spectra. The time histories include applicable modified recorded earthquake ground motion time histories and synthetic ground motion time histories. The modified recorded earthquake ground motion time histories are modified from time history records that are selected based on consistent magnitude and distance. The synthetic ground motion time histories are generated using appropriate Fourier amplitude spectrums, Arias intensity, and drift correction. All of the time history modification is performed using the same algorithm to fit the design response spectra. The study provides data to demonstrate that properly managed synthetic ground motion time histories are reasonable for use in nonlinear seismic analysis.
Simulating intrafraction prostate motion with a random walk model
Directory of Open Access Journals (Sweden)
Tobias Pommer, PhD
2017-07-01
Conclusions: Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
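A random walk model of this kind can be sketched in a few lines. The step size, time resolution, and displacement bound below are illustrative values of our own, not the parameters fitted in the study:

```python
import random

def simulate_motion(n_steps=600, sigma=0.04, bound=3.0):
    """1-D random-walk displacement trace (mm), clipped to a physiological range."""
    pos, trace = 0.0, [0.0]
    for _ in range(n_steps):
        pos += random.gauss(0.0, sigma)        # small Gaussian step per time bin
        pos = max(-bound, min(bound, pos))     # keep the displacement bounded
        trace.append(pos)
    return trace

random.seed(7)
trace = simulate_motion()
```

Running many such traces gives a simple Monte Carlo estimate of the displacement distribution over a treatment fraction; the transient-motion variant mentioned in the conclusions would add occasional larger jumps on top of the Gaussian steps.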
A Highly Parallel and Scalable Motion Estimation Algorithm with GPU for HEVC
Directory of Open Access Journals (Sweden)
Yun-gang Xue
2017-01-01
Full Text Available We propose a highly parallel and scalable motion estimation algorithm, named multilevel resolution motion estimation (MLRME for short), combining the advantages of local full search and downsampling. By subsampling a video frame, a large amount of computation is saved, while the local full-search method can exploit massive parallelism and make full use of powerful modern many-core accelerators, such as GPUs and the Intel Xeon Phi. We implemented the proposed MLRME in HM12.0, and the experimental results showed that the encoding quality of the MLRME method is close to that of the fast motion estimation in HEVC, declining by less than 1.5%. We also implemented the MLRME with CUDA, which obtained a 30-60x speed-up compared to the serial algorithm on a single CPU. Specifically, the parallel implementation of MLRME on a GTX 460 GPU can meet the real-time coding requirement of about 25 fps for the 2560×1600 video format, while for 832×480 the performance is more than 100 fps.
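The combination of downsampling and local full search can be illustrated with a two-level serial sketch. This is our own simplification of the multilevel idea, not the MLRME algorithm or its CUDA implementation; block sizes, frames and shifts are invented:

```python
def downsample(frame):
    """Halve the resolution by averaging 2x2 blocks."""
    return [[(frame[2*i][2*j] + frame[2*i][2*j + 1] +
              frame[2*i + 1][2*j] + frame[2*i + 1][2*j + 1]) // 4
             for j in range(len(frame[0]) // 2)]
            for i in range(len(frame) // 2)]

def sad(a, b):
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(f, y, x, s):
    return [row[x:x + s] for row in f[y:y + s]]

def local_search(ref, target, y, x, s, cy, cx, rng):
    """Full search restricted to a window of radius rng around centre (cy, cx)."""
    best, mv = float("inf"), (cy, cx)
    for dy in range(cy - rng, cy + rng + 1):
        for dx in range(cx - rng, cx + rng + 1):
            if 0 <= y + dy <= len(ref) - s and 0 <= x + dx <= len(ref[0]) - s:
                c = sad(block(ref, y + dy, x + dx, s), target)
                if c < best:
                    best, mv = c, (dy, dx)
    return mv

def multilevel_me(ref, cur, y, x, s, rng=6):
    # Coarse level: half resolution, so coordinates and radius are halved too.
    coarse = local_search(downsample(ref),
                          block(downsample(cur), y // 2, x // 2, s // 2),
                          y // 2, x // 2, s // 2, 0, 0, rng // 2)
    # Fine level: refine within +/-1 pixel of the upscaled coarse vector.
    return local_search(ref, block(cur, y, x, s), y, x, s,
                        2 * coarse[0], 2 * coarse[1], 1)

# Textured 4x4 patch at (8, 8) in ref; in cur it has moved down 2, left 2.
ref = [[0] * 16 for _ in range(16)]
for i in range(4):
    for j in range(4):
        ref[8 + i][8 + j] = 50 + 4 * i + j
cur = [[0] * 16 for _ in range(16)]
for i in range(16):
    for j in range(16):
        if 0 <= i - 2 < 16 and 0 <= j + 2 < 16:
            cur[i][j] = ref[i - 2][j + 2]
mv = multilevel_me(ref, cur, 10, 6, 4)
print(mv)
```

The coarse pass quarters the per-candidate cost while shrinking the fine search to a tiny window, and because every candidate in `local_search` is independent, the loops map naturally onto many-core hardware.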
Improved Collaborative Filtering Algorithm using Topic Model
Directory of Open Access Journals (Sweden)
Liu Na
2016-01-01
Full Text Available Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users or items is mostly calculated from ratings alone, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We describe the user-item matrix as a document-word matrix: users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance than other state-of-the-art algorithms on the MovieLens data sets.
A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging
International Nuclear Information System (INIS)
Jiang, J; Hall, T J
2007-01-01
Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast, which provides new diagnostically valuable information. Previous work (Hall et al 2003 Ultrasound Med. Biol. 29 427; Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging, and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is column-based (a column is a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to row-based (a row is a line of data perpendicular to the beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, lowering computational cost. Consequently, displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows system as a stand-alone ANSI C++ program. Results of preliminary tests using numerical and tissue-mimicking phantoms and in vivo tissue data suggest that high contrast strain images can be consistently obtained at frame rates (10 frames s⁻¹) that exceed our previous methods.
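As a rough illustration of the column-based, neighbour-guided search described above (not the authors' speckle-tracking implementation), one can match each A-line with a wide search for the seed column and a narrow search around the neighbouring column's estimate thereafter. The 1D integer-shift SAD matching, toy signals and window sizes are simplified assumptions:

```python
# Column-based guided tracking sketch: the seed column gets a wide
# displacement search; every subsequent column is searched only in a
# narrow window around its neighbour's estimate, mimicking the reduced
# search region described above.

def best_shift(pre, post, lo, hi):
    """Integer shift s minimizing the mean |pre[i] - post[i + s]|."""
    best_cost, best_s = float("inf"), lo
    for s in range(lo, hi + 1):
        diffs = [abs(pre[i] - post[i + s])
                 for i in range(len(pre)) if 0 <= i + s < len(post)]
        if not diffs:
            continue
        cost = sum(diffs) / len(diffs)   # mean SAD over the overlap
        if cost < best_cost:
            best_cost, best_s = cost, s
    return best_s

def track_columns(pre_cols, post_cols, wide=10, narrow=2):
    """Displacement per column; neighbour guidance after the seed."""
    shifts, guess = [], None
    for pre, post in zip(pre_cols, post_cols):
        if guess is None:
            s = best_shift(pre, post, -wide, wide)    # seed: wide search
        else:
            s = best_shift(pre, post, guess - narrow, guess + narrow)
        shifts.append(s)
        guess = s
    return shifts
```

In the parallel scheme described above, two such guided passes would propagate outward from a central seed column simultaneously.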
Parallel algorithm for determining motion vectors in ice floe images by matching edge features
Manohar, M.; Ramapriyan, H. K.; Strong, J. P.
1988-01-01
A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic Ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The algorithm, implemented on the MPP, locates corresponding objects based on their translationally and rotationally invariant features. It first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed so that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.
Pengpen, T; Soleimani, M
2015-06-13
Cone beam computed tomography (CBCT) is an imaging modality that has been used in image-guided radiation therapy (IGRT). For applications such as lung radiation therapy, CBCT images are greatly affected by motion artefacts, mainly due to the low temporal resolution of CBCT. Recently, a dual modality of electrical impedance tomography (EIT) and CBCT has been proposed, in which the high-temporal-resolution EIT imaging system provides motion data to a motion-compensated algebraic reconstruction technique (ART)-based CBCT reconstruction software. The high computational time associated with ART, and indeed other variations of ART, makes it less practical for real applications. This paper develops a motion-compensated conjugate gradient least squares (CGLS) algorithm for CBCT. A motion-compensated CGLS offers several advantages over ART-based methods, including possibilities for explicit regularization, rapid convergence and parallel computation. This paper demonstrates, for the first time, motion-compensated CBCT reconstruction using CGLS, and reconstruction results are shown for limited-data CBCT considering only a quarter of the full dataset. The proposed algorithm is tested using simulated motion data in generic motion-compensated CBCT as well as measured EIT data in dual EIT-CBCT imaging. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
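For readers unfamiliar with CGLS, a bare-bones version of the solver on a small dense system is sketched below; in the paper's setting the matrix A would be a (motion-compensated) CBCT projection operator rather than an explicit array. Pure-Python linear algebra, purely illustrative:

```python
# Minimal CGLS (conjugate gradient least squares): iteratively
# minimizes ||A x - b||_2 using only products with A and A^T, which is
# why it suits large tomographic operators.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def matvec_t(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A)))
            for j in range(len(A[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cgls(A, b, iters=50):
    x = [0.0] * len(A[0])
    r = list(b)                 # residual b - A x  (x starts at 0)
    s = matvec_t(A, r)          # normal-equations residual A^T r
    p = list(s)
    gamma = dot(s, s)
    for _ in range(iters):
        q = matvec(A, p)
        alpha = gamma / dot(q, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = matvec_t(A, r)
        gamma_new = dot(s, s)
        if gamma_new < 1e-20:   # converged
            break
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x
```

The product structure (only `matvec` and `matvec_t` touch A) is also what makes the parallel computation mentioned in the abstract straightforward.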
A scalable method for parallelizing sampling-based motion planning algorithms
Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.
2012-01-01
This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.
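A toy structural sketch of the subdivide-and-connect scheme (not the authors' implementation) is shown below: C-space is split into strips, each strip is sampled independently (the part that would run in parallel), and connection attempts are restricted to short edges within a region or between adjacent regions. All parameters are made-up:

```python
import math
import random

# Structural sketch of region-subdivided roadmap construction: sample
# per-region roadmaps independently, then connect only across adjacent
# regions. Restricting connection locality is what cuts the
# nearest-neighbour cost in the parallel setting.

def sample_regions(bounds, n_regions, samples_per_region, rng):
    """Sample nodes per vertical strip of [0, bounds] x [0, bounds]."""
    w = bounds / n_regions
    return [[(rng.uniform(i * w, (i + 1) * w), rng.uniform(0.0, bounds))
             for _ in range(samples_per_region)]
            for i in range(n_regions)]

def connect(nodes_a, nodes_b, max_dist):
    """Attempt only short, local connections between two node sets."""
    return [(a, b) for a in nodes_a for b in nodes_b
            if a is not b and math.dist(a, b) <= max_dist]

def global_roadmap(bounds=10.0, n_regions=4, samples_per_region=30,
                   max_dist=3.0, seed=1):
    rng = random.Random(seed)
    regions = sample_regions(bounds, n_regions, samples_per_region, rng)
    edges = []
    for nodes in regions:                          # intra-region roadmaps
        edges += connect(nodes, nodes, max_dist)   # (parallel in the paper)
    for left, right in zip(regions, regions[1:]):  # adjacent regions only
        edges += connect(left, right, max_dist)
    return regions, edges
```

A real planner would also collision-check each edge and use PRM or RRT inside each region rather than plain uniform sampling.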
Energy Technology Data Exchange (ETDEWEB)
Pontone, Gianluca; Bertella, Erika; Baggiano, Andrea; Mushtaq, Saima; Loguercio, Monica; Segurini, Chiara; Conte, Edoardo; Beltrama, Virginia; Annoni, Andrea; Formenti, Alberto; Petulla, Maria; Trabattoni, Daniela; Pepi, Mauro [Centro Cardiologico Monzino, IRCCS, Milan (Italy); Andreini, Daniele; Montorsi, Piero; Bartorelli, Antonio L. [Centro Cardiologico Monzino, IRCCS, Milan (Italy); University of Milan, Department of Cardiovascular Sciences and Community Health, Milan (Italy); Guaricci, Andrea I. [University of Foggia, Department of Cardiology, Foggia (Italy)
2016-01-15
The aim of this study was to evaluate the impact of a novel intra-cycle motion correction algorithm (MCA) on the overall evaluability and diagnostic accuracy of cardiac computed tomography coronary angiography (CCT). From a cohort of 900 consecutive patients referred for CCT for suspected coronary artery disease (CAD), we enrolled 160 (18 %) patients (mean age 65.3 ± 11.7 years, 101 male) with at least one coronary segment classified as non-evaluable due to motion artefacts. The CCT data sets were evaluated using a standard reconstruction algorithm (SRA) and the MCA and compared in terms of subjective image quality, evaluability and diagnostic accuracy. The mean heart rate during the examination was 68.3 ± 9.4 bpm. The MCA showed a higher Likert score (3.1 ± 0.9 vs. 2.5 ± 1.1, p < 0.001) and evaluability (94 % vs. 79 %, p < 0.001) than the SRA. In a 45-patient subgroup studied by clinically indicated invasive coronary angiography, specificity, positive predictive value and accuracy were higher for MCA vs. SRA in segment-based (87 % vs. 73 %, 50 % vs. 34 %, 85 % vs. 73 %, p < 0.001) and vessel-based (62 % vs. 28 %, 66 % vs. 51 %, 75 % vs. 57 %, p < 0.001) models, respectively. In a patient-based model, MCA showed higher accuracy vs. SRA (93 % vs. 76 %, p < 0.05). MCA can significantly improve subjective image quality, overall evaluability and diagnostic accuracy of CCT. (orig.)
Liveness-Based RRT Algorithm for Autonomous Underwater Vehicles Motion Planning
Directory of Open Access Journals (Sweden)
Yang Li
2017-01-01
Full Text Available Motion planning is a crucial, basic issue in robotics, which aims at driving vehicles or robots towards a given destination under various constraints, such as obstacles and limited resources. This paper presents a new version of rapidly-exploring random trees (RRT), namely liveness-based RRT (Li-RRT), to address the motion planning problem of autonomous underwater vehicles (AUVs). Different from typical RRT, we define an index for each node of the random searching tree, called "liveness" in this paper, to describe its potential effectiveness during the expanding process. We show that Li-RRT is provably probabilistically complete, as the original RRT is. In addition, the expected time to return a valid path with Li-RRT is clearly reduced. To verify the efficiency of our algorithm, numerical experiments are carried out in this paper.
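For context, a minimal baseline RRT on a 2D point robot with no obstacles is sketched below; the liveness index that distinguishes Li-RRT is not reproduced, and the bounds, step size and goal tolerance are illustrative assumptions:

```python
import math
import random

# Minimal baseline RRT for a 2D point robot in an obstacle-free
# [0, 10] x [0, 10] workspace.

def rrt(start, goal, step=0.5, goal_tol=0.5, max_iters=5000, seed=7):
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        sample = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        near = min(nodes, key=lambda n: math.dist(n, sample))  # nearest
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        # Steer one fixed-length step from the nearest node towards the
        # sample (an obstacle check would go here).
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) <= goal_tol:       # goal region reached
            path, n = [], new
            while n is not None:                   # walk back to start
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None                                    # no path found
```

Li-RRT would bias which node gets expanded using the per-node liveness index instead of always extending the nearest node to a uniform sample.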
Simulating intrafraction prostate motion with a random walk model.
Pommer, Tobias; Oh, Jung Hun; Munck Af Rosenschöld, Per; Deasy, Joseph O
2017-01-01
Prostate motion during radiation therapy (i.e., intrafraction motion) can cause unwanted loss of radiation dose to the prostate and increased dose to the surrounding organs at risk. A compact but general statistical description of this motion could be useful for simulation of radiation therapy delivery or for margin calculations. We investigated whether prostate motion could be modeled with a random walk model. Prostate motion recorded during 548 radiation therapy fractions in 17 patients was analyzed and used as input for a random walk prostate motion model. The recorded motion was categorized based on whether any transient excursions (i.e., rapid prostate motion in the anterior and superior direction followed by a return) occurred in the trace, and transient motion was separately modeled as a large step in the anterior/superior direction followed by a returning large step. Random walk simulations were conducted with and without added artificial transient motion, using either motion data from all observed traces or only traces without transient excursions as model input, respectively. A general estimate of motion was derived with reasonable agreement between simulated and observed traces, especially during the first 5 minutes of the excursion-free simulations. Simulated and observed diffusion coefficients agreed within 0.03, 0.2 and 0.3 mm²/min in the left/right, superior/inferior, and anterior/posterior directions, respectively. A rapid increase in variance at the start of observed traces was difficult to reproduce and seemed to represent the patient's need to adjust before treatment. This could be estimated somewhat using artificial transient motion. Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
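A minimal random-walk trace generator of the kind described might look as follows; the diffusion coefficients, time step and absence of transient excursions are illustrative assumptions, not the fitted patient values:

```python
import random

# Illustrative random-walk trace generator: independent Gaussian steps
# per direction. For a random walk the expected squared displacement
# per axis grows as 2*D*t, so each step of duration dt has variance
# 2*D*dt. Transient excursions are omitted.

def simulate_trace(n_steps, dt, diff_coeffs, seed=0):
    rng = random.Random(seed)
    pos = [0.0, 0.0, 0.0]          # left/right, sup/inf, ant/post (mm)
    trace = [tuple(pos)]
    for _ in range(n_steps):
        for axis, D in enumerate(diff_coeffs):
            pos[axis] += rng.gauss(0.0, (2.0 * D * dt) ** 0.5)
        trace.append(tuple(pos))
    return trace
```

Averaging the squared end-point displacement over many simulated traces recovers the input diffusion coefficients, mirroring how agreement between simulated and observed diffusion coefficients is assessed above.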
Multivariate Autoregressive Model Based Heart Motion Prediction Approach for Beating Heart Surgery
Directory of Open Access Journals (Sweden)
Fan Liang
2013-02-01
Full Text Available A robotic tool can enable a surgeon to conduct off-pump coronary artery bypass graft surgery on a beating heart. The robotic tool actively alleviates the relative motion between the point of interest (POI) on the heart surface and the surgical tool, allowing the surgeon to operate as if the heart were stationary. Since the beating heart's motion is relatively fast, with nonlinear and nonstationary characteristics, it is difficult to follow; precise beating-heart motion prediction is therefore necessary for the tracking control procedure during surgery. In the research presented here, we first observe that the electrocardiography (ECG) signal contains causal phase information on heart motion and on nonstationary heart rate variations. We then investigate the relationship between the ECG signal and beating heart motion using Granger causality analysis, which establishes the feasibility of improved heart motion prediction. Next, we propose an adaptive prediction method based on a nonlinear, time-varying multivariate vector autoregressive (MVAR) model. In this model, the significant correlation between ECG and heart motion improves the prediction of sharp changes in heart motion and approximates the motion in sufficient detail. Dual Kalman filters (DKF) estimate the states and parameters of the model, respectively. Last, we evaluate the proposed algorithm through comparative experiments using two sets of collected in vivo data.
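As a much-simplified illustration of autoregressive motion prediction (a univariate AR(2) fit by least squares, rather than the paper's time-varying MVAR model with dual Kalman filters), note that a noiseless sinusoid obeys x[n] = 2 cos(w) x[n-1] - x[n-2] exactly, so an AR(2) fit can predict it one step ahead:

```python
import math

# Toy autoregressive predictor: fit an AR(2) model by ordinary least
# squares and predict one step ahead.

def fit_ar2(x):
    """Solve the 2x2 normal equations for (a1, a2) in
    x[n] ~ a1*x[n-1] + a2*x[n-2]."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for n in range(2, len(x)):
        s11 += x[n - 1] * x[n - 1]
        s12 += x[n - 1] * x[n - 2]
        s22 += x[n - 2] * x[n - 2]
        b1 += x[n] * x[n - 1]
        b2 += x[n] * x[n - 2]
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det,   # a1 by Cramer's rule
            (s11 * b2 - s12 * b1) / det)   # a2

def predict_next(x, a1, a2):
    """One-step-ahead AR(2) prediction."""
    return a1 * x[-1] + a2 * x[-2]
```

Real heart motion is nonstationary, which is why the paper re-estimates the model parameters online with a Kalman filter rather than fitting once.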
Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm.
Ulusoy, Melda; Sipahi, Rifat
2016-01-01
Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left to right, research shows that even fluent reading requires right-to-left movements of the finger, known as "reversals". Reversals are crucial as they not only enhance stimulation of nerves for correctly reading the letters, but also allow one to re-read letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to automatically adapt to reversal gestures of one's finger. Through Braille-reading-analogous tasks, the algorithm was tested with thirty sighted subjects who volunteered in the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects' performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to test. Results obtained from carefully designed experiments showed that subjects' Braille reading accuracy in the presence of FMAA was more favorable than when FMAA was turned off. Utilization of FMAA in future generation Braille reading devices thus holds strong promise.
Atomic Models for Motional Stark Effects Diagnostics
Energy Technology Data Exchange (ETDEWEB)
Gu, M F; Holcomb, C; Jayakuma, J; Allen, S; Pablant, N A; Burrell, K
2007-07-26
We present detailed atomic physics models for the motional Stark effect (MSE) diagnostic on magnetic fusion devices. Excitation and ionization cross sections of the hydrogen or deuterium beam traveling in a magnetic field in collisions with electrons, ions, and neutral gas are calculated in the first Born approximation. The density matrices and polarization states of individual Stark-Zeeman components of the Balmer α line are obtained for both beam-into-plasma and beam-into-gas models. A detailed comparison of the model calculations with the MSE polarimetry and spectral intensity measurements obtained at the DIII-D tokamak is carried out. Although our beam-into-gas models provide a qualitative explanation for the larger π/σ intensity ratios and represent significant improvements over the statistical population models, empirical adjustment factors ranging from 1.0-2.0 must still be applied to individual line intensities to bring the calculations into full agreement with the observations. Nevertheless, we demonstrate that beam-into-gas measurements can be used successfully as calibration procedures for measuring the magnetic pitch angle through π/σ intensity ratios. The analyses of the filter-scan polarization spectra from the DIII-D MSE polarimetry system indicate unknown channel- and time-dependent light contamination in the beam-into-gas measurements. Such contamination may be the main reason for the failure of beam-into-gas calibration on MSE polarimetry systems.
Directory of Open Access Journals (Sweden)
Gustavo Sanchez
2012-01-01
Full Text Available This paper presents a new fast motion estimation (ME) algorithm targeting high-resolution digital videos, together with its efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm that increases ME quality compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high-definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared to the optimum results generated by the Full Search (FS) algorithm, the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS reaches a complexity reduction of more than 45 times compared to FS. The quality gains over DS cause an expected increase in DMPDS complexity, which uses 6.4 times more calculations than DS. The DMPDS architecture was designed for high performance and low cost, aiming to process Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate when compared to related works in the literature. This high processing rate was obtained by designing an architecture with a high operating frequency and a low number of cycles to process each block.
Directory of Open Access Journals (Sweden)
Taek Seo Jung
2006-03-01
Full Text Available This paper presents an Image Motion Compensation (IMC) algorithm for Korea's Communication, Ocean, and Meteorological Satellite (COMS-1). An IMC algorithm is a key component of image registration in the Image Navigation and Registration (INR) system used to locate and register radiometric image data. Due to various perturbations, a satellite has orbit and attitude errors with respect to a reference motion. These errors cause depointing of the imager aiming direction and consequently cause image distortions. To correct the depointing of the imager aiming direction, a compensation algorithm is designed by adapting different equations from those used for the GOES satellites. The capability of the algorithm is compared with that of the existing algorithm applied to the GOES INR system. The algorithm developed in this paper improves pointing accuracy by 40% and efficiently compensates for depointing of the imager aiming direction.
Vakanski, A; Ferguson, J M; Lee, S
2016-12-01
The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, perform data analysis by comparing the performed motions to a reference model of prescribed motions, and send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to an exercise prescribed by a physiotherapist to a patient, recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of
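The evaluation metric described, the mean log-likelihood under a Gaussian mixture, can be sketched in a few lines; here the 1D mixture is specified by hand rather than produced by a mixture density network, and all numbers are illustrative:

```python
import math

# Score a motion sample by its log-likelihood under a 1D Gaussian
# mixture, and a sequence by the mean log-likelihood over its samples.

def gaussian_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_loglik(x, weights, mus, sigmas):
    """Log of the weighted sum of component densities at x."""
    p = sum(w * gaussian_pdf(x, m, s)
            for w, m, s in zip(weights, mus, sigmas))
    return math.log(p)

def mean_loglik(sequence, weights, mus, sigmas):
    """Mean log-likelihood: higher means closer to the reference model."""
    return sum(mixture_loglik(x, weights, mus, sigmas)
               for x in sequence) / len(sequence)
```

A sequence consistent with the reference mixture scores higher than one far from its modes, which is exactly how the metric separates well-performed from poorly performed exercises.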
Model-based respiratory motion compensation for emission tomography image reconstruction
International Nuclear Information System (INIS)
Reyes, M; Malandain, G; Koulibaly, P M; Gonzalez-Ballester, M A; Darcourt, J
2007-01-01
In emission tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretations, imprecise diagnosis, impaired fusion with other modalities, etc. Solutions such as respiratory gating, correlated dynamic PET techniques, list-mode-data-based techniques and others have been tested; these improve the spatial activity distribution in lung lesions, but have the disadvantage of requiring additional instrumentation or of discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion compensation directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension of the maximum likelihood expectation maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion, as well as from phantom and patient data.
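A bare-bones MLEM iteration on a toy linear emission model is sketched below for orientation; in the motion-compensated extension described above, the system matrix would additionally encode the respiratory displacement and deformation model. The matrix and counts are made-up values:

```python
# Classic MLEM multiplicative update: forward-project the estimate,
# take measured/predicted count ratios, back-project them, and scale
# by the sensitivity image.

def mlem(A, counts, n_iters=200):
    n_pix = len(A[0])
    n_det = len(A)
    # Sensitivity image: back-projection of ones.
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    x = [1.0] * n_pix                      # flat positive initial estimate
    for _ in range(n_iters):
        proj = [sum(A[i][j] * x[j] for j in range(n_pix))
                for i in range(n_det)]     # forward projection A x
        ratio = [counts[i] / proj[i] for i in range(n_det)]
        back = [sum(A[i][j] * ratio[i] for i in range(n_det))
                for j in range(n_pix)]     # back-projection of ratios
        x = [x[j] * back[j] / sens[j] for j in range(n_pix)]
    return x
```

The update preserves positivity and increases the Poisson likelihood each iteration; the motion model enters only through the entries of A, which is why no extra acquisition protocol is needed.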
Reconstructing 3D Tree Models Using Motion Capture and Particle Flow
Directory of Open Access Journals (Sweden)
Jie Long
2013-01-01
Full Text Available Recovering tree shape from motion capture data is a first step toward efficient and accurate animation of trees in wind using motion capture data. Existing algorithms for generating models of tree branching structures for image synthesis in computer graphics are not adapted to the unique data set provided by motion capture. We present a method for tree shape reconstruction using particle flow on input data obtained from a passive optical motion capture system. Initial branch tip positions are estimated from averaged and smoothed motion capture data. Branch tips, as particles, are also generated within a bounding space defined by a stack of bounding boxes or a convex hull. The particle flow, starting at branch tips within the bounding volume under forces, creates tree branches. The forces are composed of gravity, internal force, and external force. The resulting shapes are realistic and similar to the original tree crown shape. Several tunable parameters provide control over branch shape and arrangement.
Model Checking Algorithms for CTMDPs
DEFF Research Database (Denmark)
Buchholz, Peter; Hahn, Ernst Moritz; Hermanns, Holger
2011-01-01
Continuous Stochastic Logic (CSL) can be interpreted over continuous-time Markov decision processes (CTMDPs) to specify quantitative properties of stochastic systems that allow some external control. Model checking CSL formulae over CTMDPs then requires the computation of optimal control strategie...
Fuzzy audit risk modeling algorithm
Directory of Open Access Journals (Sweden)
Zohreh Hajihaa
2011-07-01
Full Text Available Fuzzy logic has created suitable mathematics for making decisions in uncertain environments, including professional judgments. One such situation is the assessment of auditee risks. In recent years, risk-based audit (RBA) has been regarded as one of the main tools to fight fraud. The main issue in RBA is to determine the overall audit risk an auditor accepts, which impacts the efficiency of an audit. The primary objective of this research is to redesign the audit risk model (ARM) proposed by auditing standards. The proposed model of this paper uses fuzzy inference systems (FIS) based on the judgments of audit experts. The implementation of the proposed fuzzy technique uses triangular fuzzy numbers to express the inputs, and the Mamdani method along with the center-of-gravity method is incorporated for defuzzification. The proposed model uses three FISs for audit, inherent and control risks, with five levels of linguistic variables for the outputs. The FISs include 25, 25 and 81 if-then rules, respectively, all of which were confirmed by Iranian audit experts.
Rethinking exchange market models as optimization algorithms
Luquini, Evandro; Omar, Nizam
2018-02-01
The exchange market model has mainly been used to study the inequality problem. Although the inequality problem in human society is very important, the dynamics of exchange market models up to the stationary state, and their capability of ranking individuals, are interesting in themselves. This study considers the hypothesis that the exchange market model can be understood as an optimization procedure. We present herein the implications for algorithmic optimization and also the possibility of a new family of exchange market models.
New Parallel Algorithms for Landscape Evolution Model
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. To overcome this difficulty, we developed two parallel algorithms for an LEM with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that both are adequate for handling large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can easily be coupled with ASPECT.
Inter-fraction variations in respiratory motion models
Energy Technology Data Exchange (ETDEWEB)
McClelland, J R; Modat, M; Ourselin, S; Hawkes, D J [Centre for Medical Image Computing, University College London (United Kingdom); Hughes, S; Qureshi, A; Ahmad, S; Landau, D B, E-mail: j.mcclelland@cs.ucl.ac.uk [Department of Oncology, Guy's and St Thomas's Hospitals NHS Trust, London (United Kingdom)
2011-01-07
Respiratory motion can vary dramatically between the planning stage and the different fractions of radiotherapy treatment. Motion predictions used when constructing the radiotherapy plan may be unsuitable for later fractions of treatment. This paper presents a methodology for constructing patient-specific respiratory motion models and uses these models to evaluate and analyse the inter-fraction variations in the respiratory motion. The internal respiratory motion is determined from the deformable registration of Cine CT data and related to a respiratory surrogate signal derived from 3D skin surface data. Three different models for relating the internal motion to the surrogate signal have been investigated in this work. Data were acquired from six lung cancer patients. Two full datasets were acquired for each patient, one before the course of radiotherapy treatment and one at the end (approximately 6 weeks later). Separate models were built for each dataset. All models could accurately predict the respiratory motion in the same dataset, but had large errors when predicting the motion in the other dataset. Analysis of the inter-fraction variations revealed that most variations were spatially varying base-line shifts, but changes to the anatomy and the motion trajectories were also observed.
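A toy version of such a surrogate-based correspondence model (a linear fit of internal motion against the surrogate signal on one "fraction", evaluated on a second fraction containing a base-line shift) can illustrate why inter-fraction variations inflate prediction error. The linear model form and synthetic signals are assumptions for illustration only:

```python
# Fit internal motion as a linear function of a surrogate signal, then
# measure the prediction error on held-out data. A constant base-line
# shift between fractions shows up directly as RMS error.

def fit_linear(surrogate, internal):
    """Least-squares fit internal ~ a*surrogate + b; returns (a, b)."""
    n = len(surrogate)
    ms = sum(surrogate) / n
    mi = sum(internal) / n
    cov = sum((s - ms) * (x - mi) for s, x in zip(surrogate, internal))
    var = sum((s - ms) ** 2 for s in surrogate)
    a = cov / var
    return a, mi - a * ms

def rms_error(model, surrogate, internal):
    """RMS of (prediction - observation) over a dataset."""
    a, b = model
    n = len(surrogate)
    return (sum((a * s + b - x) ** 2
                for s, x in zip(surrogate, internal)) / n) ** 0.5
```

If the second fraction's internal motion is the first plus a constant 2 mm base-line shift, the model fitted on fraction one incurs exactly 2 mm RMS error on fraction two, echoing the observation above that base-line shifts dominate inter-fraction variation.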
Clarke, R.; Lintereur, L.; Bahm, C.
2016-01-01
A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California, legacy code used in the core simulation has led to this effort to fully document the oblate-Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles, and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; it is a collection and presentation of textbook principles. The value of this report is that those textbook principles are documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes form much of the backbone of this work; however, in almost every area, derivations are explicitly shown to assure the reader that the equations making up the oblate-Earth version of the computer routine DERIVC are correct.
Rahman, Nurul Hidayah Ab; Abdullah, Nurul Azma; Hamid, Isredza Rahmi A.; Wen, Chuah Chai; Jelani, Mohamad Shafiqur Rahman Mohd
2017-10-01
Closed-Circuit TV (CCTV) systems are one of the technologies in the surveillance field that address the problem of detection and monitoring by providing extra features such as email alerts or motion detection. However, detection and admin alerting in a CCTV system can be complicated by the difficulty of integrating the main program with an external Application Programming Interface (API). In this study, a pixel-processing algorithm is applied due to its efficiency, and SMS alerting is added as an alternative for users who opt out of the email alert system or have no Internet connection. A CCTV system with SMS alert (CMDSA) was developed using an evolutionary prototyping methodology. The system interface was implemented using Microsoft Visual Studio, while the backend components, namely the database and coding, were implemented on an SQLite database and in the C# programming language, respectively. The main modules of CMDSA are motion detection, video capture and saving, image processing, and Short Message Service (SMS) alert functions. The system is able to reduce processing time, making detection faster, to reduce the space and memory used to run the program, and to alert the system admin instantly.
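The abstract does not specify the pixel-processing algorithm; a common minimal form of it is grey-level frame differencing with a changed-pixel-area threshold, sketched here on synthetic frames (the thresholds are illustrative assumptions).

```python
import numpy as np

def motion_detected(prev_frame, frame, pixel_thresh=25, area_thresh=0.01):
    """Flag motion when the fraction of pixels whose absolute grey-level
    change exceeds pixel_thresh is larger than area_thresh."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > pixel_thresh).mean()
    return bool(changed > area_thresh)

# Two synthetic 64x64 greyscale frames: a bright block moves right.
a = np.zeros((64, 64), dtype=np.uint8)
b = np.zeros((64, 64), dtype=np.uint8)
a[20:30, 10:20] = 200
b[20:30, 14:24] = 200

print(motion_detected(a, b), motion_detected(a, a))  # True False
```

In a real system, a positive detection would trigger the capture-and-save and SMS-alert modules.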
A method to quantitate regional wall motion in left ventriculography using Hildreth algorithm
Energy Technology Data Exchange (ETDEWEB)
Terashima, Mikio [Hyogo Red Cross Blood Center (Japan); Naito, Hiroaki; Sato, Yoshinobu; Tamura, Shinichi; Kurosawa, Tsutomu
1998-06-01
Quantitative measurement of ventricular wall motion is indispensable for objective evaluation of cardiac function associated with coronary artery disease. We have modified Hildreth's algorithm to estimate excursions of the ventricular wall on left ventricular images yielded by various imaging techniques. Tagging cine-MRI was carried out on 7 healthy volunteers. The original Hildreth method, the modified Hildreth method and the centerline method were applied to the outlines of the images obtained, to estimate excursion of the left ventricular wall and regional shortening and to evaluate the accuracy of these methods when measuring these parameters, compared to the values of these parameters actually measured using the attached tags. The accuracy of the original Hildreth method was comparable to that of the centerline method, while the modified Hildreth method was significantly more accurate than the centerline method (P<0.05). Regional shortening as estimated using the modified Hildreth method differed less from the actually measured regional shortening than did the shortening estimated using the centerline method (P<0.05). The modified Hildreth method allowed reasonable estimation of left ventricular wall excursion in all cases where it was applied. These results indicate that when applied to left ventriculograms for ventricular wall motion analysis, the modified Hildreth method is more useful than the original Hildreth method. (author)
Nonlinear Model Predictive Control of a Cable-Robot-Based Motion Simulator
DEFF Research Database (Denmark)
Katliar, Mikhail; Fischer, Joerg; Frison, Gianluca
2017-01-01
In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that a desired acceleration and angular velocity at a defined point in the simulator's cabin are tracked while...... satisfying constraints imposed by the working space and allowed cable forces of the robot. In order to fully use the simulator capabilities, we propose an approach that includes the motion platform actuation in the MPC model. The tracking performance and computation time of the algorithm are investigated...
Energy Technology Data Exchange (ETDEWEB)
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul [Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, Korea 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Department of Radiation Oncology, Asan Medical Center, Seoul, 138-736 (Korea, Republic of); Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Radiation Physics Laboratory, Sydney Medical School, University of Sydney, 2006 (Australia)
2011-07-15
Purpose: In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. Methods: The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. Results: The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul
2011-07-01
In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of
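A minimal illustration of the moving-average idea, on a synthetic breathing trace rather than the paper's MLC system: a causal moving average with a window of roughly one breathing cycle suppresses the cyclic component while following the baseline drift, which is what lets the leaves follow slow perpendicular motion without frequent beam holds (window length and signal parameters here are invented for illustration).

```python
import numpy as np

def moving_average(signal, window):
    """Causal moving average: each output is the mean of the last
    `window` samples (fewer at the start of the trace)."""
    out = np.empty_like(signal, dtype=float)
    csum = np.cumsum(signal, dtype=float)
    for i in range(signal.size):
        lo = max(0, i - window + 1)
        out[i] = (csum[i] - (csum[lo - 1] if lo > 0 else 0.0)) / (i - lo + 1)
    return out

rng = np.random.default_rng(1)
t = np.arange(0, 30, 0.1)                     # 30 s of motion at 10 Hz
y = 5 * np.sin(2 * np.pi * t / 4) + 0.05 * t  # breathing cycle + baseline drift
y += rng.normal(0, 0.3, t.size)               # measurement noise

# Window of 40 samples = 4 s, about one breathing period.
smoothed = moving_average(y, window=40)
print(round(float(np.ptp(y)), 1), round(float(np.ptp(smoothed)), 1))
```

The smoothed trace has a much smaller excursion than the raw trace but still drifts with the baseline, matching the trade-off the paper reports: high delivery efficiency at the cost of larger geometric error than full real-time tracking.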
Model based development of engine control algorithms
Dekker, H.J.; Sturm, W.L.
1996-01-01
Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed
Algorithms and Models for the Web Graph
Gleich, David F.; Komjathy, Julia; Litvak, Nelli
2015-01-01
This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee
Biomechanical interpretation of a free-breathing lung motion model
International Nuclear Information System (INIS)
Zhao Tianyu; White, Benjamin; Lamb, James; Low, Daniel A; Moore, Kevin L; Yang Deshan; Mutic, Sasa; Lu Wei
2011-01-01
The purpose of this paper is to develop a biomechanical model for free-breathing motion and compare it to a published heuristic five-dimensional (5D) free-breathing lung motion model. An ab initio biomechanical model was developed to describe the motion of lung tissue during free breathing by analyzing the stress–strain relationship inside lung tissue. The first-order approximation of the biomechanical model was equivalent to a heuristic 5D free-breathing lung motion model proposed by Low et al in 2005 (Int. J. Radiat. Oncol. Biol. Phys. 63 921–9), in which the motion was broken down to a linear expansion component and a hysteresis component. To test the biomechanical model, parameters that characterize expansion, hysteresis and angles between the two motion components were reported independently and compared between two models. The biomechanical model agreed well with the heuristic model within 5.5% in the left lungs and 1.5% in the right lungs for patients without lung cancer. The biomechanical model predicted that a histogram of angles between the two motion components should have two peaks at 39.8° and 140.2° in the left lungs and 37.1° and 142.9° in the right lungs. The data from the 5D model verified the existence of those peaks at 41.2° and 148.2° in the left lungs and 40.1° and 140° in the right lungs for patients without lung cancer. Similar results were also observed for the patients with lung cancer, but with greater discrepancies. The maximum-likelihood estimation of hysteresis magnitude was reported to be 2.6 mm for the lung cancer patients. The first-order approximation of the biomechanical model fit the heuristic 5D model very well. The biomechanical model provided new insights into breathing motion with specific focus on motion trajectory hysteresis.
Dynamic Airspace Managment - Models and Algorithms
Cheng, Peng; Geng, Rui
2010-01-01
This chapter investigates the models and algorithms for implementing the concept of Dynamic Airspace Management. Three models are discussed. The first two models concern how to use or adjust air routes dynamically in order to speed up air traffic flow and reduce delay. The third model gives a way to dynamically generate the optimal sector configuration for an air traffic control center to both balance the controller's workload and save control resources. The first model, called the Dynami...
Building Mathematical Models of Simple Harmonic and Damped Motion.
Edwards, Thomas
1995-01-01
By developing a sequence of mathematical models of harmonic motion, shows that mathematical models are not right or wrong, but instead are better or poorer representations of the problem situation. (MKR)
Optimization in engineering models and algorithms
Sioshansi, Ramteen
2017-01-01
This textbook covers the fundamentals of optimization, including linear, mixed-integer linear, nonlinear, and dynamic optimization techniques, with a clear engineering focus. It carefully describes classical optimization models and algorithms using an engineering problem-solving perspective, and emphasizes modeling issues using many real-world examples related to a variety of application areas. Providing an appropriate blend of practical applications and optimization theory makes the text useful to both practitioners and students, and gives the reader a good sense of the power of optimization and the potential difficulties in applying optimization to modeling real-world systems. The book is intended for undergraduate and graduate-level teaching in industrial engineering and other engineering specialties. It is also of use to industry practitioners, due to the inclusion of real-world applications, opening the door to advanced courses on both modeling and algorithm development within the industrial engineering ...
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. In recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye-tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer-mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object-tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object-tracking function.
Sundaramoorthi, Ganesh
2012-09-13
This paper presents a novel medical image registration algorithm that explicitly models the physical constraints imposed by objects or sub-structures of objects that have differing material composition and border each other, which is the case in most medical registration applications. Typical medical image registration algorithms ignore these constraints and therefore are not physically viable, and to incorporate these constraints would require prior segmentation of the image into regions of differing material composition, which is a difficult problem in itself. We present a mathematical model and algorithm for incorporating these physical constraints into registration / motion and deformation estimation that does not require a segmentation of different material regions. Our algorithm is a joint estimation of different material regions and the motion/deformation within these regions. Therefore, the segmentation of different material regions is automatically provided in addition to the image registration satisfying the physical constraints. The algorithm identifies differing material regions (sub-structures or objects) as regions where the deformation has different characteristics. We demonstrate the effectiveness of our method on the analysis of cardiac MRI which includes the detection of the left ventricle boundary and its deformation. The experimental results indicate the potential of the algorithm as an assistant tool for the quantitative analysis of cardiac functions in the diagnosis of heart disease.
Modeling a space-variant cortical representation for apparent motion.
Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash
2013-08-06
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
An evolutionary algorithm for model selection
Energy Technology Data Exchange (ETDEWEB)
Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany)
2013-07-01
When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial wave fit. Traditionally, the models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes into account the model complexity. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves which is appropriate for the number of events in the data. Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models where excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition, the method allows one to assess the dependence of the fit result on the fit model, which is an important contribution to the systematic uncertainty.
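As a stand-in for the partial-wave setting, the core idea of complexity-penalized evolutionary model selection can be sketched with linear-regression features scored by BIC. The data, the feature pool, and the simple (1+1)-style mutation loop are illustrative assumptions, not the authors' implementation, but the mechanism is the same: a goodness-of-fit score with a complexity penalty drives waves with small contributions out of the model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a pool of candidate waves: 12 features, only 3 carry signal.
n, p = 300, 12
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + X[:, 7] + rng.normal(0, 0.5, n)

def bic(mask):
    """Gaussian BIC of an OLS fit on the selected columns; the
    k*log(n) term penalizes model complexity."""
    k = int(mask.sum())
    if k == 0:
        rss = float(np.sum((y - y.mean()) ** 2))
    else:
        beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        rss = float(np.sum((y - X[:, mask] @ beta) ** 2))
    return n * np.log(rss / n) + (k + 1) * np.log(n)

# Minimal (1+1)-style evolutionary loop: flip one bit, keep if BIC improves.
mask = rng.random(p) < 0.5
best = bic(mask)
for _ in range(500):
    cand = mask.copy()
    cand[rng.integers(p)] ^= True
    score = bic(cand)
    if score < best:
        mask, best = cand, score

print(sorted(np.flatnonzero(mask).tolist()))
```

The informative features survive because dropping them costs far more likelihood than the penalty saves, while uninformative features are usually pruned.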
Rolling Shutter Motion Deblurring
Su, Shuochen
2015-06-07
Although motion blur and rolling shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately, with deblurring algorithms being unable to handle rolling shutter wobble, and rolling shutter algorithms being incapable of dealing with motion blur. We propose an approach that delivers sharp and undistorted output given a single rolling shutter motion blurred image. The key to achieving this is a global modeling of the camera motion trajectory, which enables each scanline of the image to be deblurred with the corresponding motion segment. We show the results of the proposed framework through experiments on synthetic and real data.
SU-C-BRF-07: A Pattern Fusion Algorithm for Multi-Step Ahead Prediction of Surrogate Motion
International Nuclear Information System (INIS)
Zawisza, I; Yan, H; Yin, F
2014-01-01
Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods are useful in providing real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track the target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) the training component, containing the primary data from the first frame to the beginning of the input subsequence; (b) the input subsequence component of the surrogate signal, used as input to the prediction algorithm; (c) the output subsequence component, the remaining signal used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts following these subsequences and combining them, with the assigned weighting factors, to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data was collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction
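The three prediction steps above can be sketched on a synthetic periodic surrogate; the Euclidean matching criterion, inverse-distance weighting, and parameter values here are illustrative assumptions rather than the abstract's exact choices.

```python
import numpy as np

def predict_ahead(signal, query_len, horizon, k=3):
    """Step 1: find the k historical subsequences best matching the most
    recent `query_len` samples. Step 2: weight each match by inverse
    distance. Step 3: combine the matches' continuations into a forecast."""
    query = signal[-query_len:]
    history = signal[:-query_len]
    scores, futures = [], []
    for start in range(len(history) - query_len - horizon + 1):
        seg = history[start:start + query_len]
        scores.append(np.linalg.norm(seg - query))
        futures.append(history[start + query_len:start + query_len + horizon])
    order = np.argsort(scores)[:k]
    w = 1.0 / (np.array(scores)[order] + 1e-9)
    w /= w.sum()
    return np.sum(w[:, None] * np.array(futures)[order], axis=0)

rng = np.random.default_rng(4)
t = np.arange(0, 60, 0.1)                    # 60 s of surrogate data at 10 Hz
sig = np.sin(2 * np.pi * t / 4) + rng.normal(0, 0.02, t.size)

pred = predict_ahead(sig, query_len=30, horizon=20)   # predict ~2 s ahead
true = np.sin(2 * np.pi * (t[-1] + 0.1 * np.arange(1, 21)) / 4)
corr = float(np.corrcoef(pred, true)[0, 1])
print(round(corr, 3))
```

On this regular breathing-like trace the predicted and actual continuations correlate strongly, consistent in spirit with the high correlations the study reports.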
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.
Markov chains models, algorithms and applications
Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen
2013-01-01
This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters. Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods
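As a minimal example of the classical theory covered in the book's first chapter, the stationary distribution of a small discrete-time chain can be computed as the left eigenvector of the transition matrix for eigenvalue 1 (the matrix below is a toy example, not taken from the book).

```python
import numpy as np

# Transition matrix of a 3-state discrete-time Markov chain (rows sum to 1).
P = np.array([[0.8, 0.15, 0.05],
              [0.3, 0.6,  0.1 ],
              [0.2, 0.3,  0.5 ]])

# The stationary distribution pi solves pi P = pi with sum(pi) = 1,
# i.e. it is the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Cross-check by iterating the chain from an arbitrary start distribution.
mu = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    mu = mu @ P
print(pi.round(4), bool(np.allclose(pi, mu, atol=1e-8)))
```

The iterative cross-check is the power-method view that also underlies the classical iterative methods the book highlights for large chains.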
Genetic Algorithm Based Microscale Vehicle Emissions Modelling
Directory of Open Access Journals (Sweden)
Sicong Zhu
2015-01-01
There is a need to match the accuracy of emission estimates with the outputs of transport models. The overall error rate in long-term traffic forecasts resulting from strategic transport models is likely to be significant. Microsimulation models, whilst high-resolution in nature, may have similar measurement errors if they use the outputs of strategic models to obtain traffic demand predictions. At the microlevel, this paper discusses the limitations of existing emissions estimation approaches. Emission models for predicting pollutants other than CO2 are proposed. A genetic algorithm approach is adopted to select the predicting variables for the black box model. The approach is capable of solving combinatorial optimization problems. Overall, the emission prediction results reveal that the proposed new models outperform conventional equations in terms of accuracy and robustness.
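A sketch of GA-based predictor selection in the spirit described above, with a validation-error fitness on synthetic data; the predictors, fitness choice, and GA parameters are illustrative assumptions, not the paper's calibrated emission model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in data: 10 candidate predictors (speed, acceleration, ...),
# of which only indices 1 and 4 actually drive the emission rate.
n, p = 400, 10
X = rng.normal(size=(n, p))
y = 3 * X[:, 1] + 2 * X[:, 4] + rng.normal(0, 0.3, n)
Xtr, Xva, ytr, yva = X[:300], X[300:], y[:300], y[300:]

def fitness(mask):
    """Negative validation MSE of an OLS fit on the selected columns."""
    if not mask.any():
        return -float(np.mean((yva - ytr.mean()) ** 2))
    beta, *_ = np.linalg.lstsq(Xtr[:, mask], ytr, rcond=None)
    return -float(np.mean((yva - Xva[:, mask] @ beta) ** 2))

# Small generational GA: elitism, tournament selection, uniform
# crossover, and per-bit mutation over binary selection masks.
pop = rng.random((20, p)) < 0.5
for _ in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    new = [pop[scores.argmax()].copy()]                  # elitism
    while len(new) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if scores[i] > scores[j] else pop[j]  # tournament 1
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if scores[i] > scores[j] else pop[j]  # tournament 2
        child = np.where(rng.random(p) < 0.5, a, b)      # uniform crossover
        child ^= rng.random(p) < 1.0 / p                 # bit-flip mutation
        new.append(child)
    pop = np.array(new)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(sorted(np.flatnonzero(best).tolist()))
```

The held-out validation score plays the role of the black-box model's prediction accuracy, so the GA searches the combinatorial space of variable subsets directly.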
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2017-05-01
Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once with automatic support and once without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and an automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% and reducing workload at the same time.
Modelling Evolutionary Algorithms with Stochastic Differential Equations.
Heredia, Jorge Pérez
2017-11-20
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
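The multiplicative drift theorems mentioned above bound expected runtime when the distance to the optimum shrinks by a constant factor in expectation. A quick empirical check with Random Local Search (RLS) on OneMax, where the number of zero bits X_t satisfies E[X_{t+1}] <= (1 - 1/n) X_t and drift analysis gives an O(n log n) expected runtime (the bound constant below follows the standard E[T] <= (ln X_0 + 1)/delta form with delta = 1/n and X_0 ~ n/2):

```python
import math
import random

def rls_runtime(n, seed):
    """Runtime of Random Local Search on OneMax: flip one uniformly
    random bit per step, accept when fitness does not decrease."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1
        if sum(y) >= sum(x):
            x = y
        steps += 1
    return steps

n = 100
runs = [rls_runtime(n, s) for s in range(30)]
avg = sum(runs) / len(runs)

# Multiplicative drift bound: E[T] <= n * (ln(n/2) + 1) for X_0 ~ n/2.
bound = n * (math.log(n / 2) + 1)
print(round(avg), avg <= bound * 2)
```

The empirical average sits comfortably under the drift bound, which is the kind of fixed-budget/runtime statement the SDE framework aims to reproduce analytically.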
Relative Motion Modeling and Autonomous Navigation Accuracy
2016-11-15
[Fragmentary record] This report examines the absolute and differential effects of the equatorial bulge term, J2, in the linearized equations for the relative motion of spacecraft (cf. T. Carter and M. Humi, "Clohessy-Wiltshire Equations Modified to Include Quadratic Drag," Journal of Guidance, Control, and Dynamics). Perturbation equations up to third order are used to separate the secular and periodic effects, written in terms of Hamiltonians.
On a PCA-based lung motion model
Energy Technology Data Exchange (ETDEWEB)
Li Ruijiang; Lewis, John H; Jia Xun; Jiang, Steve B [Department of Radiation Oncology and Center for Advanced Radiotherapy Technologies, University of California San Diego, 3855 Health Sciences Dr, La Jolla, CA 92037-0843 (United States); Zhao Tianyu; Wuenschel, Sara; Lamb, James; Yang Deshan; Low, Daniel A [Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Pl, St. Louis, MO 63110-1093 (United States); Liu Weifeng, E-mail: sbjiang@ucsd.edu [Amazon.com Inc., 701 5th Ave. Seattle, WA 98104 (United States)
2011-09-21
Respiration-induced organ motion is one of the major uncertainties in lung cancer radiotherapy, and it is crucial to be able to accurately model the lung motion. Most work so far has focused on the study of the motion of a single point (usually the tumor center of mass), and much less work has been done to model the motion of the entire lung. Inspired by the work of Zhang et al (2007 Med. Phys. 34 4772-81), we believe that the spatiotemporal relationship of the entire lung motion can be accurately modeled based on principal component analysis (PCA) and then a sparse subset of the entire lung, such as an implanted marker, can be used to drive the motion of the entire lung (including the tumor). The goal of this work is twofold. First, we aim to understand the underlying reason why PCA is effective for modeling lung motion and find the optimal number of PCA coefficients for accurate lung motion modeling. We attempt to address the above important problems both in a theoretical framework and in the context of real clinical data. Second, we propose a new method to derive the entire lung motion using a single internal marker based on the PCA model. The main results of this work are as follows. We derived an important property which reveals the implicit regularization imposed by the PCA model. We then studied the model using two mathematical respiratory phantoms and 11 clinical 4DCT scans for eight lung cancer patients. For the mathematical phantoms with cosine and an even power (2n) of cosine motion, we proved that 2 and 2n PCA coefficients and eigenvectors will completely represent the lung motion, respectively. Moreover, for the cosine phantom, we derived the equivalence conditions for the PCA motion model and the physiological 5D lung motion model (Low et al 2005 Int. J. Radiat. Oncol. Biol. Phys. 63 921-9). For the clinical 4DCT data, we demonstrated the modeling power and generalization performance of the PCA model. The average 3D modeling error using PCA was within 1
On a PCA-based lung motion model.
Li, Ruijiang; Lewis, John H; Jia, Xun; Zhao, Tianyu; Liu, Weifeng; Wuenschel, Sara; Lamb, James; Yang, Deshan; Low, Daniel A; Jiang, Steve B
2011-09-21
Respiration-induced organ motion is one of the major uncertainties in lung cancer radiotherapy, and it is crucial to be able to accurately model lung motion. Most work so far has focused on the motion of a single point (usually the tumor center of mass), and much less work has been done to model the motion of the entire lung. Inspired by the work of Zhang et al (2007 Med. Phys. 34 4772-81), we believe that the spatiotemporal relationship of the entire lung motion can be accurately modeled based on principal component analysis (PCA), and that a sparse subset of the entire lung, such as an implanted marker, can then be used to drive the motion of the entire lung (including the tumor). The goal of this work is twofold. First, we aim to understand the underlying reason why PCA is effective for modeling lung motion and to find the optimal number of PCA coefficients for accurate lung motion modeling. We address these problems both in a theoretical framework and in the context of real clinical data. Second, we propose a new method to derive the entire lung motion from a single internal marker based on the PCA model. The main results of this work are as follows. We derived an important property which reveals the implicit regularization imposed by the PCA model. We then studied the model using two mathematical respiratory phantoms and 11 clinical 4DCT scans for eight lung cancer patients. For the mathematical phantoms with cosine and an even power (2n) of cosine motion, we proved that 2 and 2n PCA coefficients and eigenvectors, respectively, will completely represent the lung motion. Moreover, for the cosine phantom, we derived the equivalence conditions for the PCA motion model and the physiological 5D lung motion model (Low et al 2005 Int. J. Radiat. Oncol. Biol. Phys. 63 921-9). For the clinical 4DCT data, we demonstrated the modeling power and generalization performance of the PCA model. The average 3D modeling error using PCA was within 1
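The marker-driven idea in this abstract can be illustrated with ordinary linear algebra: fit PCA to training displacement fields, solve a small least-squares problem for the coefficients that best match the observed marker, and reconstruct the full field. The sketch below is a toy illustration, not the authors' implementation; the cosine phantom, array shapes, and function names are all invented for the example:

```python
import numpy as np

def fit_pca_model(fields, k=2):
    """Fit a PCA motion model: mean field plus the first k eigenvectors.

    fields: (T, N) array of T respiratory phases by N displacement components.
    """
    mean = fields.mean(axis=0)
    # Right singular vectors of the centered data are the PCA eigenvectors
    _, _, vt = np.linalg.svd(fields - mean, full_matrices=False)
    return mean, vt[:k].T

def predict_from_marker(mean, basis, marker_idx, marker_vals):
    """Reconstruct the full field from a sparse subset (e.g. one marker).

    Solves least squares for the PCA coefficients w matching the marker,
    then reconstructs the entire field as mean + basis @ w.
    """
    a = basis[marker_idx]
    b = marker_vals - mean[marker_idx]
    w, *_ = np.linalg.lstsq(a, b, rcond=None)
    return mean + basis @ w

# Toy cosine phantom: 10 phases, 50 lung points with varying amplitude
t = np.linspace(0, 2 * np.pi, 10, endpoint=False)
amp = np.linspace(1.0, 3.0, 50)
fields = np.outer(np.cos(t), amp)          # (10, 50)
mean, basis = fit_pca_model(fields, k=1)

# Drive the whole field from a single "marker" (component 0)
marker_idx = np.array([0])
full = predict_from_marker(mean, basis, marker_idx, fields[3, marker_idx])
print(np.allclose(full, fields[3]))        # True: centered cosine data are rank one
```

In this simplified toy (a single cosine mode with zero temporal mean) the centered data are rank one, so one coefficient suffices; richer phantoms like those in the abstract need more coefficients.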
Lagrangian and Hamiltonian algorithms applied to the enlarged DGL model
International Nuclear Information System (INIS)
Batlle, C.; Roman-Roy, N.
1988-01-01
We analyse a model of two interacting relativistic particles that is useful to illustrate the equivalence between the Dirac-Bergmann and the geometrical presymplectic constraint algorithms. Both the Lagrangian and Hamiltonian formalisms are analysed in depth, and we also find and discuss the equations of motion. (author)
Guo, Yi; Hohil, Myron; Desai, Sachi V.
2007-10-01
Proposed are techniques for using collaborative robots in infrastructure security applications by employing them as mobile sensor suites. A vast number of critical facilities and technologies must be protected against unauthorized intruders, and a team of mobile robots working cooperatively can free up valuable human resources. Addressed are the technical challenges for multi-robot teams in security applications and the implementation of a multi-robot motion planning algorithm based on a patrolling and threat-response scenario. A neural-network-based methodology is exploited to plan a patrolling path with complete coverage. Also described is a proof-of-principle experimental setup with a group of Pioneer 3-AT and Centibot robots. A block diagram of the system integration of sensing and planning illustrates the robot-to-robot interaction needed to operate as a collaborative unit. The singular goal of the proposed approach is to overcome the limits of previous applications of robots to security and to enable systems to be deployed for autonomous operation in an unaltered environment, providing access to an all-encompassing sensor suite.
Application of an Image Tracking Algorithm in Fire Ant Motion Experiment
Directory of Open Access Journals (Sweden)
Lichuan Gui
2009-04-01
An image tracking algorithm, originally used with particle image velocimetry (PIV) to determine the velocities of buoyant solid particles in water, is modified and applied in the presented work to detect the motion of fire ants on a planar surface. A group of fire ant workers is placed at the bottom of a tub and excited with vibration of selected frequency and intensity. The moving fire ants are captured with an imaging system that successively acquires frames of high digital resolution. The background noise in the image recordings is extracted by averaging hundreds of frames and removed from each frame. Individual fire ant images are identified with a recursive digital filter and then tracked between frames according to the size, brightness, shape, and orientation angle of the ant image. The speed of an individual ant is determined from the displacement of its images and the time interval between frames. The trail of each fire ant is determined from the image tracking results, and a statistical analysis is conducted for all the fire ants in the group. The purpose of the experiment is to investigate the response of fire ants to substrate vibration. Test results indicate that the fire ants move faster after being excited, but the number of active ones is not increased even after a strong excitation.
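The pipeline described above (background removal by frame averaging, frame-to-frame association, speed from displacement over the frame interval) can be sketched in a few lines. This is a hypothetical simplification, with greedy nearest-neighbor matching standing in for the paper's multi-feature matching by size, brightness, shape and orientation:

```python
import numpy as np

def remove_background(frames):
    """Estimate the static background as the mean of many frames, then subtract."""
    return frames - frames.mean(axis=0)

def track_nearest(prev_centroids, curr_centroids):
    """Greedy nearest-neighbor association of detections between two frames."""
    matches = []
    for i, p in enumerate(prev_centroids):
        d = np.linalg.norm(curr_centroids - p, axis=1)
        j = int(np.argmin(d))
        matches.append((i, j, float(d[j])))
    return matches

def speeds(matches, dt):
    """Per-object speed: image displacement over the inter-frame interval."""
    return [dist / dt for _, _, dist in matches]

# Static scenes vanish after background removal
frames = np.full((5, 4, 4), 7.0)
print(np.allclose(remove_background(frames), 0))   # True

# Two "ants" between consecutive frames captured at 100 frames/s
prev = np.array([[10.0, 10.0], [40.0, 40.0]])
curr = np.array([[12.0, 10.0], [41.0, 40.0]])
m = track_nearest(prev, curr)
print(speeds(m, dt=0.01))   # [200.0, 100.0] pixels per second
```

A real implementation would add blob detection between these steps and guard the association against crossing trajectories.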
International Nuclear Information System (INIS)
Gao Wa; Zha Fu-Sheng; Li Man-Tian; Song Bao-Yu
2014-01-01
This paper develops a fast filtering algorithm based on vibration systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are demonstrated by comparing its filtering performance with various filtering methods, such as the fast wavelet transform algorithm, the particle filtering method and our previously developed single-degree-of-freedom vibration system filtering algorithm, in both simulation and practical settings. The comparisons indicate that a significant advantage of the proposed algorithm is its extremely fast filtering speed combined with good filtering performance. The developed fast filtering algorithm is then applied to the navigation and positioning system of a micro motion robot, where signal preprocessing has strict real-time requirements. The preprocessed data are used to estimate the heading angle error and the attitude angle error of the micro motion robot. The estimation experiments illustrate the high practicality of the proposed fast filtering algorithm. (general)
Energy Technology Data Exchange (ETDEWEB)
Jaskowiak, J; Ahmad, S; Ali, I [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Alsbou, N [Ohio Northern University, Ada, OH (United States)
2015-06-15
Purpose: To investigate the correlation of displacement vector fields (DVF) calculated by deformable image registration algorithms with motion parameters in helical, axial and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom with well-characterized targets of different sizes, made from water-equivalent material and inserted in foam to simulate lung lesions, was imaged with helical, axial and cone-beam CT. The phantom was moved with a cyclic motion of different amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck and iterative optical flow from the DIRART software, were used to deform the CT images of the mobile phantom to the CT images of the stationary phantom. Results: The displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude: large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm or 20 mm) at the interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while the DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edge, with shifts nearly equal to the motion amplitude. Conclusions: The DVF from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased
Neurons compute internal models of the physical laws of motion.
Angelaki, Dora E; Shaikh, Aasef G; Green, Andrea M; Dickman, J David
2004-07-29
A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.
Onboard Risk-Aware Real-Time Motion Planning Algorithms for Spacecraft Maneuvering
National Aeronautics and Space Administration — Unlocking the next generation of complex missions for autonomous spacecraft will require significant advances in robust motion planning. The aim of motion planning...
Modeling of earthquake ground motion in the frequency domain
Thrainsson, Hjortur
In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation
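The simulation idea in topic (i), generating an acceleration time history by inverse discrete Fourier transform from a target amplitude spectrum and accumulated phase differences, can be sketched as follows. The uniform phase-difference distribution and the toy spectrum are assumptions for illustration; the actual models condition the phase differences on the Fourier amplitude:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ground_motion(amplitudes):
    """Simulate an acceleration time history by inverse discrete Fourier transform.

    amplitudes: target one-sided Fourier amplitude spectrum (assumed given).
    Phase angles are accumulated from random phase differences; this sketch
    draws them from a plain uniform distribution for simplicity.
    """
    n = len(amplitudes)
    phase_diffs = rng.uniform(-np.pi, np.pi, n)
    phases = np.cumsum(phase_diffs)
    spectrum = amplitudes * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n=2 * (n - 1))   # real-valued time history

# Toy amplitude spectrum with energy concentrated near 2 Hz
freqs = np.fft.rfftfreq(512, d=0.02)               # 50 Hz sampling
amps = np.exp(-((freqs - 2.0) ** 2))
acc = simulate_ground_motion(amps)
print(acc.shape)   # (512,)
```

The structure of the phase angles, not just the amplitudes, controls where energy clusters in time, which is why the surveyed models treat the phase differences as the key stochastic quantity.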
Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm
Lahamy, H.; Lichti, D.
2011-09-01
Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in camera space, and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
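The simple version of the mean-shift algorithm evaluated here amounts to repeatedly moving a search window to the centroid of a likelihood map under it. A minimal sketch on a synthetic likelihood image (the Gaussian blob and window size are invented for the example; a real tracker would build the weights from range-camera data):

```python
import numpy as np

def mean_shift(weights, center, size, n_iter=20):
    """Move a window to the weighted centroid of the likelihood under it."""
    cy, cx = center
    h, w = size
    for _ in range(n_iter):
        y0 = max(int(cy) - h // 2, 0)
        x0 = max(int(cx) - w // 2, 0)
        patch = weights[y0:y0 + h, x0:x0 + w]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y0 + patch.shape[0], x0:x0 + patch.shape[1]]
        ny = (ys * patch).sum() / total
        nx = (xs * patch).sum() / total
        if abs(ny - cy) < 0.5 and abs(nx - cx) < 0.5:   # converged
            return ny, nx
        cy, cx = ny, nx
    return cy, cx

# Synthetic likelihood map: a bright blob (the "hand") centered at (30, 40)
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 30) ** 2 + (xx - 40) ** 2) / 50.0)
cy, cx = mean_shift(img, center=(20, 20), size=(21, 21))
print(abs(cy - 30) < 2 and abs(cx - 40) < 2)   # True: window locks onto the blob
```

The low cost of each iteration (one windowed centroid) is what makes this approach attractive for real-time tracking.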
Measurements of boat motion in waves at Durban harbour for qualitative validation of motion model
CSIR Research Space (South Africa)
Mosikare, OR
2010-09-01
Measurements of Boat Motion in Waves at Durban Harbour for Qualitative Validation of Motion Model. O.R. Mosikare (1,2), N.J. Theron (1), W. Van der Molen (1). (1) University of Pretoria, South Africa, 0001; (2) Council for Scientific and Industrial Research, Meiring Naude Rd, Brummeria, 0001 ...
Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing
2015-11-21
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.
Boson models of quadrupole collective motion
International Nuclear Information System (INIS)
Zelevinskij, V.G.
1985-01-01
The subject of the lecture is the low-lying excitations of even-even (e-e) spherical nuclei. The predominant role of the quadrupole mode, which determines the structure of spectra and transitions, is obvious against the background of shell periodicity and pair correlations. Typical E2 transitions are enhanced Ω ~ A^(2/3) times in comparison with single-particle estimates. Together with the regularity of the whole picture, this gives evidence for the collectivization of quadrupole motion. The collective states combine into bands, within which the transition probabilities are especially large; the frequencies ω of the enhanced transitions are small in comparison with the pair separation energy 2Ē ~ 2 MeV. Thus, the description of low-lying excitations of spherical nuclei has to be based on three principles: collectivity (Ω >> 1), adiabaticity (τ ≡ ω/2Ē << 1) and quadrupole symmetry
A high and low noise model for strong motion accelerometers
Clinton, J. F.; Cauzzi, C.; Olivieri, M.
2010-12-01
We present reference noise models for high-quality strong motion accelerometer installations. We use continuous accelerometer data acquired by the Swiss Seismological Service (SED) since 2006, together with other international high-quality accelerometer network data, to derive very broadband (50 Hz to 100 s) high- and low-noise models. The proposed noise models are compared to the Peterson (1993) low- and high-noise models designed for broadband seismometers; the datalogger self-noise; background noise levels at existing Swiss strong motion stations; and typical earthquake signals recorded in Switzerland and worldwide. The standard strong motion station operated by the SED consists of a Kinemetrics EpiSensor (2g clip level; flat acceleration response from 200 Hz to DC) with insulated sensor/datalogger systems placed in vault-quality sites. At all frequencies, there is at least one order of magnitude between the accelerometer low-noise model (ALNM) and the accelerometer high-noise model (AHNM); at high frequencies (>1 Hz) this extends to two orders of magnitude. This study provides remarkable confirmation of the capability of modern strong motion accelerometers to record low-amplitude ground motions with seismic observation quality. In particular, an accelerometric station operating at the ALNM is capable of recording the full spectrum of near-source earthquakes, out to 100 km, down to M2. Of particular interest for the SED, this study provides acceptable noise limits for candidate sites in the on-going Strong Motion Network modernisation.
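Deriving a station noise model of this kind ultimately rests on estimating power spectral densities of continuous recordings and taking envelopes over many segments and stations. A minimal Welch-style PSD estimate for a white accelerometer noise floor (the sample rate and noise level are invented; the SED processing is certainly more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)

def acceleration_psd(x, fs, nseg=8):
    """Welch-style PSD: average periodograms of Hann-windowed segments.

    Returns frequencies and a one-sided PSD in (m/s^2)^2/Hz.
    """
    n = len(x) // nseg
    win = np.hanning(n)
    norm = fs * (win ** 2).sum()
    segs = []
    for i in range(nseg):
        spec = np.abs(np.fft.rfft(x[i * n:(i + 1) * n] * win)) ** 2 / norm
        spec[1:-1] *= 2                        # fold to a one-sided spectrum
        segs.append(spec)
    return np.fft.rfftfreq(n, d=1.0 / fs), np.mean(segs, axis=0)

# Ten minutes of white accelerometer self-noise at an assumed 200 Hz rate
fs, sigma = 200.0, 1e-4
x = rng.normal(0.0, sigma, int(fs) * 600)
f, psd = acceleration_psd(x, fs)

# A flat noise floor should sit at sigma^2 / (fs / 2); compare in dB,
# the units in which such noise models are usually plotted
db = 10 * np.log10(psd[1:])
expected = 10 * np.log10(sigma ** 2 / (fs / 2))
print(abs(db.mean() - expected) < 1.0)   # True, up to the chi-squared bias
```

Repeating this over many stations and taking low and high envelopes of the resulting PSD curves is the basic recipe behind low- and high-noise models.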
Pyramid algorithms as models of human cognition
Pizlo, Zygmunt; Li, Zheng
2003-06-01
There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of these mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.
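The hierarchical clustering underlying pyramid algorithms is easiest to see in the image-pyramid case: repeatedly smooth and downsample, so each level summarizes the one below at half the resolution. A minimal sketch (a 3x3 box blur stands in for the usual Gaussian kernel):

```python
import numpy as np

def build_pyramid(image, levels):
    """Image pyramid: repeatedly blur and downsample by two.

    Each level is a coarser summary of the previous one, mirroring the
    coarse-to-fine processing that pyramid models of cognition assume.
    """
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        # cheap separable 3x3 smoothing: vertical pass, then horizontal pass
        p = np.pad(img, 1, mode='edge')
        blurred = (p[:-2, 1:-1] + p[1:-1, 1:-1] + p[2:, 1:-1]) / 3.0
        p = np.pad(blurred, 1, mode='edge')
        blurred = (p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:]) / 3.0
        pyramid.append(blurred[::2, ::2])
    return pyramid

img = np.arange(64.0 * 64).reshape(64, 64)
pyr = build_pyramid(img, levels=4)
print([p.shape for p in pyr])   # [(64, 64), (32, 32), (16, 16), (8, 8)]
```

Search and matching performed top-down on such a structure touch far fewer elements than a flat scan, which is one argument for its plausibility as a cognitive model.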
Modeling Trees with a Space Colonization Algorithm
Morell Higueras, Marc
2014-01-01
This TFG (final-year project) covers the implementation of a procedural generation algorithm that builds a structure reminiscent of that of a temperate-climate tree, together with the conversion of that structure into a three-dimensional model, accompanied by a tool to visualize the result and export it.
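The space colonization algorithm such projects implement grows a branching skeleton toward a cloud of attraction points. A compact sketch of the core loop (the influence and kill radii, step size, and point counts are invented for the example; no mesh generation or export is attempted):

```python
import numpy as np

rng = np.random.default_rng(2)

def space_colonization(attractors, root, influence=1.0, kill=0.1,
                       step=0.05, iters=200):
    """Core loop of the space colonization algorithm.

    Each iteration: every attraction point pulls the branch node nearest to
    it (if within the influence radius); each pulled node spawns a child in
    the averaged pull direction; attractors inside the kill radius vanish.
    """
    nodes = np.asarray([root], dtype=float)
    attractors = np.asarray(attractors, dtype=float)
    for _ in range(iters):
        if len(attractors) == 0:
            break
        d = np.linalg.norm(attractors[:, None, :] - nodes[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        nearest_dist = d[np.arange(len(attractors)), nearest]
        new_nodes = []
        for i in np.unique(nearest):
            sel = (nearest == i) & (nearest_dist < influence)
            if not sel.any():
                continue
            pull = (attractors[sel] - nodes[i]).mean(axis=0)
            norm = np.linalg.norm(pull)
            if norm > 0:
                new_nodes.append(nodes[i] + step * pull / norm)
        if not new_nodes:
            break
        nodes = np.vstack([nodes, new_nodes])
        d = np.linalg.norm(attractors[:, None, :] - nodes[None, :, :], axis=2)
        attractors = attractors[d.min(axis=1) > kill]
    return nodes

# Attraction points scattered in a "crown" region above the root
crown = rng.uniform([0.0, 0.5], [1.0, 1.0], size=(100, 2))
skeleton = space_colonization(crown, root=[0.5, 0.0])
print(len(skeleton) > 1)   # True: the skeleton grew toward the crown
```

Recording each child's parent index during growth yields the branch topology needed to sweep a 3D tube mesh around the skeleton.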
Genetic Algorithms Principles Towards Hidden Markov Model
Directory of Open Access Journals (Sweden)
Nabil M. Hewahi
2011-10-01
In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem arises when experts assign probability values to an HMM using only limited inputs: the assigned values might not be accurate for other cases in the same domain. We introduce a GA-based approach to find suitable probability values for the HMM so that it is correct in more cases than those originally used to assign the probability values.
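The GA-over-HMM idea can be sketched by treating a row-stochastic matrix as the chromosome, scoring it by data likelihood, and renormalizing after mutation so the probabilities stay valid. A toy version for transition probabilities only (population size, mutation scale and the data are invented; the paper's encoding covers full HMMs):

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(trans, sequences):
    """Log-likelihood of observed state sequences under a transition matrix."""
    ll = 0.0
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            ll += np.log(trans[a, b] + 1e-12)
    return ll

def evolve(sequences, n_states=2, pop=30, gens=60):
    """Tiny GA: mutate transition matrices, keep the fitter half each round."""
    popn = [rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda m: fitness(m, sequences), reverse=True)
        survivors = popn[:pop // 2]
        children = []
        for m in survivors:
            child = np.abs(m + rng.normal(0.0, 0.05, m.shape))
            child /= child.sum(axis=1, keepdims=True)   # keep rows stochastic
            children.append(child)
        popn = survivors + children
    return max(popn, key=lambda m: fitness(m, sequences))

# Observed sequences in which state 0 strongly tends to stay in state 0
seqs = [[0, 0, 0, 1, 0, 0, 0, 0, 1, 0] * 3]
best = evolve(seqs)
print(best[0, 0] > best[0, 1])   # True: the GA recovers the self-transition bias
```

The renormalization step after mutation is the part that keeps every chromosome a valid HMM parameter set, which is the main constraint a GA encoding has to respect here.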
Field theory of large amplitude collective motion. A schematic model
International Nuclear Information System (INIS)
Reinhardt, H.
1978-01-01
By using path integral methods the equation for large amplitude collective motion for a schematic two-level model is derived. The original fermion theory is reformulated in terms of a collective (Bose) field. The classical equation of motion for the collective field coincides with the time-dependent Hartree-Fock equation. Its classical solution is quantized by means of the field-theoretical generalization of the WKB method. (author)
Directory of Open Access Journals (Sweden)
Dashan Zhang
2016-04-01
The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not add any mass to the measured object, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
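Subpixel refinement of a correlation peak, the kind of step such fast algorithms replace upsampling with, can be done cheaply by fitting a parabola through the three samples around the integer peak. A 1D sketch (the signal shapes and the 5.3-sample shift are invented for the example; this is a generic technique, not the paper's specific algorithms):

```python
import numpy as np

def subpixel_shift(ref, cur):
    """Shift between two 1D signals: integer peak of the circular
    cross-correlation, refined by a three-point parabola fit."""
    n = len(ref)
    corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    denom = y0 - 2 * y1 + y2
    frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    shift = k + frac
    return shift if shift < n / 2 else shift - n   # signed circular shift

# A Gaussian pulse displaced by 5.3 samples between "frames"
x = np.arange(256)
ref = np.exp(-((x - 100.0) ** 2) / 50.0)
cur = np.exp(-((x - 105.3) ** 2) / 50.0)
print(round(subpixel_shift(ref, cur), 1))   # 5.3
```

The parabola fit needs only three correlation samples per estimate, which is why such refinements run much faster than upsampled cross-correlation at comparable accuracy for smooth peaks.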
Directory of Open Access Journals (Sweden)
V. E. Marley
2015-01-01
Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure mirrors the causal and temporal relationships between events of the process being modeled, together with all the information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is highly expressive: the algorithms it can represent include the class of all random algorithms. Existing systems for automated modeling based on algorithmic networks mainly use operators working with real numbers. Although this reduces their power, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, they monitor the process by analyzing gaps and deadlines, and offer no predictive analysis of schedule execution. The library described here is designed to build such predictive models: the user specifies source data to obtain a set of projections, from which one is chosen and adopted as the new plan.
Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon
2013-07-01
Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU)-accelerated motion correction strategy as part of the BioImage Suite project (www.bioimagesuite.org). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay, as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non-real-time) offline methods and outperformed other real-time methods based on zero-order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.
Superintegrability of geodesic motion on the sausage model
Arutyunov, Gleb; Heinze, Martin; Medina-Rincon, Daniel
2017-06-01
Reduction of the η-deformed sigma model on AdS_5 × S^5 to the two-dimensional squashed sphere (S^2)_η can be viewed as a special case of the Fateev sausage model where the coupling constant ν is imaginary. We show that geodesic motion in this model is described by a certain superintegrable mechanical system with a four-dimensional phase space. This is done by explicitly constructing three integrals of motion which satisfy the sl(2) Poisson algebra relations, albeit being non-polynomial in the momenta. Further, we find a canonical transformation which maps the Hamiltonian of this mechanical system to the one describing geodesic motion on the usual two-sphere. By inverting this transformation we map geodesics on this auxiliary two-sphere back to the sausage model. This paper is a tribute to the memory of Prof Petr Kulish.
Comtois, Gary; Mendelson, Yitzhak; Ramuka, Piyush
2007-01-01
Wearable physiological monitoring using a pulse oximeter would enable field medics to monitor multiple injuries simultaneously, thereby prioritizing medical intervention when resources are limited. However, a primary factor limiting the accuracy of pulse oximetry is poor signal-to-noise ratio, since the photoplethysmographic (PPG) signals from which arterial oxygen saturation (SpO2) and heart rate (HR) measurements are derived are compromised by movement artifacts. This study was undertaken to quantify SpO2 and HR errors induced by certain motion artifacts utilizing accelerometry-based adaptive noise cancellation (ANC). Since the fingers are generally more vulnerable to motion artifacts, measurements were performed using a custom forehead-mounted wearable pulse oximeter developed for real-time remote physiological monitoring and triage applications. This study revealed that processing motion-corrupted PPG signals with least mean squares (LMS) and recursive least squares (RLS) algorithms can be effective in reducing SpO2 and HR errors during jogging, but the degree of improvement depends on filter order. Although both algorithms produced similar improvements, implementing the adaptive LMS algorithm is advantageous since it requires significantly fewer operations.
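Accelerometry-based adaptive noise cancellation of the kind evaluated here uses the accelerometer as a noise reference for an adaptive FIR filter. A minimal LMS sketch on synthetic data (the sampling rate, artifact coupling, and filter settings are invented; the study's filters and signals are more realistic):

```python
import numpy as np

rng = np.random.default_rng(4)

def lms_cancel(primary, reference, order=8, mu=0.01):
    """LMS adaptive noise cancellation.

    An FIR filter learns to predict the artifact in the primary channel
    from the accelerometer reference; the prediction error is the cleaned
    signal, and it also drives the weight update.
    """
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order - 1, len(primary)):
        x = reference[n - order + 1:n + 1][::-1]   # newest sample first
        e = primary[n] - w @ x
        w += 2 * mu * e * x
        out[n] = e
    return out

# Synthetic PPG (1.2 Hz pulse, i.e. 72 bpm) corrupted by a motion artifact
fs = 100
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)
accel = rng.normal(0.0, 1.0, len(t))        # accelerometer reference
corrupted = ppg + 0.8 * accel               # artifact leaking into the PPG
cleaned = lms_cancel(corrupted, accel)

err_before = np.mean((corrupted[200:] - ppg[200:]) ** 2)
err_after = np.mean((cleaned[200:] - ppg[200:]) ** 2)
print(err_after < err_before)   # True: artifact power is reduced
```

The filter order trades artifact modeling capacity against computation, which is the dependence on filter order the study reports; RLS converges faster but costs far more operations per sample than this LMS update.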
Nonlinear model predictive control theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
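The receding-horizon principle the book builds on can be sketched with a toy plant: at each sampling instant an open-loop optimal control problem is solved over a short horizon, and only the first input of the optimal sequence is applied. The plant, cost weights and exhaustive grid search below are illustrative assumptions, not the book's algorithms:

```python
import itertools
import math

def f(x, u, dt=0.1):
    """Toy unstable nonlinear plant x' = sin(x) + u (illustrative)."""
    return x + dt * (math.sin(x) + u)

U_GRID = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

def nmpc_step(x0, horizon=4):
    """Solve the finite-horizon problem by exhaustive search; return only u_0."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(U_GRID, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = f(x, u)
            cost += x * x + 0.1 * u * u   # stage cost: state error + control effort
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# closed loop: repeatedly re-solving and applying u_0 regulates x toward 0
x = 1.0
for _ in range(30):
    x = f(x, nmpc_step(x))
```

A real NMPC solver replaces the grid search with a nonlinear programming routine, which is exactly the "core of any nonlinear model predictive controller" the book dissects.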
Geomagnetic field models for satellite angular motion studies
Ovchinnikov, M. Yu.; Penkov, V. I.; Roldugin, D. S.; Pichuzhkina, A. V.
2018-03-01
Four geomagnetic field models are discussed: IGRF, inclined dipole, direct dipole and simplified dipole. Geomagnetic induction vector expressions are provided in different reference frames, and the behavior of the induction vector is compared across models. The applicability of the models to the analysis of satellite motion is studied from theoretical and engineering perspectives. Relevant satellite dynamics analysis cases using analytical and numerical techniques are provided; these cases demonstrate the benefit of a particular model for a specific dynamics study. Recommendations for model usage are summarized at the end.
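For orientation, the dipole models give the induction vector in closed form, B = (mu0/4pi)(3(m.r_hat)r_hat - m)/r^3. A sketch in SI units; the Earth dipole moment value is approximate and for illustration only:

```python
import numpy as np

MU0_4PI = 1e-7      # mu0 / (4*pi) in SI units
M_EARTH = 8.0e22    # Earth's dipole moment in A*m^2 (approximate)

def dipole_field(r_vec, m_vec):
    """Magnetic induction B of a point dipole m at position r."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return MU0_4PI * (3.0 * np.dot(m_vec, r_hat) * r_hat - m_vec) / r**3
```

Evaluated at the surface, this reproduces the familiar facts that the equatorial field is about 30 microtesla and the polar field is exactly twice the equatorial value for a pure dipole.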
Skornitzke, S; Fritz, F; Klauss, M; Pahn, G; Hansen, J; Hirsch, J; Grenacher, L; Kauczor, H-U; Stiller, W
2015-02-01
To compare six different scenarios for correcting for breathing motion in abdominal dual-energy CT (DECT) perfusion measurements. Rigid [RRComm(80 kVp)] and non-rigid [NRComm(80 kVp)] registration of commercially available CT perfusion software, custom non-rigid registration [NRCustom(80 kVp], demons algorithm) and a control group [CG(80 kVp)] without motion correction were evaluated using 80 kVp images. Additionally, NRCustom was applied to dual-energy (DE)-blended [NRCustom(DE)] and virtual non-contrast [NRCustom(VNC)] images, yielding six evaluated scenarios. After motion correction, perfusion maps were calculated using a combined maximum slope/Patlak model. For qualitative evaluation, three blinded radiologists independently rated motion correction quality and resulting perfusion maps on a four-point scale (4 = best, 1 = worst). For quantitative evaluation, relative changes in metric values, R(2) and residuals of perfusion model fits were calculated. For motion-corrected images, mean ratings differed significantly [NRCustom(80 kVp) and NRCustom(DE), 3.3; NRComm(80 kVp), 3.1; NRCustom(VNC), 2.9; RRComm(80 kVp), 2.7; CG(80 kVp), 2.7; all p VNC), 22.8%; RRComm(80 kVp), 0.6%; CG(80 kVp), 0%]. Regarding perfusion maps, NRCustom(80 kVp) and NRCustom(DE) were rated highest [NRCustom(80 kVp), 3.1; NRCustom(DE), 3.0; NRComm(80 kVp), 2.8; NRCustom(VNC), 2.6; CG(80 kVp), 2.5; RRComm(80 kVp), 2.4] and had significantly higher R(2) and lower residuals. Correlation between qualitative and quantitative evaluation was low to moderate. Non-rigid motion correction improves spatial alignment of the target region and fit of CT perfusion models. Using DE-blended and DE-VNC images for deformable registration offers no significant improvement. Non-rigid algorithms improve the quality of abdominal CT perfusion measurements but do not benefit from DECT post processing.
A review of ocean chlorophyll algorithms and primary production models
Li, Jingwen; Zhou, Song; Lv, Nan
2015-12-01
This paper introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production from ocean chlorophyll concentration. By comparing the five inversion algorithms, it summarizes their advantages and disadvantages, and briefly analyzes trends in ocean primary production modelling.
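The five algorithms are not named in the abstract; a typical member of this family is a band-ratio algorithm of the NASA OCx type, where log10 of the chlorophyll concentration is a polynomial in log10 of a blue-to-green reflectance ratio. The coefficients below are placeholders for illustration; each mission publishes its own tuned values:

```python
import math

# Illustrative OCx-style coefficients a0..a4 (placeholders, not mission values)
A = [0.366, -3.067, 1.930, 0.649, -1.532]

def band_ratio_chl(max_blue, green):
    """Band-ratio chlorophyll estimate (mg/m^3): log10(chl) is a polynomial
    in log10 of the blue/green remote-sensing reflectance ratio."""
    x = math.log10(max_blue / green)
    log_chl = sum(a * x**i for i, a in enumerate(A))
    return 10.0 ** log_chl
```

Higher blue-to-green ratios indicate clearer water, so the estimate decreases as the ratio grows.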
Adaptive Numerical Algorithms in Space Weather Modeling
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical
A model of ATL ground motion for storage rings
International Nuclear Information System (INIS)
Wolski, Andrzej; Walker, Nicholas J.
2003-01-01
Low emittance electron storage rings, such as those used in third generation light sources or linear collider damping rings, rely for their performance on highly stable alignment of the lattice components. Even if all vibration and environmental noise sources could be suppressed, diffusive ground motion will lead to orbit drift and emittance growth. Understanding such motion is important for predicting the performance of a planned accelerator and designing a correction system. A description (known as the ATL model) of ground motion over relatively long time scales has been developed and has become the standard for studies of the long straight beamlines in linear colliders. Here, we show how the model may be developed to include beamlines of any geometry. We apply the model to the NLC and TESLA damping rings, to compare their relative stability under different conditions
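The ATL law states that the variance of the relative vertical misalignment of two points grows linearly with both elapsed time T and their separation L: Var[y(s+L) - y(s)] = A*T*L. Such misalignments can be sampled as a spatial random walk along the beamline; the coefficient A and all numbers below are illustrative:

```python
import numpy as np

def atl_displacements(n_points, ds, T, A=1e-5, rng=None):
    """Sample misalignments y_i at spacing ds along a beamline after time T,
    so that Var[y(s+L) - y(s)] = A * T * L (ATL law; units are illustrative)."""
    rng = rng or np.random.default_rng(0)
    steps = rng.normal(0.0, np.sqrt(A * T * ds), size=n_points - 1)
    return np.concatenate([[0.0], np.cumsum(steps)])
```

Because the walk has independent increments, the variance of y over a span L is the sum of the per-step variances, recovering A*T*L exactly.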
Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.
Semnani, Samaneh Hosseini; Basir, Otman A
2015-01-01
The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by two well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking algorithm. Generally speaking, although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies when it comes to their ability to maintain simultaneous robust dynamic area coverage and target coverage. These two coverage performance objectives are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike a balance between robust area coverage and target coverage. Such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with the two other flocking-based algorithms, once using randomly moving targets and once using a standard walking-pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area coverage and target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms.
Predictive 3D search algorithm for multi-frame motion estimation
Lim, Hong Yin; Kassim, A.A.; With, de P.H.N.
2008-01-01
Multi-frame motion estimation introduced in recent video standards such as H.264/AVC, helps to improve the rate-distortion performance and hence the video quality. This, however, comes at the expense of having a much higher computational complexity. In multi-frame motion estimation, there exists
Realistic modelling of observed seismic motion in complex sedimentary basins
International Nuclear Information System (INIS)
Faeh, D.; Panza, G.F.
1994-03-01
Three applications of a numerical technique for realistically modelling the seismic ground motion in complex two-dimensional structures are illustrated. First we consider a sedimentary basin in the Friuli region, and we model strong-motion records from an aftershock of the 1976 earthquake. Then we simulate the ground motion caused in Rome by the 1915 Fucino (Italy) earthquake, and we compare our modelling with the damage distribution observed in the town. Finally we deal with the interpretation of ground motion recorded in Mexico City as a consequence of earthquakes in the Mexican subduction zone. The synthetic signals explain the major characteristics (relative amplitudes, spectral amplification, frequency content) of the considered seismograms, and the space distribution of the available macroseismic data. For the sedimentary basin in the Friuli area, parametric studies demonstrate the relevant sensitivity of the computed ground motion to small changes in the subsurface topography of the sedimentary basin, and in the velocity and quality factor of the sediments. The total energy of ground motion, determined from our numerical simulation in Rome, is in very good agreement with the distribution of damage observed during the Fucino earthquake. For epicentral distances in the range 50 km-100 km, the source location, and not only the local soil conditions, controls the local effects. For Mexico City, the observed ground motion can be explained as resonance effects and as excitation of local surface waves, and the theoretical and observed maximum spectral amplifications are very similar. In general, our numerical simulations permit the estimation of the maximum and average spectral amplification for specific sites, i.e. they are a very powerful tool for accurate micro-zonation. (author). 38 refs, 19 figs, 1 tab
Lagrangian speckle model and tissue-motion estimation--theory.
Maurice, R L; Bertrand, M
1999-07-01
It is known that when a tissue is subjected to movements such as rotation, shearing, scaling, etc., the changes in speckle patterns that result act as a noise source, often responsible for most of the displacement-estimate variance. From a modeling point of view, these changes can be thought of as resulting from two mechanisms: one is the motion of the speckles and the other, the alteration of their morphology. In this paper, we propose a new tissue-motion estimator to counteract these speckle-decorrelation effects. The estimator is based on a Lagrangian description of the speckle motion. This description allows us to follow local characteristics of the speckle field as if they were a material property. This method leads to an analytical description of the decorrelation in a way which enables the derivation of an appropriate inverse filter for speckle restoration. The filter is appropriate for a linear geometrical transformation of the scattering function (LT), i.e., a constant-strain region of interest (ROI). As the LT itself is a parameter of the filter, a tissue-motion estimator can be formulated as a nonlinear minimization problem, seeking the best match between the pre-tissue-motion image and a restored-speckle post-motion image. The method is tested using simulated radio-frequency (RF) images of tissue undergoing axial shear.
A genetic algorithm for solving supply chain network design model
Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.
2013-09-01
Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.
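A toy version of such a GA (binary genome encoding which facilities to open, tournament selection, one-point crossover, bit-flip mutation, elitism) can be sketched as follows. The instance data and GA settings are made up, and the paper's novel chromosome structure is not reproduced:

```python
import random

random.seed(7)
N_W, N_C = 6, 10
OPEN_COST = [random.uniform(5, 10) for _ in range(N_W)]
ASSIGN = [[random.uniform(1, 10) for _ in range(N_W)] for _ in range(N_C)]

def cost(genome):
    """Total cost of a warehouse-opening decision (one bit per warehouse):
    opening costs plus each customer served by its cheapest open warehouse."""
    if not any(genome):
        return float("inf")
    open_cost = sum(c for c, g in zip(OPEN_COST, genome) if g)
    serve = sum(min(row[w] for w in range(N_W) if genome[w]) for row in ASSIGN)
    return open_cost + serve

def ga(pop_size=30, gens=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_W)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        new = [best[:]]                       # elitism: keep the best genome
        while len(new) < pop_size:
            a, b = (min(random.sample(pop, 3), key=cost) for _ in range(2))
            cut = random.randrange(1, N_W)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            new.append(child)
        pop = new
        best = min(pop + [best], key=cost)
    return best
```

On an instance this small the optimum can be checked by enumeration, which is a convenient sanity test for the operators.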
Rate-control algorithms testing by using video source model
DEFF Research Database (Denmark)
Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna
2008-01-01
In this paper, a method for testing rate-control algorithms by means of a video source model is suggested. The proposed method significantly improves algorithm testing over a large test set.
Motion estimation by data assimilation in reduced dynamic models
International Nuclear Information System (INIS)
Drifi, Karim
2013-01-01
Motion estimation is a major challenge in the field of image sequence analysis. This thesis is a study of the dynamics of geophysical flows visualized by satellite imagery. Satellite image sequences are currently underused for the task of motion estimation. A good understanding of geophysical flows allows a better analysis and forecast of phenomena in domains such as oceanography and meteorology. Data assimilation provides an excellent framework for achieving a compromise between heterogeneous data, especially numerical models and observations. Hence, in this thesis we set out to apply variational data assimilation methods to estimate motion on image sequences. Since one of the major drawbacks of applying these assimilation techniques is the considerable computation time and memory required, we define and use a model reduction method in order to significantly decrease both. We then explore the possibilities that reduced models provide for motion estimation, particularly the possibility of strictly imposing some known constraints on the computed solutions. In particular, we show how to estimate a divergence-free motion with boundary conditions on a complex spatial domain.
Markov dynamic models for long-timescale protein motion.
Chiang, Tsung-Han; Hsu, David; Latombe, Jean-Claude
2010-06-01
Molecular dynamics (MD) simulation is a well-established method for studying protein motion at the atomic scale. However, it is computationally intensive and generates massive amounts of data. One way of addressing the dual challenges of computation efficiency and data analysis is to construct simplified models of long-timescale protein motion from MD simulation data. In this direction, we propose to use Markov models with hidden states, in which the Markovian states represent potentially overlapping probabilistic distributions over protein conformations. We also propose a principled criterion for evaluating the quality of a model by its ability to predict long-timescale protein motions. Our method was tested on 2D synthetic energy landscapes and two extensively studied peptides, alanine dipeptide and the villin headpiece subdomain (HP-35 NleNle). One interesting finding is that although a widely accepted model of alanine dipeptide contains six states, a simpler model with only three states is equally good for predicting long-timescale motions. We also used the constructed Markov models to estimate important kinetic and dynamic quantities for protein folding, in particular, mean first-passage time. The results are consistent with available experimental measurements.
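One of the quantities mentioned, mean first-passage time, follows directly from a chain's transition matrix by solving a linear system over the non-target states. A sketch for a plain discrete-time chain (the paper's models use hidden states; this is the basic case):

```python
import numpy as np

def mean_first_passage_time(P, target):
    """Expected number of steps to first reach `target` from every state of a
    discrete-time Markov chain with transition matrix P. Solves (I - Q) t = 1
    over the non-target states."""
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]
    t = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    mfpt = np.zeros(n)
    mfpt[others] = t
    return mfpt
```

For a two-state chain that leaves state 0 with probability 0.1 per step, the MFPT from 0 to 1 is the geometric mean 1/0.1 = 10 steps, which the solver reproduces.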
Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
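The abstract does not give the update equations; a standard choice for iterative deconvolution with a known blur is the Richardson-Lucy scheme, shown here in 1-D as a stand-in for the ordered-subset variant described:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=100):
    """1-D Richardson-Lucy deconvolution: multiplicative updates that keep the
    estimate non-negative and fit the blurred data under the known PSF."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est
```

In the motion-correction setting, the PSF is derived from the measured motion trace; here a symmetric 3-tap blur stands in for it.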
Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.
2008-01-01
In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform
Generalized Jaynes-Cummings model as a quantum search algorithm
International Nuclear Information System (INIS)
Romanelli, A.
2009-01-01
We propose a continuous time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this frame, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.
Aeon: Synthesizing Scheduling Algorithms from High-Level Models
Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal
This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model for a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.
Pair shell model description of collective motions
International Nuclear Information System (INIS)
Chen Hsitseng; Feng Dahsuan
1996-01-01
The shell model in the pair basis has been reviewed with a case study of four particles in a spherical single-j shell. By analyzing the wave functions according to their pair components, the novel concept of optimum pairs was developed, which led to the proposal of a generalized pair mean-field method to solve the many-body problem. The salient feature of the method is its ability to handle, within the framework of the spherical shell model, a rotational system where the usual strong configuration-mixing complexity is so simplified that it is now possible to obtain analytically the band-head energies and the moments of inertia. We have also examined the effects of pair truncation on rotation and found slow convergence when adding higher-spin pairs. Finally, we found that when the SDI and Q·Q interactions are of equal strength, the optimum pair approximation is still valid. (orig.)
Visualization of logistic algorithm in Wilson model
Glushchenko, A. S.; Rodin, V. A.; Sinegubov, S. V.
2018-05-01
Economic order quantity (EOQ), defined by the Wilson's model, is widely used at different stages of production and distribution of different products. It is useful for making decisions in the management of inventories, providing a more efficient business operation and thus bringing more economic benefits. There is a large amount of reference material and extensive computer shells that help solving various logistics problems. However, the use of large computer environments is not always justified and requires special user training. A tense supply schedule in a logistics model is optimal, if, and only if, the planning horizon coincides with the beginning of the next possible delivery. For all other possible planning horizons, this plan is not optimal. It is significant that when the planning horizon changes, the plan changes immediately throughout the entire supply chain. In this paper, an algorithm and a program for visualizing models of the optimal value of supplies and their number, depending on the magnitude of the planned horizon, have been obtained. The program allows one to trace (visually and quickly) all main parameters of the optimal plan on the charts. The results of the paper represent a part of the authors’ research work in the field of optimization of protection and support services of ports in the Russian North.
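The optimal order quantity in Wilson's model has the closed form Q* = sqrt(2DK/h) for demand rate D, fixed ordering cost K and unit holding cost h; the total cost D*K/Q + h*Q/2 is convex in Q and minimized there. A minimal sketch with standard symbols (not tied to the paper's visualization program):

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Wilson economic order quantity: the Q minimizing ordering + holding cost."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def total_cost(q, demand_rate, order_cost, holding_cost):
    """Annual cost of ordering in batches of size q."""
    return demand_rate / q * order_cost + holding_cost * q / 2.0
```

Deviating from Q* in either direction raises the total cost, which is the property a visualization of the optimal plan makes immediately visible.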
Interactive Modelling and Simulation of Human Motion
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol
This Ph.D. thesis deals with the modelling and simulation of human motion. The topics of the thesis have at least two things in common. First, they deal with human motion: although the developed models can also be used for other purposes, the primary focus is on modelling the human body. Second, they all use simulation as a tool to synthesize motion and thereby create animations. This is an important point, since it means that we do not only create tools that animators can use to make fun... Contributions include a model of human joints exhibiting both non-convexity and multiple degrees of freedom, and a general, versatile model for the activation of soft bodies, which can be used as an animation tool but is equally well suited to simulating human muscle, since it satisfies the fundamental physical principles.
Visual fatigue modeling for stereoscopic video shot based on camera motion
Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing
2014-11-01
As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfort zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when cameras and background are static; relative motion should be considered for other camera conditions, which determine different factor coefficients and weights. In contrast to the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented: visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
Predicting kinetics using musculoskeletal modeling and inertial motion capture
Karatsidis, Angelos; Jung, Moonki; Schepers, H. Martin; Bellusci, Giovanni; de Zee, Mark; Veltink, Peter H.; Andersen, Michael Skipper
2018-01-01
Inverse dynamic analysis using musculoskeletal modeling is a powerful tool, which is utilized in a range of applications to estimate forces in ligaments, muscles, and joints, non-invasively. To date, the conventional input used in this analysis is derived from optical motion capture (OMC) and force
Innovative technologies to accurately model waves and moored ship motions
CSIR Research Space (South Africa)
van der Molen, W
2010-09-01
Full Text Available Late in 2009 CSIR Built Environment in Stellenbosch was awarded a contract to carry out extensive physical and numerical modelling to study the wave conditions and associated moored ship motions, for the design of a new iron ore export jetty for BHP...
Brownian motion model with stochastic parameters for asset prices
Ching, Soo Huei; Hin, Pooi Ah
2013-09-01
The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Presently we consider a model in which the parameter x = (μ,σ) is such that its value x(t + Δt) a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt, as well as on the present parameter value x(t) and m-1 other parameter values before time t, via a conditional distribution. Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters against that of the model with stochastic parameters.
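As a baseline for the fixed-parameter case, geometric Brownian motion can be simulated exactly from its log-normal solution, S(t+dt) = S(t) exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z). A minimal sketch of standard GBM, not the paper's stochastic-parameter extension:

```python
import numpy as np

def gbm_path(s0, mu, sigma, dt, n_steps, rng):
    """Simulate one geometric Brownian motion path with constant mu, sigma."""
    z = rng.standard_normal(n_steps)
    log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_ret))
```

A useful check is the known moment E[S(T)] = s0 * exp(mu * T), which a Monte Carlo average of terminal values should reproduce.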
Continuous Time Dynamic Contraflow Models and Algorithms
Directory of Open Access Journals (Sweden)
Urmila Pyakurel
2016-01-01
Full Text Available The research on the evacuation planning problem is motivated by the very challenging emergency issues arising from large-scale natural or man-made disasters. It is the process of shifting the maximum number of evacuees from the disastrous areas to safe destinations as quickly and efficiently as possible. Contraflow is a widely accepted model for good solution of the evacuation planning problem. It increases the outbound road capacity by reversing the direction of roads towards the safe destination. The continuous dynamic contraflow problem sends the maximum flow rate from the source to the sink at every moment of time. We propose the mathematical model for the continuous dynamic contraflow problem. We present efficient algorithms to solve the maximum continuous dynamic contraflow and quickest continuous contraflow problems on single-source single-sink arbitrary networks and the continuous earliest arrival contraflow problem on single-source single-sink series-parallel networks with undefined supply and demand. We also introduce an approximate solution for the continuous earliest arrival contraflow problem on two-terminal arbitrary networks.
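The contraflow idea (reversing a road so its capacity adds to the opposite direction) can be illustrated in the static max-flow setting, a simplification of the continuous dynamic problem treated in the paper; the network and capacities below are made up:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dense capacity matrix."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:          # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        v, aug = t, float("inf")               # bottleneck along the path
        while v != s:
            u = parent[v]
            aug = min(aug, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                          # push flow along the path
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug
            v = u
        total += aug

def contraflow_capacity(cap):
    """Lane reversal: each road's capacity becomes the sum of both directions."""
    n = len(cap)
    return [[cap[i][j] + cap[j][i] for j in range(n)] for i in range(n)]
```

On a chain with forward capacities 2 and 4 and reverse capacities 3 and 1, reversal raises the evacuation flow from 2 to 5, which is exactly the benefit contraflow models capture.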
Bouc–Wen hysteresis model identification using Modified Firefly Algorithm
International Nuclear Information System (INIS)
Zaman, Mohammad Asif; Sikder, Urmita
2015-01-01
The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc–Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate and a high degree of accuracy in identifying Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods and find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found.
Bouc–Wen hysteresis model identification using Modified Firefly Algorithm
Energy Technology Data Exchange (ETDEWEB)
Zaman, Mohammad Asif, E-mail: zaman@stanford.edu [Department of Electrical Engineering, Stanford University (United States); Sikder, Urmita [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley (United States)
2015-12-01
The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc–Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate and a high degree of accuracy in identifying Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods and find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found.
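Bouc–Wen identification requires integrating the hysteresis ODE, but the optimizer at the core of this kind of method can be sketched as a generic firefly-style search minimizing a least-squares fitting error on a toy two-parameter model. The decaying random step loosely stands in for the paper's "dynamic process control parameters"; all function names and settings here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def firefly_minimize(f, dim, n=20, iters=200, alpha=0.2, beta0=1.0, gamma=0.05, seed=0):
    """Minimal firefly search: every firefly moves toward each brighter (lower-cost) one."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n, dim))
    cost = np.array([f(x) for x in X])
    for t in range(iters):
        a = alpha * (1.0 - t / iters)  # decaying random step (a "dynamic" control parameter)
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    X[i] = X[i] + beta * (X[j] - X[i]) + a * rng.uniform(-0.5, 0.5, dim)
                    cost[i] = f(X[i])
    best = int(np.argmin(cost))
    return X[best], cost[best]

# Toy "identification": recover p from noiseless samples of y = p0*t + p1*t^2
t = np.linspace(0.0, 1.0, 50)
y = 1.5 * t - 0.7 * t**2
error = lambda p: float(np.sum((p[0] * t + p[1] * t**2 - y) ** 2))
p_hat, e = firefly_minimize(error, dim=2)
```

For the actual Bouc–Wen problem, `error` would instead simulate the hysteresis model with candidate parameters and compare against measured displacement-force data.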
Directory of Open Access Journals (Sweden)
Ji Li
2016-10-01
Full Text Available A piezo-resistive pressure sensor is made of silicon, whose properties are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose properties are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
Universal algorithms and programs for calculating the motion parameters in the two-body problem
Bakhshiyan, B. T.; Sukhanov, A. A.
1979-01-01
The algorithms and FORTRAN programs for computing positions and velocities, orbital elements and first and second partial derivatives in the two-body problem are presented. The algorithms are applicable for any value of eccentricity and are convenient for computing various navigation parameters.
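The paper's algorithms use a formulation valid for any eccentricity; as a simpler illustration of the same computation, the elliptic case (e < 1) can be handled by solving Kepler's equation with Newton iteration and differentiating it for the velocity. This is a sketch, not the authors' FORTRAN code, and it covers only the in-plane elliptic problem:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E (elliptic case)."""
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))  # Newton step
        E -= dE
        if abs(dE) < tol:
            break
    return E

def position_velocity(a, e, M, mu=1.0):
    """In-plane position and velocity on a Kepler orbit at mean anomaly M."""
    E = solve_kepler(M, e)
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1.0 - e * e) * math.sin(E)
    n = math.sqrt(mu / a**3)             # mean motion
    E_dot = n / (1.0 - e * math.cos(E))  # from differentiating Kepler's equation
    vx = -a * math.sin(E) * E_dot
    vy = a * math.sqrt(1.0 - e * e) * math.cos(E) * E_dot
    return (x, y), (vx, vy)

(x, y), (vx, vy) = position_velocity(a=1.0, e=0.3, M=2.0)
```

A universal-variables implementation, as in the paper, replaces the eccentric anomaly with a variable that remains well defined for parabolic and hyperbolic orbits as well.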
Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy
Fassi, Aurora; Seregni, Matteo; Riboldi, Marco; Cerveri, Pietro; Sarrut, David; Battista Ivaldi, Giovanni; Tabarelli de Fatis, Paola; Liotta, Marco; Baroni, Guido
2015-02-01
The aim of this study is the development and experimental testing of a tumor tracking method for particle radiation therapy, providing the daily respiratory dynamics of the patient’s thoraco-abdominal anatomy as a function of an external surface surrogate combined with an a priori motion model. The proposed tracking approach is based on a patient-specific breathing motion model, estimated from the four-dimensional (4D) planning computed tomography (CT) through deformable image registration. The model is adapted to the interfraction baseline variations in the patient’s anatomical configuration. The driving amplitude and phase parameters are obtained intrafractionally from a respiratory surrogate signal derived from the external surface displacement. The developed technique was assessed on a dataset of seven lung cancer patients, who underwent two repeated 4D CT scans. The first 4D CT was used to build the respiratory motion model, which was tested on the second scan. The geometric accuracy in localizing lung lesions, averaged over all breathing phases, ranged between 0.6 and 1.7 mm across all patients. Errors in tracking the surrounding organs at risk, such as lungs, trachea and esophagus, were lower than 1.3 mm on average. The median absolute variation in water equivalent path length (WEL) within the target volume did not exceed 1.9 mm-WEL for simulated particle beams. A significant improvement was achieved compared with error compensation based on standard rigid alignment. The present work can be regarded as a feasibility study for the potential extension of tumor tracking techniques to particle treatments. Unlike current tracking methods applied in conventional radiotherapy, the proposed approach allows for the dynamic localization of all anatomical structures scanned in the planning CT, thus providing complete information on density and WEL variations required for particle beam range adaptation.
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics, dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
International Nuclear Information System (INIS)
Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B
2016-01-01
Purpose: To develop an advanced testbed that combines a 3D motion stage and an ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 ultrasound scanner with a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition, and a cine video was acquired for one minute to allow for long-sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparison of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allow for the acquisition of ultrasound motion sequences that are highly customizable, allowing for focused analysis of common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe, so the process can be extended to 3D acquisition. Further development of an anatomy-specific phantom better resembling true anatomical landmarks could lead to even more robust validation. This work is partially funded by NIH.
Energy Technology Data Exchange (ETDEWEB)
Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B [University of Wisconsin, Madison, WI (United States)
2016-06-15
Purpose: To develop an advanced testbed that combines a 3D motion stage and an ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 ultrasound scanner with a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition, and a cine video was acquired for one minute to allow for long-sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparison of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allow for the acquisition of ultrasound motion sequences that are highly customizable, allowing for focused analysis of common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe, so the process can be extended to 3D acquisition. Further development of an anatomy-specific phantom better resembling true anatomical landmarks could lead to even more robust validation. This work is partially funded by NIH.
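The normalized cross-correlation block matching used for tracking in this work can be sketched as an exhaustive search over a small window around the block's previous position. This is a minimal 2D version; the function names, window size and block size are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_block(prev, curr, top, left, size, search=4):
    """Return the (dy, dx) offset in `curr` best matching a block of `prev`, and its score."""
    ref = prev[top:top + size, left:left + size]
    best_score, best_off = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + size <= curr.shape[0] and x + size <= curr.shape[1]:
                s = ncc(ref, curr[y:y + size, x:x + size])
                if s > best_score:
                    best_score, best_off = s, (dy, dx)
    return best_off, best_score

rng = np.random.default_rng(1)
prev = rng.random((40, 40))
curr = np.roll(prev, shift=(2, -1), axis=(0, 1))  # simulate a known (2, -1) pixel motion
offset, score = track_block(prev, curr, top=10, left=10, size=8)
```

In the testbed above, the known programmed stage motion plays the role of `np.roll` here: the tracker's recovered offsets are compared against ground-truth displacements.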
Fast algorithms for transport models. Final report
International Nuclear Information System (INIS)
Manteuffel, T.A.
1994-01-01
This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed. A parallel version of the multilevel-in-angle algorithm has also been developed. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).
Clustering Of Left Ventricular Wall Motion Patterns
Bjelogrlic, Z.; Jakopin, J.; Gyergyek, L.
1982-11-01
A method for detecting wall regions with similar motion was presented. A model based on local direction information was used to measure left ventricular wall motion from a cineangiographic sequence. Three time functions were used to define segmental motion patterns: the distance of a ventricular contour segment from the mean contour, the velocity of a segment, and its acceleration. Motion patterns were clustered by the UPGMA algorithm and by an algorithm based on the K-nearest neighbor classification rule.
DEFF Research Database (Denmark)
Olsen, Emil; Boye, Jenny Katrine; Pfau, Thilo
2012-01-01
and use robust and validated algorithms. It is the objective of this study to compare accuracy (bias) and precision (SD) for five published human and equine motion capture foot-on/off and stance phase detection algorithms during walk. Six horses were walked over 8 seamlessly embedded force plates...... of mass generally provides the most accurate and precise results in walk....
Yu, Fei; Hui, Mei; Zhao, Yue-jin
2009-08-01
An image block matching algorithm based on motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique that obtains the relative motion among frames of dynamic image sequences by digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. The matching parameters based on these vectors simultaneously contain the information of the vectors in the transverse and vertical directions of the image blocks, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least-squares method is used to eliminate the error of block matching; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image can be obtained by weighted least squares from the estimates of blocks chosen evenly from the image. Then the shaking image can be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by applying simulated annealing to the block matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TI TMS320C6416, and a CCD camera with a definition of 720×576 pixels was chosen as the input video source. Experimental results show that the algorithm can run on the real-time processing system with accurate matching precision.
Directory of Open Access Journals (Sweden)
Abdenaceur Boudlal
2010-01-01
Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window in the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on real image sequences have given good results, with PSNR gains reaching 3 dB.
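The EM estimation at the heart of this approach can be illustrated in a reduced setting: a two-component, one-dimensional Gaussian mixture. The paper fits multivariate mixtures to image blocks; this sketch only shows the alternating E-step (responsibilities) and M-step (parameter updates), with all names and settings chosen for illustration:

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture; returns (weights, means, variances)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])  # spread-apart initialization
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        pdf = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from the responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])
w, mu, var = em_gmm_1d(x)
```

Each EM iteration is guaranteed not to decrease the log-likelihood, which is why the method is a natural fit for the block-model parameter estimation described above.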
Efficient Implementation Algorithms for Homogenized Energy Models
National Research Council Canada - National Science Library
Braun, Thomas R; Smith, Ralph C
2005-01-01
... for real-time control implementation. In this paper, we develop algorithms employing lookup tables which permit the high speed implementation of formulations which incorporate relaxation mechanisms and electromechanical coupling...
A mixture model for robust point matching under multi-layer motion.
Directory of Open Access Journals (Sweden)
Jiayi Ma
Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm which by also estimating the variance of the prior model (initialized to a large value) is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
Chaos control of Hastings–Powell model by combining chaotic motions
International Nuclear Information System (INIS)
Danca, Marius-F.; Chattopadhyay, Joydev
2016-01-01
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: “losing + losing = winning.” If “losing” is replaced with “chaos” and “winning” with “order” (as the opposite of “chaos”), then by switching the parameter value in the HP system within two values which generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write “chaos + chaos = regular.” Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
Chaos control of Hastings–Powell model by combining chaotic motions
Energy Technology Data Exchange (ETDEWEB)
Danca, Marius-F., E-mail: danca@rist.ro [Romanian Institute of Science and Technology, 400487 Cluj-Napoca (Romania); Chattopadhyay, Joydev, E-mail: joydev@isical.ac.in [Agricultural and Ecological Research Unit Indian Statistical Institute, 203, B. T. Road, Kolkata 700 108 (India)
2016-04-15
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: “losing + losing = winning.” If “losing” is replaced with “chaos” and “winning” with “order” (as the opposite of “chaos”), then by switching the parameter value in the HP system within two values which generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write “chaos + chaos = regular.” Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
Chaos control of Hastings-Powell model by combining chaotic motions.
Danca, Marius-F; Chattopadhyay, Joydev
2016-04-01
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system within two values which generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
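The PS idea, switching the control parameter during numerical integration so that the resulting attractor matches the attractor of the averaged parameter, can be illustrated on a much simpler system than the HP food chain. Below, dx/dt = p - x, whose attractor for fixed p is the fixed point x* = p; this is an illustrative sketch under that toy system, not an implementation for the three-species HP model:

```python
def parameter_switching(f, x0, params, h=0.01, steps=20000):
    """Euler-integrate dx/dt = f(x, p), switching p cyclically among `params` each step."""
    x = x0
    for k in range(steps):
        p = params[k % len(params)]
        x = x + h * f(x, p)
    return x

# Toy system dx/dt = p - x, whose attractor is the fixed point x* = p.
f = lambda x, p: p - x
p1, p2 = 2.0, 4.0
x_switched = parameter_switching(f, 0.0, [p1, p2])         # switch p between 2 and 4
x_averaged = parameter_switching(f, 0.0, [(p1 + p2) / 2])  # integrate with the mean p = 3
```

With a small step size, the switched trajectory settles into a narrow ripple around the averaged-parameter attractor, which is the convergence property the PS scheme exploits for the HP system's cycles.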
Efficient spiking neural network model of pattern motion selectivity in visual cortex.
Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L
2014-07-01
Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.
Equations of motion for a (non-linear) scalar field model as derived from the field equations
International Nuclear Information System (INIS)
Kaniel, S.; Itin, Y.
2006-01-01
The problem of deriving the equations of motion from the field equations is considered. Einstein's field equations have a specific analytical form: they are linear in the second order derivatives and quadratic in the first order derivatives of the field variables. We utilize this particular form and propose a novel algorithm for the derivation of the equations of motion from the field equations. It is based on the condition of balance between the singular terms of the field equation. We apply the algorithm to a non-linear Lorentz invariant scalar field model. We show that it results in Newton's law of attraction between the singularities of the field, which move along approximately geodesic curves. The algorithm is applicable to the N-body problem of the Lorentz invariant field equations. (Abstract Copyright [2006], Wiley Periodicals, Inc.)
Loop algorithms for quantum simulations of fermion models on lattices
International Nuclear Information System (INIS)
Kawashima, N.; Gubernatis, J.E.; Evertz, H.G.
1994-01-01
Two cluster algorithms, based on constructing and flipping loops, are presented for world-line quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. For these two algorithms and the standard world-line algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary world-line algorithm, which uses only local moves, suffers from very long correlation times that make it difficult to estimate not only the error but also the average values themselves. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that the new algorithms, when used alone or in combination with themselves and the standard algorithm, can have significantly smaller autocorrelation times, in some cases smaller by three orders of magnitude. The new algorithms, which use nonlocal moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to belong to the grand canonical ensemble. Extensions to other models and higher dimensions are briefly discussed.
Energy Technology Data Exchange (ETDEWEB)
Sakai, Fumikazu; Tsuuchi, Yasuhiko; Suzuki, Keiko; Ueno, Keiko; Yamada, Takayuki; Okawa, Tomohiko [Tokyo Women's Medical Coll. (Japan); Yun, Shen; Horiuchi, Tetsuya; Kimura, Fumiko
1998-05-01
We describe our trial to reduce cardiac motion artifacts on HR-CT images caused by cardiac pulsation by combining subsecond CT (scan time 0.8 s) with a special cine reconstruction algorithm (a cine reconstruction algorithm with 180-degree helical interpolation). Eleven to 51 HR-CT images were reconstructed with the special cine reconstruction algorithm at a pitch of 0.1 (0.08 s) from the data obtained by two to six contiguous rotation scans at the same level. Images with the fewest cardiac motion artifacts were selected for evaluation. These images were compared with those reconstructed with a conventional cine reconstruction algorithm and step-by-step scanning. In spite of its increased radiation exposure, technical complexity and slight degradation of spatial resolution, our method was useful in reducing cardiac motion artifacts on HR-CT images in regions adjacent to the heart. (author)
On modeling animal movements using Brownian motion with measurement error.
Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun
2014-02-01
Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
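The tractable exact likelihood described above follows because the observations are jointly Gaussian: for zero-mean Brownian motion observed with i.i.d. normal error, Cov(Y_i, Y_j) = sigma2 * min(t_i, t_j) + tau2 * 1{i = j}. The sketch below evaluates that likelihood directly with dense linear algebra (the paper's implementation exploits sparse matrix structure instead; names and values here are illustrative):

```python
import numpy as np

def bm_noise_loglik(y, t, sigma2, tau2):
    """Exact Gaussian log-likelihood of zero-mean Brownian motion observed with noise.

    Cov(Y_i, Y_j) = sigma2 * min(t_i, t_j) + tau2 * 1{i == j}.  (A drift term would
    change only the mean vector, not the covariance structure.)
    """
    C = sigma2 * np.minimum.outer(t, t) + tau2 * np.eye(len(t))
    sign, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, y)
    return -0.5 * (len(t) * np.log(2.0 * np.pi) + logdet + y @ alpha)

ll = bm_noise_loglik(np.array([0.5]), np.array([1.0]), sigma2=1.0, tau2=0.25)
```

Maximizing this function over (sigma2, tau2) gives the likelihood-based estimates compared against the BBMM procedure in the paper. Note that the full covariance matrix is used: as the abstract stresses, the noisy observations are not Markov, so the likelihood cannot be factored over successive pairs.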
Quantum Brownian motion model for the stock market
Meng, Xiangyi; Zhang, Jian-Wei; Guo, Hong
2016-06-01
Many today believe that the efficient market hypothesis is imperfect because of market irrationality. Using the physical concepts and mathematical structures of quantum mechanics, we construct an econophysical framework for the stock market, in which we analogously map a massive number of single stocks into a reservoir consisting of many quantum harmonic oscillators and their stock index into a typical quantum open system: a quantum Brownian particle. In particular, the irrationality of stock transactions is quantitatively treated as the Planck constant within Heisenberg's uncertainty relationship of quantum mechanics. We analyze real stock data from the Shanghai Stock Exchange of China and investigate fat-tail phenomena and non-Markovian behaviors of the stock index with the assistance of the quantum Brownian motion model, thereby interpreting and studying the limitations of the classical Brownian motion model for the efficient market hypothesis from the new perspective of quantum open system dynamics.
Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye
2014-01-01
This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
Directory of Open Access Journals (Sweden)
Yanhua Jiang
2014-09-01
Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.
Influence of constitutive models on ground motion predictions
International Nuclear Information System (INIS)
Baron, M.L.; Nelson, I.; Sandler, I.
1973-01-01
In recent years, the development of mathematical models for the study of ground shock effects in soil, or rock media, or both, has made important progress. Three basic types of advanced models have been studied: (1) elastic ideally plastic models, (2) variable moduli models and (3) elastic nonideally plastic capped models. The ground shock response in the superseismic range of a 1-MT air burst on a homogeneous halfspace of a soil is considered. Each of the three types of models was fitted to laboratory test data and calculations were made for each case. The results from all three models are comparable only when the stress paths in uniaxial strain are comparable for complete load-unload cycles. Otherwise, major differences occur in the lateral motions and stresses. Consequently, material property laboratory data now include the stress path whenever possible for modeling purposes. (U.S.)
Human body motion tracking based on quantum-inspired immune cloning algorithm
Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing
2009-10-01
Recovering an accurate 3D human body posture from a static monocular camera remains a great challenge for computer vision. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm comprises three parts. First, using prior knowledge of the human body, the key joint points are detected automatically from the human contours and from skeletons obtained by thinning those contours. Second, because of the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the body. Finally, the pose is estimated by optimizing the match between the 2D projection of the 3D key joint points and the detected 2D key joint points using QICA, which recovers the body's movement well because the algorithm can find both global and local optimal solutions.
International Nuclear Information System (INIS)
Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y; Kawrakow, I; Dempsey, J
2014-01-01
Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, expert manual contours of the organs or tumor drawn by a physician were used as the ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and the Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting the bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information.
Energy Technology Data Exchange (ETDEWEB)
Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y [Washington University, St. Louis, MO (United States); Kawrakow, I; Dempsey, J [Washington University, St. Louis, MO (United States); ViewRay Co., Oakwood Village, OH (United States)
2014-06-01
Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, expert manual contours of the organs or tumor drawn by a physician were used as the ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and the Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting the bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information.
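The overlap metrics used in this comparison are standard; a minimal sketch of the Dice coefficient and Jaccard similarity on binary masks (illustrative only, not the study's code; the function name is hypothetical):

```python
import numpy as np

def dice_jaccard(seg, truth):
    """Dice coefficient and Jaccard similarity between two binary masks."""
    seg = np.asarray(seg, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(seg, truth).sum()   # |A ∩ B|
    union = np.logical_or(seg, truth).sum()    # |A ∪ B|
    dice = 2.0 * inter / (seg.sum() + truth.sum())
    jaccard = inter / union
    return float(dice), float(jaccard)
```

Both metrics equal 1.0 for a perfect segmentation; Dice weights the intersection more heavily than Jaccard for partial overlaps.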
Biomechanical model-based displacement estimation in micro-sensor motion capture
International Nuclear Information System (INIS)
Meng, X L; Sun, S Y; Wu, J K; Zhang, Z Q; Wong, W C (Department of Electrical and Computer Engineering, National University of Singapore (NUS), 02-02-10 I3 Building, 21 Heng Mui Keng Terrace (Singapore))
2012-01-01
In micro-sensor motion capture systems, the estimation of the body displacement in the global coordinate system remains a challenge due to lack of external references. This paper proposes a self-contained displacement estimation method based on a human biomechanical model to track the position of walking subjects in the global coordinate system without any additional supporting infrastructures. The proposed approach makes use of the biomechanics of the lower body segments and the assumption that during walking there is always at least one foot in contact with the ground. The ground contact joint is detected based on walking gait characteristics and used as the external references of the human body. The relative positions of the other joints are obtained from hierarchical transformations based on the biomechanical model. Anatomical constraints are proposed to apply to some specific joints of the lower body to further improve the accuracy of the algorithm. Performance of the proposed algorithm is compared with an optical motion capture system. The method is also demonstrated in outdoor and indoor long distance walking scenarios. The experimental results demonstrate clearly that the biomechanical model improves the displacement accuracy within the proposed framework. (paper)
Energy Technology Data Exchange (ETDEWEB)
Rottmann, Joerg; Berbeco, Ross [Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States)
2014-12-15
Purpose: Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. Methods: The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal–external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Results: Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. A minimum
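A minimal sketch of the kind of linear ridge regression predictor described above, assuming a scalar breathing signal, a fixed history-vector length, and a lookahead expressed in samples (function names and defaults are illustrative, not the authors' implementation):

```python
import numpy as np

def fit_ridge_predictor(signal, hist_len, lookahead, lam=1e-2):
    """Fit weights w minimizing ||X w - y||^2 + lam ||w||^2, where each row
    of X is a history vector and y holds the value `lookahead` samples later."""
    s = np.asarray(signal, dtype=float)
    n = len(s) - hist_len - lookahead + 1
    X = np.array([s[i:i + hist_len] for i in range(n)])
    y = s[hist_len + lookahead - 1:hist_len + lookahead - 1 + n]
    return np.linalg.solve(X.T @ X + lam * np.eye(hist_len), X.T @ y)

def predict_ahead(signal, w):
    """Predict the value `lookahead` samples past the end of `signal`."""
    return float(np.asarray(signal, dtype=float)[-len(w):] @ w)
```

At a 20 Hz sampling rate, a lookahead of 6 samples corresponds to the roughly 300 ms system latency mentioned in the abstract. Adaptive retraining amounts to refitting w as new samples arrive.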
Fireworks algorithm for mean-VaR/CVaR models
Zhang, Tingting; Liu, Zhifeng
2017-10-01
Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that applying it to this field is feasible and promising.
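For context, the risk measures in a mean-VaR/CVaR model can be estimated from a return sample; a minimal historical-simulation sketch (illustrative only, not the paper's formulation, which optimizes portfolio weights with an intelligent algorithm):

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Historical VaR and CVaR (expected shortfall) at confidence level alpha,
    reported as positive loss figures."""
    losses = -np.asarray(returns, dtype=float)      # losses are negated returns
    var = float(np.quantile(losses, alpha))         # alpha-quantile of losses
    cvar = float(losses[losses >= var].mean())      # mean loss beyond VaR
    return var, cvar
```

CVaR is always at least as large as VaR and, being coherent, is the measure usually preferred in the tail-risk literature.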
Engineering of Algorithms for Hidden Markov models and Tree Distances
DEFF Research Database (Denmark)
Sand, Andreas
Bioinformatics is an interdisciplinary scientific field that combines biology with mathematics, statistics and computer science in an effort to develop computational methods for handling, analyzing and learning from biological data. In the recent decades, the amount of available biological data has...... speed up all the classical algorithms for analyses and training of hidden Markov models. And I show how two particularly important algorithms, the forward algorithm and the Viterbi algorithm, can be accelerated through a reformulation of the algorithms and a somewhat more complicated parallelization...... contribution to the theoretically fastest set of algorithms presently available to compute two closely related measures of tree distance, the triplet distance and the quartet distance. And I further demonstrate that they are also the fastest algorithms in almost all cases when tested in practice....
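The forward algorithm mentioned above can be written, in its textbook scalar form (without the reformulation and parallelization the thesis contributes), as:

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward algorithm: likelihood P(obs) for a discrete-emission HMM.
    pi[i]: initial probability of state i; A[i, j]: transition i -> j;
    B[i, k]: probability of emitting symbol k from state i."""
    alpha = pi * B[:, obs[0]]          # joint prob. of first symbol and each state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one step, weight by emission
    return float(alpha.sum())
```

In practice a scaling or log-space variant is used to avoid underflow on long sequences; the Viterbi algorithm has the same structure with the sum replaced by a max.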
International Nuclear Information System (INIS)
Kruis, Matthijs F.; Kamer, Jeroen B. van de; Houweling, Antonetta C.; Sonke, Jan-Jakob; Belderbos, José S.A.; Herk, Marcel van
2013-01-01
Purpose: Four-dimensional positron emission tomography (4D PET) imaging of the thorax produces sharper images with reduced motion artifacts. Current radiation therapy planning systems, however, do not facilitate 4D plan optimization. When images are acquired in a 2-minute time slot, the signal-to-noise ratio of each 4D frame is low, compromising image quality. The purpose of this study was to implement and evaluate the construction of mid-position 3D PET scans, with motion compensated using a 4D computed tomography (CT)-derived motion model. Methods and Materials: All voxels of the 4D PET were registered to the time-averaged position by using a motion model derived from the 4D CT frames. After the registration the scans were summed, resulting in a motion-compensated 3D mid-position PET scan. The method was tested on a phantom dataset as well as data from 27 lung cancer patients. Results: PET motion compensation using a CT-based motion model improved the image quality of both phantoms and patients in terms of increased maximum SUV (SUVmax) values and decreased apparent volumes. In homogeneous phantom data, a strong relationship was found between the amplitude-to-diameter ratio and the effects of the method. In heterogeneous patient data, the effect correlated better with the motion amplitude. In case of large amplitudes, motion compensation may increase SUVmax by up to 25% and reduce the diameter of the 50% SUVmax volume by 10%. Conclusions: 4D CT-based motion-compensated mid-position PET scans provide improved quantitative data in terms of uptake values and volumes at the time-averaged position, thereby facilitating more accurate radiation therapy treatment planning of pulmonary lesions.
Active Brownian motion models and applications to ratchets
Fiasconaro, A.; Ebeling, W.; Gudowska-Nowak, E.
2008-10-01
We give an overview over recent studies on the model of Active Brownian Motion (ABM) coupled to reservoirs providing free energy which may be converted into kinetic energy of motion. First, we present an introduction to a general concept of active Brownian particles which are capable to take up energy from the source and transform part of it in order to perform various activities. In the second part of our presentation we consider applications of ABM to ratchet systems with different forms of differentiable potentials. Both analytical and numerical evaluations are discussed for three cases of sinusoidal, staircaselike and Mateos ratchet potentials, also with the additional loads modelled by tilted potential structure. In addition, stochastic character of the kinetics is investigated by considering perturbation by Gaussian white noise which is shown to be responsible for driving the directionality of the asymptotic flux in the ratchet. This stochastically driven directionality effect is visualized as a strong nonmonotonic dependence of the statistics of the right versus left trajectories of motion leading to a net current of particles. Possible applications of the ratchet systems to molecular motors are also briefly discussed.
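A hedged sketch of the kind of simulation discussed here: Euler-Maruyama integration of an overdamped particle in a sinusoidal potential with a tilt (load) term and Gaussian white noise. The potential, parameter names, and values are illustrative, not taken from the paper:

```python
import numpy as np

def ratchet_trajectory(steps, dt, f, d, seed=1):
    """Euler-Maruyama integration of overdamped motion in a tilted sinusoidal
    potential V(x) = sin(x) - f*x, i.e. dx = (f - cos(x)) dt + sqrt(2 d dt) xi,
    where xi is standard Gaussian white noise and d is the noise intensity."""
    rng = np.random.default_rng(seed)
    x = np.zeros(steps + 1)
    for n in range(steps):
        drift = f - np.cos(x[n])  # force = -V'(x)
        x[n + 1] = x[n] + drift * dt + np.sqrt(2.0 * d * dt) * rng.standard_normal()
    return x
```

With d > 0 and a sub-threshold tilt (f < 1), noise-activated hopping between potential wells produces the directed flux discussed in the abstract; with f > 1 the deterministic drift alone carries the particle downhill.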
Computationally efficient model predictive control algorithms a neural network approach
Ławryńczuk, Maciej
2014-01-01
This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: · A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction. · Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models. · The MPC algorithms based on neural multi-models (inspired by the idea of predictive control). · The MPC algorithms with neural approximation with no on-line linearization. · The MPC algorithms with guaranteed stability and robustness. · Cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...
TU-EF-304-04: A Heart Motion Model for Proton Scanned Beam Chest Radiotherapy
International Nuclear Information System (INIS)
White, B; Kiely, J Blanco; Lin, L; Freedman, G; Both, S; Vennarini, S; Santhanam, A; Low, D
2015-01-01
Purpose: To model fast-moving heart surface motion as a function of cardiac phase in order to compensate for the lack of cardiac gating in evaluating accurate dose to coronary structures. Methods: Ten subjects were prospectively imaged with a breath-hold, cardiac-gated MRI protocol to determine heart surface motion. Radial and planar views of the heart were resampled into a 3-dimensional volume representing one heartbeat. A multi-resolution optical flow deformable image registration algorithm determined tissue displacement during the cardiac cycle. The surface of the heart was modeled as a thin membrane composed of voxels perpendicular to a pencil beam scanning (PBS) beam. The membrane's out-of-plane spatial displacement was modeled as a harmonic function with Lamé's equations. Model accuracy was assessed with the root mean squared error (RMSE). The model was applied to a cohort of six chest wall irradiation patients with PBS plans generated on phase-sorted 4DCT. Respiratory motion was separated from the cardiac motion with a previously published technique. Volumetric dose painting was simulated and dose accumulated to validate plan robustness (target coverage variation accepted within 2%). Maximum and mean heart surface dose assessed the dosimetric impact of heart and coronary artery motion. Results: Average and maximum heart surface displacements were 2.54±0.35 mm and 3.6 mm from the end-diastole phase to the end-systole phase, respectively. An average RMSE of 0.11±0.04 showed the model to be accurate. Observed errors were greatest between the circumflex artery and the mitral valve level of the heart anatomy. Heart surface displacements correspond to a 3.6±1.0% and 5.1±2.3% dosimetric impact on the maximum and mean heart surface DVH indicators, respectively. Conclusion: Although heart surface motion parallel to the beam's direction was substantial, its maximum dosimetric impact was 5.1±2.3%. Since PBS delivers low doses to coronary structures relative to
Color-gradient lattice Boltzmann model for simulating droplet motion with contact-angle hysteresis.
Ba, Yan; Liu, Haihu; Sun, Jinju; Zheng, Rongye
2013-10-01
Lattice Boltzmann method (LBM) is an effective tool for simulating the contact-line motion due to the nature of its microscopic dynamics. In contact-line motion, contact-angle hysteresis is an inherent phenomenon, but it is neglected in most existing color-gradient based LBMs. In this paper, a color-gradient based multiphase LBM is developed to simulate the contact-line motion, particularly with the hysteresis of contact angle involved. In this model, the perturbation operator based on the continuum surface force concept is introduced to model the interfacial tension, and the recoloring operator proposed by Latva-Kokko and Rothman is used to produce phase segregation and resolve the lattice pinning problem. At the solid surface, the color-conserving wetting boundary condition [Hollis et al., IMA J. Appl. Math. 76, 726 (2011)] is applied to improve the accuracy of simulations and suppress spurious currents at the contact line. In particular, we present a numerical algorithm to allow for the effect of the contact-angle hysteresis, in which an iterative procedure is used to determine the dynamic contact angle. Numerical simulations are conducted to verify the developed model, including the droplet partial wetting process and droplet dynamical behavior in a simple shear flow. The obtained results are compared with theoretical solutions and experimental data, indicating that the model is able to predict the equilibrium droplet shape as well as the dynamic process of partial wetting and thus permits accurate prediction of contact-line motion with the consideration of contact-angle hysteresis.
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Balakina, E. V.; Zotov, N. M.; Fedin, A. P.
2018-02-01
Real-time modeling of the motion of a vehicle's elastic wheel is used in constructing different models for wheeled-vehicle motion control electronic systems, automobile stand-simulators, etc. The accuracy and reliability of real-time simulation of the wheel motion parameters when rolling with slip under given road conditions are determined not only by the choice of model, but also by the inaccuracy and instability of the numerical calculation. It is established that this inaccuracy and instability depend on the integration step size and the numerical method used. The inaccuracy and instability during wheel rolling with slip were analyzed and recommendations for reducing them were developed. It is established that the total allowable range of integration steps is 0.001-0.005 s; the strongest instability appears in the calculation of the angular and linear accelerations of the wheel; the weakest instability appears in the calculation of the translational velocity of the wheel and the displacement of the wheel center; and the instability is smaller at large slip angles and on more slippery surfaces. A new average-acceleration method is suggested, which can significantly reduce (by up to 100%) the instability of the solution in the calculation of all motion parameters of the elastic wheel for different braking conditions and for the entire range of integration steps. The results of this research can be applied to the selection of control algorithms in vehicle motion control electronic systems and in testing stand-simulators.
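The step-size sensitivity discussed above can be illustrated on the simplest stiff test equation w' = -k*w, comparing explicit Euler with a trapezoidal update of the average-acceleration family. This is an illustrative sketch of the numerical phenomenon, not the authors' wheel model or their exact method:

```python
def integrate_decay(k, dt, steps, method):
    """Integrate w' = -k*w from w(0) = 1 with fixed step dt.
    'euler': explicit Euler, w_{n+1} = w_n * (1 - k*dt), unstable for k*dt > 2.
    'avg':   trapezoidal rule, w_{n+1} = w_n + dt/2 * (f(w_n) + f(w_{n+1})),
             solved here in closed form; unconditionally stable for this equation."""
    w = 1.0
    for _ in range(steps):
        if method == "euler":
            w = w * (1.0 - k * dt)
        else:
            w = w * (1.0 - 0.5 * k * dt) / (1.0 + 0.5 * k * dt)
    return w
```

With a stiff rate constant (k = 500 1/s) and the upper end of the step range quoted above (dt = 0.005 s), k*dt = 2.5 exceeds the explicit Euler stability bound, so Euler oscillates with growing amplitude while the trapezoidal update still decays.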
DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM
Directory of Open Access Journals (Sweden)
K. Srinivasan
2010-11-01
Full Text Available Monitoring the behavior and activities of people in video surveillance has gained many applications in computer vision. This paper proposes a new approach to model the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by a frame-differencing algorithm. A thinning algorithm has been used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow, and knee points, have been extracted. Here, this research work attempts to represent the body model in three different ways: a stick figure model, a patch model and a rectangle body model. The activities of humans have been analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of our proposed algorithm have been evaluated.
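The background-subtraction step can be sketched as simple frame differencing (an illustrative implementation; the threshold value and function name are assumptions, not taken from the paper):

```python
import numpy as np

def frame_difference_mask(frame, prev, thresh=25):
    """Binary foreground mask: pixels whose absolute grey-level change
    between consecutive frames exceeds the threshold."""
    # Cast to a signed type so the subtraction cannot wrap around for uint8 input.
    diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

The resulting silhouette is what a thinning (skeletonization) pass would then reduce to the stick-figure skeleton described above.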
International Nuclear Information System (INIS)
McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech
2008-01-01
An MLC control algorithm for delivering intensity modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real-time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated. A study of the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and delivery was also included. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) results in delivered fluence profiles that are superior to those that track the component of motion that is parallel to leaf travel alone. Tracking lag time effects may lead to relatively large intensity delivery errors compared to the other sources of error investigated
Radial motion of the carotid artery wall: A block matching algorithm approach
Directory of Open Access Journals (Sweden)
Effat Soleimani
2012-06-01
Full Text Available Introduction: During recent years, evaluating the relation between mechanical properties of the arterial wall and cardiovascular diseases has been of great importance. On the other hand, motion estimation of the arterial wall using a sequence of noninvasive ultrasonic images and convenient processing methods might provide useful information related to biomechanical indexes and elastic properties of the arteries and assist doctors in discriminating between healthy and diseased arteries. In the present study, a block matching based algorithm was introduced to extract radial motion of the carotid artery wall during cardiac cycles. Materials and Methods: The program was applied to consecutive ultrasonic images of the common carotid artery of 10 healthy men, and the maximum and mean radial movement of the posterior wall of the artery was extracted. Manual measurements were carried out to validate the automatic method, and the results of the two methods were compared. Results: Paired t-test analysis showed no significant differences between the automatic and manual methods (P>0.05). There was significant correlation between the changes in the instantaneous radial movement of the common carotid artery measured with the manual and automatic methods (with correlation coefficient 0.935 and P<0.05). Conclusion: Results of the present study showed that by using a semi-automated computer analysis method, with minimal user interference and no dependence on user experience or skill, arterial wall motion in the radial direction can be extracted from consecutive ultrasonic frames.
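A minimal sketch of exhaustive block matching with a sum-of-absolute-differences (SAD) criterion, the core operation behind this kind of wall-motion estimation (illustrative only; the study's block size, search range, and matching criterion are not specified here):

```python
import numpy as np

def block_match(prev, curr, top, left, size, search):
    """Find the (dy, dx) displacement of a reference block from `prev`
    within `curr` by exhaustive SAD search over a +/- search window."""
    block = prev[top:top + size, left:left + size].astype(float)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue  # candidate window would fall outside the frame
            sad = np.abs(curr[y:y + size, x:x + size] - block).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx
```

Applied to a block on the posterior wall across consecutive frames, the dy component traces the radial wall movement over the cardiac cycle.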
Model-Free Adaptive Control Algorithm with Data Dropout Compensation
Directory of Open Access Journals (Sweden)
Xuhui Bu
2012-01-01
Full Text Available The convergence of the model-free adaptive control (MFAC) algorithm can be guaranteed even when the system is subject to measurement data dropout, but the convergence speed of the system output decreases as the dropout rate increases. This paper proposes an MFAC algorithm with data compensation. The missing data is first estimated using the dynamical linearization method, and the estimated value is then used to update the control input. A convergence analysis of the proposed MFAC algorithm is given, and its effectiveness is also validated by simulations. It is shown that the proposed algorithm can compensate for the effect of data dropout, and better output performance can be obtained.
Evaluation of models generated via hybrid evolutionary algorithms ...
African Journals Online (AJOL)
2016-04-02
Apr 2, 2016 ... Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis ... evolutionary algorithms (HEA) proved to be highly applicable to the hypertrophic reservoirs of South Africa. ... discovered and optimised using a large-scale parallel computational device and relevant soft- ...
Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images
Tzimiropoulos, Georgios; Pantic, Maja
2016-01-01
Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple “project-out”
A state-based probabilistic model for tumor respiratory motion prediction
International Nuclear Information System (INIS)
Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei
2010-01-01
This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
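The finite-state prediction step can be illustrated with a toy first-order Markov model over hypothetical breathing-state codes (0 = exhale, 1 = inhale, 2 = end of exhale); the paper's HMM machinery reduces to something like this when the states are directly observed:

```python
import numpy as np

def fit_transitions(states, n_states):
    """Maximum-likelihood transition matrix (with Laplace smoothing)
    from an observed breathing-state sequence."""
    T = np.ones((n_states, n_states))  # +1 smoothing avoids zero rows
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def predict_next(T, state):
    """Most probable next state given the current one."""
    return int(np.argmax(T[state]))
```

In the paper's setting, each predicted state additionally carries a cluster of velocity observables (from the k-means step), so a state prediction implies a tumor velocity prediction.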
Models and algorithms for biomolecules and molecular networks
DasGupta, Bhaskar
2016-01-01
By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms.
* Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms
* Sampling techniques for estimating evolutionary rates and generating molecular structures
* Accurate computation of the probability landscape of stochastic networks, solving discrete chemical master equations
* End-of-chapter exercises
Optimization algorithms intended for self-tuning feedwater heater model
International Nuclear Information System (INIS)
Czop, P; Barszcz, T; Bednarz, J
2013-01-01
This work presents a self-tuning feedwater heater model, continuing our work on a first-principle gray-box methodology applied to diagnostics and condition assessment of power plant components. The objective is to review and benchmark optimization algorithms with regard to the time required to achieve the best model fit to operational power plant data. The paper recommends the most effective algorithm to be used in the model adjustment process.
Insertion algorithms for network model database management systems
Mamadolimov, Abdurashid; Khikmat, Saburov
2017-12-01
The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems, develop a new sequential algorithm for it, and also suggest a distributed version of the algorithm.
Algorithmic detectability threshold of the stochastic block model
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Directory of Open Access Journals (Sweden)
Hong Li
2017-12-01
Full Text Available By simulating the geomagnetic field and analyzing the variation of intensities, this paper presents a model for calculating the objective function of an Autonomous Underwater Vehicle (AUV) geomagnetic navigation task. By applying biologically inspired strategies, the AUV successfully reaches the destination during geomagnetic navigation without using an a priori geomagnetic map. Similar to the search pattern of a flatworm, the proposed algorithm relies on a motion pattern that triggers a local searching strategy by detecting the real-time geomagnetic intensity. An adapted strategy, biased toward the specific target, is then implemented. The results show the reliability and effectiveness of the proposed algorithm.
Concurrent algorithms for nuclear shell model calculations
International Nuclear Information System (INIS)
Mackenzie, L.M.; Macleod, A.M.; Berry, D.J.; Whitehead, R.R.
1988-01-01
The calculation of nuclear properties has proved very successful for light nuclei, but is limited by the power of the present generation of computers. Starting with an analysis of current techniques, this paper discusses how these can be modified to map parallelism inherent in the mathematics onto appropriate parallel machines. A prototype dedicated multiprocessor for nuclear structure calculations, designed and constructed by the authors, is described and evaluated. The approach adopted is discussed in the context of a number of generically similar algorithms. (orig.)
A Developed Artificial Bee Colony Algorithm Based on Cloud Model
Directory of Open Access Journals (Sweden)
Ye Jin
2018-04-01
Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is an uncertainty conversion model between a qualitative concept T̃, expressed in natural language, and its quantitative expression; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and to avoid getting trapped in local optima, by introducing a new selection mechanism, replacing the onlooker bees’ search formula, and changing the scout bees’ updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud model based ABC variants.
PM Synchronous Motor Dynamic Modeling with Genetic Algorithm ...
African Journals Online (AJOL)
Adel
This paper proposes a dynamic modeling simulation for AC surface permanent magnet synchronous ... Simulations are implemented using MATLAB with its genetic algorithm toolbox. ... selection, the process that drives biological evolution.
Lin, Ray-Quing; Kuang, Weijia
2011-01-01
In this paper, we describe the details of our numerical model for simulating ship solid-body motion in a given environment. In this model, the fully nonlinear dynamical equations governing the time-varying solid-body ship motion under the forces arising from ship–wave interactions are solved with given initial conditions. The net force and moment (torque) on the ship body are directly calculated via integration of the hydrodynamic pressure over the wetted surface and of the buoyancy effect over the underwater volume of the actual ship hull, with a hybrid finite-difference/finite-element method. Neither empiricism nor free parametrization is introduced in this model, i.e. no a priori experimental data are needed for the modelling. The model is benchmarked against many experiments on various ship hulls for heave, roll and pitch motion. In addition to the benchmark cases, numerical experiments are also carried out for strongly nonlinear ship motion with a fixed heading. These new cases demonstrate clearly the importance of nonlinearities in ship motion modelling.
A Mining Algorithm for Extracting Decision Process Data Models
Directory of Open Access Journals (Sweden)
Cristina-Claudia DOLEAN
2011-01-01
Full Text Available The paper introduces an algorithm that mines logs of user interaction with simulation software. It outputs a model that explicitly shows the data perspective of the decision process, namely the Decision Data Model (DDM). In the first part of the paper we focus on how the DDM is extracted by our mining algorithm. We introduce it as pseudo-code and then provide explanations and examples of how it actually works. In the second part of the paper, we use a series of small case studies to demonstrate the robustness of the mining algorithm and how it deals with the most common patterns we found in real logs.
Seismotectonic models and CN algorithm: The case of Italy
International Nuclear Information System (INIS)
Costa, G.; Orozova Stanishkova, I.; Panza, G.F.; Rotwain, I.M.
1995-07-01
The CN algorithm is utilized here both for intermediate-term earthquake prediction and to validate the seismotectonic model of the Italian territory. Using the results of the analysis made with the CN algorithm, and taking into account the seismotectonic model, three areas are defined: one for Northern Italy, one for Central Italy, and one for Southern Italy. Two transition areas between the three main areas are delineated. The earthquakes which occurred in these two areas contribute to the precursor phenomena identified by the CN algorithm in each main area. (author). 26 refs, 6 figs, 2 tabs
Directory of Open Access Journals (Sweden)
Wijayanti Nurul Khotimah
2017-01-01
Full Text Available Sign language recognition is used to help people with normal hearing communicate effectively with the deaf and hearing-impaired. Based on a survey conducted by a multi-center study in Southeast Asia, Indonesia ranks fourth in the number of patients with hearing disability (4.6%); therefore, the existence of sign language recognition is important. Some research has been conducted in this field, and many types of neural network have been used for recognizing many kinds of sign languages, but their performance still needs to be improved. This work focuses on the ASL (Alphabet Sign Language) in SIBI (Sign System of Indonesian Language), which uses one hand and 26 gestures. Here, thirty-four features were extracted using a Leap Motion controller. Further, a new method, Rule Based-Backpropagation Genetic Algorithm Neural Network (RB-BPGANN), was used to recognize these sign languages. This method is a combination of rules and a Backpropagation Genetic Algorithm Neural Network (BPGANN). Based on experiments, the proposed application can recognize sign language with up to 93.8% accuracy. It is very good at recognizing large multiclass instances and can be a solution to the overfitting problem in neural network algorithms.
TEM in situ cube-corner indentation analysis using ViBe motion detection algorithm
Yano, K. H.; Thomas, S.; Swenson, M. J.; Lu, Y.; Wharry, J. P.
2018-04-01
Transmission electron microscopic (TEM) in situ mechanical testing is a promising method for understanding plasticity in shallow ion irradiated layers and other volume-limited materials. One of the simplest TEM in situ experiments is cube-corner indentation of a lamella, but the subsequent analysis and interpretation of the experiment is challenging, especially in engineering materials with complex microstructures. In this work, we: (a) develop MicroViBe, a motion detection and background subtraction-based post-processing approach, and (b) demonstrate the ability of MicroViBe, in combination with post-mortem TEM imaging, to carry out an unbiased qualitative interpretation of TEM indentation videos. We focus this work around a Fe-9%Cr oxide dispersion strengthened (ODS) alloy, irradiated with Fe2+ ions to 3 dpa at 500 °C. MicroViBe identifies changes in Laue contrast that are induced by the indentation; these changes accumulate throughout the mechanical loading to generate a "heatmap" of features in the original TEM video that change the most during the loading. Dislocation loops with b = ½⟨111⟩ identified by post-mortem scanning TEM (STEM) imaging correspond to hotspots on the heatmap, whereas positions of dislocation loops with b = ⟨100⟩ do not correspond to hotspots. Further, MicroViBe enables consistent, objective quantitative approximation of the b = ½⟨111⟩ dislocation loop number density.
New Ground Motion Prediction Models for Caucasus Region
Jorjiashvili, N.
2012-12-01
The Caucasus is a region of numerous natural hazards and ensuing disasters. Analysis of the losses due to past disasters indicates that the most catastrophic events in the region have historically been strong earthquakes. Estimation of expected ground motion is fundamental to earthquake hazard assessment, which is why this topic is emphasized in this study. The most commonly used parameter for attenuation relations is peak ground acceleration, because this parameter gives useful information for seismic hazard assessment; consequently, many peak ground acceleration attenuation relations have been developed by different authors. A few attenuation relations were developed for the Caucasus region: Ambraseys et al. (1996, 2005), which were based on the entire European region and not focused locally on the Caucasus, and Smit et al. (2000), which was based on an amount of acceleration data that is really not enough. Since 2003, construction of the Georgian Digital Seismic Network has been under way with the help of a number of international organizations, projects, and private companies. The work conducted involved scientific as well as organizational activities, such as resolving technical problems concerning communication and data transmission. Thus, today we have the possibility to get real-time data and carry out research based on digital seismic data. Generally, ground motion and damage are influenced by the magnitude of the earthquake, the distance from the seismic source to the site, the local ground conditions, and the characteristics of buildings. In this study, new GMP models are obtained based on new data from the Georgian seismic network and also from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are also considered, because the same earthquake recorded at the same distance may cause different damage.
Quantitative Methods in Supply Chain Management Models and Algorithms
Christou, Ioannis T
2012-01-01
Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...
A linear time layout algorithm for business process models
Gschwind, T.; Pinggera, J.; Zugal, S.; Reijers, H.A.; Weber, B.
2014-01-01
The layout of a business process model influences how easily it can be understood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm that is
DiamondTorre Algorithm for High-Performance Wave Modeling
Directory of Open Access Journals (Sweden)
Vadim Levchenko
2016-08-01
Full Text Available Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if implemented with traditional algorithms. The numerical solution of the wave equation is considered, using a finite difference scheme with a cross stencil and a high order of approximation. The DiamondTorre algorithm is constructed with regard to the specifics of the memory hierarchy and parallelism of the GPGPU (general-purpose graphics processing unit). The advantages of this algorithm are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization; as a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with second-order approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.
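For contrast, the traditional stepwise-synchronized update that DiamondTorre reorders is the familiar cross-stencil leapfrog scheme; a 1-D sketch (the grid, boundary treatment, and Courant number are illustrative choices, not the paper's setup):

```python
import numpy as np

def wave_step(u_prev, u_curr, c2):
    """One leapfrog step of u_tt = c^2 u_xx with a cross stencil,
    second order in space and time; c2 = (c*dt/dx)^2 is the squared
    Courant number."""
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + c2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = u_next[-1] = 0.0  # fixed (Dirichlet) ends
    return u_next
```

At a Courant number of exactly 1 this scheme reproduces the d'Alembert solution, which makes a standing wave a convenient correctness check; DiamondTorre performs exactly these per-cell updates but traverses them in a skewed order to trade global synchronization for data locality.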
Collaborative filtering recommendation model based on fuzzy clustering algorithm
Yang, Ye; Zhang, Yunhua
2018-05-01
As one of the most widely used algorithms in recommender systems, the collaborative filtering algorithm faces two serious problems: data sparsity and poor recommendation quality in big-data environments. In traditional clustering analysis, each object is strictly assigned to one of several classes and the boundary of this division is crisp; for most objects in real life, however, there is no strict definition of the form and attributes of their class. Concerning the problems above, this paper proposes to improve the traditional collaborative filtering model through a hybrid optimization of an implicit semantic algorithm and a fuzzy clustering algorithm, working in cooperation with the collaborative filtering algorithm. The fuzzy clustering algorithm is introduced to cluster the item attribute information, so that each item belongs to different item categories with different membership degrees. This increases the density of the data, effectively reduces its sparsity, and solves the problem of low accuracy that results from inaccurate similarity calculation. Finally, the paper carries out an empirical analysis on the MovieLens dataset and compares the proposed algorithm with the traditional user-based collaborative filtering algorithm, showing greatly improved recommendation accuracy.
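The fuzzy clustering step can be sketched with plain fuzzy c-means; this is a generic implementation, not the paper's hybrid model, and the fuzzifier m = 2 is a conventional default:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: each point receives a membership degree in
    [0, 1] for every cluster (rows of U sum to 1) instead of a hard label."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        # membership-weighted cluster centers
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard FCM membership update: u_ij proportional to d_ij^(-2/(m-1))
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In the recommendation setting described above, the rows of U would be items, the columns item categories, and the graded memberships are what densify the item-category matrix before similarity computation.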
Applicability of genetic algorithms to parameter estimation of economic models
Directory of Open Access Journals (Sweden)
Marcel Ševela
2004-01-01
Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. We test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods, and simultaneously search for the parameters of the genetic algorithm that maximize the effectiveness of the computation. Genetic algorithms combine deterministic iterative computation methods with stochastic methods: each candidate solution is represented by one individual, and the lives of all generations of individuals are governed by a few parameters of the genetic algorithm. Our simulations resulted in an optimal mutation rate of 15% of all bits in the chromosomes and an optimal elitism rate of 20%. We could not determine an optimal generation size, because generation size is positively correlated with the effectiveness of the genetic algorithm over the whole range under research, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, then to the generation size; the sensitivity to the elitism rate was not as strong.
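A real-coded sketch of such an experiment, reusing the reported 15% mutation and 20% elitism rates; the demand function is replaced by a hypothetical exponential model for illustration, and gene-level mutation stands in for the paper's bit-level mutation:

```python
import numpy as np

def ga_fit(x, y, model, bounds, pop=60, gens=120, mut=0.15, elite=0.20, seed=1):
    """Real-coded GA for least-squares parameter estimation: elitism keeps
    the best `elite` fraction; each gene mutates with probability `mut`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, (pop, len(lo)))
    def mse(p):
        return np.mean((model(x, p) - y) ** 2)
    for _ in range(gens):
        P = P[np.argsort([mse(p) for p in P])]  # best individuals first
        n_elite = max(2, int(elite * pop))
        children = []
        while len(children) < pop - n_elite:
            i, j = rng.integers(0, n_elite, size=2)  # parents from the elite
            child = np.where(rng.random(len(lo)) < 0.5, P[i], P[j])  # uniform crossover
            mask = rng.random(len(lo)) < mut
            child = np.where(mask, rng.uniform(lo, hi), child)  # gene mutation
            children.append(child)
        P = np.vstack([P[:n_elite], children])
    P = P[np.argsort([mse(p) for p in P])]
    return P[0]
```

Repeating the run while sweeping `mut`, `elite`, and `pop` is exactly the kind of meta-search over GA parameters the abstract describes.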
Internal models and prediction of visual gravitational motion.
Zago, Myrka; McIntyre, Joseph; Senot, Patrice; Lacquaniti, Francesco
2008-06-01
Baurès et al. [Baurès, R., Benguigui, N., Amorim, M.-A., & Siegler, I. A. (2007). Intercepting free falling objects: Better use Occam's razor than internalize Newton's law. Vision Research, 47, 2982-2991] rejected the hypothesis that free-falling objects are intercepted using a predictive model of gravity. They argued instead for "a continuous guide for action timing" based on visual information updated till target capture. Here we show that their arguments are flawed, because they fail to consider the impact of sensori-motor delays on interception behaviour and the need for neural compensation of such delays. When intercepting a free-falling object, the delays can be overcome by a predictive model of the effects of gravity on target motion.
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
Corazza, Stefano; Gambaretto, Emiliano; Mündermann, Lars; Andriacchi, Thomas P
2010-04-01
A novel approach for the automatic generation of a subject-specific model consisting of morphological and joint location information is described. The aim is to address the need for efficient and accurate model generation for markerless motion capture (MMC) and biomechanical studies. The algorithm applies and expands previous work on human shape spaces by embedding location information for ten joint centers in a subject-specific free-form surface. The optimal locations of joint centers in the 3-D mesh were learned through linear regression over a set of nine subjects whose joint centers were known. The model was shown to be sufficiently accurate for both kinematic (joint centers) and morphological (shape of the body) information to allow accurate tracking with MMC systems. The automatic model generation algorithm was applied to 3-D meshes of different quality and resolution such as laser scans and visual hulls. The complete method was tested using nine subjects of different gender, body mass index (BMI), age, and ethnicity. Experimental training error and cross-validation errors were 19 and 25 mm, respectively, on average over the joints of the ten subjects analyzed in the study.
Assessment of cervical cancer with a parameter-free intravoxel incoherent motion imaging algorithm
Energy Technology Data Exchange (ETDEWEB)
Becker, Anton S.; Wurnig, Moritz C.; Boss, Andreas; Ghafoor, Soleen [Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich (Switzerland); Perucho, Jose A.; Khong, Pek Lan; Lee, Elaine Y. P. [Dept. of Diagnostic Radiology, The University of Hong Kong, Hong Kong (China)
2017-06-15
To evaluate the feasibility of a parameter-free intravoxel incoherent motion (IVIM) approach in cervical cancer, to assess the optimal b-value threshold, and to preliminarily examine differences in the derived perfusion and diffusion parameters for different histological cancer types. After Institutional Review Board approval, 19 female patients (mean age, 54 years; age range, 37–78 years) gave consent and were enrolled in this prospective magnetic resonance imaging study. Clinical staging and biopsy results were obtained. Echo-planar diffusion weighted sequences at 13 b-values were acquired at 3 tesla field strength. Single-sliced region-of-interest IVIM analysis with adaptive b-value thresholds was applied to each tumor, yielding the optimal fit and the optimal parameters for pseudodiffusion (D*), perfusion fraction (Fp) and diffusion coefficient (D). Monoexponential apparent diffusion coefficient (ADC) was calculated for comparison with D. Biopsy revealed squamous cell carcinoma in 10 patients and adenocarcinoma in 9. The b-value threshold (median [interquartile range]) depended on the histological type and was 35 (22.5–50) s/mm² in squamous cell carcinoma and 150 (100–150) s/mm² in adenocarcinoma (p < 0.05). Comparing squamous cell vs. adenocarcinoma, D* (45.1 [25.1–60.4] × 10⁻³ mm²/s vs. 12.4 [10.5–21.2] × 10⁻³ mm²/s) and Fp (7.5% [7.0–9.0%] vs. 9.9% [9.0–11.4%]) differed significantly between the subtypes (p < 0.02), whereas D did not (0.89 [0.75–0.94] × 10⁻³ mm²/s vs. 0.90 [0.82–0.97] × 10⁻³ mm²/s, p = 0.27). The residuals did not differ (0.74 [0.60–0.92] vs. 0.94 [0.67–1.01], p = 0.32). The ADC systematically underestimated the magnitude of diffusion restriction compared to D (p < 0.001). The parameter-free IVIM approach is feasible in cervical cancer. The b-value threshold and perfusion-related parameters depend on the tumor histology type.
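The segmented fitting strategy underlying threshold-based IVIM analysis can be sketched as follows; this is a generic illustration (noiseless synthetic data, coarse grid search for the perfusion pair), not the authors' adaptive-threshold method:

```python
import numpy as np

def ivim_segmented(b, s, b_thr):
    """Segmented IVIM fit: D from the mono-exponential tail (b >= b_thr),
    then the perfusion pair (Fp, D*) by minimizing the residual of
    S(b)/S0 = Fp*exp(-b*D*) + (1 - Fp)*exp(-b*D) over a coarse grid."""
    s = s / s[0]
    tail = b >= b_thr
    # tail: log-linear fit gives the tissue diffusion coefficient D
    slope, _ = np.polyfit(b[tail], np.log(s[tail]), 1)
    D = -slope
    best = (np.inf, 0.0, D)
    for Fp in np.linspace(0.0, 0.3, 301):       # perfusion fraction
        for Dstar in np.linspace(D, 0.1, 200):  # pseudodiffusion
            pred = Fp * np.exp(-b * Dstar) + (1 - Fp) * np.exp(-b * D)
            err = float(np.sum((pred - s) ** 2))
            if err < best[0]:
                best = (err, Fp, Dstar)
    return D, best[2], best[1]  # D, D*, Fp
```

The parameter-free variant of the study effectively repeats such a fit for every candidate threshold and keeps the one giving the best overall fit, which is how a per-tumor optimal b-value threshold emerges.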
Approximation Algorithms for Model-Based Diagnosis
Feldman, A.B.
2010-01-01
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation
Basic Research on Adaptive Model Algorithmic Control
1985-12-01
Control Conference. Richalet, J., A. Rault, J. L. Testud and J. Papon (1978). Model predictive heuristic control: applications to industrial processes. pp. 977-982.
Motion of dislocation kinks in a simple model crystal
International Nuclear Information System (INIS)
Koizumi, H.; Suzuki, T.
2005-01-01
To investigate the effects of lattice periodicity on kink motion, a molecular-dynamic simulation for a kink in a screw dislocation has been performed in a simple model lattice of diamond type. The Stillinger-Weber potential is assumed to act between atoms. Under applied stresses larger than 0.0027G, a long distance motion of a kink is possible, where G is the shear modulus. A moving kink emits lattice waves and loses its kinetic energy, which is compensated by the applied stress. The kink attains a terminal velocity after moving a few atomic distances. The kink velocity is not proportional to the applied stress, and exceeds the shear wave velocity when the applied stress is larger than 0.026G. The energy loss of the moving kink is one order of magnitude smaller than that of a moving straight dislocation and is about the same order of magnitude as the theoretical value of phonon-scattering mechanisms at room temperature
STcontrol and NEWPORT Motion Controller Model ESP 301 Device
Kapanadze, Giorgi
2015-01-01
Pixel detectors are used to detect particle tracks in LHC experiments. This kind of detector is built with silicon semiconductor diodes. Ionizing particles create charge in the diode, and the reverse bias voltage creates an electric field in the diode which causes effective charge collection by the drift of electrons [1]. One of the main parameters of tracker detectors is efficiency. The efficiency as a function of position in the pixel matrix can be evaluated by scanning the matrix with red and infrared lasers; it is important to know what happens between pixels in terms of efficiency. We perform these measurements to test a new type of pixel detector for the future LHC upgrade in 2023. New detectors are needed because the radiation level will be much higher [2]. For the measurements we need to control a stage motion controller (NEWPORT Motion Controller Model ESP 301) with the existing software STcontrol, which is used to read out data from pixel detectors and to control other devices like the lase...
Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling
Directory of Open Access Journals (Sweden)
A. Alexandre Trindade
2003-02-01
Full Text Available The large number of parameters in subset vector autoregressive models often leads one to seek fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm, written in Fortran 90, are included in the appendix, and their use is illustrated.
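The univariate case of such a Burg-type recursion fits in a few lines; this is the classical Burg algorithm, shown only as background for the multivariate subset variant the paper develops:

```python
import numpy as np

def burg(x, order):
    """Burg's method for an AR(order) model: at each order, the reflection
    coefficient minimizes the summed forward and backward prediction-error
    power.  Returned coefficients follow the convention
    x[n] + a[1]*x[n-1] + ... + a[p]*x[n-p] = e[n]."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()  # forward / backward prediction errors
    a = np.zeros(order)
    for m in range(order):
        # reflection coefficient for the current order
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        # Levinson-style order update of the AR coefficients
        a[:m], a[m] = a[:m] + k * a[:m][::-1], k
        # update both error sequences simultaneously with the old values,
        # then trim the ranges where the errors are defined
        f, b = f + k * b, b + k * f
        f, b = f[1:], b[:-1]
    return a
```

The multivariate subset version replaces the scalar reflection coefficient with matrix coefficients and constrains selected lags to zero, but the order-recursive structure is the same.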
Stochastic cluster algorithms for discrete Gaussian (SOS) models
International Nuclear Information System (INIS)
Evertz, H.G.; Hamburg Univ.; Hasenbusch, M.; Marcu, M.; Tel Aviv Univ.; Pinn, K.; Muenster Univ.; Solomon, S.
1990-10-01
We present new Monte Carlo cluster algorithms which eliminate critical slowing down in the simulation of solid-on-solid models. In this letter we focus on the two-dimensional discrete Gaussian model. The algorithms are based on reflecting the integer valued spin variables with respect to appropriately chosen reflection planes. The proper choice of the reflection plane turns out to be crucial in order to obtain a small dynamical exponent z. Actually, the successful versions of our algorithm are a mixture of two different procedures for choosing the reflection plane, one of them ergodic but slow, the other one non-ergodic and also slow when combined with a Metropolis algorithm. (orig.)
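For context, the two-dimensional discrete Gaussian (SOS) model simulated here has energy H = Σ_<ij> (h_i − h_j)² over integer heights. A plain Metropolis baseline for this model (the slow local dynamics that the reflection-cluster algorithm is designed to beat, not the cluster algorithm itself) can be sketched as:

```python
import numpy as np

def metropolis_sweep(h, beta, rng):
    """One Metropolis sweep of the 2D discrete Gaussian (SOS) model with
    H = sum over nearest neighbours of (h_i - h_j)^2, integer heights and
    periodic boundaries. Local-update baseline only; cluster reflection
    moves would replace these single-site updates."""
    L = h.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        dh = int(rng.choice((-1, 1)))
        nbrs = (h[(i + 1) % L, j] + h[(i - 1) % L, j]
                + h[i, (j + 1) % L] + h[i, (j - 1) % L])
        dE = 2 * dh * (4 * h[i, j] - nbrs) + 4   # uses dh**2 == 1
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            h[i, j] += dh
    return h
```

Such local moves change one height at a time, which is exactly why they suffer critical slowing down near the roughening transition.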
Algorithms and procedures in the model based control of accelerators
International Nuclear Information System (INIS)
Bozoki, E.
1987-10-01
The overall design of a Model Based Control system was presented. The system consists of PLUG-IN MODULES, governed by a SUPERVISORY PROGRAM and communicating via SHARED DATA FILES. Models can be added or replaced without affecting the overall system. There can be more than one module (algorithm) to perform the same task; the user can choose the most appropriate algorithm or can compare the results of different algorithms. Calculations, algorithms, file reads and writes, etc. that are used in more than one module will be kept in a subroutine library. This feature will simplify the maintenance of the system. A partial list of modules is presented, specifying the tasks they perform. 19 refs., 1 fig
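The plug-in idea (several interchangeable modules registered for the same task, selected or compared at run time by a supervisory program) can be sketched in a few lines. The task name and the toy algorithms below are hypothetical placeholders, not from the paper:

```python
# Minimal plug-in registry sketch: multiple modules per task, chosen or
# compared at run time. Task/algorithm names are illustrative only.
REGISTRY = {}

def register(task, name):
    """Decorator that files a function under (task, algorithm-name)."""
    def deco(fn):
        REGISTRY.setdefault(task, {})[name] = fn
        return fn
    return deco

@register("orbit_correction", "least_squares")
def correct_ls(readings):
    return sum(readings) / len(readings)          # placeholder algorithm

@register("orbit_correction", "median")
def correct_median(readings):
    return sorted(readings)[len(readings) // 2]   # placeholder algorithm

def run(task, name, *args):
    """Supervisory program dispatches to the chosen module."""
    return REGISTRY[task][name](*args)

def compare(task, *args):
    """Run every module registered for a task and compare their results."""
    return {name: fn(*args) for name, fn in REGISTRY[task].items()}
```

New modules are added by registration alone, so the rest of the system is untouched, mirroring the shared-library and module-replacement goals described above.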
U(6)-phonon model of nuclear collective motion
International Nuclear Information System (INIS)
Ganev, H.G.
2015-01-01
The U(6)-phonon model of nuclear collective motion with the semi-direct product structure [HW(21)]U(6) is obtained as a hydrodynamic (macroscopic) limit of the fully microscopic proton–neutron symplectic model (PNSM) with Sp(12, R) dynamical group. The phonon structure of the [HW(21)]U(6) model enables it to simultaneously include the giant monopole and quadrupole, as well as dipole, resonances and their coupling to the low-lying collective states. The U(6) intrinsic structure of the [HW(21)]U(6) model, on the other hand, gives a framework for the simultaneous shell-model interpretation of the ground-state band and the other excited low-lying collective bands. It follows that the states of the whole nuclear Hilbert space can be put into one-to-one correspondence with those of a 21-dimensional oscillator with an intrinsic (base) U(6) structure. The latter can be determined in such a way that it is compatible with the proton–neutron structure of the nucleus. The macroscopic limit of the Sp(12, R) algebra therefore provides a rigorous mechanism for implementing the unified-model ideas of coupling the valence particles to the core collective degrees of freedom within a fully microscopic framework, without introducing redundant variables or violating the Pauli principle. (author)
An Improved Nested Sampling Algorithm for Model Selection and Assessment
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, each assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood areas, and this evolution is carried out iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of that procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampler for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling step. In addition, to overcome the computational burden of the many repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
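The nested sampling idea itself is compact. In the toy sketch below, simple rejection sampling from the prior stands in for the M-H/DREAMzs local step (which is only workable for low-dimensional problems); the evidence accumulates as likelihood times shrinking prior mass:

```python
import numpy as np

def nested_sampling(loglike, prior_sample, n_live=100, n_iter=500, seed=0):
    """Minimal nested sampling estimate of the marginal likelihood (evidence).
    The worst live point is replaced by a fresh prior draw above the current
    likelihood threshold -- a rejection step standing in for the MCMC local
    sampling used in practice."""
    rng = np.random.default_rng(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    logl = [loglike(p) for p in live]
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(logl))
        X = np.exp(-i / n_live)              # expected remaining prior mass
        Z += np.exp(logl[worst]) * (X_prev - X)
        X_prev = X
        threshold = logl[worst]
        while True:                          # draw until L > current threshold
            cand = prior_sample(rng)
            cl = loglike(cand)
            if cl > threshold:
                break
        live[worst], logl[worst] = cand, cl
    Z += np.exp(np.mean(logl)) * X_prev      # crude remaining live-point term
    return Z
```

For a standard normal likelihood under a uniform prior on [-5, 5], the true evidence is about 0.1, which the sketch recovers to within its statistical error.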
A multiple model approach to respiratory motion prediction for real-time IGRT
International Nuclear Information System (INIS)
Putra, Devi; Haas, Olivier C L; Burnham, Keith J; Mills, John A
2008-01-01
Respiration induces significant movement of tumours in the vicinity of thoracic and abdominal structures. Real-time image-guided radiotherapy (IGRT) aims to adapt radiation delivery to tumour motion during irradiation. One of the main problems in achieving this objective is the time lag between the acquisition of the tumour position and the radiation delivery. Such time lag causes significant beam positioning errors and affects the dose coverage. A method to solve this problem is to employ an algorithm that is able to predict future tumour positions from available tumour position measurements. This paper presents a multiple model approach to respiratory-induced tumour motion prediction using the interacting multiple model (IMM) filter. A combination of two models, constant velocity (CV) and constant acceleration (CA), is used to capture respiratory-induced tumour motion. A Kalman filter is designed for each of the local models, and the IMM filter is applied to combine the predictions of these Kalman filters to obtain the predicted tumour position. The IMM filter, like the Kalman filter, is a recursive algorithm that is suitable for real-time applications. In addition, this paper proposes a confidence interval (CI) criterion to evaluate the performance of tumour motion prediction algorithms for IGRT. The proposed CI criterion provides a relevant measure of prediction performance in terms of clinical applications and can be used to specify the margin to accommodate prediction errors. The prediction performance of the IMM filter has been evaluated using 110 traces of 4-minute free-breathing motion collected from 24 lung-cancer patients. The simulation study was carried out for prediction times of 0.1-0.6 s with sampling rates of 3, 5 and 10 Hz. It was found that the prediction of the IMM filter was consistently better than the prediction of the Kalman filter with the CV or CA model. There was no significant difference of prediction errors for the
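The IMM mechanics described here can be sketched in one dimension. The sketch below uses assumed, illustrative parameters (noise levels, transition probabilities) and a shared [position, velocity, acceleration] state so the CV and CA modes can be mixed directly, with the CV mode simply zeroing the acceleration dynamics, a common implementation trick rather than the paper's exact formulation:

```python
import numpy as np

dt = 0.1
H = np.array([[1.0, 0.0, 0.0]])                 # position-only measurement
R = np.array([[0.05]])                          # measurement noise variance
F_cv = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 0.0]])            # CV mode
F_ca = np.array([[1, dt, 0.5 * dt**2], [0, 1, dt], [0, 0, 1.0]]) # CA mode
Q = np.diag([1e-4, 1e-3, 1e-2])                 # process noise (illustrative)
TPM = np.array([[0.95, 0.05], [0.05, 0.95]])    # mode transition matrix

def kf_step(x, P, F, z):
    """One Kalman predict/update; returns state, covariance, likelihood."""
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    Sinv = np.linalg.inv(S)
    K = P @ H.T @ Sinv
    r = z - H @ x
    x = x + K @ r
    P = (np.eye(3) - K @ H) @ P
    lik = float(np.exp(-0.5 * r @ Sinv @ r)
                / np.sqrt(2 * np.pi * np.linalg.det(S)))
    return x, P, lik

def imm_step(xs, Ps, mu, z):
    """One IMM cycle: mix, filter per mode, update mode probs, combine."""
    c = TPM.T @ mu                               # predicted mode probabilities
    x_mix, P_mix = [], []
    for j in range(2):
        w = TPM[:, j] * mu / c[j]                # mixing weights
        xm = w[0] * xs[0] + w[1] * xs[1]
        Pm = sum(w[i] * (Ps[i] + np.outer(xs[i] - xm, xs[i] - xm))
                 for i in range(2))
        x_mix.append(xm)
        P_mix.append(Pm)
    res = [kf_step(x_mix[j], P_mix[j], F, z)
           for j, F in enumerate((F_cv, F_ca))]
    xs, Ps = [r[0] for r in res], [r[1] for r in res]
    mu = np.array([r[2] for r in res]) * c       # likelihood-weighted update
    mu = mu / mu.sum()
    x_comb = mu[0] * xs[0] + mu[1] * xs[1]       # combined estimate
    return xs, Ps, mu, x_comb
```

In the paper's setting the measurement would be the observed tumour position, and the combined state would be extrapolated forward by the system's time lag to produce the prediction.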
Energy Technology Data Exchange (ETDEWEB)
Chen, H; Zhen, X; Zhou, L [Southern Medical University, Guangzhou, Guangdong (China); Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)
2016-06-15
Purpose: To propose and validate a novel real-time surface-mesh-based method for tracking internal organ-external surface motion and deformation for lung cancer radiotherapy. Methods: Deformation vector fields (DVFs), which characterize the internal and external motion, are obtained by registering the internal organ and tumor contours and external surface meshes to a reference phase in the 4D CT images using a recently developed local topology-preserving non-rigid point matching algorithm (TOP). A composite matrix is constructed by combining the estimated internal and external DVFs. Principal component analysis (PCA) is then applied to the composite matrix to extract the principal motion characteristics and finally yield the respiratory motion model parameters, which correlate the internal and external motion and deformation. The accuracy of the respiratory motion model is evaluated using a 4D NURBS-based cardiac-torso (NCAT) synthetic phantom and three lung cancer cases. The center-of-mass (COM) difference is used to measure tumor motion tracking accuracy, and the Dice coefficient (DC), percent error (PE) and Hausdorff distance (HD) are used to measure the agreement between the predicted and ground-truth tumor shape. Results: The mean COM difference is 0.84±0.49 mm and 0.50±0.47 mm for the phantom and patient data, respectively. The mean DC, PE and HD are 0.93±0.01, 0.13±0.03 and 1.24±0.34 voxels for the phantom, and 0.91±0.04, 0.17±0.07 and 3.93±2.12 voxels for the three lung cancer patients, respectively. Conclusions: We have proposed and validated a real-time surface-mesh-based organ motion and deformation tracking method with internal-external motion modeling. The preliminary results conducted on a synthetic 4D NCAT phantom and 4D CT images from three lung cancer cases show that the proposed method is reliable and accurate in tracking both the tumor motion trajectory and deformation, which can serve as a potential tool for real-time organ motion and deformation
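The PCA step that couples internal and external motion can be sketched with flattened DVFs. Shapes and data here are toy assumptions, and the registration step that would produce real DVFs is omitted; each row stands for one respiratory phase:

```python
import numpy as np

def build_motion_model(internal, external, n_modes=1):
    """PCA motion model over the composite internal/external DVF matrix.
    Rows are respiratory phases; columns are flattened DVF components."""
    X = np.hstack([internal, external])          # phases x (internal+external)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                         # principal motion modes
    scores = (X - mean) @ modes.T                # per-phase mode weights
    return mean, modes, scores

def predict_internal(mean, modes, external_obs, n_int):
    """Infer the internal DVF from an observed external surface by solving
    for mode weights on the external block, then applying them internally."""
    w, *_ = np.linalg.lstsq(modes[:, n_int:].T,
                            external_obs - mean[n_int:], rcond=None)
    return mean[:n_int] + w @ modes[:, :n_int]
```

When the internal and external motion are perfectly correlated (a rank-one toy case), the internal DVF is recovered exactly from the external observation.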
International Nuclear Information System (INIS)
McCall, K C; Jeraj, R
2007-01-01
A new approach to the problem of modelling and predicting respiration motion has been implemented. This is a dual-component model, which describes the respiration motion as a non-periodic time series superimposed onto a periodic waveform. A periodic autoregressive moving average algorithm has been used to define a mathematical model of the periodic and non-periodic components of the respiration motion. The periodic components of the motion were found by projecting multiple inhale-exhale cycles onto a common subspace. The component of the respiration signal that is left after removing this periodicity is a partially autocorrelated time series and was modelled as an autoregressive moving average (ARMA) process. The accuracy of the periodic ARMA model with respect to fluctuation in amplitude and variation in length of cycles has been assessed. A respiration phantom was developed to simulate the inter-cycle variations seen in free-breathing and coached respiration patterns. At ±14% variability in cycle length and maximum amplitude of motion, the prediction errors were 4.8% of the total motion extent for a 0.5 s ahead prediction, and 9.4% at a 1.0 s lag. The prediction errors increased to 11.6% at 0.5 s and 21.6% at 1.0 s when the respiration pattern had ±34% variations in both these parameters. Our results have shown that the accuracy of the periodic ARMA model is more strongly dependent on variations in cycle length than on the amplitude of the respiration cycles
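The AR core of such ARMA predictors can be illustrated compactly: fit AR(p) coefficients by least squares and extrapolate recursively. For a noise-free sinusoid (a stand-in for the periodic component), an AR(2) model reproduces the signal exactly; this is a toy illustration, not the paper's periodic-ARMA predictor:

```python
import numpy as np

def fit_ar_ls(x, p):
    """Least-squares AR(p) fit: x[n] ~ c[0]*x[n-1] + ... + c[p-1]*x[n-p]."""
    X = np.column_stack([x[p - 1 - i: len(x) - 1 - i] for i in range(p)])
    c, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return c

def forecast(x, c, steps):
    """Iterate the fitted AR recursion to predict 'steps' samples ahead."""
    h = list(x)
    for _ in range(steps):
        h.append(sum(cj * h[-1 - j] for j, cj in enumerate(c)))
    return np.array(h[len(x):])
```

A pure sinusoid sin(ωn) satisfies x[n] = 2cos(ω)x[n-1] − x[n-2], so the fit recovers those coefficients and the multi-step forecast tracks the true continuation.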
A model for the pilot's use of motion cues in roll-axis tracking tasks
Levison, W. H.; Junker, A. M.
1977-01-01
Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.
Huang, Xiaokun; Zhang, You; Wang, Jing
2018-02-01
Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture target motion trajectory, reduce motion artifacts, and reduce imaging dose and time. However, the limited numbers of projections in each phase after phase-sorting decreases CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details whose quality is degraded due to insufficient projection number, which consequently degrades the reconstructed image quality in corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of solved motion in the organ’s fine structures regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model’s accuracy in regions containing small fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.
Co-clustering models, algorithms and applications
Govaert, Gérard
2013-01-01
Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture
Comparison of parameter estimation algorithms in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2006-01-01
Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well......-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attractions. The global SCE procedure is, in general, more effective...... and provides a better coverage of the Pareto optimal solutions at a lower computational cost....
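The contrast between local and global search on a multimodal response surface can be illustrated on a toy objective: the 1-D Rastrigin function stands in for a calibration response surface with many local optima (neither PEST's Gauss-Marquardt-Levenberg method nor SCE is reproduced here):

```python
import numpy as np

def rastrigin(x):
    """1-D Rastrigin: many local minima, global minimum 0 at x = 0."""
    return x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0

def local_descent(x, step=0.01, iters=300):
    """Naive local search: accept a +/- step whenever it lowers the
    objective. A stand-in for a gradient-based local calibration method."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if rastrigin(cand) < rastrigin(x):
                x = cand
    return x

def multistart(n_starts=100, seed=0):
    """Crude global strategy: many random restarts, keep the best optimum."""
    rng = np.random.default_rng(seed)
    starts = rng.uniform(-5.12, 5.12, n_starts)
    return min((local_descent(s) for s in starts), key=rastrigin)
```

A single local descent started away from the origin gets trapped in a nearby local minimum, while the restart strategy reliably reaches the global basin, the same qualitative behavior reported for PEST versus SCE above.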
Applied economic model development algorithm for electronics company
Directory of Open Access Journals (Sweden)
Mikhailov I.
2017-01-01
Full Text Available The purpose of this paper is to report experience gained in creating methods and algorithms that simplify the development of applied decision support systems. It describes an algorithm that is the result of two years of research and more than a year of practical verification. In electronic-component testing, the moment of contract conclusion is where the greatest managerial mistakes can be made: at this stage it is difficult to achieve a realistic assessment of the time limit and wage fund for future work. Creating an estimating model is one way to solve this problem. The article presents an algorithm for creating such models, based on the example of developing an analytical model for estimating the amount of work. The paper lists the algorithm's stages and explains their meaning in terms of the participants' goals. Implementing the algorithm has made it possible to halve the development time of these models and to fulfil management's requirements. The resulting models have produced a significant economic effect. A new set of tasks was identified for further theoretical study.
Dynamic model of gross plasma motion in Scyllac
International Nuclear Information System (INIS)
Miller, G.
1975-01-01
Plasma confinement in a high-beta stellarator such as Scyllac is ended by an unstable long wavelength m = 1 motion of the plasma to the discharge tube wall. Such behavior has been observed in several experiments and is considered well understood theoretically on the basis of the sharp boundary ideal MHD model. However the standard theoretical approach using the energy principle offers little physical insight, and sheds no light on the process by which the plasma reaches an equilibrium configuration starting from the initial conditions created by the theta pinch implosion. It was the purpose of this work to find a more complete explanation of the observed plasma behavior in Scyllac and to apply this to the design of a feedback stabilized experiment. Some general consideration is also given to dynamic stabilization
Optimal dividends in the Brownian motion risk model with interest
Fang, Ying; Wu, Rong
2009-07-01
In this paper, we consider a Brownian motion risk model, and in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy so as to maximize the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied. In many circumstances this is not desirable. This consideration leads us to impose a restriction on the dividend stream. We assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is formed by a threshold strategy.
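The threshold strategy described here is easy to simulate. The sketch below is a one-path Euler discretization with illustrative parameter values (it estimates nothing about the paper's analytical optimum): dividends are paid at the maximal admissible rate whenever the surplus exceeds the threshold, the surplus earns interest, and payments are discounted until ruin.

```python
import numpy as np

def discounted_dividends(mu=1.0, sigma=1.0, r=0.02, delta=0.05, barrier=3.0,
                         m_rate=2.0, x0=2.0, horizon=50.0, dt=1e-3, seed=0):
    """One-path Euler simulation of a threshold dividend strategy:
    pay at rate m_rate while surplus > barrier, earn interest at force r,
    discount at force delta, stop at ruin. All parameters illustrative."""
    rng = np.random.default_rng(seed)
    n = int(horizon / dt)
    dW = rng.standard_normal(n) * np.sqrt(dt)
    x, value = x0, 0.0
    for k in range(n):
        rate = m_rate if x > barrier else 0.0
        value += np.exp(-delta * k * dt) * rate * dt
        x += (mu + r * x - rate) * dt + sigma * dW[k]
        if x <= 0.0:                      # ruin: no further dividends
            break
    return value
```

Averaging many such paths would approximate the expected discounted dividend value of the strategy; by construction the value of any path is bounded by the perpetuity m_rate/delta.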
Economic Models and Algorithms for Distributed Systems
Neumann, Dirk; Altmann, Jorn; Rana, Omer F
2009-01-01
Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems
Robust Return Algorithm for Anisotropic Plasticity Models
DEFF Research Database (Denmark)
Tidemann, L.; Krenk, Steen
2017-01-01
Plasticity models can be defined by an energy potential, a plastic flow potential and a yield surface. The energy potential defines the relation between the observable elastic strains ϒe and the energy conjugate stresses Τe and between the non-observable internal strains i and the energy conjugat...
A tractable algorithm for the wellfounded model
Jonker, C.M.; Renardel de Lavalette, G.R.
In the area of general logic programming (negated atoms allowed in the bodies of rules) and reason maintenance systems, the wellfounded model (first defined by Van Gelder, Ross and Schlipf in 1988) is generally considered to be the declarative semantics of the program. In this paper we present
Directory of Open Access Journals (Sweden)
Dipok K. Bora
2016-03-01
Full Text Available We focused on validating the applicability of a semi-empirical technique (spectral models and stochastic simulation) for the estimation of ground-motion characteristics in the northeastern region (NER) of India. In the present study, it is assumed that the point-source approximation in the far field is valid. The one-dimensional stochastic point-source seismological model of Boore (Boore, D.M. 1983. Stochastic simulation of high-frequency ground motions based on seismological models of the radiated spectra. Bulletin of the Seismological Society of America, 73, 1865–1894) is used for modelling the acceleration time histories. Ground-motion records of 30 earthquakes with magnitudes between MW 4.2 and 6.2 in NER India, from March 2008 to April 2013, are used for this study. We considered peak ground acceleration (PGA) and pseudospectral acceleration (response spectrum amplitudes) with a 5% damping ratio at three fundamental natural periods, namely 0.3, 1.0, and 3.0 s. The spectral models, which work well for PGA, overestimate the pseudospectral acceleration. It seems that there is a strong influence of local site amplification and crustal attenuation (kappa), which control spectral amplitudes at different frequencies. The results allow analysing regional peculiarities of ground-motion excitation and propagation, and updating seismic hazard assessment in both the probabilistic and deterministic approaches.
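The flavour of the Boore-style stochastic method can be sketched in a few lines: window Gaussian noise, then shape its spectrum by an omega-square source model with high-frequency kappa attenuation. All constants below are illustrative, and the path attenuation and site terms of a real implementation are omitted:

```python
import numpy as np

def stochastic_ground_motion(f0=2.0, kappa=0.04, dur=10.0, dt=0.005, seed=0):
    """Toy stochastic simulation: windowed Gaussian noise shaped in the
    frequency domain by an omega-square acceleration spectrum with corner
    frequency f0 and kappa attenuation. Constants illustrative only."""
    rng = np.random.default_rng(seed)
    n = int(dur / dt)
    t = np.arange(n) * dt
    tau = dur / 5.0
    window = (t / tau) * np.exp(1.0 - t / tau)         # simple envelope
    spec = np.fft.rfft(rng.standard_normal(n) * window)
    f = np.fft.rfftfreq(n, dt)
    # omega-square acceleration spectrum with kappa high-frequency decay
    shape = (2 * np.pi * f) ** 2 / (1.0 + (f / f0) ** 2) \
        * np.exp(-np.pi * kappa * f)
    spec = spec / (np.abs(spec).mean() + 1e-12) * shape
    return t, np.fft.irfft(spec, n)
```

From many such synthetic accelerograms one would read off PGA and response-spectrum amplitudes for comparison with recorded motions, as done in the study above.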
Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H
2017-09-01
Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach
Deciphering the crowd: modeling and identification of pedestrian group motion.
Yücel, Zeynep; Zanlungo, Francesco; Ikeda, Tetsushi; Miyashita, Takahiro; Hagita, Norihiro
2013-01-14
Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation.
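A toy version of the positional/directional pair test can be sketched as follows. The thresholds and the simple AND decision rule are illustrative stand-ins, not the paper's calibrated models or its compound hypothesis-testing scheme:

```python
import numpy as np

def same_group(traj_a, traj_b, d_max=1.5, theta_max=0.5):
    """Toy pair classifier. Positional relation: mean gap between the two
    trajectories (rows are [x, y] positions over time). Directional
    relation: mean angle between step vectors. A pair is called 'same
    group' when both stay below their (illustrative) thresholds."""
    gaps = np.linalg.norm(traj_a - traj_b, axis=1)
    va, vb = np.diff(traj_a, axis=0), np.diff(traj_b, axis=0)
    cos = np.sum(va * vb, axis=1) / (
        np.linalg.norm(va, axis=1) * np.linalg.norm(vb, axis=1) + 1e-12)
    ang = np.arccos(np.clip(cos, -1.0, 1.0))
    return bool(gaps.mean() < d_max and ang.mean() < theta_max)
```

Two pedestrians walking side by side pass both tests, while a pair moving apart in opposite directions fails, which is the intuition the full hypothesis-testing scheme makes statistically rigorous.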
International Nuclear Information System (INIS)
Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.
2013-01-01
Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). Conclusions: Target volumes in images generated
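The two amplitude-based gating schemes compared above can be sketched directly on a breathing-amplitude trace: A1 uses equal-width amplitude bins and A2 uses equal-count (quantile) bins. This is an illustrative sketch of the binning only, not the PET list-mode processing:

```python
import numpy as np

def amplitude_gate(sig, n_bins=8, equal_counts=False):
    """Assign each sample of a breathing-amplitude trace to one of n_bins.
    equal_counts=False -> A1-style equal-width amplitude bins;
    equal_counts=True  -> A2-style equal counts per bin (quantile edges)."""
    if equal_counts:
        edges = np.quantile(sig, np.linspace(0.0, 1.0, n_bins + 1))
    else:
        edges = np.linspace(sig.min(), sig.max(), n_bins + 1)
    return np.clip(np.digitize(sig, edges[1:-1]), 0, n_bins - 1)
```

Phase gating would instead assign bins by elapsed time between detected end-inhale peaks; in either case the list-mode events falling in each bin are reconstructed into one gated image.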
Prediction of strong earthquake motions on rock surface using evolutionary process models
International Nuclear Information System (INIS)
Kameda, H.; Sugito, M.
1984-01-01
Stochastic process models are developed for prediction of strong earthquake motions for engineering design purposes. Earthquake motions with nonstationary frequency content are modeled by using the concept of evolutionary processes. Discussion is focused on the earthquake motions on bed rocks which are important for construction of nuclear power plants in seismic regions. On this basis, two earthquake motion prediction models are developed, one (EMP-IB Model) for prediction with given magnitude and epicentral distance, and the other (EMP-IIB Model) to account for the successive fault ruptures and the site location relative to the fault of great earthquakes. (Author) [pt
International Nuclear Information System (INIS)
Badkul, R; Pokhrel, D; Jiang, H; Lominska, C; Wang, F; Ramanjappa, T
2016-01-01
Purpose: Intra-fractional tumor motion due to respiration may compromise dose delivery in SBRT of lung tumors. Even when sufficient margins are used to ensure there is no geometric miss of the target volume, a dose blurring effect may be present due to motion and could impact tumor coverage if the motion is large. In this study we investigated the dose blurring effect for open fields as well as for lung SBRT patients planned using 2 non-coplanar dynamic conformal arcs (NCDCA) and a few conformal beams (CB), calculated with a Monte Carlo (MC) based algorithm, utilizing a phantom with a 2D diode array (MapCheck) and an ion chamber. Methods: SBRT lung patients were planned on the Brainlab iPlan system using a 4D-CT scan; the ITV was contoured on the MIP image set and verified on all breathing-phase image sets to account for breathing motion, and a 5 mm margin was then applied to generate the PTV. Plans were created using two NCDCA and 4-5 CB with 6 MV photons, calculated using the XVMC MC algorithm. Three SBRT patient plans were transferred to a phantom with MapCheck and a 0.125 cc ion chamber inserted in the middle of the phantom to calculate dose. Open fields of 3×3, 5×5 and 10×10 were also calculated on this phantom. The phantom was placed on a motion platform with motion varying over 5, 10, 20 and 30 mm with a duty cycle of 4 seconds. Measurements were carried out for the open fields as well as the 3 patient plans, both static and at various degrees of motion. MapCheck planar dose and ion-chamber readings were collected and compared with static measurements and computed values to evaluate the dosimetric effect of motion on tumor coverage. Results: To eliminate the complexity of the patient plans, 3 simple open fields were also measured to observe the dose blurring effect with the introduction of motion. All ion-chamber values measured with motion were normalized to the corresponding static value. For the 5×5 and 10×10 open fields, normalized central-axis ion-chamber values were 1.00 for all motions, but for 3×3 they were 1 up to 10mm motion and 0.97 and 0
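The dose blurring mechanism measured here can be illustrated with a toy 1D model: the motion-averaged dose is the static profile averaged over the target's position distribution during the breathing cycle. The field size, penumbra width, and sinusoidal trace below are illustrative assumptions, not the study's beam data:

```python
import math

def static_dose(x, field=30.0, penumbra=3.0):
    """Idealised 1D dose profile (mm): flat inside the field,
    linear fall-off across the penumbra."""
    edge = field / 2.0
    d = abs(x)
    if d <= edge - penumbra:
        return 1.0
    if d >= edge + penumbra:
        return 0.0
    return (edge + penumbra - d) / (2.0 * penumbra)

def blurred_dose(x, amplitude, steps=200):
    """Motion-averaged dose at x for sinusoidal motion with the given
    peak-to-peak amplitude (mm), sampling the cycle phase uniformly."""
    total = 0.0
    for k in range(steps):
        phase = 2.0 * math.pi * k / steps
        offset = 0.5 * amplitude * math.sin(phase)
        total += static_dose(x + offset)
    return total / steps

central_static = static_dose(0.0)        # 1.0 for the static 30 mm field
central_small = blurred_dose(0.0, 5.0)   # 5 mm motion stays inside the flat region
central_large = blurred_dose(0.0, 30.0)  # 30 mm motion reduces the central dose
```

This reproduces the qualitative finding above: small motions leave the central dose of a small field untouched, while motion comparable to the field size pulls the penumbra over the center.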
Simulation of spatiotemporal CT data sets using a 4D MRI-based lung motion model.
Marx, Mirko; Ehrhardt, Jan; Werner, René; Schlemmer, Heinz-Peter; Handels, Heinz
2014-05-01
Four-dimensional CT imaging is widely used to account for motion-related effects during radiotherapy planning of lung cancer patients. However, 4D CT often contains motion artifacts, cannot be used to measure motion variability, and leads to higher dose exposure. In this article, we propose using 4D MRI to acquire motion information for the radiotherapy planning process. From the 4D MRI images, we derive a time-continuous model of the average patient-specific respiratory motion, which is then applied to simulate 4D CT data based on a static 3D CT. The idea of the motion model is to represent the average lung motion over a respiratory cycle by cyclic B-spline curves. The model generation consists of motion field estimation in the 4D MRI data by nonlinear registration, assigning respiratory phases to the motion fields, and applying a B-spline approximation on a voxel-by-voxel basis to describe the average voxel motion over a breathing cycle. To simulate a patient-specific 4D CT based on a static CT of the patient, a multi-modal registration strategy is introduced to transfer the motion model from MRI to the static CT coordinates. Differences between model-based estimated and measured motion vectors are on average 1.39 mm for amplitude-based binning of the 4D MRI data of three patients. In addition, the MRI-to-CT registration strategy is shown to be suitable for the model transformation. The application of our 4D MRI-based motion model for simulating 4D CT images provides advantages over standard 4D CT (less motion artifacts, radiation-free). This makes it interesting for radiotherapy planning.
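The cyclic B-spline representation of average voxel motion can be sketched for a single voxel: a uniform periodic cubic B-spline, evaluated by hand, turns a handful of per-phase displacement estimates into a smooth closed trajectory over the breathing cycle. The eight displacement values are hypothetical, not patient data:

```python
def periodic_bspline(ctrl, u):
    """Evaluate a uniform periodic cubic B-spline at parameter u in [0, N),
    where ctrl holds the N control points of one breathing cycle.
    Segment i blends ctrl[i-1..i+2] with the cubic B-spline basis."""
    n = len(ctrl)
    i = int(u) % n
    t = u - int(u)
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return (b0 * ctrl[(i - 1) % n] + b1 * ctrl[i]
            + b2 * ctrl[(i + 1) % n] + b3 * ctrl[(i + 2) % n])

# Hypothetical superior-inferior displacements (mm) of one voxel at 8 phases:
phases = [0.0, 2.1, 5.4, 8.0, 7.2, 4.5, 1.8, 0.3]
curve = [periodic_bspline(phases, k / 10.0) for k in range(80)]
```

Because the control points wrap around, the curve closes on itself (end-exhale flows back into inhale), which is exactly the cyclicity the model requires.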
Directory of Open Access Journals (Sweden)
D. V. Vereshikov
2014-01-01
Full Text Available The article presents research results on the efficiency of an adaptive control algorithm for a maneuverable aircraft, implemented in the lateral control channel and based on identification of some aerodynamic characteristics of the aircraft and of the disturbances caused by an asymmetric configuration of external stores. Simulation of aircraft motion during the performance of S-shaped maneuvers, using a software modeling complex and a flight simulation stand, demonstrates a substantial improvement in the handling qualities of the asymmetric aircraft and, consequently, an increase in the effectiveness of its operation.
Differential Evolution algorithm applied to FSW model calibration
Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.
2014-03-01
Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
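The calibration loop described above can be sketched with a minimal DE/rand/1/bin implementation. The CFD misfit is replaced here by a mock quadratic objective with assumed "true" parameters; the population size, scaling factor and crossover rate are generic defaults, not the values studied in the paper:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with scaled difference vectors,
    binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # at least one dim from mutant
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)     # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = objective(trial)
            if tc <= cost[i]:                   # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Stand-in for the CFD misfit: recover assumed "true" parameters (1.5, 0.3).
mock_misfit = lambda x: (x[0] - 1.5) ** 2 + (x[1] - 0.3) ** 2
params, err = differential_evolution(mock_misfit, [(0.0, 5.0), (0.0, 1.0)])
```

In the actual calibration each objective evaluation is a full CFD run, which is why the choices studied in the paper (strategy, mutation factor, crossover rate) matter for keeping the evaluation count low.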
Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using
Directory of Open Access Journals (Sweden)
M. V. Vyaznikov
2014-01-01
Full Text Available The paper presents the results of a study of the nonlinear interaction processes between the supporting surface of the track assembly and the ground in the contact patch, using mathematical models of friction. For the case of curvilinear motion of a tracked vehicle, when the resultant of the elementary friction forces is limited by the adhesion available as the tracks slide on the ground, an increase in the lateral component leads to a decrease in the longitudinal component and a change in the direction of the resultant force. As a result, as the angular velocity of the tracked vehicle increases, the longitudinal component of the friction force decreases; this is a geometric effect, defined by the friction locus for a given type of soil. Developing this well-known model, the paper considers the general case of friction, which describes the reduction of the friction coefficient in the contact patch as the angular velocity of rotation increases. This process is described by a model of combined friction, which arises when the surface of the body performs rotational and translational motion at the same time. The resulting expressions for the resultant friction force and the moment of resistance to rotation are based on a first-order Padé decomposition for a flat rectangular contact patch between the track assembly and the ground. Under combined friction, any arbitrarily small perturbation force acting parallel to the surface of the contact patch leads to slip. The paper considers the possibility of using the combined-friction model to study the stability of curvilinear motion of tracked vehicles, and proposes treating machine motion in the skidding mode on the basis of friction-slip. Interpreting the physical processes occurring in the contact area on the basis of the theory of combined friction would allow this mathematical model to be used in the algorithm structure of automatic traffic control
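The central claim, that spin redirects the elementary friction forces and shrinks the longitudinal resultant, can be checked by brute-force integration of Coulomb friction over a rectangular contact patch. The patch dimensions, friction coefficient and pressure below are arbitrary placeholders, and the sketch integrates numerically rather than using the paper's Padé decomposition:

```python
import math

def longitudinal_friction(v, omega, length=4.0, width=0.3, mu=0.7,
                          pressure=1.0, n=40):
    """Longitudinal Coulomb friction force on a rectangular contact patch
    whose points move with translational speed v (along x) plus rotation
    omega about the patch centre; friction opposes each point's velocity."""
    fx = 0.0
    da = (length / n) * (width / n)             # area element
    for i in range(n):
        for j in range(n):
            x = -length / 2 + (i + 0.5) * length / n
            y = -width / 2 + (j + 0.5) * width / n
            vx = v - omega * y                  # local sliding velocity
            vy = omega * x
            speed = math.hypot(vx, vy)
            if speed > 1e-12:
                fx += -mu * pressure * da * vx / speed
    return fx

slow_spin = abs(longitudinal_friction(1.0, 0.1))
fast_spin = abs(longitudinal_friction(1.0, 2.0))
# Faster spin redirects the elementary forces sideways, so fast_spin < slow_spin.
```

The longitudinal force never exceeds the pure-sliding limit mu × pressure × area, and it falls monotonically as the rotation rate grows, which is the behaviour the friction locus encodes.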
A new model and simple algorithms for multi-label mumford-shah problems
Hong, Byungwoo
2013-06-01
In this work, we address the multi-label Mumford-Shah problem, i.e., the problem of jointly estimating a partitioning of the domain of the image, and functions defined within regions of the partition. We create algorithms that are efficient, robust to undesirable local minima, and easy to implement. Our algorithms are formulated by slightly modifying the underlying statistical model from which the multi-label Mumford-Shah functional is derived. The advantage of this statistical model is that the underlying variables (the labels and the functions) are less coupled than in the original formulation, and the labels can be computed from the functions with more global updates. The resulting algorithms can be tuned to the desired level of locality of the solution: from fully global updates to more local updates. We demonstrate our algorithm on two applications: joint multi-label segmentation and denoising, and joint multi-label motion segmentation and flow estimation. We compare to the state-of-the-art in multi-label Mumford-Shah problems and show that we achieve more promising results. © 2013 IEEE.
Complex motion of elevators in piecewise map model combined with circle map
Nagatani, Takashi
2013-11-01
We study the dynamic behavior in the elevator traffic controlled by capacity when the inflow rate of passengers into elevators varies periodically with time. The dynamics of elevators is described by the piecewise map model combined with the circle map. The motion of the elevators depends on the inflow rate, its period, and the number of elevators. The motion in the piecewise map model combined with the circle map shows a complex behavior different from the motion in the piecewise map model.
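The abstract does not reproduce the elevator piecewise map itself, so the sketch below shows only the circle-map ingredient: the standard sine circle map and its winding number, the quantity used to detect mode locking between a periodic drive (here, the inflow rate) and the system's response. All parameter values are illustrative:

```python
import math

def winding_number(omega, k, n=5000):
    """Average rotation per iteration of the standard sine circle map
    theta -> theta + omega - (k / 2*pi) * sin(2*pi*theta)  (mod 1),
    used to detect mode locking between drive and response."""
    theta, total = 0.0, 0.0
    for _ in range(n):
        step = omega - (k / (2.0 * math.pi)) * math.sin(2.0 * math.pi * theta)
        total += step
        theta = (theta + step) % 1.0
    return total / n

w_free = winding_number(0.3, 0.0)   # no coupling: winding number equals omega
w_half = winding_number(0.5, 0.9)   # symmetric period-2 orbit: exactly 1/2
```

Rational winding numbers correspond to periodic (mode-locked) motion; irrational ones to quasiperiodic motion, and it is the interplay of such locking with the piecewise capacity map that produces the complex elevator dynamics described above.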
Methodology, models and algorithms in thermographic diagnostics
Živčák, Jozef; Madarász, Ladislav; Rudas, Imre J
2013-01-01
This book presents the methodology and techniques of thermographic applications, with a focus primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half includes tools of intelligent engineering applied to the solving of selected applications and projects. Thermographic diagnostics was applied to the problems of paraplegia and tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were created with the cooperation of the four projects within the Ministry of Education, Science, Research and Sport of the Slovak Republic entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te...
Detailed modelling of strong ground motion in Trieste
International Nuclear Information System (INIS)
Vaccari, F.; Romanelli, F.; Panza, G.
2005-05-01
Trieste has been included in category IV by the new Italian seismic code. This corresponds to a horizontal acceleration of 0.05g for the anchoring of the elastic response spectrum. A detailed modelling of the ground motion in Trieste has been done for some scenario earthquakes, compatible with the seismotectonic regime of the region. Three-component synthetic seismograms (displacements, velocities and accelerations) have been analyzed to obtain significant parameters of engineering interest. The definition of the seismic input, derived from a comprehensive set of seismograms analyzed in the time and frequency domains, represents a powerful and convenient tool for seismic microzoning. In the specific case of Palazzo Carciotti, depending on the azimuth of the incoming wavefield, an increase of one degree in intensity may be expected due to different amplification patterns, while good stability can be seen in the periods corresponding to the peak values, with amplifications around 1 and 2 Hz. For Palazzo Carciotti, the most dangerous scenario considered, for an event of M=6.5 at an epicentral distance of 21 km, modelled taking into account source finiteness and directivity, leads to a peak ground acceleration value of 0.2 g. The seismic code, being based on a probabilistic approach, can be considered representative of the average seismic shaking for the province of Trieste, and can slightly underestimate the seismic input due to the seismogenic potential (obtained from the historical seismicity and seismotectonics). Furthermore, relevant local site effects are mostly neglected. Both modelling and observations show that site conditions in the centre of Trieste can amplify the ground motion at the bedrock by a factor of five, in the frequency range of engineering interest. We may therefore expect macroseismic intensities as high as IX (MCS) corresponding to VIII (MSK). Spectral amplifications obtained for the considered scenario earthquakes are strongly event
Modeling Algorithms in SystemC and ACL2
Directory of Open Access Journals (Sweden)
John W. O'Leary
2014-06-01
Full Text Available We describe the formal language MASC, based on a subset of SystemC and intended for modeling algorithms to be implemented in hardware. By means of a special-purpose parser, an algorithm coded in SystemC is converted to a MASC model for the purpose of documentation, which in turn is translated to ACL2 for formal verification. The parser also generates a SystemC variant that is suitable as input to a high-level synthesis tool. As an illustration of this methodology, we describe a proof of correctness of a simple 32-bit radix-4 multiplier.
Algorithmic fault tree construction by component-based system modeling
International Nuclear Information System (INIS)
Majdara, Aref; Wakabayashi, Toshio
2008-01-01
Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author)
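The function-table-plus-trace-back idea can be sketched on a hypothetical two-component line (a pump feeding a valve). The component names, deviations and fault lists are invented for illustration; the paper's actual tables also include state transitions, which are omitted here:

```python
# Each function-table entry maps an output deviation to the input deviations
# and internal faults that can cause it.
FUNCTION_TABLES = {
    "valve": {"no-outflow": {"inputs": ["no-inflow"],
                             "faults": ["valve-stuck-closed"]}},
    "pump":  {"no-outflow": {"inputs": ["no-suction"],
                             "faults": ["pump-fails-off", "pump-breaker-open"]}},
}
# Which upstream output feeds which input:
CONNECTIONS = {("valve", "no-inflow"): ("pump", "no-outflow")}

def trace_back(component, deviation):
    """Expand a deviation into an OR gate over the component's basic faults
    plus, recursively, the upstream deviations that propagate into it."""
    entry = FUNCTION_TABLES[component][deviation]
    gate = {"event": component + ": " + deviation, "gate": "OR",
            "basic": list(entry["faults"]), "children": []}
    for dev in entry["inputs"]:
        upstream = CONNECTIONS.get((component, dev))
        if upstream:
            gate["children"].append(trace_back(*upstream))
    return gate

# Top event: no flow at the valve outlet.
tree = trace_back("valve", "no-outflow")
```

Starting from the top event, the recursion walks upstream through the connection map until it reaches components with no modeled inputs, yielding the fault tree automatically.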
Algorithm of Dynamic Model Structural Identification of the Multivariable Plant
Directory of Open Access Journals (Sweden)
Л.М. Блохін
2004-02-01
Full Text Available A new algorithm for the structural identification of a dynamic model of a multivariable stabilized plant with observable and unobservable disturbances in regular operating modes is offered in this paper. With the help of the offered algorithm it is possible to determine the “perturbed” models of the dynamics not only of the plant, but also the dynamic characteristics of observable and unobservable random disturbances, taking into account the absence of correlation between the disturbances themselves and between the control inputs and the unobservable perturbations.
Introduction to genetic algorithms as a modeling tool
International Nuclear Information System (INIS)
Wildberger, A.M.; Hickok, K.A.
1990-01-01
Genetic algorithms are search and classification techniques modeled on natural adaptive systems. This is an introduction to their use as a modeling tool with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in genetic algorithms and to recognize those which might impact on electric power engineering. Beginning with a discussion of genetic algorithms and their origin as a model of biological adaptation, their advantages and disadvantages are described in comparison with other modeling tools such as simulation and neural networks in order to provide guidance in selecting appropriate applications. In particular, their use is described for improving expert systems from actual data and they are suggested as an aid in building mathematical models. Using the Thermal Performance Advisor as an example, it is suggested how genetic algorithms might be used to make a conventional expert system and mathematical model of a power plant adapt automatically to changes in the plant's characteristics
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...
ABC Algorithm based Fuzzy Modeling of Optical Glucose Detection
Directory of Open Access Journals (Sweden)
SARACOGLU, O. G.
2016-08-01
Full Text Available This paper presents a modeling approach based on the use of a fuzzy reasoning mechanism to describe a measured data set obtained from an optical sensing circuit. For this purpose, we implemented a simple but effective in vitro optical sensor to measure the glucose content of an aqueous solution. The measured data contain analog voltages representing the absorbance values at three wavelengths, measured from an RGB LED at different glucose concentrations. To achieve the desired model performance, the parameters of the fuzzy models are optimized by using the artificial bee colony (ABC) algorithm. The modeling results presented in this paper indicate that the fuzzy model optimized by the algorithm provides a successful modeling performance, with a minimum mean squared error (MSE) of 0.0013, in good agreement with the measurements.
Analytic model for surface ground motion with spall induced by underground nuclear tests
International Nuclear Information System (INIS)
MacQueen, D.H.
1982-04-01
This report provides a detailed presentation and critique of a model used to characterize the surface ground motion following a contained, spalling underground nuclear explosion intended for calculation of the resulting atmospheric acoustic pulse. Some examples of its use are included. Some discussion of the general approach of ground motion model parameter extraction, not dependent on the specific model, is also presented
A fractional motion diffusion model for grading pediatric brain tumors.
Karaman, M Muge; Wang, He; Sui, Yi; Engelhard, Herbert H; Li, Yuhua; Zhou, Xiaohong Joe
2016-01-01
To demonstrate the feasibility of a novel fractional motion (FM) diffusion model for distinguishing low- versus high-grade pediatric brain tumors; and to investigate its possible advantage over the apparent diffusion coefficient (ADC) and/or a previously reported continuous-time random-walk (CTRW) diffusion model. With approval from the institutional review board and written informed consent from the legal guardians of all participating patients, this study involved 70 children with histopathologically proven brain tumors (30 low-grade and 40 high-grade). Multi-b-value diffusion images were acquired and analyzed using the FM, CTRW, and mono-exponential diffusion models. The FM parameters, D_fm, φ, ψ (non-Gaussian diffusion statistical measures), and the CTRW parameters, D_m, α, β (non-Gaussian temporal and spatial diffusion heterogeneity measures), were compared between the low- and high-grade tumor groups by using a Mann-Whitney-Wilcoxon U test. The performance of the FM model for differentiating between low- and high-grade tumors was evaluated and compared with that of the CTRW and the mono-exponential models using a receiver operating characteristic (ROC) analysis. The FM parameters were significantly lower (p < 0.0001) in the high-grade (D_fm: 0.81 ± 0.26, φ: 1.40 ± 0.10, ψ: 0.42 ± 0.11) than in the low-grade (D_fm: 1.52 ± 0.52, φ: 1.64 ± 0.13, ψ: 0.67 ± 0.13) tumor groups. The ROC analysis showed that the FM parameters offered better specificity (88% versus 73%), sensitivity (90% versus 82%), accuracy (88% versus 78%), and area under the curve (AUC, 93% versus 80%) in discriminating tumor malignancy compared to the conventional ADC. The performance of the FM model was similar to that of the CTRW model. Similar to the CTRW model, the FM model can improve differentiation between low- and high-grade pediatric brain tumors over ADC.
Modeling of diatomic molecule using the Morse potential and the Verlet algorithm
Energy Technology Data Exchange (ETDEWEB)
Fidiani, Elok [Department of Physics, Parahyangan Catholic University, Bandung-Jawa Barat (Indonesia)
2016-03-11
Performing molecular modeling usually uses special software for Molecular Dynamics (MD) such as GROMACS, NAMD, or JMOL. Molecular dynamics is a computational method to calculate the time-dependent behavior of a molecular system. In this work, MATLAB was used as the numerical tool for simple modeling of some diatomic molecules: HCl, H_2 and O_2. MATLAB is matrix-based numerical software; to perform numerical analysis, all the functions and equations describing the properties of atoms and molecules must be implemented manually. In this work, a Morse potential was used to describe the bond interaction between the two atoms. To analyze the motion of the molecules over time, the Verlet algorithm derived from Newton's equations of motion (classical mechanics) was employed. Both the Morse potential and the Verlet algorithm were implemented in MATLAB to derive physical properties and the trajectories of the molecules. The data computed by MATLAB are always in the form of a matrix; to visualize them, Visual Molecular Dynamics (VMD) was used. This method is useful for developing and testing some types of interaction on a molecular scale, and can be very helpful for illustrating basic principles of molecular interaction for educational purposes.
Modeling of diatomic molecule using the Morse potential and the Verlet algorithm
International Nuclear Information System (INIS)
Fidiani, Elok
2016-01-01
Performing molecular modeling usually uses special software for Molecular Dynamics (MD) such as GROMACS, NAMD, or JMOL. Molecular dynamics is a computational method to calculate the time-dependent behavior of a molecular system. In this work, MATLAB was used as the numerical tool for simple modeling of some diatomic molecules: HCl, H_2 and O_2. MATLAB is matrix-based numerical software; to perform numerical analysis, all the functions and equations describing the properties of atoms and molecules must be implemented manually. In this work, a Morse potential was used to describe the bond interaction between the two atoms. To analyze the motion of the molecules over time, the Verlet algorithm derived from Newton's equations of motion (classical mechanics) was employed. Both the Morse potential and the Verlet algorithm were implemented in MATLAB to derive physical properties and the trajectories of the molecules. The data computed by MATLAB are always in the form of a matrix; to visualize them, Visual Molecular Dynamics (VMD) was used. This method is useful for developing and testing some types of interaction on a molecular scale, and can be very helpful for illustrating basic principles of molecular interaction for educational purposes.
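The Morse-plus-Verlet scheme described above can be sketched compactly. The sketch below uses Python rather than MATLAB, integrates the 1D relative coordinate of a diatomic with velocity Verlet, and uses placeholder parameters in arbitrary consistent units (roughly HCl-like magnitudes, not spectroscopic values):

```python
import math

# Placeholder Morse parameters in arbitrary consistent units:
DE, A, RE = 4.6, 1.9, 1.27      # well depth, width parameter, equilibrium bond
MU = 0.98                        # reduced mass

def morse_force(r):
    """F = -dV/dr for the Morse potential V(r) = DE*(1 - exp(-A*(r-RE)))**2."""
    e = math.exp(-A * (r - RE))
    return -2.0 * DE * A * (1.0 - e) * e

def morse_energy(r, v):
    """Total energy: Morse potential plus kinetic energy of relative motion."""
    return DE * (1.0 - math.exp(-A * (r - RE))) ** 2 + 0.5 * MU * v * v

def velocity_verlet(r, v, dt, steps):
    """Integrate the 1D relative coordinate with the velocity Verlet scheme."""
    traj = [r]
    f = morse_force(r)
    for _ in range(steps):
        r += v * dt + 0.5 * (f / MU) * dt * dt
        f_new = morse_force(r)
        v += 0.5 * (f + f_new) / MU * dt
        f = f_new
        traj.append(r)
    return r, v, traj

r0, v0, dt = 1.4, 0.0, 0.001     # stretched bond released from rest
e0 = morse_energy(r0, v0)
r1, v1, traj = velocity_verlet(r0, v0, dt, 20000)
e1 = morse_energy(r1, v1)
```

The bond oscillates about RE between its two turning points, and because velocity Verlet is symplectic the total energy stays essentially constant over many periods.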
An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model
Directory of Open Access Journals (Sweden)
Yan Guo
2017-10-01
Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient, and avoid information overload, by an interactive personalized recommendation system using the hybrid algorithm model. The proposed model first uses various recommendation algorithms to get a list of original recommendation results. Combined with the customer’s feedback in an interactive manner, it then establishes the weights of corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results to obtain the final recommendation products. The recommendation performance of the proposed method is compared with that of traditional methods. The results of the experimental study through a Taobao online dress shop clearly show that the proposed method increases the efficiency of data mining in the consumer coverage, the consumer discovery accuracy and the recommendation recall. The hybrid recommendation algorithm complements the advantages of the existing recommendation algorithms in data mining. The interactive assigned-weight method meets consumer demand better and solves the problem of information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems.
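The "synthetic formula of evidence theory" used for fusion is Dempster's rule of combination. A minimal sketch, with hypothetical evidence from two recommenders over two invented candidate products (mass on the full set expresses each recommender's remaining uncertainty):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Hypothetical evidence from a collaborative filter (cf) and a
# content-based recommender (cb):
cf = {frozenset({"dress_a"}): 0.6, frozenset({"dress_a", "dress_b"}): 0.4}
cb = {frozenset({"dress_b"}): 0.5, frozenset({"dress_a", "dress_b"}): 0.5}
fused = dempster_combine(cf, cb)
```

In the interactive scheme described above, the customer's feedback would adjust the weight each recommender's mass function carries before this combination step.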
Energy Technology Data Exchange (ETDEWEB)
Fijany, A. [Jet Propulsion Lab., Pasadena, CA (United States); Coley, T.R. [Virtual Chemistry, Inc., San Diego, CA (United States); Cagin, T.; Goddard, W.A. III [California Institute of Technology, Pasadena, CA (United States)
1997-12-31
Successful molecular dynamics (MD) simulation of large systems (> million atoms) for long times (> nanoseconds) requires the integration of constrained equations of motion (CEOM). Constraints are used to eliminate high frequency degrees of freedom (DOF) and to allow the use of rigid bodies. Solving the CEOM allows for larger integration time-steps and helps focus the simulation on the important collective dynamics of chemical, biological, and materials systems. We explore advances in multibody dynamics which have resulted in O(N) algorithms for propagating the CEOM. However, because of their strictly sequential nature, the computational time required by these algorithms does not scale down with increased numbers of processors. We then present the new constraint force algorithm for solving the CEOM and show that this algorithm is fully parallelizable, leading to a computational cost of O(N/P + log P) for N DOF on P processors.
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); (N0 + 1)th disk is moved from A to C directly ...
Energy Technology Data Exchange (ETDEWEB)
Aagaard, B T; Graves, R W; Rodgers, A; Brocher, T M; Simpson, R W; Dreger, D; Petersson, N A; Larsen, S C; Ma, S; Jachens, R C
2009-11-04
We simulate long-period (T > 1.0-2.0 s) and broadband (T > 0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7-7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area with about 50% of the urban area experiencing MMI VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland and 2007 Mw 4.5 Alum Rock earthquakes show that the USGS Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute at least some of this difference to the relatively narrow width of the Hayward fault ruptures. The simulations suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by including a dependence on the rupture speed and increasing the areal extent of rupture directivity with period. The simulations also indicate that the NGA relations may under-predict amplification in shallow sedimentary basins.
Epidemic Processes on Complex Networks : Modelling, Simulation and Algorithms
Van de Bovenkamp, R.
2015-01-01
Local interactions on a graph will lead to global dynamic behaviour. In this thesis we focus on two types of dynamic processes on graphs: the Susceptible-Infected-Susceptible (SIS) virus spreading model, and gossip-style epidemic algorithms. The largest part of this thesis is devoted to the SIS
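A discrete-time SIS simulation takes only a few lines. The ring topology, infection and recovery probabilities below are toy assumptions chosen for illustration (the thesis itself treats the continuous-time Markov formulation, which this sketch only approximates):

```python
import random

def sis_step(graph, infected, beta, delta, rng):
    """One discrete-time SIS step: each infected node infects each
    susceptible neighbour with probability beta, then recovers
    (becoming susceptible again) with probability delta."""
    nxt = set(infected)
    for node in infected:
        for nb in graph[node]:
            if nb not in infected and rng.random() < beta:
                nxt.add(nb)
        if rng.random() < delta:
            nxt.discard(node)
    return nxt

def simulate(graph, seeds, beta, delta, steps=200, seed=42):
    rng = random.Random(seed)
    infected = set(seeds)
    history = [len(infected)]
    for _ in range(steps):
        infected = sis_step(graph, infected, beta, delta, rng)
        history.append(len(infected))
        if not infected:             # absorbing virus-free state reached
            break
    return history

ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}   # toy graph
history = simulate(ring, {0}, beta=0.4, delta=0.1)
```

The empty set is absorbing: once the virus dies out it can never return, which is why metastable prevalence and extinction times are the quantities of interest in SIS analysis.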
Worm Algorithm for CP(N-1) Model
Rindlisbacher, Tobias
2017-01-01
The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP(N-1) model has been tested so far, which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) l...
Optimisation of Hidden Markov Model using Baum–Welch algorithm
Indian Academy of Sciences (India)
Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva. Journal of Earth System Science, Volume 126, Issue 1, February 2017 ...
Heterogenous Agents Model with the Worst Out Algorithm
Czech Academy of Sciences Publication Activity Database
Vácha, Lukáš; Vošvrda, Miloslav
-, č. 8 (2006), s. 3-19 ISSN 1801-5999 Institutional research plan: CEZ:AV0Z10750506 Keywords : efficient market hypothesis * fractal market hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst out algorithm Subject RIV: AH - Economics
Application of genetic algorithm in radio ecological models parameter determination
Energy Technology Data Exchange (ETDEWEB)
Pantelic, G. [Institute of Occupatioanl Health and Radiological Protection ' Dr Dragomir Karajovic' , Belgrade (Serbia)
2006-07-01
The method of genetic algorithms was used to determine the biological half-life of 137Cs in cow milk after the accident in Chernobyl. Methodologically, genetic algorithms are based on the fact that natural processes tend to optimize themselves and therefore this method should be more efficient in providing optimal solutions in the modeling of radio ecological and environmental events. The calculated biological half-life of 137Cs in milk is (32 ± 3) days and the transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)
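The record above fits a single decay parameter with a genetic algorithm. A minimal, hypothetical sketch of the same idea (synthetic activity data with an invented initial activity of 100 units, tournament selection, blend crossover, and shrinking Gaussian mutation; none of these settings come from the paper) might look like:

```python
import numpy as np

# Synthetic stand-in measurements generated from a true biological
# half-life of 32 days plus noise (hypothetical units).
rng = np.random.default_rng(1)
t = np.arange(0, 120, 5.0)
y = 100.0 * np.exp(-np.log(2) * t / 32.0) + rng.normal(0, 0.5, t.size)

def fitness(half):
    # Negative sum of squared residuals: higher is better.
    pred = 100.0 * np.exp(-np.log(2) * t / half)
    return -np.sum((pred - y) ** 2)

pop = rng.uniform(5, 100, 60)               # initial population of half-lives
for gen in range(80):
    f = np.array([fitness(h) for h in pop])
    new = []
    for _ in range(pop.size):
        i, j = rng.integers(0, pop.size, 2)
        a = pop[i] if f[i] > f[j] else pop[j]        # tournament selection
        i, j = rng.integers(0, pop.size, 2)
        b = pop[i] if f[i] > f[j] else pop[j]
        child = 0.5 * (a + b)                        # blend crossover
        child += rng.normal(0, 1.0) * (0.99 ** gen)  # shrinking mutation
        new.append(np.clip(child, 5, 100))
    pop = np.array(new)

best = pop[np.argmax([fitness(h) for h in pop])]
```

With the synthetic data above, `best` converges near the true value of 32 days, mirroring how the study recovered the half-life from measured milk activities.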
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast, accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of the algorithms for anomaly detection and behavior prediction.
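The tracking step above clusters foreground pixels with a fuzzy K-means algorithm. A standard fuzzy c-means sketch on toy 2-D points (not the paper's pixel data or its specific fast variant) illustrates the soft-membership idea:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: each point receives a membership degree in [0, 1]
    for every cluster instead of a hard assignment."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                     # guard against /0
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)     # update memberships
    return centroids, U

# Two well-separated blobs standing in for foreground-pixel clusters.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centroids, U = fuzzy_c_means(X, c=2)
```

The fuzzifier `m` controls how soft the assignments are; `m → 1` recovers ordinary K-means behavior.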
Fuzzy model predictive control algorithm applied in nuclear power plant
International Nuclear Information System (INIS)
Zuheir, Ahmad
2006-01-01
The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generation unit (UTSG) used for electricity generation. (author)
Model-based Bayesian signal extraction algorithm for peripheral nerves
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, limiting their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals twofold to threefold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of
International Nuclear Information System (INIS)
Matsubara, Keisuke; Ibaraki, Masanobu; Nakamura, Kazuhiro; Yamaguchi, Hiroshi; Umetsu, Atsushi; Kinoshita, Fumiko; Kinoshita, Toshibumi
2013-01-01
Subject head motion during sequential 15O positron emission tomography (PET) scans can result in artifacts in cerebral blood flow (CBF) and oxygen metabolism maps. However, to our knowledge, there are no systematic studies examining this issue. Herein, we investigated the effect of head motion on quantification of CBF and oxygen metabolism, and proposed an image-based motion correction method dedicated to 15O PET study, correcting for transmission-emission mismatch and inter-scan mismatch of emission scans. We analyzed 15O PET data for patients with major arterial steno-occlusive disease (n=130) to determine the occurrence frequency of head motion during 15O PET examination. Image-based motion correction without and with realignment between transmission and emission scans, termed simple and 2-step method, respectively, was applied to the cases that showed severe inter-scan motion. Severe inter-scan motion (>3 mm translation or >5° rotation) was observed in 27 of 520 adjacent scan pairs (5.2%). In these cases, unrealistic values of oxygen extraction fraction (OEF) or cerebrovascular reactivity (CVR) were observed without motion correction. Motion correction eliminated these artifacts. The volume-of-interest (VOI) analysis demonstrated that the motion correction changed the OEF on the middle cerebral artery territory by 17.3% at maximum. The inter-scan motion also affected cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO2) and CBF, which were improved by the motion correction. A difference of VOI values between the simple and 2-step methods was also observed. These data suggest that image-based motion correction is useful for accurate measurement of CBF and oxygen metabolism by 15O PET. (author)
A Novel Respiratory Motion Perturbation Model Adaptable to Patient Breathing Irregularities
Energy Technology Data Exchange (ETDEWEB)
Yuan, Amy [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Wei, Jie [Department of Computer Science, City College of New York, New York, New York (United States); Gaebler, Carl P.; Huang, Hailiang; Olek, Devin [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Li, Guang, E-mail: lig2@mskcc.org [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States)
2016-12-01
Purpose: To develop a physical, adaptive motion perturbation model to predict tumor motion using feedback from dynamic measurement of breathing conditions to compensate for breathing irregularities. Methods and Materials: A novel respiratory motion perturbation (RMP) model was developed to predict tumor motion variations caused by breathing irregularities. This model contained 2 terms: the initial tumor motion trajectory, measured from 4-dimensional computed tomography (4DCT) images, and motion perturbation, calculated from breathing variations in tidal volume (TV) and breathing pattern (BP). The motion perturbation was derived from the patient-specific anatomy, tumor-specific location, and time-dependent breathing variations. Ten patients were studied, and 2 amplitude-binned 4DCT images for each patient were acquired within 2 weeks. The motion trajectories of 40 corresponding bifurcation points in both 4DCT images of each patient were obtained using deformable image registration. An in-house 4D data processing toolbox was developed to calculate the TV and BP as functions of the breathing phase. The motion was predicted from the simulation 4DCT scan to the treatment 4DCT scan, and vice versa, resulting in 800 predictions. For comparison, noncorrected motion differences and the predictions from a published 5-dimensional model were used. Results: The average motion range in the superoinferior direction was 9.4 ± 4.4 mm, the average ΔTV ranged from 10 to 248 mm³ (−26% to 61%), and the ΔBP ranged from 0 to 0.2 (−71% to 333%) between the 2 4DCT scans. The mean noncorrected motion difference was 2.0 ± 2.8 mm between 2 4DCT motion trajectories. After applying the RMP model, the mean motion difference was reduced significantly to 1.2 ± 1.8 mm (P=.0018), a 40% improvement, similar to the 1.2 ± 1.8 mm (P=.72) predicted with the 5-dimensional model. Conclusions: A novel physical RMP model was developed with an average accuracy of 1.2 ± 1.8 mm for
Performance modeling of parallel algorithms for solving neutron diffusion problems
International Nuclear Information System (INIS)
Azmy, Y.Y.; Kirk, B.L.
1995-01-01
Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers, so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of processor counts reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers.
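The qualitative conclusion above, speedup that saturates and efficiency that collapses as communication grows with processor count, can be illustrated with a toy fixed-problem-size performance model. All constants here are invented for illustration and are not taken from the iPSC/860 or Balance 8000 measurements:

```python
# Toy model: computation divides across p processors while communication
# cost grows linearly with p (typical of naive message-passing exchanges).
def speedup(p, t_comp=100.0, t_comm_per_proc=0.5, t_latency=1.0):
    t_serial = t_comp
    t_parallel = t_comp / p + t_latency + t_comm_per_proc * p
    return t_serial / t_parallel

def efficiency(p, **kw):
    return speedup(p, **kw) / p

for p in (1, 4, 16, 64):
    print(p, round(speedup(p), 2), round(efficiency(p), 3))
```

With these invented constants the speedup peaks at a moderate processor count and then declines, reproducing the paper's observation that only small- to medium-sized message passing machines are a reasonable platform.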
Cost optimization model and its heuristic genetic algorithms
International Nuclear Information System (INIS)
Liu Wei; Wang Yongqing; Guo Jilin
1999-01-01
Interest and escalation account for a large proportion of the cost of nuclear power plant construction. In order to optimize this cost, a mathematical model of cost optimization for nuclear power plant construction was proposed, which takes the maximum net present value as the optimization goal. The model is based on the activity network of the project and is an NP-hard problem. A heuristic genetic algorithm (HGA) for the model was introduced. In the algorithm, a solution is represented by a string of numbers, each of which denotes the priority of an activity for assigned resources. The HGA with this encoding method overcomes the difficulty of obtaining feasible solutions that arises when traditional GAs are used to solve the model. The critical path of the activity network is computed using a predecessor matrix. An example was solved with the HGA programmed in the C language. The results indicate that the model is suitable for the objective and the algorithm is effective in solving the model.
GND-PCA-based statistical modeling of diaphragm motion extracted from 4D MRI.
Swastika, Windra; Masuda, Yoshitada; Xu, Rui; Kido, Shoji; Chen, Yen-Wei; Haneishi, Hideaki
2013-01-01
We analyzed a statistical model of diaphragm motion using regular principal component analysis (PCA) and generalized N-dimensional PCA (GND-PCA). First, we generate 4D MRI of respiratory motion from 2D MRI using an intersection profile method. We then extract semiautomatically the diaphragm boundary from the 4D MRI to obtain subject-specific diaphragm motion. In order to build a general statistical model of diaphragm motion, we normalize the diaphragm motion in the time and spatial domains and evaluate the diaphragm motion model of 10 healthy subjects by applying regular PCA and GND-PCA. We also validate the results using the leave-one-out method. The results show that the first three principal components of regular PCA contain more than 98% of the total variation of diaphragm motion. However, validation using the leave-one-out method gives a mean error of up to 5.0 mm for right diaphragm motion and 3.8 mm for left diaphragm motion. Model analysis using GND-PCA provides a margin of error of about 1 mm and is able to reconstruct the diaphragm model from fewer samples.
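The regular-PCA step described above can be sketched with synthetic stand-in traces. The two-mode sinusoidal data below are invented; only the SVD-based PCA and the explained-variance computation mirror the abstract:

```python
import numpy as np

# Hypothetical stand-in data: each row is one subject's normalized
# diaphragm displacement over the breathing cycle, built from two
# underlying motion modes plus a little noise.
rng = np.random.default_rng(0)
phase = np.linspace(0, 2 * np.pi, 40)
traces = np.array([a * np.sin(phase) + b * np.sin(2 * phase)
                   for a, b in rng.uniform(0.5, 1.5, (10, 2))])
traces += rng.normal(0, 0.01, traces.shape)

# PCA via SVD of the mean-centred data matrix.
Xc = traces - traces.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print("variance captured by first 3 PCs:", explained[:3].sum())
```

Because the synthetic data contain only two true modes, the leading components capture nearly all the variance, analogous to the abstract's 98% figure for real diaphragm motion.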
A Trap Motion in Validating Muscle Activity Prediction from Musculoskeletal Model using EMG
Wibawa, A. D.; Verdonschot, N.; Halbertsma, J.P.K.; Burgerhof, J.G.M.; Diercks, R.L.; Verkerke, G. J.
2016-01-01
Musculoskeletal modeling is nowadays becoming the most common tool for studying and analyzing human motion. Besides its potential in predicting muscle activity and muscle force during active motion, musculoskeletal modeling can also calculate many important kinetic data that are difficult to measure
Site-specific strong ground motion prediction using 2.5-D modelling
Narayan, J. P.
2001-08-01
An algorithm was developed using the 2.5-D elastodynamic wave equation, based on the displacement-stress relation. One of the most significant advantages of the 2.5-D simulation is that the 3-D radiation pattern can be generated using double-couple point shear-dislocation sources in the 2-D numerical grid. A parsimonious staggered grid scheme was adopted instead of the standard staggered grid scheme, since this is the only scheme suitable for computing the dislocation. This new 2.5-D numerical modelling avoids the extensive computational cost of 3-D modelling. The significance of this exercise is that it makes it possible to simulate the strong ground motion (SGM), taking into account the energy released, 3-D radiation pattern, path effects and local site conditions at any location around the epicentre. The slowness vector (py) was used in the supersonic region for each layer, so that all the components of the inertia coefficient are positive. The double-couple point shear-dislocation source was implemented in the numerical grid using the moment tensor components as the body-force couples. The moment per unit volume was used in both the 3-D and 2.5-D modelling. A good agreement in the 3-D and 2.5-D responses for different grid sizes was obtained when the moment per unit volume was further reduced by a factor equal to the finite-difference grid size in the case of the 2.5-D modelling. The components of the radiation pattern were computed in the xz-plane using 3-D and 2.5-D algorithms for various focal mechanisms, and the results were in good agreement. A comparative study of the amplitude behaviour of the 3-D and 2.5-D wavefronts in a layered medium reveals the spatial and temporal damped nature of the 2.5-D elastodynamic wave equation. 3-D and 2.5-D simulated responses at a site using a different strike direction reveal that strong ground motion (SGM) can be predicted just by rotating the strike of the fault counter-clockwise by the same amount as the azimuth of
Statistical behaviour of adaptive multilevel splitting algorithms in simple models
International Nuclear Information System (INIS)
Rolland, Joran; Simonnet, Eric
2015-01-01
Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
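A minimal adaptive multilevel splitting sketch conveys the selection-mutation idea on a simple model: estimating the probability that an overdamped (Ornstein-Uhlenbeck-like) path exceeds a high level within a fixed horizon, with the running maximum as the reaction coordinate. The dynamics and parameters below are invented for illustration, and this variant kills one worst trajectory per step:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, dt, target = 100, 200, 0.01, 3.0

def extend(path, start):
    """Continue overdamped dynamics dX = -X dt + sqrt(2 dt) * noise to time T."""
    for t in range(start, T):
        path[t] = path[t - 1] - path[t - 1] * dt + np.sqrt(2 * dt) * rng.normal()

# Initial ensemble of N full trajectories started at 0.
paths = np.zeros((N, T))
for i in range(N):
    extend(paths[i], 1)
scores = paths.max(axis=1)          # reaction coordinate: running maximum

p_hat = 1.0
for _ in range(20000):              # safety cap on iterations
    L = scores.min()
    if L > target:                  # all particles have reached the target
        break
    worst = scores.argmin()
    donors = np.flatnonzero(scores > L)
    if donors.size == 0:
        p_hat = 0.0
        break
    donor = rng.choice(donors)
    cross = np.argmax(paths[donor] > L)     # donor's first step above level L
    paths[worst, :cross + 1] = paths[donor, :cross + 1]   # branch from there
    extend(paths[worst], cross + 1)
    scores[worst] = paths[worst].max()
    p_hat *= 1.0 - 1.0 / N          # one of N particles replaced per level
print("estimated P(max > %.1f): %.2e" % (target, p_hat))
```

Each selection step discards the least advanced trajectory and rebranches it from a survivor at the current level, so the ensemble climbs toward the rare set while the product of survival fractions accumulates the probability estimate.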
Real-time recursive motion segmentation of video data on a programmable device
Wittebrood, R.B.; de Haan, G.
2001-01-01
We previously reported on a recursive algorithm enabling real-time object-based motion estimation (OME) of standard definition video on a digital signal processor (DSP). The algorithm approximates the motion of the objects in the image with parametric motion models and creates a segmentation mask by
Energy Technology Data Exchange (ETDEWEB)
Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L
2007-02-09
We estimate the ground motions produced by the 1906 San Francisco earthquake, making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.
Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs
Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.
2009-01-01
Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962
Improving permafrost distribution modelling using feature selection algorithms
Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail
2016-04-01
The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves the knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to the permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
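Of the three compared techniques, Information Gain is the simplest to sketch. The snippet below computes IG for a quantile-discretized predictor against binary presence/absence labels on invented stand-in data (the real study used landcover, climate, and DEM variables):

```python
import numpy as np

def entropy(y):
    # Shannon entropy (bits) of integer labels.
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_gain(feature, y, n_bins=4):
    """IG of one continuous predictor, discretized into quantile bins,
    with respect to binary labels y."""
    bins = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
    f = np.digitize(feature, bins)
    h = entropy(y)
    for v in np.unique(f):           # subtract conditional entropy per bin
        mask = f == v
        h -= mask.mean() * entropy(y[mask])
    return h

# Synthetic stand-in: "altitude" correlates with permafrost presence,
# "noise" does not.
rng = np.random.default_rng(0)
altitude = rng.uniform(1500, 3500, 1000)
presence = (altitude + rng.normal(0, 300, 1000) > 2700).astype(int)
noise = rng.uniform(0, 1, 1000)
print(information_gain(altitude, presence), information_gain(noise, presence))
```

An informative predictor yields a clearly larger IG than an irrelevant one, which is exactly the ranking the filter uses to discard variables.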
Two new algorithms to combine kriging with stochastic modelling
Venema, Victor; Lindau, Ralf; Varnai, Tamas; Simmer, Clemens
2010-05-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated driven by such a kriged field. Stochastic modelling aims at reproducing the statistical structure of the data in space and time. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. While stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. This requires the use of so-called constrained stochastic models. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately. In addition, the correlations within the cloud field are important, especially because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. Up to now, however, we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually
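The standard IAAFT algorithm that the new method extends alternates between imposing the measured power spectrum and the measured value distribution. A plain IAAFT sketch, without the paper's novel nudging-towards-the-kriged-field step, looks like:

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """IAAFT surrogate: iterate between imposing the power spectrum of x
    (in Fourier space) and its value distribution (by rank ordering)."""
    rng = np.random.default_rng(seed)
    sorted_x = np.sort(x)
    target_amp = np.abs(np.fft.rfft(x))
    y = rng.permutation(x)                      # start from shuffled data
    for _ in range(n_iter):
        # Step 1: impose the target power spectrum, keep current phases.
        Y = np.fft.rfft(y)
        Y = target_amp * np.exp(1j * np.angle(Y))
        y = np.fft.irfft(Y, n=len(x))
        # Step 2: impose the target value distribution by rank ordering.
        ranks = np.argsort(np.argsort(y))
        y = sorted_x[ranks]
    return y

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=256))             # correlated test series
s = iaaft(x)
```

Because the final step is a rank ordering, the surrogate's value distribution matches the data exactly, while the power spectrum is matched approximately; the paper's extension would add a third step nudging `y` toward the kriged field.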
Sustainable logistics and transportation optimization models and algorithms
Gakis, Konstantinos; Pardalos, Panos
2017-01-01
Focused on the logistics and transportation operations within a supply chain, this book brings together the latest models, algorithms, and optimization possibilities. Logistics and transportation problems are examined within a sustainability perspective to offer a comprehensive assessment of environmental, social, ethical, and economic performance measures. Featured models, techniques, and algorithms may be used to construct policies on alternative transportation modes and technologies, green logistics, and incentives by the incorporation of environmental, economic, and social measures. Researchers, professionals, and graduate students in urban regional planning, logistics, transport systems, optimization, supply chain management, business administration, information science, mathematics, and industrial and systems engineering will find the real life and interdisciplinary issues presented in this book informative and useful.
A comparison of updating algorithms for large N reduced models
Energy Technology Data Exchange (ETDEWEB)
Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); Departamento de Física Teórica, C-XI Universidad Autónoma de Madrid,E-28049 Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Core of Research for the Energetic Universe, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ramos, Alberto [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland)
2015-06-29
We investigate Monte Carlo updating algorithms for simulating SU(N) Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole SU(N) matrix at once, or iterating through SU(2) subgroups of the SU(N) matrix. We find the same critical exponent in both cases, and only a slight difference between the two.
Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing
2015-01-01
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional com...
TU-F-17A-03: An Analytical Respiratory Perturbation Model for Lung Motion Prediction
International Nuclear Information System (INIS)
Li, G; Yuan, A; Wei, J
2014-01-01
Purpose: Breathing irregularity is common, causing unreliable prediction in tumor motion for correlation-based surrogates. Both tidal volume (TV) and breathing pattern (BP=ΔVthorax/TV, where TV=ΔVthorax+ΔVabdomen) affect lung motion in the anterior-posterior and superior-inferior directions. We developed a novel respiratory motion perturbation (RMP) model in analytical form to account for changes in TV and BP in motion prediction from simulation to treatment. Methods: The RMP model is an analytical function of patient-specific anatomic and physiologic parameters. It contains a base-motion trajectory d(x,y,z) derived from a 4-dimensional computed tomography (4DCT) at simulation and a perturbation term Δd(ΔTV,ΔBP) accounting for deviation at treatment from simulation. The perturbation is dependent on tumor-specific location and patient-specific anatomy. Eleven patients with simulation and treatment 4DCT images were used to assess the RMP method in motion prediction from 4DCT1 to 4DCT2, and vice versa. For each patient, ten motion trajectories of corresponding points in the lower lobes were measured in both 4DCTs: one served as the base-motion trajectory and the other as the ground truth for comparison. In total, 220 motion trajectory predictions were assessed. The motion discrepancy between two 4DCTs for each patient served as a control. An established 5D motion model was used for comparison. Results: The average absolute error of the RMP model prediction in the superior-inferior direction is 1.6 ± 1.8 mm, similar to 1.7 ± 1.6 mm from the 5D model (p=0.98). Some uncertainty is associated with limited spatial resolution (2.5 mm slice thickness) and temporal resolution (10 phases). Non-corrected motion discrepancy between two 4DCTs is 2.6 ± 2.7 mm, with a maximum of ±20 mm, and correction is necessary (p=0.01). Conclusion: The analytical motion model predicts lung motion with accuracy similar to the 5D model. The analytical model is based on physical relationships, requires no
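The structure of the model, a base trajectory plus a perturbation driven by ΔTV and ΔBP, can be illustrated schematically. Everything numeric below is invented (the abstract does not publish these coefficients); only the additive base-plus-perturbation form follows the description:

```python
import numpy as np

# Hypothetical illustration of the model's structure: a baseline motion
# trajectory from the simulation 4DCT plus a perturbation that is linear
# in the tidal-volume and breathing-pattern deviations at treatment.
phases = np.linspace(0, 1, 10, endpoint=False)      # 10-phase 4DCT cycle
d_base = 9.4 * np.sin(np.pi * phases) ** 2          # SI motion in mm (invented)
delta_tv = 50.0 * np.sin(np.pi * phases) ** 2       # TV deviation in mm^3 (invented)
delta_bp = 0.1 * np.ones_like(phases)               # BP deviation (invented)

# alpha and beta stand in for the patient-specific anatomic and
# tumor-location factors; their values here are made up.
alpha, beta = 0.02, 5.0
d_pred = d_base + alpha * delta_tv + beta * delta_bp
```

The point of the sketch is only the decomposition d + Δd(ΔTV, ΔBP): the first term is frozen at simulation, while the second is recomputed from the breathing conditions measured at treatment.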
Kwon, Hyeokjun; Oh, Sechang; Kumar, Prashanth S.; Varadan, Vijay K.
2012-10-01
with conductive gel. However, the gel can dry out during long-term monitoring. Gold nanowire electrodes, which avoid the discomfort of gel, are attached beneath the chest position of a brassiere and convert the physical ECG signal into a voltage potential signal. This signal is digitized by the AD converter included in the microprocessor, and the converted ECG data are saved every 1 s in the microprocessor's internal RAM. To transmit the saved data to a server computer at a remote location, the system uses GPRS communication, which provides a wide area network (WAN) connection without any gateway or repeater; the transmission system operates in the client mode of GPRS communication. The remote server runs a program that displays and analyzes the transmitted ECG. To display the ECG data, the program operates in TCP/IP server mode with a static IP address. To analyze the ECG data, the paper proposes a motion artifact removal algorithm comprising an adaptive filter with least mean squares (LMS), a baseline detection algorithm based on predictability estimation theory, a filter with a moving weighted factor, a low-pass filter, peak-to-peak detection, and interpolation.
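The LMS adaptive filter named in the artifact-removal pipeline is a standard adaptive noise canceller. A minimal sketch follows; the signal shapes, tap count and step size are illustrative, not the paper's values.

```python
import numpy as np

def lms_filter(primary, reference, n_taps=8, mu=0.005):
    """Least-mean-squares adaptive noise canceller: 'primary' is the
    artifact-corrupted ECG, 'reference' a correlated noise reference
    (e.g. a motion signal). Returns the error signal, i.e. the
    artifact-reduced estimate of the ECG."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent samples first
        y = w @ x                           # current noise estimate
        e = primary[n] - y                  # cleaned sample
        w += 2.0 * mu * e * x               # LMS weight update
        out[n] = e
    return out

# Synthetic check: an 8 Hz "ECG" tone corrupted by a 1 Hz motion artifact
t = np.arange(2000) / 500.0
noise_ref = np.sin(2.0 * np.pi * 1.0 * t)
ecg = np.sin(2.0 * np.pi * 8.0 * t)
cleaned = lms_filter(ecg + 0.5 * noise_ref, noise_ref)
```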
Evaluating Multicore Algorithms on the Unified Memory Model
Directory of Open Access Journals (Sweden)
John E. Savage
2009-01-01
Full Text Available One of the challenges to achieving good performance on multicore architectures is the effective utilization of the underlying memory hierarchy. While this is an issue for single-core architectures, it is a critical problem for multicore chips. In this paper, we formulate the unified multicore model (UMM) to help understand the fundamental limits on cache performance on these architectures. The UMM seamlessly handles different types of multiple-core processors with varying degrees of cache sharing at different levels. We demonstrate that our model can be used to study a variety of multicore architectures on a variety of applications. In particular, we use it to analyze an option pricing problem using the trinomial model and develop an algorithm for it that has near-optimal memory traffic between cache levels. We have implemented the algorithm on a system with two quad-core Intel Xeon 5310 1.6 GHz processors (8 cores). It achieves a peak performance of 19.5 GFLOPs, which is 38% of the theoretical peak of the multicore system. We demonstrate that our algorithm outperforms compiler-optimized and auto-parallelized code by a factor of up to 7.5.
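The trinomial option-pricing model analyzed in the paper can be illustrated (cache-optimization concerns aside) with a straightforward backward induction on a Boyle trinomial tree; the parameters below are illustrative.

```python
import math

def trinomial_call(S, K, r, sigma, T, N):
    """European call priced on a Boyle trinomial tree: 2*N+1 terminal
    nodes, then backward induction with up/middle/down probabilities."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(2.0 * dt))   # up factor; down = 1/u
    a = math.exp(r * dt / 2.0)
    b = math.exp(sigma * math.sqrt(dt / 2.0))
    pu = ((a - 1.0 / b) / (b - 1.0 / b)) ** 2
    pd = ((b - a) / (b - 1.0 / b)) ** 2
    pm = 1.0 - pu - pd
    disc = math.exp(-r * dt)
    # terminal payoffs at nodes j = -N .. N
    values = [max(S * u ** j - K, 0.0) for j in range(-N, N + 1)]
    for step in range(N, 0, -1):
        # node i at the earlier step sees children i (down), i+1 (mid), i+2 (up)
        values = [disc * (pd * values[i] + pm * values[i + 1] + pu * values[i + 2])
                  for i in range(2 * (step - 1) + 1)]
    return values[0]

# At-the-money call; the 200-step value should be close to Black-Scholes (~10.45)
price = trinomial_call(100.0, 100.0, 0.05, 0.2, 1.0, 200)
```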
Prefiltering Model for Homology Detection Algorithms on GPU.
Retamosa, Germán; de Pedro, Luis; González, Ivan; Tamames, Javier
2016-01-01
Homology detection has evolved over time from heavy algorithms based on dynamic programming to lightweight alternatives based on heuristic models. The main problem with these algorithms is that they use complex statistical models, which makes it difficult to achieve a relevant speedup while still finding exactly the matches of the original results; their acceleration is therefore essential. The aim of this article is to prefilter a sequence database. To this end, we have implemented a groundbreaking heuristic model based on NVIDIA graphics processing units (GPUs) and multicore processors. Depending on the sensitivity settings, this makes it possible to quickly reduce the sequence database by factors between 50% and 95% while rejecting no significant sequences. Furthermore, this prefiltering application can be used together with multiple homology detection algorithms as part of a next-generation sequencing system. Extensive performance and accuracy tests have been carried out at the Spanish National Centre for Biotechnology. The results show that GPU hardware can accelerate the execution times of established homology detection applications, such as the NCBI Basic Local Alignment Search Tool for Proteins (BLASTP), by up to a factor of 4.
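A database prefilter of this kind can be illustrated with a toy shared-k-mer heuristic. The real system runs a far more sophisticated GPU-parallel model, so `build_index`, `prefilter` and their thresholds are purely illustrative.

```python
def build_index(db_seqs, k=4):
    """Index database sequences by their k-mers (a toy stand-in for the
    GPU prefilter's heuristic; real systems use seeds, scoring
    thresholds and bit-parallel matching)."""
    index = {}
    for sid, seq in db_seqs.items():
        for i in range(len(seq) - k + 1):
            index.setdefault(seq[i:i + k], set()).add(sid)
    return index

def prefilter(query, index, k=4, min_hits=2):
    """Keep only database sequences sharing at least min_hits k-mers with
    the query; survivors would be passed to BLASTP or another full search."""
    hits = {}
    for i in range(len(query) - k + 1):
        for sid in index.get(query[i:i + k], ()):
            hits[sid] = hits.get(sid, 0) + 1
    return {sid for sid, n in hits.items() if n >= min_hits}

db = {"seq1": "MKVLLAGRSQT", "seq2": "GGGGGGGGGG"}
survivors = prefilter("MKVLLA", build_index(db))
```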
Software Piracy Detection Model Using Ant Colony Optimization Algorithm
Astiqah Omar, Nor; Zakuan, Zeti Zuryani Mohd; Saian, Rizauddin
2017-06-01
The internet enables information to be accessible anytime and anywhere, creating an environment in which information can be easily copied. Easy access to the internet is one of the factors that contribute to piracy in Malaysia and the rest of the world. The 2013 BSA Global Software Survey (Compliance Gap) found that 43 percent of the software installed on PCs around the world was not properly licensed, and the commercial value of the unlicensed installations worldwide was reported to be 62.7 billion. Piracy can happen anywhere, including universities. Malaysia, like other countries, faces piracy committed by university students. Piracy in universities concerns the theft of intellectual property, whether software piracy, music piracy, movie piracy, or piracy of intellectual materials such as books, articles and journals, and it puts the owners' property in jeopardy. This study developed a classification model for detecting software piracy, built with a swarm intelligence algorithm, the Ant Colony Optimization algorithm. The training data were collected in a study conducted at Universiti Teknologi MARA (Perlis). Experimental results show that the model's detection accuracy is better than that of the J48 algorithm.
Genetic algorithms and experimental discrimination of SUSY models
International Nuclear Information System (INIS)
Allanach, B.C.; Quevedo, F.; Grellscheid, D.
2004-01-01
We introduce genetic algorithms as a means to estimate the accuracy required to discriminate among different models using experimental observables. We exemplify the technique in the context of the minimal supersymmetric standard model. If supersymmetric particles are discovered, models of supersymmetry breaking will be fit to the observed spectrum and it is beneficial to ask beforehand: what accuracy is required to always allow the discrimination of two particular models and which are the most important masses to observe? Each model predicts a bounded patch in the space of observables once unknown parameters are scanned over. The questions can be answered by minimising a 'distance' measure between the two hypersurfaces. We construct a distance measure that scales like a constant fraction of an observable, since that is how the experimental errors are expected to scale. Genetic algorithms, including concepts such as natural selection, fitness and mutations, provide a solution to the minimisation problem. We illustrate the efficiency of the method by comparing three different classes of string models for which the above questions could not be answered with previous techniques. The required accuracy is in the range accessible to the Large Hadron Collider (LHC) when combined with a future linear collider (LC) facility. The technique presented here can be applied to more general classes of models or observables. (author)
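A genetic algorithm with the ingredients named in the abstract (natural selection, fitness, crossover, mutation) can be sketched as follows. The toy fitness function stands in for the paper's distance measure between model hypersurfaces, and all parameters are illustrative.

```python
import random

def ga_minimise(fitness, bounds, pop_size=40, generations=60, sigma=0.2, seed=0):
    """Minimal real-coded genetic algorithm: truncation selection of the
    fittest half, uniform crossover, Gaussian mutation clamped to bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                       # lower fitness = better
        elite = pop[: pop_size // 2]                # natural selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)           # parent selection
            child = [a if rng.random() < 0.5 else b
                     for a, b in zip(p1, p2)]       # uniform crossover
            child = [min(hi, max(lo, g + rng.gauss(0.0, sigma)))
                     for g, (lo, hi) in zip(child, bounds)]  # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy "distance" to minimise: squared distance to the point (1, -2)
best = ga_minimise(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                   bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```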
A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait.
Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E; Del-Ama, Antonio J; Dimbwadyo, Iris; Moreno, Juan C; Florez, Julian; Pons, Jose L
2018-01-01
The relative motion between human and exoskeleton is a crucial factor with remarkable consequences for the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction error lower than 3.5° globally, and around 1.5° at the hip level, which represents an improvement of up to 66% compared with the traditional approach that assumes no relative motion between the user and the exoskeleton.
Spatio-temporal Rich Model Based Video Steganalysis on Cross Sections of Motion Vector Planes.
Tasdemir, Kasim; Kurugollu, Fatih; Sezer, Sakir
2016-05-11
A rich-model-based motion vector steganalysis benefiting from both temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method has substantially higher detection accuracy than previous methods, even the targeted ones. The improvement in detection accuracy lies in several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation, not only spatially but also temporally, among neighbouring motion vectors over long distances. Therefore, temporal motion vector dependency, alongside the spatial dependency, is utilized for rigorous motion vector steganalysis. Secondly, unlike previously used filters, which were heuristically designed against a specific motion vector steganography, a diverse set of filters that can capture aberrations introduced by various motion vector steganography methods is used. The variety and number of the filter kernels are substantially greater than in previous work. In addition, filters up to fifth order are employed, whereas previous methods use at most second-order filters. As a result, the proposed system captures various decorrelations in a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best of the authors' knowledge, the experiments section contains the most comprehensive tests in the motion vector steganalysis field, including five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% detection accuracy increase at low payloads and 5% at higher payloads.
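The residual-filter-plus-histogram construction typical of rich models can be sketched on a single motion-vector plane. The real feature set uses many kernels up to fifth order plus co-occurrence statistics, so this single difference filter with truncation threshold `T` is only illustrative.

```python
import numpy as np

def residual_features(mv_plane, order=3, T=2):
    """Rich-model-style features from one motion-vector plane: apply a
    difference filter of the given order along each axis (spatial rows,
    and columns read as temporal cross sections), truncate residuals to
    [-T, T], and histogram them. Filter bank and feature dimensionality
    are heavily simplified relative to the published method."""
    feats = []
    for axis in (0, 1):
        r = np.diff(mv_plane, n=order, axis=axis)
        r = np.clip(np.rint(r), -T, T).astype(int)
        hist = np.bincount((r + T).ravel(), minlength=2 * T + 1)
        feats.append(hist / r.size)                 # normalised histogram
    return np.concatenate(feats)

# A constant (cover-like) plane concentrates all residuals in the zero bin
f = residual_features(np.zeros((8, 8)))
```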
Algorithms for testing of fractional dynamics: a practical guide to ARFIMA modelling
International Nuclear Information System (INIS)
Burnecki, Krzysztof; Weron, Aleksander
2014-01-01
In this survey paper we present a systematic methodology which demonstrates how to identify the origins of fractional dynamics. We consider three mechanisms which lead to it, namely fractional Brownian motion, fractional Lévy stable motion and an autoregressive fractionally integrated moving average (ARFIMA) process but we concentrate on the ARFIMA modelling. The methodology is based on statistical tools for identification and validation of the fractional dynamics, in particular on an ARFIMA parameter estimator, an ergodicity test, a self-similarity index estimator based on sample p-variation and a memory parameter estimator based on sample mean-squared displacement. A complete list of algorithms needed for this is provided in appendices A–F. Finally, we illustrate the methodology on various empirical data and show that ARFIMA can be considered as a universal model for fractional dynamics. Thus, we provide a practical guide for experimentalists on how to efficiently use ARFIMA modelling for a large class of anomalous diffusion data. (paper)
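One of the diagnostics mentioned, the memory parameter estimator based on sample mean-squared displacement, can be sketched as a simple log-log fit; the paper's appendices give the rigorous versions.

```python
import numpy as np

def sample_msd(x, max_lag):
    """Time-averaged mean-squared displacement of a single trajectory."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

def anomalous_exponent(x, max_lag=20):
    """Fit MSD(lag) ~ lag**alpha by least squares in log-log space;
    alpha near 1 indicates normal diffusion, alpha != 1 fractional
    dynamics (for fractional Brownian motion, alpha = 2H)."""
    msd = sample_msd(x, max_lag)
    lags = np.arange(1, max_lag + 1)
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope

# Ordinary Brownian motion should give alpha close to 1
rng = np.random.default_rng(42)
walk = np.cumsum(rng.standard_normal(100_000))
alpha = anomalous_exponent(walk)
```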
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
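The projection-onto-active-subspace idea with a user-defined error tolerance can be illustrated by a POD/SVD truncation, where the discarded singular values bound the projection error. This is a textbook sketch under that interpretation, not the dissertation's algorithm.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-3):
    """Proper-orthogonal-decomposition reduction: keep the smallest number
    of left singular vectors such that the norm of the discarded singular
    values is within tol (relative) -- a simple instance of reduction with
    a user-controlled error bound."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    total = np.linalg.norm(s)
    tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]   # tail[r] = ||s[r:]||
    keep = np.append(tail, 0.0) <= tol * total      # discarded-energy test
    r = int(np.argmax(keep))                        # smallest admissible rank
    return U[:, :r]

# Rank-2 snapshot matrix: a tight tolerance should recover exactly 2 modes
rng = np.random.default_rng(0)
snaps = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 30))
basis = pod_basis(snaps, tol=1e-8)
```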
Embodied learning of a generative neural model for biological motion perception and inference.
Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V
2015-01-01
Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.
Embodied Learning of a Generative Neural Model for Biological Motion Perception and Inference
Directory of Open Access Journals (Sweden)
Fabian eSchrodt
2015-07-01
Full Text Available Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.
Micro energy harvesting from ambient motion : modeling, simulation and design
Energy Technology Data Exchange (ETDEWEB)
Blystad, Lars-Cyril
2012-07-01
excitations, but result in the same output power whether the end stops are lossy or not. In contrast, end stop loss is important under broadband vibrations. A piezoelectric mesoscale energy harvester was built, and tests of the harvester confirm the behavior predicted by the modeling and simulations. Modeling the end stop as a parallel spring-dashpot system is sufficient to recapture the end stop behavior. Design and characterization of a novel MEMS electrostatic vibration energy harvester have been carried out. The harvester exploits the relative motion between two proof masses with different resonant frequencies. The transducer is implemented as an in-plane gap overlap comb structure. Its main feature is to broaden the effective bandwidth compared to a single mass reference design. The silicon area of one energy harvester device measures 8.5 mm X 8.5 mm. Experimental tests prove the concept. For broadband vibrations the dual mass harvester obtains a wider bandwidth (approximately 8 Hz) compared to a single mass reference device (approximately 4 Hz). (au)
Model parameters estimation and sensitivity by genetic algorithms
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca
2003-01-01
In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points, which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as in most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which have little influence on the model outputs. In this sense, besides estimating the parameters efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The
Development and evaluation of thermal model reduction algorithms for spacecraft
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the topic of the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern here and restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty
Directory of Open Access Journals (Sweden)
SONG Yingchun
2015-02-01
Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of the least-squares, uncertainty and total least-squares adjustments. Existing error theory is thus extended with a new method for processing observational data with uncertainty.
A Multiple Model Prediction Algorithm for CNC Machine Wear PHM
Directory of Open Access Journals (Sweden)
Huimin Chen
2011-01-01
Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high-speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants, and the method is applicable to other data-driven PHM problems.
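Multiple-model fusion of wear estimates can be illustrated with simple inverse-variance weighting; the paper's fusion component is more elaborate, so `fuse_wear_estimates` and its inputs are illustrative.

```python
import numpy as np

def fuse_wear_estimates(estimates, residual_vars):
    """Fuse wear-depth estimates from several models by inverse-variance
    weighting: models with smaller residual variance on training data
    (here, hypothetical values) get proportionally larger weights."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(residual_vars, dtype=float)
    w = w / w.sum()
    return float(w @ est), w

# Two models predict 10.0 and 14.0 um of wear with residual variances 1 and 3
fused, weights = fuse_wear_estimates([10.0, 14.0], [1.0, 3.0])
```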
Comparison of evolutionary algorithms in gene regulatory network model inference.
LENUS (Irish Health Repository)
2010-01-01
ABSTRACT: BACKGROUND: The evolution of high-throughput technologies that measure gene expression levels has created a database for inferring gene regulatory networks (GRNs), a process also known as reverse engineering of GRNs. However, the nature of these data has made this process very difficult. At the moment, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets cannot be performed to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and to offer a comprehensive comparison of approaches under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: A comparison framework for the assessment of evolutionary algorithms used to infer gene regulatory networks is presented. Promising methods are identified and a platform for the development of appropriate model formalisms is established.
A finite state model for respiratory motion analysis in image guided radiation therapy
International Nuclear Information System (INIS)
Wu Huanmei; Sharp, Gregory C; Salzberg, Betty; Kaeli, David; Shirato, Hiroki; Jiang, Steve B
2004-01-01
Effective image guided radiation treatment of a moving tumour requires adequate information on respiratory motion characteristics. For margin expansion, beam tracking and respiratory gating, the tumour motion must be quantified for pretreatment planning and monitored on-line. We propose a finite state model for respiratory motion analysis that captures our natural understanding of breathing stages. In this model, a regular breathing cycle is represented by three line segments, exhale, end-of-exhale and inhale, while abnormal breathing is represented by an irregular breathing state. In addition, we describe an on-line implementation of this model in one dimension. We found this model can accurately characterize a wide variety of patient breathing patterns. This model was used to describe the respiratory motion for 23 patients with peak-to-peak motion greater than 7 mm. The average root mean square error over all patients was less than 1 mm and no patient has an error worse than 1.5 mm. Our model provides a convenient tool to quantify respiratory motion characteristics, such as patterns of frequency changes and amplitude changes, and can be applied to internal or external motion, including internal tumour position, abdominal surface, diaphragm, spirometry and other surrogates
A finite state model for respiratory motion analysis in image guided radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Wu Huanmei [College of Computer and Information Science, Northeastern University, Boston, MA 02115 (United States); Sharp, Gregory C [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States); Salzberg, Betty [College of Computer and Information Science, Northeastern University, Boston, MA 02115 (United States); Kaeli, David [Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115 (United States); Shirato, Hiroki [Department of Radiation Medicine, Hokkaido University School of Medicine, Sapporo (Japan); Jiang, Steve B [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)
2004-12-07
Effective image guided radiation treatment of a moving tumour requires adequate information on respiratory motion characteristics. For margin expansion, beam tracking and respiratory gating, the tumour motion must be quantified for pretreatment planning and monitored on-line. We propose a finite state model for respiratory motion analysis that captures our natural understanding of breathing stages. In this model, a regular breathing cycle is represented by three line segments, exhale, end-of-exhale and inhale, while abnormal breathing is represented by an irregular breathing state. In addition, we describe an on-line implementation of this model in one dimension. We found this model can accurately characterize a wide variety of patient breathing patterns. This model was used to describe the respiratory motion for 23 patients with peak-to-peak motion greater than 7 mm. The average root mean square error over all patients was less than 1 mm and no patient has an error worse than 1.5 mm. Our model provides a convenient tool to quantify respiratory motion characteristics, such as patterns of frequency changes and amplitude changes, and can be applied to internal or external motion, including internal tumour position, abdominal surface, diaphragm, spirometry and other surrogates.
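The three regular breathing states of the model (exhale, end-of-exhale, inhale) can be illustrated by thresholding the finite-difference velocity of a 1-D breathing trace. The published method fits line segments on-line and adds an irregular-breathing state, so this is only a sketch with invented thresholds.

```python
import numpy as np

def classify_states(signal, dt=0.1, v_thresh=0.5):
    """Label each sample of a 1-D breathing trace as inhale ('I'),
    exhale ('E') or end-of-exhale ('EOE') from its finite-difference
    velocity; the threshold is illustrative, not from the paper."""
    v = np.gradient(np.asarray(signal, dtype=float), dt)
    return np.where(v > v_thresh, "I",
                    np.where(v < -v_thresh, "E", "EOE"))

# One 4 s breathing cycle: amplitude starts at the end-of-exhale plateau,
# falls (exhale phase of the cosine), then rises again (inhale)
t = np.arange(0.0, 4.0, 0.1)
trace = np.cos(2.0 * np.pi * t / 4.0)
states = classify_states(trace)
```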
A trade-off analysis design tool. Aircraft interior noise-motion/passenger satisfaction model
Jacobson, I. D.
1977-01-01
A design tool was developed to enhance aircraft passenger satisfaction. The effect of aircraft interior motion and noise on passenger comfort and satisfaction was modelled. Effects of individual aircraft noise sources were accounted for, and the impact of noise on passenger activities and noise levels to safeguard passenger hearing were investigated. The motion noise effect models provide a means for tradeoff analyses between noise and motion variables, and also provide a framework for optimizing noise reduction among noise sources. Data for the models were collected onboard commercial aircraft flights and specially scheduled tests.
Dabaghi, Mayssa
2014-01-01
A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as ...
Discrete time motion model for guiding people in urban areas using multiple robots
Garrell Zulueta, Anais; Sanfeliu Cortés, Alberto; Moreno-Noguer, Francesc
2009-01-01
We present a new model for people guidance in urban settings using several mobile robots that overcomes the limitations of existing approaches, which are either tailored to tightly bounded environments or based on unrealistic human behaviors. Although the robots' motion is controlled by means of a standard particle filter formulation, the novelty of our approach resides in how the environment and the human and robot motions are modeled. In particular we define a “Discrete-Time-Motion” model, whi...
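The standard particle filter formulation mentioned for controlling the robots can be sketched as one predict/update/resample cycle; the constant-displacement motion and Gaussian measurement models used here are generic placeholders for the paper's Discrete-Time-Motion model.

```python
import numpy as np

def particle_filter_step(particles, weights, motion, measurement,
                         motion_noise=0.1, meas_noise=0.5, rng=None):
    """One predict/update/resample cycle of a bootstrap particle filter
    tracking a 2-D position."""
    if rng is None:
        rng = np.random.default_rng()
    # predict: apply the motion model plus process noise
    particles = particles + motion + rng.normal(0.0, motion_noise, particles.shape)
    # update: weight particles by a Gaussian measurement likelihood
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2.0 * meas_noise ** 2))
    weights = weights / weights.sum()
    # resample (multinomial) to counteract weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(7)
p = rng.normal(0.0, 1.0, (500, 2))
w = np.full(500, 1.0 / 500)
p, w = particle_filter_step(p, w, motion=np.array([1.0, 0.0]),
                            measurement=np.array([1.0, 0.0]), rng=rng)
```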
The effect of postoperative passive motion on rotator cuff healing in a rat model.
Peltz, Cathryn D; Dourte, Leann M; Kuntz, Andrew F; Sarver, Joseph J; Kim, Soung-Yon; Williams, Gerald R; Soslowsky, Louis J
2009-10-01
Surgical repairs of torn rotator cuff tendons frequently fail. Immobilization has been shown to improve tissue mechanical properties in an animal model of rotator cuff repair, and passive motion has been shown to improve joint mechanics in animal models of flexor tendon repair. Our objective was to determine if daily passive motion would improve joint mechanics in comparison with continuous immobilization in a rat rotator cuff repair model. We hypothesized that daily passive motion would result in improved passive shoulder joint mechanics in comparison with continuous immobilization initially and that there would be no differences in passive joint mechanics or insertion site mechanical properties after four weeks of remobilization. A supraspinatus injury was created and was surgically repaired in sixty-five Sprague-Dawley rats. Rats were separated into three postoperative groups (continuous immobilization, passive motion protocol 1, and passive motion protocol 2) for two weeks before all underwent a remobilization protocol for four weeks. Serial measurements of passive shoulder mechanics (internal and external range of motion and joint stiffness) were made before surgery and at two and six weeks after surgery. After the animals were killed, collagen organization and mechanical properties of the tendon-to-bone insertion site were determined. Total range of motion for both passive motion groups (49% and 45% of the pre-injury values) was less than that for the continuous immobilization group (59% of the pre-injury value) at two weeks and remained significantly less following four weeks of remobilization exercise. Joint stiffness at two weeks was increased for both passive motion groups in comparison with the continuous immobilization group. At both two and six weeks after repair, internal range of motion was significantly decreased whereas external range of motion was not. There were no differences between the groups in terms of collagen organization or mechanical
Characterization of free breathing patterns with 5D lung motion model
Energy Technology Data Exchange (ETDEWEB)
Zhao Tianyu; Lu Wei; Yang Deshan; Mutic, Sasa; Noel, Camille E.; Parikh, Parag J.; Bradley, Jeffrey D.; Low, Daniel A. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)
2009-11-15
Purpose: To determine the quiet respiration breathing motion model parameters for lung cancer and nonlung cancer patients. Methods: 49 free breathing patient 4DCT image datasets (25 scans, cine mode) were collected with simultaneous quantitative spirometry. A cross-correlation registration technique was employed to track the lung tissue motion between scans. The registration results were applied to a lung motion model: X = X₀ + αv + βf, where X is the position of a piece of tissue located at reference position X₀ during a reference breathing phase (zero tidal volume v, zero airflow f). The vector parameter α characterizes the motion due to air filling (motion as a function of tidal volume v), and the vector parameter β accounts for the motion due to the imbalance of dynamical stress distributions during inspiration and exhalation that causes lung motion hysteresis (motion as a function of airflow f). The parameters α and β together provide a quantitative characterization of breathing motion that inherently includes the complex hysteresis interplay. The α and β distributions were examined for each patient to determine overall general patterns and interpatient pattern variations. Results: For 44 patients, the greatest values of |α| were observed in the inferior and posterior lungs. For the remaining patients, |α| reached its maximum in the anterior lung in three patients and in the lateral lung in two patients. The hysteresis motion parameter β had greater variability, but for the majority of patients, |β| was largest in the lateral lungs. Conclusions: This is the first report of the three-dimensional breathing motion model parameters for a large cohort of patients. The model has the potential for noninvasively predicting lung motion. The majority of patients exhibited similar |α| maps
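As a rough illustration of the breathing motion model described above, the per-tissue-element parameters X₀, α and β can be recovered by a linear least-squares fit against measured tidal volume and airflow. The data below are synthetic and all numerical values are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

# Model: X = X0 + alpha*v + beta*f, fit per tissue element from tracked positions.
rng = np.random.default_rng(0)

n_phases = 25                          # e.g. 25 cine-mode scans
v = rng.uniform(0.0, 0.6, n_phases)    # tidal volume (L), synthetic
f = rng.uniform(-0.5, 0.5, n_phases)   # airflow (L/s), synthetic

# Synthetic "tracked" 3D positions of one tissue element (ground-truth parameters).
X0_true = np.array([10.0, 20.0, 30.0])
alpha_true = np.array([0.5, 1.0, 8.0])   # mm per L (largest in SI direction)
beta_true = np.array([0.2, 0.1, 1.5])    # mm per (L/s), hysteresis term
X = X0_true + np.outer(v, alpha_true) + np.outer(f, beta_true)

# Design matrix [1, v, f]: solve for (X0, alpha, beta) in each coordinate at once.
A = np.column_stack([np.ones(n_phases), v, f])
coef, *_ = np.linalg.lstsq(A, X, rcond=None)
X0_est, alpha_est, beta_est = coef
```

With noise-free synthetic positions the fit recovers the parameters exactly; with real registration output the same fit returns the least-squares α and β maps.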
IIR Filter Modeling Using an Algorithm Inspired on Electromagnetism
Directory of Open Access Journals (Sweden)
Cuevas-Jiménez E.
2013-01-01
Full Text Available Infinite-impulse-response (IIR) filtering provides a powerful approach for solving a variety of problems. However, its design represents a very complicated task: since the error surface of IIR filters is generally multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a new method based on the Electromagnetism-Like Optimization Algorithm (EMO) is proposed for IIR filter modeling. EMO originates from the electromagnetism theory of physics, treating potential solutions as electrically charged particles spread around the solution space. The charge of each particle depends on its objective function value. The algorithm employs a collective attraction-repulsion mechanism to move the particles towards optimality. The experimental results confirm the high performance of the proposed method in solving various benchmark identification problems.
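A minimal sketch of the attraction-repulsion mechanism, in the spirit of the standard EMO scheme, is shown below on a toy multimodal test function rather than actual IIR filter identification; the charge formula, step rule and all settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Rastrigin surface: highly multimodal, global minimum 0 at x = 0
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0, axis=-1)

lo, hi, n, dim = -5.12, 5.12, 30, 2
pop = rng.uniform(lo, hi, (n, dim))
f_start = objective(pop).min()

for _ in range(200):
    fvals = objective(pop)
    fbest = fvals.min()
    denom = np.sum(fvals - fbest) + 1e-12
    q = np.exp(-dim * (fvals - fbest) / denom)   # charge: larger for better particles
    new_pop = pop.copy()
    for i in range(n):
        if fvals[i] == fbest:
            continue                             # current best particle stays put
        F = np.zeros(dim)
        for j in range(n):
            if j == i:
                continue
            d = pop[j] - pop[i]
            r2 = np.dot(d, d) + 1e-12
            coulomb = q[i] * q[j] / r2
            # attraction towards better particles, repulsion from worse ones
            F += d * coulomb if fvals[j] < fvals[i] else -d * coulomb
        norm = np.linalg.norm(F)
        if norm > 0.0:
            new_pop[i] = np.clip(pop[i] + rng.uniform() * F / norm, lo, hi)
    pop = new_pop

# The best objective value can only improve, since the best particle is held fixed.
best_val = objective(pop).min()
```

For the actual filter-modeling task, `objective` would be replaced by the identification error of a candidate IIR coefficient vector.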
High speed railway track dynamics models, algorithms and applications
Lei, Xiaoyan
2017-01-01
This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.
Architecture in motion: A model for music composition
Variego, Jorge Elias
2011-12-01
Speculations regarding the relationship between music and architecture go back to the very origins of these disciplines. Throughout history, these links have always reaffirmed that music and architecture are analogous art forms that only diverge in their object of study. In the 1st c. BCE, Vitruvius conceived Architecture as "one of the most inclusive and universal human activities" where the architect should be educated in all the arts, having a vast knowledge of history, music and philosophy. In the 18th c., the German thinker Johann Wolfgang von Goethe described Architecture as "frozen music". More recently, in the 20th c., Iannis Xenakis studied the structuring principles shared by Music and Architecture, creating his own "models" of musical composition based on mathematical principles and geometric constructions. The goal of this document is to propose a compositional method that will function as a translator between the acoustical properties of a room and music, to facilitate the creation of musical works that will not only happen within an enclosed space but will also intentionally interact with the space. Acoustical measurements of rooms such as reverberation time, frequency response and volume will be measured and systematically organized in correspondence with orchestrational parameters. The musical compositions created after the proposed model are evocative of the spaces on which they are based. They are meant to be performed in any space, not exclusively in the one where the acoustical measurements were obtained. The visual component of architectural design is disregarded; the room is considered a musical instrument, with its particular sound qualities and resonances. Compositions using the proposed model will not result in sonified shapes; they will be musical works literally "tuned" to a specific space. This Architecture in motion is an attempt to put scientific research at the service of a creative activity and to let the aural properties of
O'Reilly, Meaghan Anne; Whyne, Cari Marisa
2008-08-01
A comparative analysis of parametric and patient-specific finite element (FE) modeling of spinal motion segments. To develop patient-specific FE models of spinal motion segments using mesh-morphing methods applied to a parametric FE model. To compare strain and displacement patterns in parametric and morphed models for both healthy and metastatically involved vertebrae. Parametric FE models may be limited in their ability to fully represent patient-specific geometries and material property distributions. Generation of multiple patient-specific FE models has been limited because of computational expense. Morphing methods have been successfully used to generate multiple specimen-specific FE models of caudal rat vertebrae. FE models of a healthy and a metastatic T6-T8 spinal motion segment were analyzed with and without patient-specific material properties. Parametric and morphed models were compared using a landmark-based morphing algorithm. Morphing of the parametric FE model and including patient-specific material properties both had a strong impact on magnitudes and patterns of vertebral strain and displacement. Small but important geometric differences can be represented through morphing of parametric FE models. The mesh-morphing algorithm developed provides a rapid method for generating patient-specific FE models of spinal motion segments.
Real time tracking by LOPF algorithm with mixture model
Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo
2007-11-01
A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. To use the particles efficiently, we first apply the Sobel algorithm to extract the profile of the object. Then, we employ a new Local Optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on the negligible ones; in addition, we can mitigate the conventional degeneracy phenomenon and decrease the computational costs. Moreover, the threshold is a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to get the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use the contour cue to select the particles and the color cue to describe the targets as the mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
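The Bhattacharyya-coefficient similarity used above to compare the target model with candidates can be sketched in a few lines; the histograms, bin count and function names below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    # similarity between two histograms, 1.0 for identical distributions
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

def bhattacharyya_distance(p, q):
    # metric derived from the coefficient, usable for particle weighting
    return float(np.sqrt(max(0.0, 1.0 - bhattacharyya_coefficient(p, q))))

target = np.array([10.0, 30.0, 40.0, 20.0])      # e.g. a 4-bin color histogram
candidate_same = np.array([1.0, 3.0, 4.0, 2.0])  # proportional: identical after normalization
candidate_diff = np.array([40.0, 30.0, 20.0, 10.0])

bc_same = bhattacharyya_coefficient(target, candidate_same)
d_same = bhattacharyya_distance(target, candidate_same)
d_diff = bhattacharyya_distance(target, candidate_diff)
```

A candidate whose histogram matches the target yields a coefficient of 1 (distance 0); dissimilar candidates yield larger distances and hence lower particle weights.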
Directory of Open Access Journals (Sweden)
Alba Sandyra Bezerra Lopes
2012-01-01
Full Text Available The motion estimation is the most complex module in a video encoder, requiring a high processing throughput and high memory bandwidth, mainly when the focus is high-definition videos. The throughput problem can be solved by increasing the parallelism of the internal operations. The external memory bandwidth may be reduced using a memory hierarchy. This work presents a memory hierarchy model for a full-search motion estimation core. The proposed memory hierarchy model is based on a data reuse scheme considering the features of the full search algorithm. The proposed memory hierarchy significantly reduces the external memory bandwidth required for the motion estimation process, and it provides a very high data throughput for the ME core. This throughput is necessary to achieve real time when processing high-definition videos. When considering the worst bandwidth scenario, this memory hierarchy is able to reduce the external memory bandwidth by a factor of 578. A case study for the proposed hierarchy, using a 32×32 search window and 8×8 block size, was implemented and prototyped on a Virtex 4 FPGA. The results show that it is possible to reach 38 frames per second when processing full HD frames (1920×1080 pixels using nearly 299 Mbytes per second of external memory bandwidth.
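A toy full-search block-matching kernel is sketched below to illustrate the regular data-access pattern whose redundancy such a memory hierarchy exploits (an 8×8 block and ±16 search range standing in for the 32×32 search window); the frames and sizes are illustrative only, not the implemented core:

```python
import numpy as np

def full_search(ref, cur, by, bx, block=8, rng_px=16):
    """Return the motion vector (dy, dx) minimizing the SAD for one block."""
    target = cur[by:by + block, bx:bx + block].astype(np.int64)
    best, best_sad = (0, 0), np.inf
    for dy in range(-rng_px, rng_px + 1):
        for dx in range(-rng_px, rng_px + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int64)
            sad = int(np.abs(target - cand).sum())   # sum of absolute differences
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))   # current frame: shifted copy of reference
mv, sad = full_search(ref, cur, 24, 24)          # this block originated at (21, 26) in ref
```

Neighbouring candidate blocks overlap heavily, which is exactly the reuse a search-window-level hierarchy captures to cut external memory traffic.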
A hybrid multiview stereo algorithm for modeling urban scenes.
Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep
2013-01-01
We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.
Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model
Hamam, Alwaleed A.; Khan, Ayaz H.
2017-03-13
Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is a deep learning algorithm used in this project; its time performance is improved through an efficient parallel implementation with the OpenACC tool, with the best possible optimizations applied to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using different models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to the RBM.
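The Contrastive Divergence training mentioned above can be sketched as a minimal CD-1 loop for a Bernoulli RBM; the synthetic data, shapes and hyperparameters below are illustrative assumptions, not the project's configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))   # weights
b_v = np.zeros(n_vis)                            # visible biases
b_h = np.zeros(n_hid)                            # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = rng.integers(0, 2, (100, n_vis)).astype(float)   # synthetic binary samples

for epoch in range(30):
    for v0 in data:
        # positive phase: sample hidden units given the data
        ph0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.uniform(size=n_hid) < ph0).astype(float)
        # negative phase: one Gibbs step (v0 -> h0 -> v1 -> ph1)
        pv1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.uniform(size=n_vis) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)
        # CD-1 parameter updates: data statistics minus reconstruction statistics
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)
```

The per-sample inner loop above is exactly the part that parallel models (OpenMP, CUDA, OpenACC) batch and offload.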
Focuss algorithm application in kinetic compartment modeling for PET tracer
International Nuclear Information System (INIS)
Huang Xinrui; Bao Shanglian
2004-01-01
Molecular imaging is an emerging field. Its application largely depends on the molecular discovery process for imaging probes and drugs, from the mouse to the patient, from research to clinical practice. Positron emission tomography (PET) can non-invasively monitor pharmacokinetic and functional processes of drugs in intact organisms at tracer concentrations by kinetic modeling. It is known that for all biological systems, linear or nonlinear, if the system is injected with a tracer in a steady state, the distribution of the tracer follows the kinetics of a linear compartmental system, whose solutions are sums of exponentials. Based on the general compartmental description of the tracer's fate in vivo, we present a novel kinetic modeling approach for the quantification of in vivo tracer studies with dynamic positron emission tomography (PET), which can determine a parsimonious model consistent with the measured data. This kinetic modeling technique allows for estimation of parametric images from a voxel-based analysis and requires no a priori decision about the tracer's fate in vivo, instead determining the most appropriate model from the information contained within the kinetic data. Choosing a set of exponential functions, convolved with the plasma input function, as basis functions, the time activity curve of a region or a pixel can be written as a linear combination of the basis functions with corresponding coefficients. The number of non-zero coefficients returned corresponds to the model order, which is related to the number of tissue compartments. The system macro-parameters are then determined using the focal underdetermined system solver (FOCUSS) algorithm. The FOCUSS algorithm is a nonparametric algorithm for finding localized energy solutions from limited data and is a recursive linear estimation procedure. The FOCUSS algorithm usually converges very quickly, and so requires only a few iterations. The effectiveness is verified by simulation and clinical
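The basic FOCUSS iteration for an underdetermined linear system A x = b, of the kind used above to select a small number of non-zero basis coefficients, can be sketched as follows; the dictionary A here is a toy random matrix, not actual exponential basis functions convolved with a plasma input function:

```python
import numpy as np

rng = np.random.default_rng(4)

m, n = 10, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 11]] = [2.0, -1.5]          # sparse ground truth ("two compartments")
b = A @ x_true

x = np.ones(n)                          # start from an uninformative estimate
for _ in range(30):
    W = np.diag(np.abs(x))              # reweighting concentrates energy on large entries
    x = W @ np.linalg.pinv(A @ W) @ b   # recursive weighted minimum-norm update

# At convergence the solution is sparse and consistent with the measurements.
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
n_active = int(np.count_nonzero(np.abs(x) > 1e-6 * np.abs(x).max()))
```

Each pass shrinks small coefficients multiplicatively, which is why only a few iterations are typically needed before the support (the model order) stabilizes.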
International Nuclear Information System (INIS)
Cavan, A.E.; Wilson, P.L.; Meyer, J.; Berbeco, R.I.
2010-01-01
Full text: Accuracy of radiotherapy treatment of lung cancer is limited by respiratory-induced tumour motion. Compensation for this motion is required to increase treatment efficacy. The lung tumour motion is related to the motion of an external abdominal marker, but a reliable model of this correlation is essential. Three viscoelastic systems were developed in order to determine the best model and analyse its effectiveness on clinical data. Three 1D viscoelastic systems (a spring and dashpot in parallel, in series, and in a combination) were developed and compared using a simulated breathing pattern. The most effective model was applied to 60 clinical data sets (consisting of co-ordinates of tumour and abdominal motion) from multiple treatment fractions of ten patients. The model was optimised for each data set, and efficacy was determined by calculating the root mean square (RMS) error between the modelled position and the actual tumour motion. Upon application to clinical data, the parallel configuration achieved an average RMS error of 0.95 mm (superior-inferior direction). The model had patient-specific parameters and displayed good consistency over extended treatment periods. The model handled amplitude, frequency and baseline variations of the input signal, and phase shifts between tumour and abdominal motions. This study has shown that a viscoelastic model can be used to correlate internal lung tumour motion with an external abdominal signal. The ability to handle breathing pattern irregularities is comparable to or better than previous models. Extending the model to a full 3D, predictive system could allow clinical implementation for radiotherapy.
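One plausible reading of such a spring-dashpot correlation model is a first-order viscoelastic element in which the external abdominal signal x(t) drives the modelled internal position y(t) through c·dy/dt + k·(y − x) = 0. The sketch below integrates this relation; the parameter values and the breathing waveform are assumptions, not the paper's fitted values:

```python
import numpy as np

k, c, dt = 4.0, 1.0, 0.01                  # stiffness, damping, time step (arbitrary units)
t = np.arange(0.0, 20.0, dt)
x = 10.0 * np.sin(2.0 * np.pi * t / 4.0)   # abdominal marker motion, 4 s breathing period

y = np.zeros_like(t)                       # modelled tumour position
for i in range(1, len(t)):
    dydt = (k / c) * (x[i - 1] - y[i - 1])
    y[i] = y[i - 1] + dt * dydt            # explicit Euler integration

# The element acts as a first-order lag: the modelled output is slightly
# attenuated and phase-shifted relative to the abdominal signal.
steady_amp = np.max(np.abs(y[-400:]))      # amplitude over the last breathing cycle
```

In the paper's setting, k and c would be optimised per data set to minimise the RMS error between y(t) and the measured tumour trace.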
Continuum Models for Irregular Phase Boundary Motion in Shape-Memory Tensile Bars
National Research Council Canada - National Science Library
Rosakis, Phoebus
1997-01-01
... observed experimentally. We show that when the model involves a kinetic relation that is 'unstable' in a definite sense, 'stick-slip' motion of the interface between phases and serration of the accompanying stress-elongation...
Mathematical models of decision making on space vehicle motion control at fuzzy conditions
International Nuclear Information System (INIS)
Arslanov, M.Z.; Ismail, E.E.; Oryspaev, D.O.
2005-01-01
The structure of decision making for control of spacecraft motion is considered, and mathematical models of decision-making problems are investigated. New methods of criteria convolution are obtained; the new convolutions have smoothness properties. (author)
Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion
DEFF Research Database (Denmark)
Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.
1997-01-01
This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modeling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U .S .A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....
Model-Based Description of Human Body Motions for Ergonomics Evaluation
Imai, Sayaka
This paper presents modeling of the working process and working simulation of factory work. We focus on an example work (motion), its actual work (motion), and the reference between them. An example work and its actual work can be analyzed and described as a sequence of atomic actions. In order to describe workers' motions, the concepts of Atomic Unit, Model Events and Mediator are introduced. By using these concepts, we can analyze workers' actions and evaluate their work. We also consider this a possible way of unifying all the data used in various applications (CAD/CAM, etc.) during the design process and of evaluating all subsystems in a virtual factory.
Economic modeling using evolutionary algorithms : the effect of binary encoding of strategies
Waltman, L.R.; Eck, van N.J.; Dekker, Rommert; Kaymak, U.
2011-01-01
We are concerned with evolutionary algorithms that are employed for economic modeling purposes. We focus in particular on evolutionary algorithms that use a binary encoding of strategies. These algorithms, commonly referred to as genetic algorithms, are popular in agent-based computational economics
Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.
2014-12-01
In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a ground-motion duration model, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS corresponding to individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model, tuned to optimize the fit between RVT-based and observed response spectral ordinates at each oscillator frequency, is derived. Although the main motivation of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of the median predicted response spectra with other regional models indicates that the presented approach can also be used as a stand-alone model. Besides that, a significantly lower aleatory variability (σ) makes it a potentially viable alternative to the classical regression-based GMPEs (on response spectral ordinates) for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012, covering Europe, the Middle East and the Mediterranean region.
Modeling the Swift Bat Trigger Algorithm with Machine Learning
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2016-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97% (≤3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n₀ ≈ 0.48 (+0.41/−0.23) Gpc⁻³ yr⁻¹, with power-law indices of n₁ ≈ 1.7 (+0.6/−0.5) and n₂ ≈ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z₁ ≈ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
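A toy stand-in for the idea above, learning a detection model from simulated bursts and comparing it with a plain flux cut, can be sketched as follows. The "trigger" below is a synthetic rule (flux- and duration-dependent), not the real BAT algorithm, and a simple logistic regression stands in for the paper's classifiers; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 4000
flux = rng.lognormal(0.0, 1.0, n)
dur = rng.lognormal(1.0, 0.8, n)
# synthetic trigger: sensitivity improves for longer bursts
detected = (np.log(flux) + 0.5 * np.log(dur) + 0.1 * rng.standard_normal(n)) > 0.5

# logistic regression on (log flux, log duration) by gradient descent
X = np.column_stack([np.log(flux), np.log(dur), np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - detected) / n

ml_acc = np.mean(((X @ w) > 0) == detected)

# baseline: the best possible single cut in flux
flux_cut_acc = 0.0
for cthr in np.quantile(flux, np.linspace(0.01, 0.99, 99)):
    flux_cut_acc = max(flux_cut_acc, np.mean((flux > cthr) == detected))
```

Because detection here depends on more than flux alone, the learned model outperforms any single flux threshold, mirroring the qualitative gap reported above.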
Modeling the Swift BAT Trigger Algorithm with Machine Learning
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2015-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of η₀ ≈ 0.48 (+0.41/−0.23) Gpc⁻³ yr⁻¹, with power-law indices of η₁ ≈ 1.7 (+0.6/−0.5) and η₂ ≈ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z₁ ≈ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
Numerical model updating technique for structures using firefly algorithm
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be material properties, geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close match between the experimental and the numerical models.
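A minimal sketch of the firefly algorithm used as the optimizer is given below, here calibrating two hypothetical model parameters so that a simple response error vanishes; the mismatch function, target values and all settings are illustrative assumptions, not the study's beam or frame models:

```python
import numpy as np

rng = np.random.default_rng(6)

target = np.array([2.5, 0.8])                  # pretend "experimental" parameter values
def mismatch(p):
    # response error to be minimized (squared distance to the target response)
    return np.sum((p - target) ** 2, axis=-1)

n, dim, beta0, gamma, alpha = 20, 2, 1.0, 0.1, 0.05
pop = rng.uniform(0.0, 5.0, (n, dim))
f = mismatch(pop)
f_start = f.min()

for it in range(200):
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:                    # move firefly i toward brighter firefly j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                pop[i] = pop[i] + beta0 * np.exp(-gamma * r2) * (pop[j] - pop[i]) \
                         + alpha * rng.standard_normal(dim)
        f[i] = mismatch(pop[i])                # refresh brightness after moving
    alpha *= 0.97                              # gradually cool the random walk

best = pop[np.argmin(f)]
best_err = mismatch(best)
```

In the actual updating problem, `mismatch` would run the MATLAB/FE model and compare its tip deflection or natural frequencies against the measured values.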
Schema Design and Normalization Algorithm for XML Databases Model
Directory of Open Access Journals (Sweden)
Samir Abou El-Seoud
2009-06-01
Full Text Available In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Building on our earlier work, in which we presented functional dependencies and normal forms for XML Schema, we present a decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.
Development of modelling algorithm of technological systems by statistical tests
Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.
2018-03-01
The paper tackles the problem of economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm for a technological system, performed using statistical tests and taking the reliability index into account, allows estimating the level of machinery technical excellence and defining the efficiency of design reliability against its performance. The economic feasibility of its application shall be determined on the basis of the service quality of a technological system, with further forecasting of the volumes and range of spare parts supply.
Stochastic geometry, spatial statistics and random fields models and algorithms
2015-01-01
Providing a graduate-level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. The book has a strong focus on simulations and includes extensive code in MATLAB and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.
Heterogeneous Agents Model with the Worst Out Algorithm
Czech Academy of Sciences Publication Activity Database
Vošvrda, Miloslav; Vácha, Lukáš
I, č. 1 (2007), s. 54-66 ISSN 1802-4696 R&D Projects: GA MŠk(CZ) LC06075; GA ČR(CZ) GA402/06/0990 Grant - others:GA UK(CZ) 454/2004/A-EK/FSV Institutional research plan: CEZ:AV0Z10750506 Keywords : Efficient Markets Hypothesis * Fractal Market Hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst out Algorithm Subject RIV: AH - Economics
Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms
Directory of Open Access Journals (Sweden)
Krzysztof Gajowniczek
2017-10-01
Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting plays a key role in reducing generation costs and ensuring the reliability of the power system. However, demand peaks in the power system make forecasts inaccurate and error-prone. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we take a different approach from classical time series forecasting, representing the task as a two-stage pattern recognition problem. We developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We utilized a set of machine learning algorithms to benefit from both accurate detection of peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load values equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with a mean absolute percentage error (MAPE) of 3.10% and a resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
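The two building blocks of the first stage, peak labelling and error measurement, can be sketched as below. This is a hypothetical Python illustration, not the authors' code: the load series is synthetic, and r-MAPE is assumed here to be the median-based (outlier-resistant) variant of MAPE:

```python
import numpy as np

def label_peaks(load, q=0.99):
    """Binary peak labels: 1 where load is at or above the q-th quantile of the series."""
    return (load >= np.quantile(load, q)).astype(int)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def r_mape(actual, forecast):
    """Assumed 'resistant' MAPE: the median replaces the mean, damping outlier hours."""
    return 100.0 * np.median(np.abs((actual - forecast) / actual))

# Synthetic hourly load for 30 days: baseline + daily-ish cycle + noise
rng = np.random.default_rng(1)
load = 1000 + 200 * np.sin(np.linspace(0, 8 * np.pi, 24 * 30)) + rng.normal(0, 20, 24 * 30)
labels = label_peaks(load)                                 # targets for the peak classifier
forecast = load * (1 + rng.normal(0, 0.03, load.size))     # stand-in forecast with ~3% noise
```

In the paper's scheme, the peak labels feed a classification model whose output then conditions the demand forecast; here they only illustrate the definitions.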
Quantum-circuit model of Hamiltonian search algorithms
International Nuclear Information System (INIS)
Roland, Jeremie; Cerf, Nicolas J.
2003-01-01
We analyze three different quantum search algorithms, namely, the traditional circuit-based Grover's algorithm, its continuous-time analog by Hamiltonian evolution, and the quantum search by local adiabatic evolution. We show that these algorithms are closely related in the sense that they all perform a rotation, at a constant angular velocity, from a uniform superposition of all states to the solution state. This makes it possible to implement the two Hamiltonian-evolution algorithms on a conventional quantum circuit, while keeping the quadratic speedup of Grover's original algorithm. It also clarifies the link between the adiabatic search algorithm and Grover's algorithm.
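The "rotation at constant angular velocity" picture can be checked numerically for the circuit-based case. The sketch below simulates Grover iterations with NumPy and compares the success probability after k steps against the closed form sin²((2k+1)θ) with θ = arcsin(1/√N); the values of N and the target index are arbitrary choices:

```python
import numpy as np

def grover_success_probs(N=64, target=3, iters=None):
    """Simulate Grover iterations on N basis states; return the success
    probability after each step, plus the rotation angle theta."""
    theta = np.arcsin(1.0 / np.sqrt(N))
    if iters is None:
        iters = int(round(np.pi / (4 * theta)))  # near-optimal number of iterations
    psi = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition
    probs = []
    for _ in range(iters):
        psi[target] *= -1.0                      # oracle: phase flip on the solution
        psi = 2.0 * psi.mean() - psi             # diffusion: inversion about the mean
        probs.append(psi[target] ** 2)
    return np.array(probs), theta

probs, theta = grover_success_probs()
# Closed form: after k iterations the solution amplitude is sin((2k+1)*theta)
k = np.arange(1, probs.size + 1)
closed = np.sin((2 * k + 1) * theta) ** 2
```

The simulated probabilities match the closed form step for step, which is exactly the uniform rotation in the two-dimensional subspace spanned by the solution and its complement.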
4D modeling and estimation of respiratory motion for radiation therapy
Lorenz, Cristian
2013-01-01
Respiratory motion causes an important uncertainty in radiotherapy planning of the thorax and upper abdomen. The main objective of radiation therapy is to eradicate or shrink tumor cells without damaging the surrounding tissue by delivering a high radiation dose to the tumor region and a dose as low as possible to healthy organ tissues. Meeting this demand remains a challenge especially in case of lung tumors due to breathing-induced tumor and organ motion where motion amplitudes can measure up to several centimeters. Therefore, modeling of respiratory motion has become increasingly important in radiation therapy. With 4D imaging techniques spatiotemporal image sequences can be acquired to investigate dynamic processes in the patient’s body. Furthermore, image registration enables the estimation of the breathing-induced motion and the description of the temporal change in position and shape of the structures of interest by establishing the correspondence between images acquired at different phases of the br...
Using genetic algorithms to calibrate a water quality model.
Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam
2007-03-15
With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is how to determine the values of the model parameters. This is especially true when a model has a large number of parameters, which makes a full range of calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient for global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consisted of 78 parameters. Previous studies have indicated the difficulty of full range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out the model calibration in which all parameters were involved. A sensitivity analysis was also performed to investigate the impact of operators in the GA on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values was demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which showed a good fitness to the observed value.
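A generic GA calibration loop of the kind described, applied to a toy three-parameter model rather than the 78-parameter PIT model, might be sketched as follows; the population size, operators and bounds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
true_params = np.array([0.8, -1.5, 2.0])
x = np.linspace(0, 1, 50)
observed = true_params[0] * x**2 + true_params[1] * x + true_params[2]  # synthetic data

def fitness(p):
    """Negative sum of squared errors between model output and observations."""
    pred = p[0] * x**2 + p[1] * x + p[2]
    return -np.sum((pred - observed) ** 2)

def ga_calibrate(pop_size=60, gens=150, bounds=(-5, 5), mut_sigma=0.2, elite=2):
    pop = rng.uniform(*bounds, size=(pop_size, 3))
    for _ in range(gens):
        fit = np.array([fitness(p) for p in pop])
        pop = pop[np.argsort(fit)[::-1]]                   # sort fittest first
        new = [pop[i].copy() for i in range(elite)]        # elitism: keep the best unchanged
        while len(new) < pop_size:
            i, j = rng.integers(0, pop_size // 2, 2)       # parents drawn from the fitter half
            mask = rng.random(3) < 0.5                     # uniform crossover
            child = np.where(mask, pop[i], pop[j])
            child = child + rng.normal(0, mut_sigma, 3)    # Gaussian mutation
            new.append(np.clip(child, *bounds))
        pop = np.array(new)
    fit = np.array([fitness(p) for p in pop])
    return pop[np.argmax(fit)]

best = ga_calibrate()
```

In a real calibration such as the PIT case, the fitness would be a goodness-of-fit statistic between simulated and observed phosphorus loss, and each fitness evaluation would be a full model run, which is exactly why operator choices (crossover, mutation rate) matter for convergence speed.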
A new parallelization algorithm of ocean model with explicit scheme
Fu, X. D.
2017-08-01
This paper focuses on the parallelization of an ocean model with an explicit scheme, one of the most commonly used schemes for discretizing the governing equations of ocean models. The characteristic of an explicit scheme is that the calculation is simple and that the value at a given grid point depends only on grid-point values at the previous time step, so no sparse linear systems need to be solved when integrating the governing equations. Exploiting these characteristics, this paper designs a parallel algorithm, named halo-cell update, that requires only minor modification of the original ocean model and leaves its space and time steps essentially unchanged; the model is parallelized by a transmission module that exchanges data between sub-domains. The GRGO (Global Reduced Gravity Ocean) model is taken as an example to implement the parallelization with halo update. The results demonstrate that higher speedups can be achieved at different problem sizes.
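The halo-cell idea can be illustrated serially in Python: split a 1-D explicit diffusion update into two sub-domains with one ghost cell each, and let an explicit halo swap stand in for the inter-process transmission module (the scheme, domain split and coefficient below are assumptions for illustration, not the GRGO code):

```python
import numpy as np

def run_serial(u0, steps, c=0.25):
    """Reference explicit diffusion update on one undivided domain."""
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] += c * (u[:-2] - 2 * u[1:-1] + u[2:])
    return u

def run_halo(u0, steps, c=0.25):
    """Same scheme on two sub-domains; the halo swap plays the role of
    the transmission module (it would be an MPI send/recv in parallel)."""
    mid = u0.size // 2
    left = np.append(u0[:mid], u0[mid])        # owned cells plus one right halo
    right = np.append(u0[mid - 1], u0[mid:])   # one left halo plus owned cells
    for _ in range(steps):
        left[-1] = right[1]                    # exchange halos before each step
        right[0] = left[-2]
        left[1:-1] += c * (left[:-2] - 2 * left[1:-1] + left[2:])
        right[1:-1] += c * (right[:-2] - 2 * right[1:-1] + right[2:])
    return np.concatenate([left[:-1], right[1:]])

u0 = np.sin(np.linspace(0, np.pi, 16))
max_diff = np.max(np.abs(run_serial(u0, 100) - run_halo(u0, 100)))
```

Because each point only needs previous-time-step neighbours, one halo cell per boundary per step is sufficient, and the split run reproduces the serial run exactly.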
On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.
Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio
2005-01-01
Visual motion provides useful information for understanding the dynamics of a scene and allows intelligent systems to interact with their environment. Motion computation is usually constrained by real-time requirements that call for the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model for motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture consists of three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated and validated in VHDL. Synthesis results on a Field Programmable Gate Array (FPGA) device show that real-time performance is achievable at an affordable silicon area.
A heuristic mathematical model for the dynamics of sensory conflict and motion sickness
Oman, C. M.
1982-01-01
By consideration of the information processing task faced by the central nervous system in estimating body spatial orientation and in controlling active body movement using an internal model referenced control strategy, a mathematical model for sensory conflict generation is developed. The model postulates a major dynamic functional role for sensory conflict signals in movement control, as well as in sensory-motor adaptation. It accounts for the role of active movement in creating motion sickness symptoms in some experimental circumstance, and in alleviating them in others. The relationship between motion sickness produced by sensory rearrangement and that resulting from external motion disturbances is explicitly defined. A nonlinear conflict averaging model is proposed which describes dynamic aspects of experimentally observed subjective discomfort sensation, and suggests resulting behaviours. The model admits several possibilities for adaptive mechanisms which do not involve internal model updating. Further systematic efforts to experimentally refine and validate the model are indicated.
Sakkas, Georgios; Sakellariou, Nikolaos
2018-05-01
Strong-motion recordings are key to many earthquake engineering applications and are also fundamental for seismic design. The present study focuses on the automated correction of accelerograms, both analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency bandwidth, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, Peak Ground Acceleration, Peak Ground Velocity and Peak Ground Displacement values are calculated, as well as the corrected Fourier spectra and the response spectra. The algorithm is written in the Matlab environment, is computationally fast, and can be used for batch processing or in real-time applications. In addition, options are provided to apply a signal-to-noise criterion and to perform causal or acausal filtering. The algorithm has been tested on six significant earthquakes of the Greek territory (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014) with analog and digital accelerograms.
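A simplified version of the correction chain (baseline correction, acausal band-pass filtering, integration to velocity and displacement, peak values) can be sketched in Python as below. The fixed corner frequencies here replace the paper's spectral-minimum selection rule, and the synthetic record is hypothetical:

```python
import numpy as np

def correct_accelerogram(acc, dt, f_lo=0.1, f_hi=25.0):
    """Detrend, zero-phase FFT band-pass, and integrate an accelerogram.
    Returns corrected acceleration, velocity, displacement and peak values."""
    t = np.arange(acc.size) * dt
    acc = acc - np.polyval(np.polyfit(t, acc, 1), t)   # baseline (linear trend) correction
    spec = np.fft.rfft(acc)
    freqs = np.fft.rfftfreq(acc.size, dt)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0        # brick-wall band-pass, acausal
    acc = np.fft.irfft(spec, n=acc.size)
    vel = np.cumsum(acc) * dt                          # integrate to velocity
    disp = np.cumsum(vel) * dt                         # integrate to displacement
    peaks = {"PGA": np.abs(acc).max(),
             "PGV": np.abs(vel).max(),
             "PGD": np.abs(disp).max()}
    return acc, vel, disp, peaks

dt = 0.01                                              # 100 samples per second
t = np.arange(0, 20, dt)
raw = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t) + 0.05 * t  # synthetic record with drift
acc, vel, disp, peaks = correct_accelerogram(raw, dt)
```

A production implementation would add instrument deconvolution, tapering, a proper Butterworth filter (causal or acausal as selected) and the automatic cut-off search in the predefined band.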
"Updates to Model Algorithms & Inputs for the Biogenic ...
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improved model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.
Active motions of Brownian particles in a generalized energy-depot model
International Nuclear Information System (INIS)
Zhang Yong; Koo Kim, Chul; Lee, Kong-Ju-Bock
2008-01-01
We present a generalized energy-depot model in which the rate of conversion of the internal energy into motion can be dependent on the position and velocity of a particle. When the conversion rate is a general function of the velocity, the active particle exhibits diverse patterns of motion, including a braking mechanism and a stepping motion. The phase trajectories of the motion are investigated in a systematic way. With a particular form of the conversion rate dependent on the position and velocity, the particle shows a spontaneous oscillation characterizing a negative stiffness. These types of active behaviors are compared with similar phenomena observed in biology, such as the stepping motion of molecular motors and amplification in the hearing mechanism. Hence, our model can provide a generic understanding of the active motion related to the energy conversion and also a new control mechanism for nano-robots. We also investigate the effect of noise, especially on the stepping motion, and observe random walk-like behavior as expected.
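Omitting the noise term, the skeleton of an energy-depot model with a velocity-dependent conversion rate d(v) = d2·v² can be integrated directly; at the stationary state the self-propelled speed should approach √(q/γ0 − c/d2). This is a deterministic toy sketch, and all parameter values below are illustrative assumptions:

```python
import numpy as np

def simulate_depot(q=10.0, c=0.1, d2=10.0, gamma0=2.0, dt=2e-4, t_end=20.0):
    """Euler integration of deterministic energy-depot equations (m = 1, 1-D):
       dv/dt = -gamma0*v + d2*e*v        (conversion rate d(v) = d2*v**2)
       de/dt = q - c*e - d2*v**2*e       (take-up q, dissipation c*e, conversion drain)
    """
    v, e = 0.1, 0.0
    for _ in range(int(t_end / dt)):
        dv = (-gamma0 + d2 * e) * v
        de = q - c * e - d2 * v**2 * e
        v += dt * dv
        e += dt * de
    return v, e

v, e = simulate_depot()
v0 = np.sqrt(10.0 / 2.0 - 0.1 / 10.0)  # predicted stationary speed sqrt(q/gamma0 - c/d2)
```

Making the conversion rate depend on position and velocity in more general ways, as the paper does, is what produces the braking, stepping and oscillatory regimes; the stochastic force would be added back as a Langevin noise term in dv/dt.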
Wagner, Martin G; Hatt, Charles R; Dunkerley, David A P; Bodart, Lindsay E; Raval, Amish N; Speidel, Michael A
2018-04-16
Transcatheter aortic valve replacement (TAVR) is a minimally invasive procedure in which a prosthetic heart valve is placed and expanded within a defective aortic valve. The device placement is commonly performed using two-dimensional (2D) fluoroscopic imaging. Within this work, we propose a novel technique to track the motion and deformation of the prosthetic valve in three dimensions based on biplane fluoroscopic image sequences. The tracking approach uses a parameterized point cloud model of the valve stent which can undergo rigid three-dimensional (3D) transformation and different modes of expansion. Rigid elements of the model are individually rotated and translated in three dimensions to approximate the motions of the stent. Tracking is performed using an iterative 2D-3D registration procedure which estimates the model parameters by minimizing the mean-squared image values at the positions of the forward-projected model points. Additionally, an initialization technique is proposed, which locates clusters of salient features to determine the initial position and orientation of the model. The proposed algorithms were evaluated based on simulations using a digital 4D CT phantom as well as experimentally acquired images of a prosthetic valve inside a chest phantom with anatomical background features. The target registration error was 0.12 ± 0.04 mm in the simulations and 0.64 ± 0.09 mm in the experimental data. The proposed algorithm could be used to generate 3D visualization of the prosthetic valve from two projections. In combination with soft-tissue sensitive-imaging techniques like transesophageal echocardiography, this technique could enable 3D image guidance during TAVR procedures. © 2018 American Association of Physicists in Medicine.
Computational Analysis of 3D Ising Model Using Metropolis Algorithms
International Nuclear Information System (INIS)
Sonsin, A F; Cortes, M R; Nunes, D R; Gomes, J V; Costa, R S
2015-01-01
We simulate the Ising Model with the Monte Carlo method and use the algorithms of Metropolis to update the distribution of spins. We found that, in the specific case of the three-dimensional Ising Model, methods of Metropolis are efficient. Studying the system near the point of phase transition, we observe that the magnetization goes to zero. In our simulations we analyzed the behavior of the magnetization and magnetic susceptibility to verify the phase transition in a paramagnetic to ferromagnetic material. The behavior of the magnetization and of the magnetic susceptibility as a function of the temperature suggest a phase transition around KT/J ≈ 4.5 and was evidenced the problem of finite size of the lattice to work with large lattice. (paper)
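A minimal Metropolis update for the 3D Ising model (J = 1, k_B = 1, periodic boundaries) can be sketched as follows; the lattice size, temperature and step count are illustrative and far smaller than a production simulation near KT/J ≈ 4.5:

```python
import numpy as np

def metropolis_ising3d(L=8, T=4.5, steps=20000, seed=0):
    """Single-spin Metropolis updates on an L x L x L periodic Ising lattice."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L, L))
    for _ in range(steps):
        i, j, k = rng.integers(0, L, 3)
        nb = (s[(i + 1) % L, j, k] + s[(i - 1) % L, j, k] +
              s[i, (j + 1) % L, k] + s[i, (j - 1) % L, k] +
              s[i, j, (k + 1) % L] + s[i, j, (k - 1) % L])
        dE = 2.0 * s[i, j, k] * nb                      # energy cost of flipping spin (i,j,k)
        if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance rule
            s[i, j, k] *= -1
    return s

def magnetization(s):
    """Absolute magnetization per spin."""
    return abs(s.mean())

s = metropolis_ising3d()
m = magnetization(s)
```

Measuring m and its fluctuations (the susceptibility) across a temperature sweep, on lattices large enough to tame finite-size effects, is how the transition near KT/J ≈ 4.5 is located.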
Using the fuzzy modeling for the retrieval algorithms
International Nuclear Information System (INIS)
Mohamed, A.H
2010-01-01
A rapid growth in the number and size of images in databases and on the world wide web (www) has created a strong need for more efficient search and retrieval systems to exploit the benefits of this large amount of information. However, limitations of current image analysis techniques force most image retrieval systems to rely on some form of text description provided by users as the basis for indexing and retrieving images. To overcome this problem, the proposed system introduces the use of fuzzy modeling to describe an image through linguistic ambiguities. The proposed system can also include vague or fuzzy terms when modeling queries, to be matched against the image descriptions in the retrieval process. This facilitates indexing and retrieval, increases performance and decreases computational time. The proposed system can therefore improve the performance of traditional image retrieval algorithms.
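One way to picture the fuzzy matching step: represent linguistic terms as membership functions over an image feature and score images by their membership in the query terms. Everything below (the terms, the triangular shapes, the brightness feature) is a hypothetical illustration of the idea, not the proposed system itself:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms over a normalized image brightness feature in [0, 1]
TERMS = {
    "dark":   lambda x: triangular(x, -0.01, 0.0, 0.5),
    "medium": lambda x: triangular(x, 0.2, 0.5, 0.8),
    "bright": lambda x: triangular(x, 0.5, 1.0, 1.01),
}

def match_score(query_terms, brightness):
    """Degree to which an image matches a fuzzy query: the maximum of the
    image's membership across the query terms."""
    return max(TERMS[t](brightness) for t in query_terms)

def retrieve(query_terms, images, threshold=0.3):
    """Rank images whose membership in the query exceeds the threshold."""
    scored = [(name, match_score(query_terms, b)) for name, b in images.items()]
    return sorted([s for s in scored if s[1] >= threshold], key=lambda s: -s[1])
```

A vague query like "dark" then returns a graded ranking rather than a hard boolean match, which is the point of admitting linguistic ambiguity in both descriptions and queries.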
An Intelligent Model for Pairs Trading Using Genetic Algorithms.
Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An
2015-01-01
Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.
A neural model of motion processing and visual navigation by cortical area MST.
Grossberg, S; Mingolla, E; Pack, C
1999-12-01
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes
Directory of Open Access Journals (Sweden)
Tomoaki Nakamura
2017-12-01
Full Text Available Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM, the emission distributions of which are Gaussian processes (GPs. Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.
3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models
International Nuclear Information System (INIS)
Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D
2015-01-01
3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)
One-degree-of-freedom spherical model for the passive motion of the human ankle joint.
Sancisi, Nicola; Baldisserri, Benedetta; Parenti-Castelli, Vincenzo; Belvedere, Claudio; Leardini, Alberto
2014-04-01
Mathematical modelling of mobility at the human ankle joint is essential for prosthetics and orthotic design. The scope of this study is to show that the ankle joint passive motion can be represented by a one-degree-of-freedom spherical motion. Moreover, this motion is modelled by a one-degree-of-freedom spherical parallel mechanism model, and the optimal pivot-point position is determined. Passive motion and anatomical data were taken from in vitro experiments in nine lower limb specimens. For each of these, a spherical mechanism, including the tibiofibular and talocalcaneal segments connected by a spherical pair and by the calcaneofibular and tibiocalcaneal ligament links, was defined from the corresponding experimental kinematics and geometry. An iterative procedure was used to optimize the geometry of the model, able to predict original experimental motion. The results of the simulations showed a good replication of the original natural motion, despite the numerous model assumptions and simplifications, with mean differences between experiments and predictions smaller than 1.3 mm (average 0.33 mm) for the three joint position components and smaller than 0.7° (average 0.32°) for the two out-of-sagittal plane rotations, once plotted versus the full flexion arc. The relevant pivot-point position after model optimization was found within the tibial mortise, but not exactly in a central location. The present combined experimental and modelling analysis of passive motion at the human ankle joint shows that a one degree-of-freedom spherical mechanism predicts well what is observed in real joints, although its computational complexity is comparable to the standard hinge joint model.
Aoyagi, Daisuke; Ichinose, Wade E; Harkema, Susan J; Reinkensmeyer, David J; Bobrow, James E
2007-09-01
Locomotor training using body weight support on a treadmill and manual assistance is a promising rehabilitation technique following neurological injuries, such as spinal cord injury (SCI) and stroke. Previous robots that automate this technique impose constraints on naturalistic walking due to their kinematic structure, and are typically operated in a stiff mode, limiting the ability of the patient or human trainer to influence the stepping pattern. We developed a pneumatic gait training robot that allows for a full range of natural motion of the legs and pelvis during treadmill walking, and provides compliant assistance. However, we observed an unexpected consequence of the device's compliance: unimpaired and SCI individuals invariably began walking out-of-phase with the device. Thus, the robot perturbed rather than assisted stepping. To address this problem, we developed a novel algorithm that synchronizes the device in real-time to the actual motion of the individual by sensing the state error and adjusting the replay timing to reduce this error. This paper describes data from experiments with individuals with SCI that demonstrate the effectiveness of the synchronization algorithm, and the potential of the device for relieving the trainers of strenuous work while maintaining naturalistic stepping.
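One way to read the synchronization idea is as a first-order phase-locked loop: advance the replay phase at its nominal rate plus a correction proportional to the wrapped phase error to the subject. The sketch below is an assumption-laden toy (the gains, rates and scenario are invented), not the device's controller:

```python
def wrapped_error(target, phase):
    """Phase error in [-0.5, 0.5) cycles, handling wrap-around."""
    return ((target - phase + 0.5) % 1.0) - 0.5

def sync_step(phase, subject_phase, nominal_rate, gain, dt):
    """Advance the replay phase at the nominal rate plus a correction
    proportional to the wrapped error (first-order phase-locked loop)."""
    err = wrapped_error(subject_phase, phase)
    return (phase + (nominal_rate + gain * err) * dt) % 1.0

# Hypothetical scenario: subject steps at 0.9 Hz, device nominally replays at 1.0 Hz
dt, gain, nominal = 0.01, 2.0, 1.0
phase = 0.3                                    # device starts out of phase
for n in range(3000):
    subject = (0.9 * n * dt) % 1.0             # sensed gait phase of the subject
    phase = sync_step(phase, subject, nominal, gain, dt)
final_err = wrapped_error((0.9 * 3000 * dt) % 1.0, phase)
```

The loop locks with a small constant lag proportional to the rate mismatch divided by the gain (about −0.05 cycles here), which is the generic trade-off of a first-order phase controller; the actual device senses full state error, not just a scalar phase.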
Comparison of different models of motion in a crowded environment: a Monte Carlo study.
Polanowski, P; Sikorski, A
2017-02-22
In this paper we investigate the motion of molecules in crowded environments for two dramatically different types of molecular transport. The first type is realized by the dynamic lattice liquid model, which is based on a cooperative movement concept and thus, the motion of molecules is highly correlated. The second one corresponds to a so-called motion of a single agent where the motion of molecules is considered as a random walk without any correlation with other moving elements. The crowded environments are modeled as a two-dimensional triangular lattice with fixed impenetrable obstacles. Our simulation results indicate that the type of transport has an impact on the dynamics of the system, the percolation threshold, critical exponents, and on molecules' trajectories.
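The single-agent transport mode can be sketched as independent random walkers on a lattice with fixed impenetrable obstacles, where blocked moves are simply rejected. The sketch below uses a square lattice rather than the paper's triangular one and omits the cooperative dynamic-lattice-liquid dynamics entirely, so it illustrates the mechanism, not the reported results:

```python
import numpy as np

def random_walk_msd(obstacle_frac=0.2, L=64, n_walkers=200, steps=400, seed=3):
    """Mean-squared displacement of independent walkers on a periodic L x L
    square lattice with randomly placed impenetrable obstacles."""
    rng = np.random.default_rng(seed)
    blocked = rng.random((L, L)) < obstacle_frac
    free = np.argwhere(~blocked)
    start = free[rng.integers(0, len(free), n_walkers)]  # walkers start on free sites
    pos = start.astype(float)                            # unwrapped positions for MSD
    cell = start.copy()                                  # wrapped lattice cells
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for _ in range(steps):
        step = moves[rng.integers(0, 4, n_walkers)]
        new_cell = (cell + step) % L
        ok = ~blocked[new_cell[:, 0], new_cell[:, 1]]    # reject moves into obstacles
        cell[ok] = new_cell[ok]
        pos[ok] += step[ok]
    return np.mean(np.sum((pos - start) ** 2, axis=1))

msd_free = random_walk_msd(obstacle_frac=0.0)
msd_crowded = random_walk_msd(obstacle_frac=0.3)
```

Even this crude version shows the qualitative effect the paper quantifies: crowding suppresses the mean-squared displacement, and the suppression depends on how the transport is modelled.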
Modelling large motion events in fMRI studies of patients with epilepsy
DEFF Research Database (Denmark)
Lemieux, Louis; Salek-Haddadi, Afraim; Lund, Torben E
2007-01-01
Head motion can lead to severe image degradation and result in false-positive activation, and is usually worse in patients than in healthy subjects. We performed general linear model fMRI data analysis on simultaneous EEG-fMRI data acquired in 34 cases with focal epilepsy. Signal changes associated with large inter-scan motion events (head jerks) were modelled using modified design matrices that include 'scan nulling' regressors. We evaluated the efficacy of this approach by mapping the proportion of the brain for which F-tests across the additional regressors were significant. In 95% of cases, there was a significant effect of motion in 50% of the brain or greater; for the scan nulling effect, the proportion was 36%; this effect was predominantly in the neocortex. We conclude that careful consideration of motion-related effects in fMRI studies of patients with epilepsy is essential...
Harada, Y.; Wessel, P.; Sterling, A.; Kroenke, L.
2002-12-01
Inter-hotspot motion within the Pacific plate is one of the most controversial issues in recent geophysical studies. However, it is a fact that many geophysical and geological data including ages and positions of seamount chains in the Pacific plate can largely be explained by a simple model of absolute motion derived from assumptions of rigid plates and fixed hotspots. Therefore we take the stand that if a model of plate motion can explain the ages and positions of Pacific hotspot tracks, inter-hotspot motion would not be justified. On the other hand, if any discrepancies between the model and observations are found, the inter-hotspot motion may then be estimated from these discrepancies. To make an accurate model of the absolute motion of the Pacific plate, we combined two different approaches: the polygonal finite rotation method (PFRM) by Harada and Hamano (2000) and the hot-spotting technique developed by Wessel and Kroenke (1997). The PFRM can determine accurate positions of finite rotation poles for the Pacific plate if the present positions of hotspots are known. On the other hand, the hot-spotting technique can predict present positions of hotspots if the absolute plate motion is given. Therefore we can undertake iterative calculations using the two methods. This hybrid method enables us to determine accurate finite rotation poles for the Pacific plate solely from geometry of Hawaii, Louisville and Easter(Crough)-Line hotspot tracks from around 70 Ma to present. Information of ages can be independently assigned to the model after the poles and rotation angles are determined. We did not detect any inter-hotspot motion from the geometry of these Pacific hotspot tracks using this method. The Ar-Ar ages of Pacific seamounts including new age data of ODP Leg 197 are used to test the newly determined model of the Pacific plate motion. The ages of Hawaii, Louisville, Easter(Crough)-Line, and Cobb hotspot tracks are quite consistent with each other from 70 Ma to
Algorithms for a parallel implementation of Hidden Markov Models with a small state space
DEFF Research Database (Denmark)
Nielsen, Jesper; Sand, Andreas
2011-01-01
Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces...
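The linear-algebra formulation of the forward algorithm is easy to state: with transition matrix A, emission matrix B and initial distribution π, the forward vector updates as αₜ = diag(B[:, oₜ]) Aᵀ αₜ₋₁, which exposes matrix products that parallelize naturally. A small sketch (a toy 2-state HMM with assumed numbers) checking it against an explicit-loop version:

```python
import numpy as np

# Toy 2-state HMM: A[i, j] = P(j | i) transitions, B[i, o] = P(o | i) emissions
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])

def forward_linear_algebra(obs):
    """Forward algorithm as a chain of matrix-vector products:
    alpha_t = B[:, o_t] * (A.T @ alpha_{t-1}); the likelihood is sum(alpha_T)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = B[:, o] * (A.T @ alpha)
    return alpha.sum()

def forward_loops(obs):
    """Reference implementation with explicit loops over states."""
    n = A.shape[0]
    alpha = [pi[i] * B[i, obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [B[j, o] * sum(alpha[i] * A[i, j] for i in range(n)) for j in range(n)]
    return sum(alpha)

obs = [0, 1, 0, 0, 1]
```

With a small state space, as in the paper's setting, the per-step matrices are tiny and the win comes from restructuring the recursion so that the matrix products themselves can be evaluated in parallel; the Viterbi recursion has the same shape with max-product in place of sum-product.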
Hydrological excitation of polar motion by different variables from the GLDAS models
Winska, Malgorzata; Nastula, Jolanta; Salstein, David
2017-12-01
Continental hydrological loading by land water, snow and ice is a process that is important for the full understanding of the excitation of polar motion. In this study, we compute different estimations of hydrological excitation functions of polar motion (as hydrological angular momentum, HAM) using various variables from the Global Land Data Assimilation System (GLDAS) models of the land-based hydrosphere. The main aim of this study is to show the influence of variables from different hydrological processes including evapotranspiration, runoff, snowmelt and soil moisture, on polar motion excitations at annual and short-term timescales. Hydrological excitation functions of polar motion are determined using selected variables of these GLDAS realizations. Furthermore, we use time-variable gravity field solutions from the Gravity Recovery and Climate Experiment (GRACE) to determine the hydrological mass effects on polar motion excitation. We first conduct an intercomparison of the maps of variations of regional hydrological excitation functions, timing and phase diagrams of different regional and global HAMs. Next, we estimate the hydrological signal in geodetically observed polar motion excitation as a residual by subtracting the contributions of atmospheric angular momentum and oceanic angular momentum. Finally, the hydrological excitations are compared with those hydrological signals determined from residuals of the observed polar motion excitation series. The results will help us understand the relative importance of polar motion excitation within the individual hydrological processes, based on hydrological modeling. This method will allow us to estimate how well the polar motion excitation budget in the seasonal and inter-annual spectral ranges can be closed.
Prompt form of relativistic equations of motion in a model of singular lagrangian formalism
International Nuclear Information System (INIS)
Gajda, R.P.; Duviryak, A.A.; Klyuchkovskij, Yu.B.
1983-01-01
The purpose of the paper is to develop the transition from equations of motion in the singular Lagrangian formalism to three-dimensional equations of Newton type in the prompt form of dynamics, in the framework of a c⁻² parameter expansion (so-called quasirelativistic approaches), as well as to find the corresponding integrals of motion. The first quasirelativistic approach for the Dominici-Gomis-Longhi model was obtained and investigated
Genome Scale Modeling in Systems Biology: Algorithms and Resources
Najafi, Ali; Bidkhori, Gholamreza; Bozorgmehr, Joseph H.; Koch, Ina; Masoudi-Nejad, Ali
2014-01-01
In recent years, in silico studies and trial simulations have complemented experimental procedures. A model is a description of a system, and a system is any collection of interrelated objects; an object, moreover, is some elemental unit upon which observations can be made but whose internal structure either does not exist or is ignored. Therefore, a network analysis approach is critical for successful quantitative modeling of biological systems. This review highlights some of the most popular and important modeling algorithms, tools, and emerging standards for representing, simulating and analyzing cellular networks, in five sections. We also try to illustrate these concepts by means of simple examples and appropriate images and graphs. Overall, systems biology aims for a holistic description and understanding of biological processes by an integration of analytical experimental approaches along with synthetic computational models. In fact, biological networks have been developed as a platform for integrating information from high- to low-throughput experiments for the analysis of biological systems. We provide an overview of all processes used in modeling and simulating biological networks in such a way that they can become easily understandable for researchers with both biological and mathematical backgrounds. Consequently, given the complexity of the generated experimental data and cellular networks, it is no surprise that researchers have turned to computer simulation and the development of more theory-based approaches to augment and assist in the development of a fully quantitative understanding of cellular dynamics. PMID:24822031
Akiyama, S.; Kawaji, K.; Fujihara, S.
2013-12-01
Since fault fracturing due to an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate the ground motion and the tsunami with a single fault model. However, separate source models are used in ground motion simulation and tsunami simulation, because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located at the shallower part of the fault area near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated by a single source model. Therefore, we examine the possibility of tsunami prediction using a fault model estimated from seismic observation records. In this study, we try to carry out a tsunami simulation using the displacement field of oceanic crustal movements, which is calculated from a ground motion simulation of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on the teleseismic body wave and on the strong ground motion records, respectively. Although the fault models share this common feature, the amount of slip near the Japan Trench is larger in the fault model from the strong ground motion records than in that from the teleseismic body wave. First, large-scale ground motion simulations applying those fault models, using the voxel-type finite element method, are performed for the whole of eastern Japan. The synthetic waveforms computed from the simulations are generally consistent with the observation records of the K-NET (Kinoshita (1998)) and KiK-net stations (Aoi et al. (2000)), deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED). Next, the tsunami simulations are performed by the finite
Estimation of Motion Vector Fields
DEFF Research Database (Denmark)
Larsen, Rasmus
1993-01-01
This paper presents an approach to the estimation of 2-D motion vector fields from time varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample...... fields by means of stochastic relaxation implemented via the Gibbs sampler....
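A minimal 1-D analogue of this MAP-by-annealing scheme can be sketched with a Gibbs sampler over an Ising-style binary field; the model, parameters, cooling schedule and data below are illustrative assumptions, not the paper's coupled vector/binary 2-D formulation:

```python
import math, random

# Toy 1-D analogue (illustrative, not the paper's model): MAP restoration of
# a binary line process from noisy observations using simulated annealing
# with Gibbs updates under an Ising-style smoothness prior.

random.seed(1)
truth = [0] * 10 + [1] * 10
data = [b if random.random() > 0.2 else 1 - b for b in truth]   # ~20% flips

def energy_at(x, i, v, lam=1.0, beta=2.0):
    # data-fidelity term plus smoothness term with the two neighbours
    e = lam * (v != data[i])
    for j in (i - 1, i + 1):
        if 0 <= j < len(x):
            e += beta * (v != x[j])
    return e

x = data[:]                      # initialise at the observations
T = 4.0
for sweep in range(200):         # annealing: slowly lower the temperature
    for i in range(len(x)):
        # Gibbs update: sample x[i] from its local conditional at temperature T
        e0, e1 = energy_at(x, i, 0), energy_at(x, i, 1)
        p1 = 1.0 / (1.0 + math.exp((e1 - e0) / T))
        x[i] = 1 if random.random() < p1 else 0
    T *= 0.97
print(x)
```

As T falls, the sampler freezes into a low-energy (near-MAP) configuration: isolated noise flips are removed because the smoothness penalty beta outweighs the data term lam.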
Directory of Open Access Journals (Sweden)
W. H. Kwong
2000-06-01
Full Text Available A new simplified model predictive control algorithm is proposed in this work. The algorithm is developed within the framework of internal model control, and it is easy to understand and implement. Simulation results for a continuous fermenter, which show that the proposed control algorithm is robust to moderate variations in plant parameters, are presented. The algorithm shows good performance for setpoint tracking.
DEFF Research Database (Denmark)
Pedersen, Henrik; Ólafsdóttir, Hildur; Larsen, Rasmus
2010-01-01
The clinical potential of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is currently limited by respiratory induced motion of the heart. This paper presents a unifying model of perfusion and motion in which respiratory motion becomes an integral part of myocardial perfusion...... quantification. Hence, the need for tedious manual motion correction prior to perfusion quantification is avoided. In addition, we demonstrate that the proposed framework facilitates the process of reconstructing DCE-MRI from sparsely sampled data in the presence of respiratory motion. The paper focuses primarily...... on the underlying theory of the proposed framework, but shows in vivo results of respiratory motion correction and simulation results of reconstructing sparsely sampled data....
Variable selection in Logistic regression model with genetic algorithm.
Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi
2018-02-01
Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision-making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is widely used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, as is the case with today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
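The GA loop for variable selection can be sketched as follows. This is a self-contained toy, not the tutorial's R code: the fitness function below is a stand-in that rewards a known informative subset, whereas a real application would fit a logistic regression on the selected columns and score it by, e.g., -AIC.

```python
import random

# Toy GA for variable selection. Chromosomes are bitmasks over candidate
# variables; fitness here is an illustrative stand-in for a fitted model's score.

random.seed(0)
N_VARS = 10
TRUE_SUBSET = {0, 3, 7}          # illustrative "informative" variables

def fitness(mask):
    chosen = {i for i, b in enumerate(mask) if b}
    # reward overlap with the informative set, penalize model size
    return 2 * len(chosen & TRUE_SUBSET) - 0.5 * len(chosen)

def crossover(a, b):
    cut = random.randrange(1, N_VARS)       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [1 - b if random.random() < rate else b for b in mask]

pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # truncation selection with elitism
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(sorted(i for i, b in enumerate(best) if b))
```

Elitism (keeping the parents) makes the best fitness non-decreasing across generations, which mirrors the tutorial's point that GA explores the covariate space far more cheaply than exhaustive best-subset search.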
Ripple-Spreading Network Model Optimization by Genetic Algorithm
Directory of Open Access Journals (Sweden)
Xiao-Bing Hu
2013-01-01
Full Text Available Small-world and scale-free properties are widely acknowledged in many real-world complex network systems, and many network models have been developed to capture these network properties. The ripple-spreading network model (RSNM) is a newly reported complex network model, which is inspired by the natural ripple-spreading phenomenon on a calm water surface. The RSNM exhibits good potential for describing both spatial and temporal features in the development of many real-world networks where the influence of a few local events spreads out through nodes and then largely determines the final network topology. However, the relationships between ripple-spreading related parameters (RSRPs) of the RSNM and small-world and scale-free topologies are not as obvious or straightforward as in many other network models. This paper attempts to apply a genetic algorithm (GA) to tune the values of RSRPs, so that the RSNM may generate these two most important network topologies. The study demonstrates that, once RSRPs are properly tuned by GA, the RSNM is capable of generating both network topologies and therefore has great flexibility for studying many real-world complex network systems.
Ab initio modeling of the motional Stark effect on MAST
International Nuclear Information System (INIS)
De Bock, M. F. M.; Conway, N. J.; Walsh, M. J.; Carolan, P. G.; Hawkes, N. C.
2008-01-01
A multichord motional Stark effect (MSE) system has recently been built on the MAST tokamak. In MAST the π and σ lines of the MSE spectrum overlap due to the low magnetic field typical for present day spherical tokamaks. Also, the field curvature results in a large change in the pitch angle over the observation volume. The measured polarization angle does not relate to one local pitch angle but to an integration over all pitch angles in the observation volume. The velocity distribution of the neutral beam further complicates the measurement. To take into account volume effects and velocity distribution, an ab initio code was written that simulates the MSE spectrum on MAST. The code is modular and can easily be adjusted for other tokamaks. The code returns the intensity, polarized fraction, and polarization angle as a function of wavelength. Results of the code are presented, showing the effect on depolarization and wavelength dependence of the polarization angle. The code is used to optimize the design and calibration of the MSE diagnostic.
Mathematical models and algorithms for the computer program 'WOLF'
International Nuclear Information System (INIS)
Halbach, K.
1975-12-01
The computer program FLOW finds the nonrelativistic self-consistent set of two-dimensional ion trajectories and electric fields (including space charges from ions and electrons) for a given set of initial and boundary conditions for the particles and fields. The combination of FLOW with the optimization code PISA gives the program WOLF, which finds the shape of the emitter which is consistent with the plasma forming it, and in addition varies physical characteristics such as electrode position, shapes, and potentials so that some performance characteristics are optimized. The motivation for developing these programs was the desire to design optimum ion source extractor/accelerator systems in a systematic fashion. The purpose of this report is to explain and derive the mathematical models and algorithms which approximate the real physical processes. It serves primarily to document the computer programs. 10 figures
Model-based fault detection algorithm for photovoltaic system monitoring
Harrou, Fouzi
2018-02-12
Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a statistical approach. Specifically, a simulation model that mimics the theoretical performances of the inspected PV system is designed. Residuals, which are the difference between the measured and estimated output data, are used as a fault indicator. Indeed, residuals are used as the input for the Multivariate CUmulative SUM (MCUSUM) algorithm to detect potential faults. We evaluated the proposed method by using data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
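The residual-monitoring idea can be sketched with a one-sided univariate CUSUM, a simplified stand-in for the paper's multivariate MCUSUM; the allowance, threshold and residual values below are illustrative:

```python
# Univariate one-sided CUSUM on residuals (a simplified stand-in for the
# paper's MCUSUM; k, h and the residual stream are illustrative).

def cusum(residuals, k=0.5, h=4.0):
    """Return indices where the one-sided CUSUM statistic exceeds h.

    k: allowance (roughly half the mean shift, in sigma units, to detect)
    h: decision threshold
    """
    s, alarms = 0.0, []
    for t, r in enumerate(residuals):
        s = max(0.0, s + r - k)      # accumulate drift above the allowance
        if s > h:
            alarms.append(t)
    return alarms

# In-control residuals near 0, then a sustained fault-like shift of +1.5
residuals = [0.1, -0.2, 0.0, 0.2, -0.1] + [1.5] * 8
print(cusum(residuals))
```

Because the statistic accumulates small persistent deviations, the chart flags the sustained shift a few samples after its onset while ignoring the in-control noise at the start.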
A Linear Algorithm for Black Scholes Economic Model
Directory of Open Access Journals (Sweden)
Dumitru FANACHE
2008-01-01
Full Text Available The pricing of options is a very important problem encountered in the financial domain. The famous Black-Scholes model provides an explicit closed-form solution for the values of certain (European style) call and put options. But for many other options, either there is no closed-form solution, or, if such closed-form solutions exist, the formulas exhibiting them are complicated and difficult to evaluate accurately by conventional methods. The aim of this paper is to study the possibility of obtaining the numerical solution of the Black-Scholes equation in parallel, by means of several processors, using the finite difference method. A comparison between the complexity of the parallel algorithm and that of the serial one is given.
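A serial sketch of the explicit finite-difference scheme for the Black-Scholes PDE, checked against the closed-form formula; the grid sizes and option parameters are illustrative, and the explicit scheme needs a small time step for stability:

```python
import math

# Explicit finite differences for a European call under Black-Scholes,
# marching backward from the payoff, with the closed-form price as a check.

def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def fd_call(K, T, r, sigma, S_max=200.0, M=200, N_t=2000):
    dS, dt = S_max / M, T / N_t
    V = [max(i * dS - K, 0.0) for i in range(M + 1)]    # payoff at maturity
    for n in range(N_t):                                # march back to t = 0
        new = [0.0] * (M + 1)
        for i in range(1, M):
            S = i * dS
            delta = (V[i + 1] - V[i - 1]) / (2 * dS)
            gamma = (V[i + 1] - 2 * V[i] + V[i - 1]) / dS**2
            new[i] = V[i] + dt * (0.5 * sigma**2 * S**2 * gamma
                                  + r * S * delta - r * V[i])
        new[0] = 0.0
        new[M] = S_max - K * math.exp(-r * (n + 1) * dt)  # deep in-the-money
        V = new
    return V[M // 2]                                      # value at S = S_max/2

price_fd = fd_call(K=100.0, T=1.0, r=0.05, sigma=0.2)
price_cf = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
print(price_fd, price_cf)
```

The inner loop over grid nodes is independent at each time level, which is exactly where a parallel implementation splits the work across processors.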
Lee, Suk-Jun; Yu, Seung-Man
2017-08-01
The purpose of this study was to evaluate the usefulness and clinical applicability of MultiVaneXD, which applies an iterative motion-correction reconstruction algorithm to T2-weighted images, compared with MultiVane images taken with a 3T MRI. A total of 20 patients with suspected pathologies of the liver and pancreatic-biliary system based on clinical and laboratory findings underwent upper abdominal MRI, acquired using the MultiVane and MultiVaneXD techniques. Two reviewers analyzed the MultiVane and MultiVaneXD T2-weighted images qualitatively and quantitatively. Each reviewer evaluated vessel conspicuity by observing motion artifacts and the sharpness of the portal vein, hepatic vein, and upper organs. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated by one reviewer for quantitative analysis. The interclass correlation coefficient was evaluated to measure inter-observer reliability. There were significant differences between MultiVane and MultiVaneXD in motion artifact evaluation. Furthermore, MultiVane was given a better score than MultiVaneXD in abdominal organ sharpness and vessel conspicuity, but the difference was insignificant. The reliability coefficient values were over 0.8 in every evaluation. MultiVaneXD (2.12) showed a higher value than did MultiVane (1.98), but the difference was insignificant (p = 0.135). MultiVaneXD is a motion-correction method that is more advanced than MultiVane, and it produced an increased SNR, resulting in a greater ability to detect focal abdominal lesions.
Time series modeling and forecasting using memetic algorithms for regime-switching models.
Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel
2012-11-01
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.
Directory of Open Access Journals (Sweden)
Gonglin Yuan
Full Text Available Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: 1) βk ≥ 0; 2) the search direction has the trust region property without the use of any line search method; 3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: 1) βk ≥ 0; 2) the search direction has the trust region property without the use of any line search method; 3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
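The clipped-beta rule behind property 1) (βk = max(0, β_PRP), often called PRP+) can be sketched as follows. Note that this sketch uses an Armijo line search and a steepest-descent restart safeguard on an illustrative quadratic test function, whereas the paper's methods are constructed to achieve descent without any line search:

```python
# PRP+ nonlinear conjugate gradient sketch: beta_k = max(0, beta_PRP).
# Test function, line search and tolerances are illustrative.

def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def grad(x):
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]

def prp_plus(x, iters=100):
    g = grad(x)
    d = [-gi for gi in g]                    # start with steepest descent
    for _ in range(iters):
        # Armijo backtracking line search along the descent direction d
        dd = sum(gi * di for gi, di in zip(g, d))
        t = 1.0
        while f([x[0] + t * d[0], x[1] + t * d[1]]) > f(x) + 1e-4 * t * dd:
            t *= 0.5
        x = [x[0] + t * d[0], x[1] + t * d[1]]
        g_new = grad(x)
        if sum(gi * gi for gi in g_new) < 1e-16:
            break
        # Polak-Ribiere-Polyak beta, clipped at zero (the "PRP+" rule)
        beta = sum(gn * (gn - go) for gn, go in zip(g_new, g)) \
               / sum(go * go for go in g)
        beta = max(0.0, beta)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        if sum(gn * di for gn, di in zip(g_new, d)) >= 0.0:
            d = [-gn for gn in g_new]        # safeguard: restart with -gradient
        g = g_new
    return x

x_star = prp_plus([0.0, 0.0])
print(x_star)
```

The clipping at zero prevents the direction from inheriting a "bad" negative beta after a large gradient change, which is the key ingredient behind the global convergence results cited in the abstract.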
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
Ulbrich, Norbert Manfred
2013-01-01
A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.
The finite-difference and finite-element modeling of seismic wave propagation and earthquake motion
International Nuclear Information System (INIS)
Moszo, P.; Kristek, J.; Galis, M.; Pazak, P.; Balazovijech, M.
2006-01-01
Numerical modeling of seismic wave propagation and earthquake motion is an irreplaceable tool in the investigation of the Earth's structure, processes in the Earth, and particularly earthquake phenomena. Among various numerical methods, the finite-difference method is the dominant method in the modeling of earthquake motion. Moreover, it is becoming increasingly important in seismic exploration and structural modeling. At the same time, we are convinced that the best days of the finite-difference method in seismology lie in the future. This monograph provides a tutorial and detailed introduction to the application of the finite-difference, finite-element, and hybrid finite-difference-finite-element methods to the modeling of seismic wave propagation and earthquake motion. The text does not cover all topics and aspects of the methods. We focus on those to which we have contributed. (Author)
Liang, Lihua; Yuan, Jia; Zhang, Songtao; Zhao, Peng
2018-01-01
This work presents an optimal linear quadratic regulator (LQR) based on a genetic algorithm (GA) to solve the two-degrees-of-freedom (2 DoF) motion control problem in head seas for wave piercing catamarans (WPC). The proposed GA-based LQR control strategy selects optimal weighting matrices (Q and R). Seakeeping control of a WPC is challenging because it is a multi-input multi-output (MIMO) system with uncertain coefficients. Besides the kinematic constraints of the WPC, external conditions must be considered, such as sea disturbances and the control of the actuators (a T-foil and two flaps). Moreover, this paper describes the MATLAB and LabVIEW software platforms used to simulate the motion-reduction effects on the WPC. Finally, the real-time (RT) NI CompactRIO embedded controller is selected to test the effectiveness of the actuators based on the proposed techniques. In conclusion, simulation and experimental results prove the correctness of the proposed algorithm. The heave and pitch reductions are more than 18% at different high speeds and in rough sea conditions. The results also verify the feasibility of the NI CompactRIO embedded controller.
Ranking of several ground-motion models for seismic hazard analysis in Iran
International Nuclear Information System (INIS)
Ghasemi, H; Zare, M; Fukushima, Y
2008-01-01
In this study, six attenuation relationships are classified with respect to the ranking scheme proposed by Scherbaum et al (2004 Bull. Seismol. Soc. Am. 94 1–22). First, the strong motions recorded during the 2002 Avaj, 2003 Bam, 2004 Kojour and 2006 Silakhor earthquakes are consistently processed. Then the normalized residual sets are determined for each selected ground-motion model, considering the strong-motion records chosen. The main advantage of these records is that corresponding information about the causative fault plane has been well studied for the selected events. Such information is used to estimate several control parameters which are essential inputs for attenuation relations. The selected relations (Zare et al (1999 Soil Dyn. Earthq. Eng. 18 101–23); Fukushima et al (2003 J. Earthq. Eng. 7 573–98); Sinaeian (2006 PhD Thesis International Institute of Earthquake Engineering and Seismology, Tehran, Iran); Boore and Atkinson (2007 PEER, Report 2007/01); Campbell and Bozorgnia (2007 PEER, Report 2007/02); and Chiou and Youngs (2006 PEER Interim Report for USGS Review)) have been deemed suitable for predicting peak ground-motion amplitudes in the Iranian plateau. Several graphical techniques and goodness-of-fit measures are also applied for statistical distribution analysis of the normalized residual sets. Such analysis reveals ground-motion models, developed using Iranian strong-motion records as the most appropriate ones in the Iranian context. The results of the present study are applicable in seismic hazard assessment projects in Iran
Modeling and measurement of the motion of the DIII-D vacuum vessel during vertical instabilities
International Nuclear Information System (INIS)
Reis, E.; Blevins, R.D.; Jensen, T.H.; Luxon, J.L.; Petersen, P.I.; Strait, E.J.
1991-11-01
The motions of the DIII-D vacuum vessel during vertical instabilities of elongated plasmas have been measured and studied over the past five years. The currents flowing in the vessel wall and the plasma scrape-off layer were also measured and correlated to a physics model. These results provide a time-history load distribution on the vessel, which was input to a dynamic analysis for correlation with the measured motions. The structural model of the vessel, using the loads developed from the measured vessel currents, showed that the calculated displacement history correlated well with the measured values. The dynamic analysis provides a good estimate of the stresses and the maximum allowable deflection of the vessel. In addition, the vessel motions produce acoustic emissions at 21 Hertz that are sufficiently loud to be felt as well as heard by the DIII-D operators. Time-history measurements of the sounds were correlated to the vessel displacements. An analytical model of an oscillating sphere provided a reasonable correlation to the amplitude of the measured sounds. The correlation of the theoretical and measured vessel currents, the dynamic measurements and analysis, and the acoustic measurements and analysis show that: (1) The physics model can predict vessel forces for selected values of plasma resistivity. The model also predicts poloidal and toroidal wall currents which agree with measured values; (2) The force-time history from the above model, used in conjunction with an axisymmetric structural model of the vessel, predicts vessel motions which agree well with measured values; (3) The above results, input to a simple acoustic model, predict the magnitude of sounds emitted from the vessel during disruptions, which agrees with acoustic measurements; (4) Correlation of measured vessel motions with structural analysis shows that a maximum vertical motion of the vessel up to 0.24 in will not overstress the vessel or its supports. 11 refs., 10 figs., 1 tab
Directory of Open Access Journals (Sweden)
Yiliang Zeng
Full Text Available Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. From the measured width and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approaches based on the blind deconvolution method and the Lucy-Richardson method, our method can greatly restore motion-blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.
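One plausible reading of the GMG figure of merit can be sketched on toy data: the mean squared gradient of an image, with restoration quality scored as the ratio between the restored and degraded images, so that a smeared edge scores lower. The exact definition in the paper may differ; this is an assumption for illustration.

```python
# Sketch of a gradient-based sharpness score (an assumed reading of the
# abstract's "gray mean grads"); images are toy lists of gray levels.

def gmg(img):
    """Mean squared gradient of a grayscale image (forward differences)."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            total += gx * gx + gy * gy
            count += 1
    return total / count

sharp = [[0, 0, 255, 255]] * 4            # hard vertical edge
blurred = [[0, 128, 128, 255]] * 4        # the same edge, smeared

print(gmg(sharp) / gmg(blurred))          # ratio > 1: the sharp image wins
```

Squaring the gradients is what makes one strong edge score higher than the same contrast spread over several weak steps, so a successful deblurring raises the ratio.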
Optimized combination model and algorithm of parking guidance information configuration
Directory of Open Access Journals (Sweden)
Tian Ye
2011-01-01
Full Text Available Operators of parking guidance and information (PGI) systems often have difficulty in providing the best car park availability information to drivers in periods of high demand. A new PGI configuration model based on the optimized combination method was proposed by analyzing parking choice behavior. This article first describes a parking choice behavioral model incorporating drivers' perceptions of waiting times at car parks based on PGI signs. This model was used to predict the influence of PGI signs on the overall performance of the traffic system. Then, relationships were developed for estimating the arrival rates at car parks based on driver characteristics, car park attributes, and the car park availability information displayed on PGI signs. A mathematical program was formulated to determine the optimal PGI sign display configuration to minimize total travel time. A genetic algorithm was used to identify solutions that significantly reduced queue lengths and total travel time compared with existing practices. These procedures were applied to an existing PGI system operating in Deqing Town and Xiuning City. Significant reductions in the total travel time of parking vehicles were achieved with the PGI system configured accordingly. This would reduce traffic congestion and lead to various environmental benefits.
The application of the sinusoidal model to lung cancer patient respiratory motion
International Nuclear Information System (INIS)
George, R.; Vedam, S.S.; Chung, T.D.; Ramakrishnan, V.; Keall, P.J.
2005-01-01
Accurate modeling of the respiratory cycle is important to account for the effect of organ motion on dose calculation for lung cancer patients. The aim of this study is to evaluate the accuracy of a respiratory model for lung cancer patients. Lujan et al. [Med. Phys. 26(5), 715-720 (1999)] proposed a model, which became widely used, to describe organ motion due to respiration. This model assumes that the parameters do not vary between and within breathing cycles. In this study, first, the correlation of respiratory motion traces with the model f(t) as a function of the parameter n (n = 1, 2, 3) was undertaken for each breathing cycle from 331 four-minute respiratory traces acquired from 24 lung cancer patients using three breathing types: free breathing, audio instruction, and audio-visual biofeedback. Because cos² and cos⁴ had similar correlation coefficients, and cos² and cos¹ have a trigonometric relationship, for simplicity, the cos¹ value was consequently used for further analysis, in which the variations in mean position (z₀), amplitude of motion (b) and period (τ) with and without biofeedback or instructions were investigated. For all breathing types, the parameter values, mean position (z₀), amplitude of motion (b), and period (τ) exhibited significant cycle-to-cycle variations. Audio-visual biofeedback showed the least variations for all three parameters (z₀, b, and τ). It was found that mean position (z₀) could be approximated with a normal distribution, and the amplitude of motion (b) and period (τ) could be approximated with log-normal distributions. The overall probability density function (pdf) of f(t) for each of the three breathing types was fitted with three models: normal, bimodal, and the pdf of a simple harmonic oscillator. It was found that the normal and the bimodal models represented the overall respiratory motion pdfs with correlation values from 0.95 to 0.99, whereas the range of the simple harmonic oscillator pdf correlation
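The Lujan-type breathing model discussed in the abstract, f(t) = z₀ − b·cos^(2n)(πt/τ − φ), can be sampled directly; the parameter values and phase below are illustrative, not fitted to any patient trace:

```python
import math

# Lujan-style respiratory motion model: f(t) = z0 - b*cos^(2n)(pi*t/tau - phi).
# The study compares exponents n = 1, 2, 3 (cos^2, cos^4, cos^6) and, for
# simplicity, the cos^1 form. Defaults below are illustrative.

def lujan(t, z0=0.0, b=1.0, tau=4.0, n=1, phi=0.0):
    return z0 - b * math.cos(math.pi * t / tau - phi) ** (2 * n)

# one full breathing cycle (period tau = 4 s) sampled at 0.5 s
trace = [lujan(t * 0.5) for t in range(9)]
print([round(v, 3) for v in trace])
```

Because the cosine is raised to an even power, the period of the waveform is τ (not 2τ), and the trace dwells near exhale (f = z₀ − b) longer than near inhale, which is the asymmetry that made this model popular for respiratory motion.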
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation has not been considered. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either frame difference or a segmentation approach separately. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame difference embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area around a moving object rather than searching the whole frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results demonstrate the validity of the proposed methodology.
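As a hedged illustration of the frame-difference component the abstract describes (the DMM search windows and SUED edge-based dilation are beyond a short sketch), a minimal motion mask from two grayscale frames might look like:

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh=25):
    """Binary motion mask from the absolute frame difference of two
    grayscale uint8 frames. Pixels whose intensity changed by more than
    `thresh` are flagged as potentially moving (threshold is illustrative)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Synthetic 8x8 frames: a bright 2x2 "object" shifts one pixel to the right
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
prev[3:5, 2:4] = 200
curr[3:5, 3:5] = 200
mask = frame_difference_mask(prev, curr)
assert mask[3, 2] and mask[3, 4]   # trailing and leading edges of the motion
```

A DMM-style step would then restrict this differencing to high-intensity search windows rather than the full frame.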
Some Numerical Aspects on Crowd Motion - The Hughes Model
Gomes, Diogo A.; Machado Velho, Roberto
2016-01-01
Here, we study a crowd model proposed by R. Hughes in [5] and we describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an Eikonal equation with Dirichlet or Neumann data. First, we establish a priori
Directory of Open Access Journals (Sweden)
Qiang Zhang
2017-09-01
Full Text Available Course keeping is hard to implement under the condition of the propeller stopping or reversing at slow speed for berthing, because the ship's dynamic motion becomes highly nonlinear. To solve this problem, a practical Maneuvering Modeling Group (MMG) ship mathematical model with propeller-reversing transverse forces and low-speed corrections is first discussed for application to a right-handed single-screw ship. Secondly, a novel PID-based nonlinear feedback algorithm driven by a bipolar sigmoid function is proposed. The PID parameters are determined directly by a closed-loop gain shaping algorithm, and closed-loop gain shaping theory is employed to analyze the effects of this algorithm. Finally, simulation experiments were carried out on an LPG ship. It is shown that the energy consumption and the smoothness performance of the nonlinear feedback control are reduced by 4.2% and 14.6% with satisfactory control effects; the proposed algorithm has the advantages of robustness, energy saving, and safety in berthing practice.
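A minimal sketch of a PID loop whose error passes through a bipolar sigmoid, as the abstract describes; the gains, time step, and sigmoid slope below are placeholders, not the paper's closed-loop gain shaping values.

```python
import math

def bipolar_sigmoid(x, k=1.0):
    """Bipolar sigmoid in (-1, 1): 2/(1 + exp(-k*x)) - 1."""
    return 2.0 / (1.0 + math.exp(-k * x)) - 1.0

class NonlinearPID:
    """PID controller acting on a bipolar-sigmoid-shaped error, which
    saturates large commands smoothly (illustrative gains, not the paper's)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        e = bipolar_sigmoid(error)            # nonlinear feedback of the error
        self.integral += e * self.dt
        deriv = (e - self.prev_err) / self.dt
        self.prev_err = e
        return self.kp * e + self.ki * self.integral + self.kd * deriv

pid = NonlinearPID(kp=1.0, ki=0.0, kd=0.0, dt=0.1)
assert 0.9 < pid.step(10.0) < 1.0   # large error is softly saturated below kp
```

The saturation is the point: a large heading error no longer commands a proportionally large rudder action, which is one plausible source of the reported energy savings.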
GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS
International Nuclear Information System (INIS)
Rogers, Adam; Fiege, Jason D.
2011-01-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
The use of genetic algorithms to model protoplanetary discs
Hetem, Annibal; Gregorio-Hetem, Jane
2007-12-01
The protoplanetary discs of T Tauri and Herbig Ae/Be stars have previously been studied using geometric disc models to fit their spectral energy distribution (SED). The simulations provide a means to reproduce the signatures of various circumstellar structures, which are related to different levels of infrared excess. With the aim of improving our previous model, which assumed a simple flat-disc configuration, we adopt here a reprocessing flared-disc model that assumes hydrostatic, radiative equilibrium. We have developed a method to optimize the parameter estimation based on genetic algorithms (GAs). This paper describes the implementation of the new code, which has been applied to Herbig stars from the Pico dos Dias Survey catalogue, in order to illustrate the quality of the fitting for a variety of SED shapes. The star AB Aur was used as a test of the GA parameter estimation, and demonstrates that the new code reproduces successfully a canonical example of the flared-disc model. The GA method gives a good quality of fit, but the range of input parameters must be chosen with caution, as unrealistic disc parameters can be derived. It is confirmed that the flared-disc model fits the flattened SEDs typical of Herbig stars; however, embedded objects (increasing SED slope) and debris discs (steeply decreasing SED slope) are not well fitted with this configuration. Even considering the limitation of the derived parameters, the automatic process of SED fitting provides an interesting tool for the statistical analysis of the circumstellar luminosity of large samples of young stars.
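The GA-based parameter estimation described above can be illustrated with a toy real-coded genetic algorithm; the operators (tournament-style elitism, blend crossover, Gaussian mutation) and the linear stand-in for an SED model are assumptions for this sketch, not the authors' implementation.

```python
import random

def genetic_fit(objective, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA: keep the best half (elitism), create children
    by averaging two elite parents plus Gaussian mutation, clip to bounds.
    `bounds` is a list of (lo, hi) per parameter; returns the best individual."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=objective)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append([min(max(v, lo), hi)
                             for v, (lo, hi) in zip(child, bounds)])
        pop = elite + children
    return min(pop, key=objective)

# Recover parameters of a toy model y = a*x + b from noiseless samples (a=2, b=1)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
def sse(p):
    a, b = p
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

best = genetic_fit(sse, [(0.0, 5.0), (0.0, 5.0)])
assert sse(best) < 1.0
```

The caution in the abstract applies equally here: with loosely chosen bounds a GA happily returns low-residual but physically unrealistic parameters, so the search range must encode prior knowledge of the disc.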
DESIGN REVIEW OF CAD MODELS USING A NUI LEAP MOTION SENSOR
Directory of Open Access Journals (Sweden)
GÎRBACIA Florin
2015-06-01
Full Text Available Natural User Interfaces (NUI) are a relatively new area of research that aims to develop human-computer interfaces that are natural and intuitive, using voice commands, hand movements, and gesture recognition, similar to communication between people, which also involves body language and gestures. This paper presents a naturally designed workspace that acquires the user's motion using a Leap Motion sensor and visualizes CAD models using a CAVE-like 3D visualisation system. The user can modify complex CAD models using bimanual gesture commands in a 3D virtual environment. The bimanual gestures developed for rotate, pan, zoom, and explode are presented. The conducted experiments establish that the Leap Motion NUI sensor provides an intuitive tool for design review of CAD models, usable even by users with no experience in CAD systems and virtual environments.
Some Numerical Aspects on Crowd Motion - The Hughes Model
Gomes, Diogo A.
2016-01-06
Here, we study a crowd model proposed by R. Hughes in [5] and we describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an Eikonal equation with Dirichlet or Neumann data. First, we establish a priori estimates for the solution. Second, we study radial solutions and identify a shock formation mechanism. Third, we illustrate the existence of congestion, the breakdown of the model, and the trend to the equilibrium. Finally, we propose a new numerical method and consider two numerical examples.
Use of a genetic algorithm in a subchannel model
International Nuclear Information System (INIS)
Alberto Teyssedou; Armando Nava-Dominguez
2005-01-01
Full text of publication follows: The channel of a nuclear reactor contains the fuel bundles, which are made up of fuel elements distributed in a manner that creates a series of interconnected subchannels through which the coolant flows. Subchannel codes are used to determine local flow variables; these codes consider the complex geometry of a nuclear fuel bundle as being divided into simple parallel and interconnected cells called 'subchannels'. Each subchannel is bounded by the solid walls of the fuel rods or by imaginary boundaries placed between adjacent subchannels. In each subchannel the flow is considered one-dimensional; therefore lateral mixing mechanisms between subchannels should be taken into account. These mixing mechanisms are diversion cross-flow, turbulent mixing, turbulent void diffusion, void drift, and buoyancy drift; they are implemented as independent contribution terms in a pseudo-vectorial lateral momentum equation. These mixing terms are calculated with correlations that require the use of empirical coefficients. It has been observed, however, that there is no unique set of coefficients and/or correlations that can be used to predict a complete range of experimental conditions. To avoid this drawback, in this paper a Genetic Algorithm (GA) was coupled to a subchannel model. The use of a GA in conjunction with an appropriate objective function allows the subchannel model to internally determine the optimal values of the coefficients without user intervention. The subchannel model requires two diffusion coefficients, the drift-flux two-phase flow distribution coefficient C₀, and a coefficient used to control the lateral pressure losses. The GA was implemented in order to find the most appropriate values of these four coefficients. Genetic algorithms (GA) are based on the theory of evolution; thus, the GA manipulates a population of individuals (chromosomes) in order to evolve them towards a best adaptation (fitness criterion) to
Comparison analysis for classification algorithm in data mining and the study of model use
Chen, Junde; Zhang, Defu
2018-04-01
As a key technique in data mining, classification algorithms have received extensive attention. Through an experiment with classification algorithms on UCI data sets, we give a comparative analysis method for the different algorithms, using statistical tests. Beyond that, an adaptive diagnosis model for preventing electricity stealing and leakage is given as a specific case in the paper.
A simple and efficient parallel FFT algorithm using the BSP model
Bisseling, R.H.; Inda, M.A.
2000-01-01
In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case
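The radix splitting that parallel FFTs distribute across processors can be illustrated serially; the BSP group-cyclic data distribution itself is not shown here, this is only the underlying even/odd recursion.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two).
    Serial illustration of the even/odd splitting that parallel radix
    algorithms, including BSP variants, partition across processors."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # even-indexed samples
    odd = fft(x[1::2])           # odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# DFT of an impulse is flat: all ones
assert all(abs(v - 1.0) < 1e-12 for v in fft([1.0, 0.0, 0.0, 0.0]))
```

In a BSP formulation, the recursion levels become supersteps and the data redistribution between them is where the communication cost the abstract mentions arises.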
The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models
GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.
2008-01-01
In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.
Evaluation of Mathematical Models for Tankers’ Maneuvering Motions
Directory of Open Access Journals (Sweden)
Erhan AKSU
2017-03-01
Full Text Available In this study, the maneuvering performance of two tanker ships, KVLCC1 and KVLCC2, which have different stern forms, is predicted using a system-based method. Two different 3-DOF (degrees of freedom) mathematical models based on the MMG (Maneuvering Modeling Group) concept are applied, differing in that they represent the lateral force and yawing moment by second- and third-order polynomials, respectively. Hydrodynamic coefficients and related parameters used in the mathematical models of the same-scale models of the KVLCC1 and KVLCC2 ships are estimated using experimental data of NMRI (National Maritime Research Institute). Simulations of turning circles with rudder angle ±35°, zigzag (±10°/±10°), and zigzag (±20°/±20°) maneuvers are carried out and compared with free-running model test data of MARIN (Maritime Research Institute Netherlands). As a result of the analysis, it can be summarised that the MMG model based on the third-order polynomial is superior to the one based on the second-order polynomial in terms of the estimation accuracy of lateral hull force and yawing moment.
A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait
Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E.; del-Ama, Antonio J.; Dimbwadyo, Iris; Moreno, Juan C.; Florez, Julian; Pons, Jose L.
2018-01-01
The relative motion between human and exoskeleton is a crucial factor that has remarkable consequences on the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from the knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction accuracy lower than 3.5° globally, and around 1.5° at the hip level, which represents an improvement of up to 66% compared to the traditional approach assuming no relative motion between the user and the exoskeleton. PMID:29755336
Effective Multi-Model Motion Tracking Under Multiple Team Member Actuators
Gu, Yang; Veloso, Manuela
2009-01-01
Motivated by the interactions between a team and the tracked target, we contribute a method to achieve efficient tracking through using a play-based motion model and combined vision and infrared sensory information. This method gives the robot a more exact task-specific motion model when executing different tactics over the tracked target (e.g. the ball) or collaborating with the tracked target (e.g. the team member). Then we represent the system in a compact dynamic Bayesian network and use ...
Scheller, Johannes; Braza, Marianna; Triantafyllou, Michael
2016-11-01
Bats and other animals rapidly change their wingspan in order to control the aerodynamic forces. A NACA0013-type airfoil with dynamically changing span is proposed as a simple model to experimentally study these biomimetic morphing wings. Combining this large-scale morphing with inline motion allows control of both force magnitude and direction. Force measurements are conducted in order to analyze the impact of the four-degree-of-freedom flapping motion on the flow. A blade-element-theory-augmented unsteady aerodynamic model is then used to derive optimal flapping trajectories.
Genetic algorithm based optimization of advanced solar cell designs modeled in Silvaco Atlas™
Utsler, James
2006-01-01
A genetic algorithm was used to optimize the power output of multi-junction solar cells. Solar cell operation was modeled using the Silvaco ATLAS™ software. The output of the ATLAS™ simulation runs served as the input to the genetic algorithm. The genetic algorithm was run as a diffusing computation on a network of eighteen dual-processor nodes. Results showed that the genetic algorithm produced better power output optimizations when compared with the results obtained using the hill cli...
ARMA models for earthquake ground motions. Seismic Safety Margins Research Program
International Nuclear Information System (INIS)
Chang, Mark K.; Kwiatkowski, Jan W.; Nau, Robert F.; Oliver, Robert M.; Pister, Karl S.
1981-02-01
This report contains an analysis of four major California earthquake records using a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It has been possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters and test the residuals generated by these models. It has also been possible to show the connections, similarities and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed in this report is suitable for simulating earthquake ground motions in the time domain and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. (author)
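An ARMA(p, q) process of the kind the report fits to digitized accelerograms can be simulated directly in the time domain; the coefficients below are illustrative placeholders, not estimates from the California records.

```python
import random

def simulate_arma(phi, theta, n, sigma=1.0, seed=0):
    """Simulate n steps of an ARMA(p, q) series
    x_t = sum_i phi[i] * x_{t-1-i} + e_t + sum_j theta[j] * e_{t-1-j}
    with Gaussian innovations e_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    p, q = len(phi), len(theta)
    x, e = [], []
    for t in range(n):
        et = rng.gauss(0.0, sigma)
        xt = et
        xt += sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        xt += sum(theta[j] * e[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        x.append(xt)
        e.append(et)
    return x

series = simulate_arma(phi=[0.6], theta=[0.3], n=500)
assert len(series) == 500
```

Such a synthetic series, suitably scaled and enveloped, is the kind of time-domain ground-motion input the report proposes for driving discrete structural models.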
Directory of Open Access Journals (Sweden)
Yu Fan
2016-10-01
Full Text Available In order to defend against the hypersonic glide vehicle (HGV), a cost-effective single-model tracking algorithm using a Cubature Kalman filter (CKF) is proposed in this paper, based on a modified aerodynamic model (MAM) as the process equation and a radar measurement model as the measurement equation. In the existing aerodynamic model, the two control variables, attack angle and bank angle, cannot be measured by existing radar equipment, and their control laws cannot be known by defenders. To establish the process equation, the MAM for HGV tracking is proposed by using additive white noise to model the rates of change of the two control variables. For ease of comparison, several multiple-model algorithms based on the CKF are presented, including the interacting multiple model (IMM) algorithm, the adaptive grid interacting multiple model (AGIMM) algorithm, and the hybrid grid multiple model (HGMM) algorithm. The performances of these algorithms are compared and analyzed according to the simulation results. The simulation results indicate that the proposed tracking algorithm based on the modified aerodynamic model has the best tracking performance, with the best accuracy and least computational cost among all tracking algorithms in this paper. The proposed algorithm is cost-effective for HGV tracking.
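For orientation, the model-probability update at the heart of the IMM-type algorithms compared above can be sketched in a few lines; the transition matrix and likelihood values are illustrative, not from the paper.

```python
def imm_model_probabilities(mu, transition, likelihoods):
    """One IMM model-probability update: propagate the prior probabilities
    `mu` through the Markov `transition` matrix, weight each model by its
    measurement likelihood, and renormalize."""
    m = len(mu)
    predicted = [sum(transition[i][j] * mu[i] for i in range(m))
                 for j in range(m)]
    unnorm = [likelihoods[j] * predicted[j] for j in range(m)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

mu = imm_model_probabilities(
    mu=[0.5, 0.5],
    transition=[[0.95, 0.05], [0.05, 0.95]],
    likelihoods=[0.9, 0.1],   # measurement strongly favors model 0
)
assert abs(sum(mu) - 1.0) < 1e-12 and mu[0] > 0.8
```

Per-model computation cost scales with the number of models, which is one way to read the paper's finding that a well-chosen single model can beat IMM variants on cost.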
Modeling of genetic algorithms with a finite population
C.H.M. van Kemenade
1997-01-01
Cross-competition between non-overlapping building blocks can strongly influence the performance of evolutionary algorithms. The choice of the selection scheme can have a strong influence on the performance of a genetic algorithm. This paper describes a number of different genetic
Numerical Algorithms for Deterministic Impulse Control Models with Applications
Grass, D.; Chahim, M.
2012-01-01
Abstract: In this paper we describe three different algorithms, of which two (as far as we know) are new in the literature. We take both the size of the jumps and the jump times as decision variables. The first (new) algorithm considers an Impulse Control problem as a (multipoint) Boundary Value
A Dynamic Model for Roll Motion of Ships Due to Flooding
DEFF Research Database (Denmark)
Xia, Jinzhu; Jensen, Jørgen Juncher; Pedersen, Preben Terndrup
1997-01-01
A dynamic model is presented of the roll motion of damaged RoRo vessels which couples the internal cross-flooding flow and the air action in the equalizing compartment. The cross flooding flow and the air motion are modelled by a modified Bernoulli equation, where artificial damping is introduced...... to avoid modal instability based on the original Bernoulli equation. The fluid action of the flooded water on the ship is expressed by its influence on the moment of inertia of the ship and the heeling moment, which is a couple created by the gravitational force of the flooded water and the change...... of buoyancy of the ship.Two limiting flooding cases are examined in the present analysis: The sudden ingress of a certain amount of water to the damaged compartment with no further water exchange between the sea and the flooded compartment during the roll motion, and the continuous ingress of water through...
A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models
Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng
2012-09-01
Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
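The proximity operators on which the fixed-point characterization rests can be illustrated with the best-known example, the prox of the scaled ℓ1 norm (soft thresholding); this is a generic sketch, not the paper's L1/TV solver or its Gauss-Seidel acceleration.

```python
def prox_l1(v, lam):
    """Proximity operator of lam * ||.||_1, i.e. componentwise soft
    thresholding: shrink each entry toward zero by lam, clipping at zero.
    This is the basic building block of proximity algorithms for L1 models."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
            for x in v]

# Entries smaller than the threshold are zeroed; larger ones are shrunk
assert prox_l1([3.0, -0.5, 0.0], 1.0) == [2.0, 0.0, 0.0]
```

A Gauss-Seidel acceleration of the kind the paper proposes updates such components sequentially, reusing each freshly computed value within the same sweep instead of waiting for the next iteration.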
A proximity algorithm accelerated by Gauss–Seidel iterations for L1/TV denoising models
International Nuclear Information System (INIS)
Li, Qia; Shen, Lixin; Xu, Yuesheng; Micchelli, Charles A
2012-01-01
Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss–Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed. (paper)
Lee, Ji Won; Kim, Chang Won; Lee, Geewon; Lee, Han Cheol; Kim, Sang-Pil; Choi, Bum Sung; Jeong, Yeon Joo
2018-02-01
Background: Using the hybrid electrocardiogram (ECG)-gated computed tomography (CT) technique, the entire aorta, the coronary arteries, and the aortic valve can be assessed with a single-bolus contrast administration within a single acquisition. Purpose: To compare the image quality of hybrid ECG-gated and non-gated CT angiography of the aorta and to evaluate the effect of a motion correction algorithm (MCA) on coronary artery image quality in the hybrid ECG-gated aorta CT group. Material and Methods: In total, 104 patients (76 men; mean age = 65.8 years) prospectively randomized into two groups (Group 1 = hybrid ECG-gated CT; Group 2 = non-gated CT) underwent wide-detector-array aorta CT. Image quality, assessed using a four-point scale, was compared between the groups. Coronary artery image quality was compared between the conventional reconstruction and motion correction reconstruction subgroups in Group 1. Results: Group 1 showed significant advantages over Group 2 in aortic wall, cardiac chamber, aortic valve, coronary ostia, and main coronary artery image quality (all P values significant). Conclusion: Hybrid ECG-gated CT significantly improved the heart and aortic wall image quality, and the MCA can further improve the image quality and interpretability of the coronary arteries.
Stochastic resonance induced by novel random transitions of motion of FitzHugh-Nagumo neuron model
International Nuclear Information System (INIS)
Zhang Guangjun; Xu Jianxue
2005-01-01
In contrast to previous studies, which dealt with stochastic resonance induced by random transitions of the system motion between two coexisting limit cycle attractors in the FitzHugh-Nagumo (FHN) neuron model after Hopf bifurcation, or with stochastic resonance induced by external noise when the model with periodic input has only one attractor before Hopf bifurcation, in this paper we focus on stochastic resonance (SR) induced by a novel transition behavior: transitions of the motion of the model between one attractor on the left side of the bifurcation point and two attractors on the right side of the bifurcation point under the perturbation of noise. The results show the following: since one bifurcation from one to two limit cycle attractors and another bifurcation from two to one limit cycle attractors occur in turn besides the Hopf bifurcation, the novel transitions of the motion of the model occur when the bifurcation parameter is perturbed by weak internal noise; the bifurcation point of the model may stochastically shift slightly to the left or right when the FHN neuron model is perturbed by external Gaussian white noise, and then the novel transitions of the system motion also occur under the perturbation of external noise; the novel transitions can induce SR alone, and when the novel transitions of the motion of the model and the traditional transitions between two coexisting limit cycle attractors after bifurcation occur in the same process, SR may also occur with complicated behavior types; the mechanism of SR induced by external noise when the FHN neuron model with periodic input has only one attractor before Hopf bifurcation is related to the kind of novel transition mentioned above
Evaluating ortholog prediction algorithms in a yeast model clade.
Directory of Open Access Journals (Sweden)
Leonidas Salichos
Full Text Available BACKGROUND: Accurate identification of orthologs is crucial for evolutionary studies and for functional annotation. Several algorithms have been developed for ortholog delineation, but so far, manually curated genome-scale biological databases of orthologous genes for algorithm evaluation have been lacking. We evaluated four popular ortholog prediction algorithms (MultiParanoid; OrthoMCL; RBH: Reciprocal Best Hit; RSD: Reciprocal Smallest Distance; the last two extended into clustering algorithms cRBH and cRSD, respectively, so that they can predict orthologs across multiple taxa) against a set of 2,723 groups of high-quality curated orthologs from 6 Saccharomycete yeasts in the Yeast Gene Order Browser. RESULTS: Examination of sensitivity [TP/(TP+FN)], specificity [TN/(TN+FP)], and accuracy [(TP+TN)/(TP+TN+FP+FN)] across a broad parameter range showed that cRBH was the most accurate and specific algorithm, whereas OrthoMCL was the most sensitive. Evaluation of the algorithms across a varying number of species showed that cRBH had the highest accuracy and lowest false discovery rate [FP/(FP+TP)], followed by cRSD. Of the six species in our set, three descended from an ancestor that underwent whole genome duplication. Subsequent differential duplicate loss events in the three descendants resulted in distinct classes of gene loss patterns, including cases where the genes retained in the three descendants are paralogs, constituting 'traps' for ortholog prediction algorithms. We found that the false discovery rate of all algorithms dramatically increased in these traps. CONCLUSIONS: These results suggest that simple algorithms, like cRBH, may be better ortholog predictors than more complex ones (e.g., OrthoMCL and MultiParanoid) for evolutionary and functional genomics studies where the objective is the accurate inference of single-copy orthologs (e.g., molecular phylogenetics), but that all algorithms fail to accurately predict orthologs when paralogy
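The bracketed metric definitions used in the evaluation translate directly into code; the counts below are illustrative, not the study's data.

```python
def classification_metrics(tp, tn, fp, fn):
    """Metrics as defined in the study: sensitivity TP/(TP+FN),
    specificity TN/(TN+FP), accuracy (TP+TN)/(TP+TN+FP+FN),
    and false discovery rate FP/(FP+TP)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "fdr": fp / (fp + tp),
    }

m = classification_metrics(tp=90, tn=80, fp=20, fn=10)
assert m["sensitivity"] == 0.9 and m["accuracy"] == 0.85
```

Note how sensitivity and specificity trade off independently of accuracy, which is why the study reports all three (plus FDR) rather than a single score.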
Stochastic motion of a particle in a model fluctuating medium
International Nuclear Information System (INIS)
Moreau, M.; Gaveau, B.; Perera, A.; Frankowicz, M.
1993-01-01
We present several models of time-fluctuating media with finite memory, consisting of one- and two-dimensional lattices, the nodes of which fluctuate between two internal states according to a Poisson process. A particle moves on the lattice, the diffusion by the nodes depending on their internal state. Such models can be used for the microscopic theory of reaction constants in a dense phase, or for the study of diffusion or reactivity in a complex medium. In a number of cases, the transmission probability of the medium is computed exactly; it is shown that stochastic resonances can occur, an optimal transmission being obtained for a convenient choice of parameters. In more general situations, approximate solutions are given in the case of short and moderate memory of the obstacles. The diffusion in an infinite two-dimensional lattice is studied, and the memory is shown to affect the distribution of the particles rather than the diffusion law. (author). 25 refs, 5 figs
Energy Technology Data Exchange (ETDEWEB)
Alpuche Aviles, Jorge E.; VanBeek, Timothy [CancerCare Manitoba, Winnipeg (Canada); Sasaki, David; Rivest, Ryan; Akra, Mohamed [CancerCare Manitoba, Winnipeg (Canada); University of Manitoba, Winnipeg (Canada)
2016-08-15
Purpose: This work presents an algorithm used to quantify intra-fraction motion for patients treated using deep inspiration breath hold (DIBH). The algorithm quantifies the position of the chest wall in breast tangent fields using electronic portal images. Methods: The algorithm assumes that image profiles, taken along a direction perpendicular to the medial border of the field, follow a monotonically and smooth decreasing function. This assumption is invalid in the presence of lung and can be used to calculate chest wall position. The algorithm was validated by determining the position of the chest wall for varying field edge positions in portal images of a thoracic phantom. The algorithm was used to quantify intra-fraction motion in cine images for 7 patients treated with DIBH. Results: Phantom results show that changes in the distance between chest wall and field edge were accurate within 0.1 mm on average. For a fixed field edge, the algorithm calculates the position of the chest wall with a 0.2 mm standard deviation. Intra-fraction motion for DIBH patients was within 1 mm 91.4% of the time and within 1.5 mm 97.9% of the time. The maximum intra-fraction motion was 3.0 mm. Conclusions: A physics based algorithm was developed and can be used to quantify the position of chest wall irradiated in tangent portal images with an accuracy of 0.1 mm and precision of 0.6 mm. Intra-fraction motion for patients treated with DIBH at our clinic is less than 3 mm.
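The monotonicity assumption in the chest-wall algorithm can be sketched as follows: scan an intensity profile taken perpendicular to the medial field border and report where it stops decreasing. This is a simplified illustration with synthetic values, not the clinical implementation.

```python
def chest_wall_index(profile):
    """Return the first index at which a medial-to-lateral intensity profile
    violates the monotonically non-increasing assumption; in tangent-field
    portal images this break indicates lung, marking the chest wall position.
    Returns None when the profile is fully non-increasing (no lung in view)."""
    for i in range(1, len(profile)):
        if profile[i] > profile[i - 1]:
            return i
    return None

# Synthetic profile: tissue attenuates smoothly, then lung raises the intensity
profile = [100, 95, 90, 85, 120, 125]
assert chest_wall_index(profile) == 4
assert chest_wall_index([50, 40, 30]) is None
```

Repeating this per cine frame and per profile row would yield the chest-wall trace whose frame-to-frame displacement quantifies the intra-fraction motion reported above.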
Meteorological fluid dynamics asymptotic modelling, stability and chaotic atmospheric motion
Zeytounian, Radyadour K
1991-01-01
The author considers meteorology as a part of fluid dynamics. He tries to derive the properties of atmospheric flows from a rational analysis of the Navier-Stokes equations, at the same time analyzing various types of initial and boundary problems. This approach to simulate nature by models from fluid dynamics will be of interest to both scientists and students of physics and theoretical meteorology.
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems
Energy Technology Data Exchange (ETDEWEB)
Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence needed for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed; for this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems.
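The NS loop that HNS builds on can be sketched as follows (a minimal textbook nested-sampling loop; the constrained step here is plain rejection sampling, standing in for the HMC step with SEM gradients that the paper actually uses, and the test problem is invented):

```python
import math
import random

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log1p(math.exp(min(a, b) - m))

def nested_sampling(loglike, prior_sample, n_live=50, n_iter=200, seed=0):
    """Skilling-style nested sampling; returns a log-evidence estimate."""
    rng = random.Random(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    logl = [loglike(x) for x in live]
    log_z = -math.inf
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda k: logl[k])
        # shell weight X_{i-1} - X_i with prior mass X_i ~ exp(-i / n_live)
        log_w = math.log(math.exp(-(i - 1) / n_live) - math.exp(-i / n_live))
        log_z = logaddexp(log_z, logl[worst] + log_w)
        threshold = logl[worst]
        while True:  # constrained draw (rejection; the paper uses HMC here)
            x = prior_sample(rng)
            if loglike(x) > threshold:
                break
        live[worst], logl[worst] = x, loglike(x)
    return log_z

# Toy problem: standard-normal likelihood, uniform prior on [-5, 5].
# The analytic log-evidence is log(0.1), about -2.30.
loglike = lambda x: -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)
prior_sample = lambda rng: rng.uniform(-5.0, 5.0)
log_z = nested_sampling(loglike, prior_sample)
```

Rejection sampling becomes exponentially expensive as the likelihood threshold rises, which is precisely why the paper replaces it with gradient-guided HMC for the constrained step.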
Pitching motion control of a butterfly-like 3D flapping wing-body model
Suzuki, Kosuke; Minami, Keisuke; Inamuro, Takaji
2014-11-01
Free flights and pitching motion control of a butterfly-like flapping wing-body model are numerically investigated by using an immersed boundary-lattice Boltzmann method. The model flaps downward to generate lift and backward to generate thrust. Although the generated lift lets the model climb against gravity, the model also produces a nose-up torque and consequently loses balance. In this study, we discuss a way to control the pitching motion by flexing the body of the wing-body model like an actual butterfly. The body of the model is composed of two straight rigid rods connected by a rotary actuator. It is found that the pitching angle is kept within ±5° by applying proportional-plus-integral-plus-derivative (PID) control to the input torque of the rotary actuator.
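The PID control law named in the abstract is standard; a discrete sketch applied to a crude pitch plant (gains, time step, and the plant model are all invented for illustration, not taken from the paper):

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order pitch plant toward zero pitch angle.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 10.0  # degrees, initial nose-up error
for _ in range(2000):
    torque = pid.update(0.0, angle)
    angle += torque * 0.01  # crude plant: torque directly changes the angle
# angle has decayed close to zero by the end of the run
```

In the paper the controller output is the torque of the rotary actuator joining the two body rods, and the plant is the full fluid-structure simulation rather than this one-line integrator.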
Algorithms for Bayesian network modeling and reliability assessment of infrastructure systems
International Nuclear Information System (INIS)
Tien, Iris; Der Kiureghian, Armen
2016-01-01
Novel algorithms are developed to enable the modeling of large, complex infrastructure systems as Bayesian networks (BNs). These include a compression algorithm that significantly reduces the memory storage required to construct the BN model, and an updating algorithm that performs inference on compressed matrices. These algorithms address one of the major obstacles to widespread use of BNs for system reliability assessment, namely the exponentially increasing amount of information that needs to be stored as the number of components in the system increases. The proposed compression and inference algorithms are described and applied to example systems to investigate their performance compared to that of existing algorithms. Orders of magnitude savings in memory storage requirement are demonstrated using the new algorithms, enabling BN modeling and reliability analysis of larger infrastructure systems. - Highlights: • Novel algorithms developed for Bayesian network modeling of infrastructure systems. • Algorithm presented to compress information in conditional probability tables. • Updating algorithm presented to perform inference on compressed matrices. • Algorithms applied to example systems to investigate their performance. • Orders of magnitude savings in memory storage requirement demonstrated.
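The kind of redundancy the compression algorithm exploits can be illustrated with a toy run-length encoder (the paper's actual scheme also performs inference directly on the compressed form; this shows only the idea):

```python
def compress_cpt_column(column):
    """Run-length encode one column of a conditional probability table.

    System-reliability CPTs contain long runs of identical entries
    (e.g. 'system fails' for most component-state combinations), which
    is what compression exploits. Toy RLE, not the paper's algorithm.
    """
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def decompress(runs):
    return [v for v, n in runs for _ in range(n)]

col = [0, 0, 0, 0, 1, 1, 0, 0]
runs = compress_cpt_column(col)  # [[0, 4], [1, 2], [0, 2]]
```

Since the number of CPT rows grows exponentially with the number of parent components, storing run counts instead of rows is where the reported orders-of-magnitude memory savings come from.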
Adoption of the Hash algorithm in a conceptual model for the civil registry of Ecuador
Toapanta, Moisés; Mafla, Enrique; Orizaga, Antonio
2018-04-01
The Hash security algorithm was analyzed in order to mitigate information security risks in a distributed architecture. The objective of this research is to develop a prototype for the adoption of a hash algorithm in a conceptual model for the Civil Registry of Ecuador. The deductive method was used to analyze published articles directly related to the research project "Algorithms and Security Protocols for the Civil Registry of Ecuador" and articles on hash security algorithms. This research found that the SHA-1 algorithm is appropriate for use in Ecuador's civil registry; the SHA-1 algorithm was adopted using the flowchart technique; and the adoption of the hash algorithm was expressed in a conceptual model. It is concluded from the comparison of the MD5 and SHA-1 algorithms that, in the case of an implementation, SHA-1 should be chosen given the amount of information and data held by the Civil Registry of Ecuador; that the SHA-1 flowchart can be modified according to the requirements of each institution; and that the conceptual model for adopting the hash algorithm is a prototype that can be adapted to the actors that make up each organization.
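Computing a SHA-1 digest of a registry record is a one-liner with a standard library (the record fields and the canonicalization rule are illustrative assumptions, not the prototype's format):

```python
import hashlib

def record_fingerprint(record: dict) -> str:
    """SHA-1 fingerprint of a civil-registry record.

    The record is canonicalized (fields sorted by key) before hashing,
    so the same data always maps to the same digest regardless of the
    order in which fields were entered.
    """
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

record = {"id": "0912345678", "name": "Ana Pérez", "born": "1990-05-01"}
digest = record_fingerprint(record)  # 40 hex characters (160-bit digest)
```

Note that SHA-1 is no longer considered collision-resistant; a deployment today would normally substitute `hashlib.sha256` behind the same interface.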
THE CONTENT MODEL AND THE EQUATIONS OF MOTION OF ELECTRIC VEHICLE
Directory of Open Access Journals (Sweden)
K. O. Soroka
2015-06-01
Purpose. The paper improves the methods for calculating the curvilinear motion of electric vehicles and their electricity consumption, aiming at faster and more accurate calculations. Methodology. The method rests on the general principles of mathematical simulation: a conceptual model of the problem domain is created, and a mathematical model is then formulated from it. An improved conceptual model of electric vehicle motion is proposed and the corresponding mathematical model is studied. Findings. The authors propose a model in which the vehicle is treated as a system of interacting point-like particles with defined interactions, subject to external forces. The Euler-Lagrange equation of the second kind serves as the mathematical model. Conservative and dissipative forces affecting the system dynamics are taken into account, and equations for calculating the motion of electric vehicles that account for energy consumption are derived. Originality. The conceptual model of motion for electric vehicles with distributed masses is developed as a system of interacting point-like particles; in the simplest case the system has a single degree of freedom. The mathematical model is based on the Lagrange equations. This approach allows a detailed and physically grounded description of electric vehicle dynamics, and the derived equations of motion for public electric transport are substantially more precise than those recommended in textbooks and reference documentation. Equations of motion and energy consumption calculations for transporting one passenger by trolleybus are developed; the energy consumption depends on the vehicle data and can increase when the passenger load exceeds a certain level. Practical value. The derived equations of motion and energy-cost calculations are oriented toward the use of computer methods.
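In the single-degree-of-freedom case, the Lagrangian model reduces to m dv/dt = F_traction − F_resistance, with energy consumption as the time integral of traction power. A numeric sketch (the rolling and drag coefficients, mass, and traction force are all invented for illustration):

```python
def simulate(mass, force, c_roll, c_drag, dt=0.1, t_end=60.0):
    """Integrate m dv/dt = F_traction - F_roll - F_drag for one DOF,
    accumulating the traction energy E = integral of F*v dt.
    A toy instance of a one-DOF vehicle model, not the paper's equations.
    """
    g = 9.81
    v, x, energy, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        f_res = c_roll * mass * g + c_drag * v * v  # rolling + aerodynamic
        a = (force - f_res) / mass
        energy += force * v * dt  # traction power F*v integrated over time
        v += a * dt
        x += v * dt
        t += dt
    return v, x, energy

# Trolleybus-like numbers: 12 t vehicle under 6 kN constant traction.
v, x, energy = simulate(mass=12000.0, force=6000.0, c_roll=0.015, c_drag=3.0)
```

Increasing `mass` (higher passenger load) raises the rolling-resistance term, which is the mechanism behind the abstract's observation that consumption grows once the load exceeds a certain level.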
A neural model of the temporal dynamics of figure-ground segregation in motion perception.
Raudies, Florian; Neumann, Heiko
2010-03-01
How does the visual system segment a visual scene into surfaces and objects and attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation patterns of neurons as low as area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of the functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathways are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy.
Parallelization of the model-based iterative reconstruction algorithm DIRA
International Nuclear Information System (INIS)
Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.
2016-01-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelized using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained. (authors)
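The loop decomposition that OpenMP's `parallel for` applies can be mimicked in Python for illustration (threads here show the chunking structure only; they do not deliver the speedup that OpenMP achieves on native code, and the "routine" is an invented stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def filter_chunk(rows):
    # Stand-in for one reconstruction routine: 3-point smoothing of rows.
    return [[(r[i - 1] + r[i] + r[i + 1]) / 3 for i in range(1, len(r) - 1)]
            for r in rows]

def parallel_filter(rows, workers=4):
    """Split the row loop into chunks handled by separate workers,
    the same decomposition an OpenMP `parallel for` applies."""
    size = max(1, len(rows) // workers)
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    out = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(filter_chunk, chunks):  # order is preserved
            out.extend(part)
    return out

data = [[float(i + j) for j in range(6)] for i in range(8)]
result = parallel_filter(data)  # identical to the serial filter_chunk(data)
```

The decomposition is valid because the rows are independent; routines with cross-row dependencies are the ones that made the OpenCL port of DIRA harder.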
Parallel algorithms for interactive manipulation of digital terrain models
Davis, E. W.; Mcallister, D. F.; Nagaraj, V.
1988-01-01
Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid-connected architectures provide a natural mapping for grid-based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This extends earlier work that demonstrated real-time performance of graphics functions on grids that were equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data is packed in the array memory. Windows of the total data grid are interactively selected for processing, and movement of the packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for the arithmetic aspects of the graphics functions. Performance figures are given for routines written in MPP Pascal.
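The window selection and packing described above can be modeled serially (a sketch of the data layout only, not the MPP Pascal routines; the grid size, window size, and 4×4 "processor array" are invented):

```python
import numpy as np

def select_window(grid, row, col, size):
    """Interactively selected square window of a large terrain grid (pan)."""
    return grid[row:row + size, col:col + size]

def distribute(window, array_dim):
    """Pack a window larger than the processor array: each processor
    (i, j) receives the contiguous block of cells it is responsible for.
    This data movement is the step the paper finds dominates execution time.
    """
    blocks = {}
    step = window.shape[0] // array_dim
    for i in range(array_dim):
        for j in range(array_dim):
            blocks[(i, j)] = window[i * step:(i + 1) * step,
                                    j * step:(j + 1) * step]
    return blocks

terrain = np.arange(64 * 64).reshape(64, 64)
win = select_window(terrain, 8, 8, 16)  # 16x16 window at grid offset (8, 8)
blocks = distribute(win, 4)             # 4x4 processor array, 4x4 cells each
```

Panning re-runs `select_window` with a new offset, after which the packed blocks must be redistributed; on the MPP that redistribution, not the arithmetic, was the bottleneck.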
Assimaki, D.; Li, W.; Steidl, J. M.; Schmedes, J.
2007-12-01
The assessment of strong motion site response is of great significance, both for mitigating seismic hazard and for performing detailed analyses of earthquake source characteristics. There currently exists, however, a large degree of uncertainty concerning the mathematical model to be employed for the computationally efficient evaluation of local site effects, and the site investigation program necessary to evaluate the nonlinear input model parameters and ensure cost-effective predictions. While site response observations may provide critical constraints on interpretation methods, the lack of a statistically significant number of in-situ strong motion records prohibits statistical analyses from being conducted and uncertainties from being quantified based entirely on field data. In this paper, we combine downhole observations and broadband ground motion synthe