WorldWideScience

Sample records for deformation algorithms applied

  1. On combining algorithms for deformable image registration

    NARCIS (Netherlands)

    Muenzing, S.E.A.; Ginneken, van B.; Pluim, J.P.W.; Dawant, B.M.

    2012-01-01

    We propose a meta-algorithm for registration improvement by combining deformable image registrations (MetaReg). It is inspired by a well-established method from machine learning, the combination of classifiers. MetaReg consists of two main components: (1) A strategy for composing an improved

  2. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse evolves. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data. One specific experimental set-up has also been analysed, and simulated and real data have been produced. This set-up demonstrated the value of applying deconvolution in terms of the achieved resolution. (author). 32 figs., 29 refs

  3. A two-dimensional deformable phantom for quantitatively verifying deformation algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Kirby, Neil; Chuang, Cynthia; Pouliot, Jean [Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143-1708 (United States)

    2011-08-15

    Purpose: The incorporation of deformable image registration into the treatment planning process is rapidly advancing. For this reason, the methods used to verify the underlying deformation algorithms must evolve equally fast. This manuscript proposes a two-dimensional deformable phantom, which can objectively verify the accuracy of deformation algorithms, as the next step for improving these techniques. Methods: The phantom represents a single plane of the anatomy for a head and neck patient. Inflation of a balloon catheter inside the phantom simulates tumor growth. CT and camera images of the phantom are acquired before and after its deformation. Nonradiopaque markers reside on the surface of the deformable anatomy and are visible through an acrylic plate, which enables an optical camera to measure their positions, thus establishing the ground-truth deformation. This measured deformation is directly compared to the predictions of deformation algorithms, using several similarity metrics. The ratio of the number of points with more than a 3 mm deformation error over the number that are deformed by more than 3 mm is used as an error metric to evaluate algorithm accuracy. Results: An optical method of characterizing deformation has been successfully demonstrated. For the tests of this method, the balloon catheter deforms 32 out of the 54 surface markers by more than 3 mm. Different deformation errors result from the different similarity metrics. The most accurate deformation predictions had an error of 75%. Conclusions: The results presented here demonstrate the utility of the phantom for objectively verifying deformation algorithms and determining which is the most accurate. They also indicate that the phantom would benefit from more electron density heterogeneity. The reduction of the deformable anatomy to a two-dimensional system allows for the use of nonradiopaque markers, which do not influence deformation algorithms. This is the fundamental advantage of this
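
    The error metric described above reduces to a simple ratio over marker displacement vectors. Below is a minimal sketch, assuming `true_disp` and `pred_disp` are (N, 3) arrays of ground-truth and algorithm-predicted marker displacements in mm; names and array layout are illustrative, not the authors' code.

```python
import numpy as np

def error_ratio(true_disp, pred_disp, threshold_mm=3.0):
    """Ratio of markers with >3 mm registration error to markers
    actually deformed by >3 mm (one reading of the described metric)."""
    error = np.linalg.norm(pred_disp - true_disp, axis=1)      # per-marker error magnitude
    deformed = np.linalg.norm(true_disp, axis=1) > threshold_mm
    return np.count_nonzero(error > threshold_mm) / np.count_nonzero(deformed)
```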

  4. A new PPP algorithm for deformation monitoring with single ...

    Indian Academy of Sciences (India)

    However, the existing SF PPP methods can be hardly implemented for deformation monitoring directly due to their limited ... solutions for various applications, such as survey- ..... SEID model, traditional DF PPP and the new PPP algorithm with ...

  5. High performance deformable image registration algorithms for manycore processors

    CERN Document Server

    Shackleford, James; Sharp, Gregory

    2013-01-01

    High Performance Deformable Image Registration Algorithms for Manycore Processors develops highly data-parallel image registration algorithms suitable for use on modern multi-core architectures, including graphics processing units (GPUs). Focusing on deformable registration, we show how to develop data-parallel versions of the registration algorithm suitable for execution on the GPU. Image registration is the process of aligning two or more images into a common coordinate frame and is a fundamental step to be able to compare or fuse data obtained from different sensor measurements. E

  6. Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation

    International Nuclear Information System (INIS)

    Yeo, U. J.; Supple, J. R.; Franich, R. D.; Taylor, M. L.; Smith, R.; Kron, T.

    2013-01-01

    Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established. Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, “A novel methodology for 3D deformable dosimetry,” Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations. Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7

  7. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring

    Science.gov (United States)

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-01-01

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observing errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. In most monitoring applications the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. Firstly, data from an IGS tracking station were processed using both the traditional and the new PPP algorithms; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts. PMID:27241172
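
    As a rough illustration of the static-station prior exploited here, the sketch below smooths an epoch-wise PPP coordinate series with a sliding-window mean. This is a simplification of the paper's filter-based algorithm, and all names are assumptions.

```python
import numpy as np

def sliding_window_smooth(positions, window=30):
    """Smooth an epoch-by-epoch PPP coordinate series under the prior
    that the station is static within each window (illustrative)."""
    positions = np.asarray(positions)            # shape (epochs, 3): E, N, U
    out = np.empty_like(positions)
    for i in range(len(positions)):
        lo = max(0, i - window + 1)
        out[i] = positions[lo:i + 1].mean(axis=0)  # window mean as constrained estimate
    return out
```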

  8. The ANACONDA algorithm for deformable image registration in radiotherapy

    International Nuclear Information System (INIS)

    Weistrand, Ola; Svensson, Stina

    2015-01-01

    Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver, tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: 1. an image similarity term; 2. a grid regularization term, which aims at keeping the deformed image grid smooth and invertible; 3. a shape-based regularization term which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and 4. a penalty term which is added to the optimization problem when controlling structures are used, aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors have used 16 publicly available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average for the 16 data sets, the target registration error is 1.17 ± 0.87 mm, Dice similarity coefficient is 0.98 for the two lungs, and image similarity, measured by the correlation coefficient, is 0.95. The authors have also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image has been contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration. However, for the head and neck case, the sample set is too small to show statistical significance. Conclusions: ANACONDA
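
    The objective function is a weighted sum of four nonlinear terms. Below is a runnable sketch of the first two terms only (a correlation-based similarity term and a grid-smoothness penalty); the shape-regularization and controlling-structure terms are omitted, and the term definitions and weights are illustrative, not ANACONDA's actual formulations.

```python
import numpy as np

def objective(dvf, fixed, warped, w_sim=1.0, w_reg=0.1):
    """Sketch of a two-term registration objective: correlation-based
    image similarity plus a smoothness penalty on the deformation grid."""
    # Image similarity: negative correlation coefficient (to be minimized).
    sim = -np.corrcoef(fixed.ravel(), warped.ravel())[0, 1]
    # Grid regularization: mean squared spatial gradient of each DVF component.
    reg = sum(np.mean(np.gradient(dvf[..., c], axis=a) ** 2)
              for c in range(dvf.shape[-1]) for a in range(dvf.ndim - 1))
    return w_sim * sim + w_reg * reg
```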

  9. Research on the Phase Aberration Correction with a Deformable Mirror Controlled by a Genetic Algorithm

    International Nuclear Information System (INIS)

    Yang, P; Hu, S J; Chen, S Q; Yang, W; Xu, B; Jiang, W H

    2006-01-01

    In order to improve laser beam quality, a real-number-encoded genetic algorithm based on adaptive optics technology is presented. This algorithm was applied to control a 19-channel deformable mirror to correct phase aberration in a laser beam. When a traditional adaptive optics system is used to correct laser beam wave-front phase aberration, a precondition is to measure the phase aberration information in the laser beam. With a genetic algorithm, however, there is no need to know the phase aberration information beforehand. The only parameter needed is the light intensity behind the pinhole on the focal plane, which is used as the fitness function for the genetic algorithm. Simulation results show that the optimal shape of the 19-channel deformable mirror for correcting the phase aberration can be ascertained. The peak light intensity was improved by a factor of 21, and the encircled-energy Strehl ratio increased from 0.02 to 0.34 as the phase aberration was corrected with this technique
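
    A minimal real-coded evolutionary loop in the spirit of the approach described, with truncation selection and Gaussian mutation only (crossover omitted); the fitness callback stands in for the measured pinhole intensity, and all parameters are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

def ga_optimize(fitness, n_channels=19, pop_size=40, generations=200,
                sigma=0.1, lo=-1.0, hi=1.0):
    """Evolve actuator control values to maximize a measured fitness."""
    pop = rng.uniform(lo, hi, (pop_size, n_channels))
    for _ in range(generations):
        scores = np.array([fitness(v) for v in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep best half
        kids = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        kids = kids + rng.normal(0.0, sigma, kids.shape)          # Gaussian mutation
        pop = np.clip(np.vstack([parents, kids]), lo, hi)
    return pop[np.argmax([fitness(v) for v in pop])]

# Synthetic fitness in place of the measured pinhole intensity:
best_voltages = ga_optimize(lambda v: -np.sum((v - 0.3) ** 2))
```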

  10. An advanced algorithm for deformation estimation in non-urban areas

    Science.gov (United States)

    Goel, Kanika; Adam, Nico

    2012-09-01

    This paper presents an advanced differential SAR interferometry stacking algorithm for high resolution deformation monitoring in non-urban areas, with a focus on distributed scatterers (DSs). Techniques such as the Small Baseline Subset Algorithm (SBAS) have been proposed for processing DSs. SBAS makes use of small baseline differential interferogram subsets. Singular value decomposition (SVD), i.e., L2 norm minimization, is applied to link independent subsets separated by large baselines. However, the interferograms used in SBAS are multilooked using a rectangular window to reduce phase noise caused, for instance, by temporal decorrelation, resulting in a loss of resolution and the superposition of topography and deformation signals from different objects. Moreover, these have to be individually phase unwrapped, which can be especially difficult in natural terrains. An improved deformation estimation technique is presented here which exploits high resolution SAR data and is suitable for rural areas. The implemented method makes use of small baseline differential interferograms and incorporates object-adaptive spatial phase filtering and residual topography removal for accurate phase and coherence estimation, while preserving the high resolution provided by modern satellites. This is followed by retrieval of deformation via the SBAS approach, wherein the phase inversion is performed using an L1 norm minimization, which is more robust to the typical phase unwrapping errors encountered in non-urban areas. Meter-resolution TerraSAR-X data of an underground gas storage reservoir in Germany is used for demonstrating the effectiveness of this newly developed technique in rural areas.
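
    The L1-norm phase inversion can be sketched with iteratively reweighted least squares, starting from an ordinary L2 solution (the SVD-style inversion SBAS uses). This is a generic stand-in for the implemented method, not the authors' code.

```python
import numpy as np

def l1_inversion(A, b, iters=50, eps=1e-6):
    """Approximate argmin_x ||Ax - b||_1 via iteratively reweighted
    least squares; robust to outlier residuals (e.g., unwrapping errors)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]                 # L2 starting point
    for _ in range(iters):
        # sqrt of IRLS weights w_i = 1/|r_i|, downweighting large residuals
        sw = 1.0 / np.sqrt(np.maximum(np.abs(A @ x - b), eps))
        x = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
    return x
```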

  11. Deformation of Copahue volcano: Inversion of InSAR data using a genetic algorithm

    Science.gov (United States)

    Velez, Maria Laura; Euillades, Pablo; Caselli, Alberto; Blanco, Mauro; Díaz, Jose Martínez

    2011-04-01

    The Copahue volcano is one of the most active volcanoes in Argentina, with eruptions reported as recently as 1992, 1995 and 2000. A deformation analysis using the Differential Synthetic Aperture Radar technique (DInSAR) was performed on the Copahue-Caviahue Volcanic Complex (CCVC) using Envisat radar images acquired between 2002 and 2007. A deformation rate of approximately 2 cm/yr was calculated, located mostly on the north-eastern flank of Copahue volcano, and assumed to be constant during the period of the interferograms. The geometry of the source responsible for the deformation was evaluated from an inversion of the mean velocity deformation measurements using two different models based on pressure sources embedded in an elastic homogeneous half-space. A genetic algorithm was applied as an optimization tool to find the best-fit source. Results from inverse modelling indicate that a source located beneath the volcano edifice at a mean depth of 4 km is producing a volume change of approximately 0.0015 km³/yr. This source was analysed considering the available studies of the area, and a conceptual model of the volcanic-hydrothermal system was designed. The source of deformation is related to a depressurisation of the system that results from the release of magmatic fluids across the boundary between the brittle and plastic domains. These leakages are considered to be responsible for the weak phreatic eruptions recently registered at the Copahue volcano.
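
    The forward model in such inversions is commonly a Mogi point source; the sketch below gives its vertical surface displacement, which a genetic algorithm would evaluate repeatedly while searching for the best-fit source parameters. The exact pressure-source models used by the authors may differ.

```python
import numpy as np

def mogi_uz(x, y, xs, ys, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source (volume change
    dV at depth) in an elastic homogeneous half-space; standard form."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    R3 = (r2 + depth ** 2) ** 1.5
    return (1.0 - nu) / np.pi * dV * depth / R3
```

    In the inversion, the genetic algorithm proposes candidate (xs, ys, depth, dV) tuples and scores each by its misfit against the mean velocity map.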

  12. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    Science.gov (United States)

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  13. Diverse Geological Applications For Basil: A 2d Finite-deformation Computational Algorithm

    Science.gov (United States)

    Houseman, Gregory A.; Barr, Terence D.; Evans, Lynn

    Geological processes are often characterised by large finite-deformation continuum strains, on the order of 100% or greater. Microstructural processes cause deformation that may be represented by a viscous constitutive mechanism, with viscosity that may depend on temperature, pressure, or strain-rate. We have developed an effective computational algorithm for the evaluation of 2D deformation fields produced by Newtonian or non-Newtonian viscous flow. With the implementation of this algorithm as a computer program, Basil, we have applied it to a range of diverse applications in Earth Sciences. Viscous flow fields in 2D may be defined for the thin-sheet case or, using a velocity-pressure formulation, for the plane-strain case. Flow fields are represented using 2D triangular elements with quadratic interpolation for velocity components and linear for pressure. The main matrix equation is solved by an efficient and compact conjugate gradient algorithm with iteration for non-Newtonian viscosity. Regular grids may be used, or grids based on a random distribution of points. Definition of the problem requires that velocities, tractions, or some combination of the two, are specified on all external boundary nodes. Compliant boundaries may also be defined, based on the idea that traction is opposed to and proportional to boundary displacement rate. Internal boundary segments, allowing fault-like displacements within a viscous medium, have also been developed, and we find that the computed displacement field around the fault tip is accurately represented for Newtonian and non-Newtonian viscosities, in spite of the stress singularity at the fault tip. Basil has been applied by us and colleagues to problems that include: thin sheet calculations of continental collision, Rayleigh-Taylor instability of the continental mantle lithosphere, deformation fields around fault terminations at the outcrop scale, stress and deformation fields in and around porphyroblasts, and

  14. An optimisation algorithm for determination of treatment margins around moving and deformable targets

    International Nuclear Information System (INIS)

    Redpath, Anthony Thomas; Muren, Ludvig Paul

    2005-01-01

    Purpose: Determining treatment margins for inter-fractional motion of moving and deformable clinical target volumes (CTVs) remains a major challenge. This paper describes and applies an optimisation algorithm designed to derive such margins. Material and methods: The algorithm works by expanding the CTV, as determined from a pre-treatment or planning scan, to enclose the CTV positions observed during treatment. CTV positions during treatment may be obtained using, for example, repeat CT scanning and/or repeat electronic portal imaging (EPI). The algorithm can be applied both to individual patients and to a set of patients. The margins derived will minimise the excess volume outside the envelope that encloses all observed CTV positions (the CTV envelope). Initially, margins are set such that the envelope is more than adequately covered when the planning CTV is expanded. The algorithm uses an iterative method where the margins are sampled randomly and are then either increased or decreased randomly, as sketched below. The algorithm is tested on a set of 19 bladder cancer patients who underwent weekly repeat CT scanning and EPI throughout their treatment course. Results: From repeated runs on individual patients, the algorithm produces margins within a range of ±2 mm that lie among the best results found with an exhaustive search approach, and that agree within 3 mm with margins determined by a manual approach on the same data. The algorithm could be used to determine margins to cover any specified geometrical uncertainty, and allows for the determination of reduced margins by relaxing the coverage criteria, for example disregarding extreme CTV positions, or an arbitrarily selected volume fraction of the CTV envelope, and/or patients with extreme geometrical uncertainties. Conclusion: An optimisation approach to margin determination is found to give reproducible results within the accuracy required. The major advantage with this algorithm is that it is completely empirical, and it is
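
    A minimal sketch of such an iterative random-search margin optimisation, assuming user-supplied geometry callbacks `covers` (does the expanded CTV enclose the envelope?) and `excess_volume` (volume outside the envelope); the acceptance rule and step size are illustrative, not the paper's exact scheme.

```python
import numpy as np
rng = np.random.default_rng(1)

def optimize_margins(margins0, covers, excess_volume, iters=5000, step=1.0):
    """Randomly grow/shrink one margin at a time, keeping changes that
    preserve CTV-envelope coverage while shrinking excess volume."""
    m = np.asarray(margins0, dtype=float)
    best = excess_volume(m)
    for _ in range(iters):
        trial = m.copy()
        i = rng.integers(len(trial))
        trial[i] = max(0.0, trial[i] + rng.choice([-step, step]))  # random +/- step
        if covers(trial) and excess_volume(trial) < best:
            m, best = trial, excess_volume(trial)
    return m
```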

  15. Simulating Deformations of MR Brain Images for Validation of Atlas-based Segmentation and Registration Algorithms

    OpenAIRE

    Xue, Zhong; Shen, Dinggang; Karacali, Bilge; Stern, Joshua; Rottenberg, David; Davatzikos, Christos

    2006-01-01

    Simulated deformations and images can act as the gold standard for evaluating various template-based image segmentation and registration algorithms. Traditional deformable simulation methods, such as the use of analytic deformation fields or the displacement of landmarks followed by some form of interpolation, are often unable to construct rich (complex) and/or realistic deformations of anatomical organs. This paper presents new methods aiming to automatically simulate realistic inter- and in...

  16. The minimally invasive spinal deformity surgery algorithm: a reproducible rational framework for decision making in minimally invasive spinal deformity surgery.

    Science.gov (United States)

    Mummaneni, Praveen V; Shaffrey, Christopher I; Lenke, Lawrence G; Park, Paul; Wang, Michael Y; La Marca, Frank; Smith, Justin S; Mundis, Gregory M; Okonkwo, David O; Moal, Bertrand; Fessler, Richard G; Anand, Neel; Uribe, Juan S; Kanter, Adam S; Akbarnia, Behrooz; Fu, Kai-Ming G

    2014-05-01

    Minimally invasive surgery (MIS) is an alternative to open deformity surgery for the treatment of patients with adult spinal deformity. However, at this time MIS techniques are not as versatile as open deformity techniques, and MIS techniques have been reported to result in suboptimal sagittal plane correction or pseudarthrosis when used for severe deformities. The minimally invasive spinal deformity surgery (MISDEF) algorithm was created to provide a framework for rational decision making for surgeons who are considering MIS versus open spine surgery. A team of experienced spinal deformity surgeons developed the MISDEF algorithm that incorporates a patient's preoperative radiographic parameters and leads to one of 3 general plans ranging from MIS direct or indirect decompression to open deformity surgery with osteotomies. The authors surveyed fellowship-trained spine surgeons experienced with spinal deformity surgery to validate the algorithm using a set of 20 cases to establish interobserver reliability. They then resurveyed the same surgeons 2 months later with the same cases presented in a different sequence to establish intraobserver reliability. Responses were collected and tabulated. Fleiss' kappa analysis was performed using MATLAB software. Over a 3-month period, 11 surgeons completed the surveys. Responses for MISDEF algorithm case review demonstrated an interobserver kappa of 0.58 for the first round of surveys and an interobserver kappa of 0.69 for the second round of surveys, consistent with substantial agreement. In at least 10 cases there was perfect agreement between the reviewing surgeons. The mean intraobserver kappa for the 2 surveys was 0.86 ± 0.15 (± SD) and ranged from 0.62 to 1. The use of the MISDEF algorithm provides consistent and straightforward guidance for surgeons who are considering either an MIS or an open approach for the treatment of patients with adult spinal deformity. The MISDEF algorithm was found to have substantial inter- and
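
    For reference, Fleiss' kappa for a table of rating counts can be computed as follows; this is the standard textbook formulation, not the authors' MATLAB code.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_cases x n_categories) table of rating counts,
    e.g. 20 MISDEF cases rated by 11 surgeons into 3 treatment classes."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                  # raters per case (assumed equal)
    p_cat = counts.sum(axis=0) / counts.sum()  # overall category proportions
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-case agreement
    P_bar, P_e = P_i.mean(), (p_cat ** 2).sum()
    return (P_bar - P_e) / (1.0 - P_e)
```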

  17. Gradient algorithm applied to laboratory quantum control

    International Nuclear Information System (INIS)

    Roslund, Jonathan; Rabitz, Herschel

    2009-01-01

    The exploration of a quantum control landscape, which is the physical observable as a function of the control variables, is fundamental for understanding the ability to perform observable optimization in the laboratory. For high control variable dimensions, trajectory-based methods provide a means for performing such systematic explorations by exploiting the measured gradient of the observable with respect to the control variables. This paper presents a practical, robust, easily implemented statistical method for obtaining the gradient on a general quantum control landscape in the presence of noise. In order to demonstrate the method's utility, the experimentally measured gradient is utilized as input in steepest-ascent trajectories on the landscapes of three model quantum control problems: spectrally filtered and integrated second harmonic generation as well as excitation of atomic rubidium. The gradient algorithm achieves efficiency gains of up to approximately three times that of the standard genetic algorithm and, as such, is a promising tool for meeting quantum control optimization goals as well as landscape analyses. The landscape trajectories directed by the gradient should aid in the continued investigation and understanding of controlled quantum phenomena.
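
    A generic sketch of steepest ascent driven by a noise-averaged finite-difference gradient, standing in for the paper's statistical gradient estimator; `measure` represents a noisy laboratory observable, and all step sizes are illustrative.

```python
import numpy as np

def gradient_ascent(measure, x0, steps=100, lr=0.05, delta=0.01, n_avg=8):
    """Steepest ascent on a noisy observable: each gradient component is
    estimated by repeated central differences and averaged."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = delta
            samples = [(measure(x + e) - measure(x - e)) / (2 * delta)
                       for _ in range(n_avg)]      # average out shot-to-shot noise
            g[i] = np.mean(samples)
        x = x + lr * g                             # steepest-ascent update
    return x
```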

  18. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers.

    Science.gov (United States)

    Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A

    2014-07-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on
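
    The core accuracy measure reduces to 3D error vectors at the marker positions. A simplified sketch, assuming the deformation vector field has already been interpolated at the marker coordinates; array names are illustrative.

```python
import numpy as np

def marker_errors(dvf_at_markers, markers_moving, markers_reference):
    """Per-marker 3D registration error: apply the DVF sampled at the marker
    positions and compare against the known reference-state positions."""
    predicted = markers_moving + dvf_at_markers
    return np.linalg.norm(predicted - markers_reference, axis=1)
```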

  19. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    International Nuclear Information System (INIS)

    Wognum, S.; Heethuis, S. E.; Bel, A.; Rosario, T.; Hoogeman, M. S.

    2014-01-01

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure

  20. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S., E-mail: s.wognum@gmail.com; Heethuis, S. E.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Rosario, T. [Department of Radiation Oncology, VU University Medical Center, De Boelelaan 1117, 1081 HZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Erasmus MC Cancer Institute, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2014-07-15

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure

  1. Multiobjective Genetic Algorithm applied to dengue control.

    Science.gov (United States)

    Florentino, Helenice O; Cantane, Daniela R; Santos, Fernando L P; Bannwart, Bettina F

    2014-12-01

    Dengue fever is an infectious disease caused by a virus of the Flaviviridae family and transmitted to humans by the mosquito Aedes aegypti. This disease has become a global public health problem because a single mosquito can infect up to 300 people, and between 50 and 100 million people are infected annually on all continents. Thus, dengue fever is currently a subject of research, whether in the search for vaccines and treatments for the disease or for efficient and economical forms of mosquito control. The current study investigates techniques of multiobjective optimization to assist in solving problems involving the control of the mosquito that transmits dengue fever. The population dynamics of the mosquito are studied in order to understand the epidemic phenomenon and to suggest multiobjective programming strategies for mosquito control. A Multiobjective Genetic Algorithm (MGA_DENGUE) is proposed to solve the optimization model treated here, and we discuss the computational results obtained from the application of this technique. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Bio-inspired algorithms applied to molecular docking simulations.

    Science.gov (United States)

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.

  3. Real-time deformation of human soft tissues: A radial basis meshless 3D model based on Marquardt's algorithm.

    Science.gov (United States)

    Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi

    2018-01-01

    When the meshless method is used to establish the mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by human tissues as the problem domain and the boundary of the domain as the surface of those tissues. Nodes should be distributed both in the problem domain and on its boundaries. Under external force, the displacement of the nodes is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which affects the simulation of real-time deformation of human tissues in virtual surgery. In this article, Marquardt's Algorithm is proposed to fit the nodal displacement at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can then be quickly obtained from this relationship. The analysis and discussion show that the improved model equations with Marquardt's Algorithm can not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
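
    A sketch of such a Levenberg-Marquardt fit using SciPy, with a hypothetical saturating force-displacement law standing in for the paper's fitted relationship; the model form and parameter names are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_force_displacement(forces, displacements):
    """Levenberg-Marquardt fit of a hypothetical law u = a*(1 - exp(-b*F))
    to sampled boundary-node displacements."""
    F = np.asarray(forces, dtype=float)
    u = np.asarray(displacements, dtype=float)
    residual = lambda p: p[0] * (1.0 - np.exp(-p[1] * F)) - u
    return least_squares(residual, x0=[1.0, 1.0], method='lm').x

# Once a, b are fitted, the deformation for a new force is evaluated directly,
# avoiding a full meshless solve at simulation time.
```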

  4. Research on Kalman Filtering Algorithm for Deformation Information Series of Similar Single-Difference Model

    Institute of Scientific and Technical Information of China (English)

    LÜ Wei-cai; XU Shao-quan

    2004-01-01

    When using the similar single-difference methodology (SSDM) to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. In order to overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
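
    A scalar Kalman filter over a deformation-information series, as a minimal illustration of the approach: a random-walk state observed with noise, with illustrative process and measurement variances, not the paper's exact filter design.

```python
import numpy as np

def kalman_smooth(series, q=1e-4, r=1e-2):
    """Filter a 1D deformation series with a random-walk state model;
    q and r are process and measurement variances (tuning values)."""
    x, p = series[0], 1.0
    out = []
    for z in series:
        p = p + q                      # predict (state drifts as random walk)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new observation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```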

  5. Speckle photography applied to measure deformations of very large structures

    Science.gov (United States)

    Conley, Edgar; Morgan, Chris K.

    1995-04-01

    Fundamental principles of mechanics have recently been brought to bear on problems concerning very large structures. Fields of study include tectonic plate motion, nuclear waste repository vault closure mechanisms, the flow of glacier and sea ice, and highway bridge damage assessment and residual life prediction. Quantitative observations, appropriate for formulating and verifying models, are still scarce however, so the need to adapt new methods of experimental mechanics is clear. Large dynamic systems often exist in environments subject to rapid change. Therefore, a simple field technique that incorporates short time scales and short gage lengths is required. Further, the measuring methods must yield displacements reliably, and under oft-times adverse field conditions. Fortunately, the advantages conferred by an experimental mechanics technique known as speckle photography nicely fulfill this rather stringent set of performance requirements. Speckle seemed to lend itself nicely to the application since it is robust and relatively inexpensive. Experiment requirements are minimal -- a camera, high resolution film, illumination, and an optically rough surface. Perhaps most important is speckle's distinct advantage over point-by-point methods: It maps the two dimensional displacement vectors of the whole field of interest. And finally, given the method's high spatial resolution, relatively short observation times are necessary. In this paper we discuss speckle, two variations of which were used to gage the deformation of a reinforced concrete bridge structure subjected to bending loads. The measurement technique proved to be easily applied, and yielded the location of the neutral axis self consistently. The research demonstrates the feasibility of using whole field techniques to detect and quantify surface strains of large structures under load.

  6. Evolutionary algorithms applied to Landau-gauge fixing

    International Nuclear Information System (INIS)

    Markham, J.F.

    1998-01-01

    Current algorithms used to put a lattice gauge configuration into Landau gauge either suffer from the problem of critical slowing-down or involve additional computational expense to overcome it. Evolutionary Algorithms (EAs), which have been widely applied to other global optimisation problems, may be of use in gauge fixing. Also, being global, they should not suffer from critical slowing-down as do local gradient based algorithms. We apply EAs and also a Steepest Descent (SD) based method to the problem of Landau Gauge Fixing and compare their performance. (authors)

  7. The effect of algorithm form on deformation and instability in tension

    International Nuclear Information System (INIS)

    Goldthorpe, B.D.; Church, P.

    1997-01-01

    Equilibrium and flow equations have been developed for limited boundary conditions to describe deformation and localisation in tension. These equations have been used to study the influence of algorithmic form on deformation response for four sample algorithms. It is shown that the structure of the algorithm can have a profound effect on the extent and rate of localisation. These results from analytical solutions are compared to those from computer modelling of similar problems and the agreement is shown to be extremely good over the whole range. (orig.)

  8. A stable partitioned FSI algorithm for incompressible flow and deforming beams

    Energy Technology Data Exchange (ETDEWEB)

    Li, L., E-mail: lil19@rpi.edu [Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Henshaw, W.D., E-mail: henshw@rpi.edu [Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Banks, J.W., E-mail: banksj3@rpi.edu [Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Schwendeman, D.W., E-mail: schwed@rpi.edu [Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Main, A., E-mail: amain8511@gmail.com [Department of Civil and Environmental Engineering, Duke University, Durham, NC 27708 (United States)

    2016-05-01

    An added-mass partitioned (AMP) algorithm is described for solving fluid–structure interaction (FSI) problems coupling incompressible flows with thin elastic structures undergoing finite deformations. The new AMP scheme is fully second-order accurate and stable, without sub-time-step iterations, even for very light structures when added-mass effects are strong. The fluid, governed by the incompressible Navier–Stokes equations, is solved in velocity-pressure form using a fractional-step method; large deformations are treated with a mixed Eulerian-Lagrangian approach on deforming composite grids. The motion of the thin structure is governed by a generalized Euler–Bernoulli beam model, and these equations are solved in a Lagrangian frame using two approaches, one based on finite differences and the other on finite elements. The key AMP interface condition is a generalized Robin (mixed) condition on the fluid pressure. This condition, which is derived at a continuous level, has no adjustable parameters and is applied at the discrete level to couple the partitioned domain solvers. Special treatment of the AMP condition is required to couple the finite-element beam solver with the finite-difference-based fluid solver, and two coupling approaches are described. A normal-mode stability analysis is performed for a linearized model problem involving a beam separating two fluid domains, and it is shown that the AMP scheme is stable independent of the ratio of the mass of the fluid to that of the structure. A traditional partitioned (TP) scheme using a Dirichlet–Neumann coupling for the same model problem is shown to be unconditionally unstable if the added mass of the fluid is too large. A series of benchmark problems of increasing complexity are considered to illustrate the behavior of the AMP algorithm, and to compare the behavior with that of the TP scheme. The results of all these benchmark problems verify the stability and accuracy of the AMP scheme. Results for

  9. A stable partitioned FSI algorithm for incompressible flow and deforming beams

    International Nuclear Information System (INIS)

    Li, L.; Henshaw, W.D.; Banks, J.W.; Schwendeman, D.W.; Main, A.

    2016-01-01

    An added-mass partitioned (AMP) algorithm is described for solving fluid–structure interaction (FSI) problems coupling incompressible flows with thin elastic structures undergoing finite deformations. The new AMP scheme is fully second-order accurate and stable, without sub-time-step iterations, even for very light structures when added-mass effects are strong. The fluid, governed by the incompressible Navier–Stokes equations, is solved in velocity-pressure form using a fractional-step method; large deformations are treated with a mixed Eulerian-Lagrangian approach on deforming composite grids. The motion of the thin structure is governed by a generalized Euler–Bernoulli beam model, and these equations are solved in a Lagrangian frame using two approaches, one based on finite differences and the other on finite elements. The key AMP interface condition is a generalized Robin (mixed) condition on the fluid pressure. This condition, which is derived at a continuous level, has no adjustable parameters and is applied at the discrete level to couple the partitioned domain solvers. Special treatment of the AMP condition is required to couple the finite-element beam solver with the finite-difference-based fluid solver, and two coupling approaches are described. A normal-mode stability analysis is performed for a linearized model problem involving a beam separating two fluid domains, and it is shown that the AMP scheme is stable independent of the ratio of the mass of the fluid to that of the structure. A traditional partitioned (TP) scheme using a Dirichlet–Neumann coupling for the same model problem is shown to be unconditionally unstable if the added mass of the fluid is too large. A series of benchmark problems of increasing complexity are considered to illustrate the behavior of the AMP algorithm, and to compare the behavior with that of the TP scheme. The results of all these benchmark problems verify the stability and accuracy of the AMP scheme. Results for

  10. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    NARCIS (Netherlands)

    Wognum, S.; Heethuis, S. E.; Rosario, T.; Hoogeman, M. S.; Bel, A.

    2014-01-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations.

  11. Genetic algorithms applied to nuclear reactor design optimization

    International Nuclear Information System (INIS)

    Pereira, C.M.N.A.; Schirru, R.; Martinez, A.S.

    2000-01-01

    A genetic algorithm is a powerful search technique that simulates natural evolution in order to fit a population of computational structures to the solution of an optimization problem. This technique presents several advantages over classical ones such as linear programming based techniques, often used in nuclear engineering optimization problems. However, genetic algorithms demand some extra computational cost. Nowadays, due to the fast computers available, the use of genetic algorithms has increased and its practical application has become a reality. In nuclear engineering there are many difficult optimization problems related to nuclear reactor design. Genetic algorithm is a suitable technique to face such kind of problems. This chapter presents applications of genetic algorithms for nuclear reactor core design optimization. A genetic algorithm has been designed to optimize the nuclear reactor cell parameters, such as array pitch, isotopic enrichment, dimensions and cells materials. Some advantages of this genetic algorithm implementation over a classical method based on linear programming are revealed through the application of both techniques to a simple optimization problem. In order to emphasize the suitability of genetic algorithms for design optimization, the technique was successfully applied to a more complex problem, where the classical method is not suitable. Results and comments about the applications are also presented. (orig.)

  12. Identifying Septal Support Reconstructions for Saddle Nose Deformity: The Cakmak Algorithm.

    Science.gov (United States)

    Cakmak, Ozcan; Emre, Ismet Emrah; Ozkurt, Fazil Emre

    2015-01-01

    The saddle nose deformity is one of the most challenging problems in nasal surgery with a less predictable and reproducible result than other nasal procedures. The main feature of this deformity is loss of septal support with both functional and aesthetic implications. Most reports on saddle nose have focused on aesthetic improvement and neglected the reestablishment of septal support to improve airway. To explain how the Cakmak algorithm, an algorithm that describes various fixation techniques and grafts in different types of saddle nose deformities, aids in identifying saddle nose reconstructions that restore supportive nasal framework and provide the aesthetic improvements typically associated with procedures to correct saddle nose deformities. This algorithm presents septal support reconstruction of patients with saddle nose deformity based on the experience of the senior author in 206 patients with saddle nose deformity. Preoperative examination, intraoperative assessment, reconstruction techniques, graft materials, and patient evaluation of aesthetic success were documented, and 4 different types of saddle nose deformities were defined. The Cakmak algorithm classifies varying degrees of saddle nose deformity from type 0 to type 4 and helps identify the most appropriate surgical procedure to restore the supportive nasal framework and aesthetic dorsum. Among the 206 patients, 110 women and 96 men, mean (range) age was 39.7 years (15-68 years), and mean (range) of follow-up was 32 months (6-148 months). All but 12 patients had a history of previous nasal surgeries. Application of the Cakmak algorithm resulted in 36 patients categorized with type 0 saddle nose deformities; 79, type 1; 50, type 2; 20, type 3a; 7, type 3b; and 14, type 4. Postoperative photographs showed improvement of deformities, and patient surveys revealed aesthetic improvement in 201 patients and improvement in nasal breathing in 195 patients. Three patients developed postoperative infection

  13. Swarm, genetic and evolutionary programming algorithms applied to multiuser detection

    Directory of Open Access Journals (Sweden)

    Paul Jean Etienne Jeszensky

    2005-02-01

    In this paper, the particle swarm optimization technique, recently published in the literature, is analyzed, evaluated and compared when applied to Direct Sequence/Code Division Multiple Access (DS/CDMA) systems with multiuser detection (MuD). The efficiency of the Swarm algorithm applied to DS-CDMA multiuser detection (Swarm-MuD) is compared through the trade-off between performance and computational complexity, with the complexity expressed in terms of the number of operations necessary to reach the performance obtained through the optimum detector or the Maximum Likelihood detector (ML). The comparison is accomplished among the genetic algorithm, evolutionary programming with cloning, and the Swarm algorithm under the same simulation basis. Additionally, a heuristic MuD complexity analysis based on the number of computational operations is proposed. Finally, the input parameters of the Swarm algorithm are analyzed in an attempt to find the optimum (or near-optimum) parameters for the algorithm applied to the MuD problem.
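
    A minimal continuous-form particle swarm optimizer of the kind evaluated here, with a user-supplied cost function standing in for the MuD objective; the inertia and acceleration coefficients are common textbook values, not the paper's tuned parameters.

```python
import numpy as np
rng = np.random.default_rng(3)

def pso_minimize(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO: particles track personal and global best positions."""
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

best = pso_minimize(lambda p: np.sum(p ** 2), dim=8)  # toy cost function
```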

  14. A simplified algorithm for measuring erythrocyte deformability dispersion by laser ektacytometry

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, S Yu; Yurchuk, Yu S [Department of Physics, M.V. Lomonosov Moscow State University (Russian Federation)

    2015-08-31

    The possibility of measuring the dispersion of red blood cell deformability by laser diffractometry in shear flow (ektacytometry) is analysed theoretically. A diffraction pattern parameter is found which is sensitive to the dispersion of erythrocyte deformability and, to a lesser extent, to such parameters as the level of the scattered light intensity, the shape of red blood cells, the concentration of red blood cells in the suspension, the geometric dimensions of the experimental setup, etc. A new algorithm is proposed for measuring erythrocyte deformability dispersion by using data of laser ektacytometry. (laser applications in medicine)

  15. Formulations and algorithms for problems on rock mass and support deformation during mining

    Science.gov (United States)

    Seryakov, VM

    2018-03-01

    The analysis of problem formulations for calculating the stress-strain state of mine support and surrounding rock mass in rock mechanics shows that such formulations incompletely describe the mechanical features of joint deformation in the rock mass–support system. The present paper proposes an algorithm to take into account the actual conditions of rock mass and support interaction, together with an implementation method that ensures efficient calculation of stresses in the rocks and the support.

  16. Parameterless evolutionary algorithm applied to the nuclear reload problem

    International Nuclear Information System (INIS)

    Caldas, Gustavo Henrique Flores; Schirru, Roberto

    2008-01-01

    In this work, an evolutionary algorithm with no parameters, called FPBIL (parameter-free PBIL), is developed based on PBIL (population-based incremental learning). The analysis reveals how the parameters of PBIL can be replaced by self-adaptive mechanisms that emerge from the radically different way in which the evolution is processed. Despite these advantages, FPBIL remains compact and relatively modest in its use of computational resources. FPBIL is then applied to the nuclear reload problem. The experimental results are compared with those of other works and corroborate the superiority of the new algorithm
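
    For context, the baseline PBIL loop that FPBIL builds on can be sketched as follows; note the fixed parameters (population size, learning rate) that FPBIL replaces with self-adaptive mechanisms. This is standard PBIL, not FPBIL itself.

```python
import numpy as np
rng = np.random.default_rng(4)

def pbil(fitness, n_bits, pop=50, gens=300, lr=0.1):
    """Standard PBIL: a probability vector is nudged toward each
    generation's best-sampled bit string."""
    p = np.full(n_bits, 0.5)
    for _ in range(gens):
        samples = (rng.random((pop, n_bits)) < p).astype(int)
        best = samples[np.argmax([fitness(s) for s in samples])]
        p = (1 - lr) * p + lr * best      # move probabilities toward the best
    return p

probs = pbil(lambda s: s.sum(), n_bits=20)  # toy example: maximize ones
```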

  17. Applied economic model development algorithm for electronics company

    Directory of Open Access Journals (Sweden)

    Mikhailov I.

    2017-01-01

    The purpose of this paper is to report experience gained in creating methods and algorithms that simplify the development of applied decision support systems. It describes an algorithm that is the result of two years of research and more than one year of practical verification. In the testing of electronic components, the moment of contract conclusion is the point at which the greatest managerial mistakes are made: at this stage it is difficult to achieve a realistic assessment of the time limit and the wage fund for the future work. Creating an estimation model is one possible way to solve this problem. The article presents an algorithm for the creation of such models, based on the example of developing an analytical model for estimating the amount of work. The paper lists the algorithm's stages and explains their meaning in relation to the participants' goals. The implementation of the algorithm has made possible a twofold acceleration of the development of these models and the fulfilment of management's requirements. The resulting models have produced a significant economic effect. A new set of tasks was identified for further theoretical study.

  18. Modified compensation algorithm of lever-arm effect and flexural deformation for polar shipborne transfer alignment based on improved adaptive Kalman filter

    International Nuclear Information System (INIS)

    Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei

    2017-01-01

    Due to the lever-arm effect and flexural deformation in practical applications of transfer alignment (TA), TA performance is degraded. The existing polar TA algorithm compensates only for a fixed lever-arm, without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have some limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm for the lever-arm effect and flexural deformation is proposed to improve the accuracy and speed of polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to promote the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results have demonstrated that the modified compensation algorithm based on the improved AKF for polar TA can effectively compensate the lever-arm effect and flexural deformation, and thereby improve the accuracy and speed of TA in the polar region. (paper)

  19. Generation of synthetic image sequences for the verification of matching and tracking algorithms for deformation analysis

    Science.gov (United States)

    Bethmann, F.; Jepping, C.; Luhmann, T.

    2013-04-01

    This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not aim at photo-realistic images rendered with the complex imaging and reflection models used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric effects. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. The paper first describes the process of image simulation, covering colour value interpolation, MTF/PSF and related effects. Subsequently the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.

  20. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

    The aim of this paper is to design a predictive controller based on a fuzzy model. A Takagi-Sugeno fuzzy model with an adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, the fuzzy model is adapted using dynamic process information to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computational algorithm. The algorithm is applied to the control of a U-tube steam generator (UTSG) unit used for electricity generation. (author)

  1. A three-dimensional sorting reliability algorithm for coastline deformation monitoring, using interferometric data

    International Nuclear Information System (INIS)

    Genderen, J v; Marghany, M

    2014-01-01

    The paper focuses on three-dimensional (3-D) coastline deformation using interferometric synthetic aperture radar (InSAR) data. Conventional InSAR procedures were implemented on three repeat passes of ENVISAT ASAR data. Furthermore, the three-dimensional sorting reliability algorithm (3D-SRA) was implemented with the phase unwrapping technique, and was then used to eliminate the impact of phase decorrelation on the interferograms. The study showed that the performance of the InSAR method using the 3D-SRA algorithm is better than that of the conventional InSAR procedure. In conclusion, the integration of the 3D-SRA with phase unwrapping can produce accurate 3-D coastline deformation information

  2. Multi-Objective Optimization of Grillages Applying the Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Darius Mačiūnas

    2012-01-01

    Full Text Available The article analyzes the optimization of grillage-type foundations, seeking the smallest possible reactive forces in the poles for a given number of poles and the smallest possible absolute values of bending moments in the connecting beams of the grillage. We therefore suggest minimizing a compromise objective function consisting of the maximum reactive force arising in all poles and the maximum absolute bending moment in the connecting beams, each component carrying a given weight. The design variables are the pole positions under the connecting beams. The optimization task is solved by applying a genetic algorithm supplied with all the initial data of the problem. Reactive forces and bending moments are calculated using an original finite-element program, which is integrated into the optimization algorithm on the “black-box” principle: the finite element program simply returns the corresponding value of the objective function. Numerical experiments revealed the optimal number of points at which to compute bending moments. The obtained results show a certain ratio of weights in the objective function at which the contributions of reactive forces and bending moments to the objective function are equivalent. This solution can serve as a pilot project for more detailed design.Article in Lithuanian
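
    A minimal sketch of such a weighted compromise objective wrapped around a black-box structural solver is given below; `fem_solve` is a hypothetical stand-in for the original finite-element program, not the authors' code:

```python
def compromise_objective(pole_positions, fem_solve, w_force=0.5, w_moment=0.5):
    """Weighted compromise objective for grillage optimization (sketch).

    fem_solve is the black-box finite element program: given the pole
    positions it returns the reactive forces in the poles and the bending
    moments sampled along the connecting beams."""
    forces, moments = fem_solve(pole_positions)
    return (w_force * max(abs(f) for f in forces)
            + w_moment * max(abs(m) for m in moments))
```

    A genetic algorithm then minimizes this scalar without any knowledge of the solver's internals, which is exactly what the “black-box” coupling buys.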

  3. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy.

    Science.gov (United States)

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly above the 5% limit for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  4. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    Directory of Open Access Journals (Sweden)

    Hualiang Zhong

    2016-01-01

    Full Text Available Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly above the 5% limit for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung
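
    For orientation only, the following is a minimal sketch of dose accumulation by pulling each phase dose back to the reference grid through its DVF with trilinear interpolation; the study's mass- and energy-congruent mapping is considerably more involved than this:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(phase_doses, dvfs):
    """Accumulate per-phase 3D doses onto a reference grid (sketch).

    Each DVF gives, for every reference voxel, its displacement (in voxel
    units) into the corresponding phase; the phase dose is pulled back by
    trilinear interpolation and summed."""
    total = np.zeros_like(phase_doses[0], dtype=float)
    grid = np.indices(total.shape).astype(float)   # shape (3, nx, ny, nz)
    for dose, dvf in zip(phase_doses, dvfs):
        coords = grid + dvf                        # deformed sample points
        total += map_coordinates(dose, coords, order=1, mode='nearest')
    return total
```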

  5. Genetic algorithms applied to nonlinear and complex domains

    International Nuclear Information System (INIS)

    Barash, D; Woodin, A E

    1999-01-01

    The dissertation, titled "Genetic Algorithms Applied to Nonlinear and Complex Domains", describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems where many parameters interact to produce a "final" result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron is responding to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrödinger equation. Second, the dissertation examines the use of GAs in modeling decision making problems. It compares GAs with traditional techniques to solve a class of problems known as Markov Decision Processes. The conclusion of the dissertation should give a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means

  6. Genetic algorithms applied to nonlinear and complex domains

    International Nuclear Information System (INIS)

    Barash, D; Woodin, A E

    1999-01-01

    The dissertation, titled "Genetic Algorithms Applied to Nonlinear and Complex Domains", describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems where many parameters interact to produce a "final" result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron is responding to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrödinger equation. Second, the dissertation examines the use of GAs in modeling decision making problems. It compares GAs with traditional techniques to solve a class of problems known as Markov Decision Processes. The conclusion of the dissertation should give a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means

  7. A Gradient-Based Multistart Algorithm for Multimodal Aerodynamic Shape Optimization Problems Based on Free-Form Deformation

    Science.gov (United States)

    Streuber, Gregg Mitchell

    Environmental and economic factors motivate the pursuit of more fuel-efficient aircraft designs. Aerodynamic shape optimization is a powerful tool in this effort, but is hampered by the presence of multimodality in many design spaces. Gradient-based multistart optimization uses a sampling algorithm and multiple parallel optimizations to reliably apply fast gradient-based optimization to moderately multimodal problems. Ensuring that the sampled geometries remain physically realizable requires manually developing specialized linear constraints for each class of problem. Utilizing free-form deformation geometry control allows these linear constraints to be written in a geometry-independent fashion, greatly easing the process of applying the algorithm to new problems. This algorithm was used to assess the presence of multimodality when optimizing a wing in subsonic and transonic flows, under inviscid and viscous conditions, and a blended wing-body under transonic, viscous conditions. Multimodality was present in every wing case, while the blended wing-body was found to be generally unimodal.
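
    The core loop of a gradient-based multistart strategy is simple; below is a minimal sketch using SciPy's L-BFGS-B as a stand-in local optimizer, with `sampler` a hypothetical generator of feasible (e.g. FFD-constrained) start designs:

```python
from scipy.optimize import minimize

def multistart(objective, sampler, n_starts=16):
    """Gradient-based multistart (sketch): run a local gradient-based
    optimization from each sampled start design and keep the best local
    optimum found; distinct local optima signal a multimodal design space."""
    results = [minimize(objective, sampler(i), method='L-BFGS-B')
               for i in range(n_starts)]
    return min(results, key=lambda r: r.fun)
```

    In practice the local optimizations are independent and run in parallel, which is what makes the approach affordable for expensive aerodynamic objectives.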

  8. Dose mapping sensitivity to deformable registration uncertainties in fractionated radiotherapy – applied to prostate proton treatments

    International Nuclear Information System (INIS)

    Tilly, David; Tilly, Nina; Ahnesjö, Anders

    2013-01-01

    Calculation of accumulated dose in fractionated radiotherapy based on spatial mapping of the dose points generally requires deformable image registration (DIR). The accuracy of the accumulated dose thus depends heavily on the DIR quality. This motivates investigations of how the registration uncertainty influences dose planning objectives and treatment outcome predictions. A framework was developed in which the dose mapping can be associated with a variable known uncertainty to simulate the DIR uncertainties in a clinical workflow. The framework enabled us to study the dependence of dose planning metrics, and the predicted treatment outcome, on the DIR uncertainty. The additional planning margin needed to compensate for the dose mapping uncertainties can also be determined. We applied the simulation framework to a hypofractionated proton treatment of the prostate using two different scanning beam spot sizes to also study the dose mapping sensitivity to penumbra widths. The planning parameter most sensitive to the DIR uncertainty was found to be the target D95. We found that the registration mean absolute error needs to be ≤0.20 cm to obtain an uncertainty better than 3% of the calculated D95 for intermediate-sized penumbras. Use of larger margins in constructing the PTV from the CTV relaxed the registration uncertainty requirements, at the cost of increased dose burdens to the surrounding organs at risk. The DIR uncertainty requirements should be considered in an adaptive radiotherapy workflow since this uncertainty can have a significant impact on the accumulated dose. The simulation framework enabled quantification of the accuracy requirement for DIR algorithms to provide satisfactory clinical accuracy in the accumulated dose

  9. Applying Kitaev's algorithm in an ion trap quantum computer

    International Nuclear Information System (INIS)

    Travaglione, B.; Milburn, G.J.

    2000-01-01

    Full text: Kitaev's algorithm is a method of estimating eigenvalues associated with an operator. Shor's factoring algorithm, which enables a quantum computer to crack RSA encryption codes, is a specific example of Kitaev's algorithm. It has been proposed that the algorithm can also be used to generate eigenstates. We extend this proposal for small quantum systems, identifying the conditions under which the algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate a simple example, in which the algorithm effectively generates eigenstates

  10. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Adis Alihodzic

    2014-01-01

    Full Text Available Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm by adding elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved better than five other state-of-the-art algorithms, improving the quality of the results in all cases and significantly improving the convergence speed.
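
    As a reference point for the standard (unimproved) algorithm, the following is a minimal bat-algorithm skeleton for a continuous minimization problem; for multilevel thresholding the `fitness` callable would be, e.g., the negated between-class variance of the candidate thresholds:

```python
import numpy as np

def bat_algorithm(fitness, dim, lo, hi, n_bats=30, n_iter=200):
    """Minimal standard bat algorithm (minimization sketch). The paper's
    improved variant adds moves borrowed from differential evolution and
    the artificial bee colony on top of this skeleton."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_bats, dim))
    v = np.zeros((n_bats, dim))
    fit = np.array([fitness(xi) for xi in x])
    b = fit.argmin()
    best, best_fit = x[b].copy(), fit[b]
    for _ in range(n_iter):
        for i in range(n_bats):
            f = 2.0 * rng.random()                       # frequency in [0, 2]
            v[i] += (x[i] - best) * f
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > 0.5:                       # pulse rate: local walk
                cand = np.clip(best + 0.01 * rng.standard_normal(dim), lo, hi)
            fc = fitness(cand)
            if fc < fit[i] and rng.random() < 0.9:       # loudness: accept
                x[i], fit[i] = cand, fc
                if fc < best_fit:
                    best, best_fit = cand.copy(), fc
    return best, best_fit
```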

  11. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated KV-CBCT Images for Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    Cline, K; Narayanasamy, G; Obediat, M; Stanley, D; Stathakis, S; Kirby, N [University of Texas Health Science Center at San Antonio, Cancer Therapy and Research Center, San Antonio, TX (United States); Kim, H [University of California San Francisco, San Francisco, CA (United States)

    2015-06-15

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar values of accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT

  12. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated KV-CBCT Images for Quality Assurance

    International Nuclear Information System (INIS)

    Cline, K; Narayanasamy, G; Obediat, M; Stanley, D; Stathakis, S; Kirby, N; Kim, H

    2015-01-01

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar values of accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT

  13. Continuous firefly algorithm applied to PWR core pattern enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Poursalehi, N., E-mail: npsalehi@yahoo.com [Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of); Zolfaghari, A.; Minuchehr, A.; Moghaddam, H.K. [Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of)

    2013-05-15

    Highlights: ► Numerical results indicate the reliability of CFA for the nuclear reactor LPO. ► The major advantages of CFA are its light computational cost and fast convergence. ► Our experiments demonstrate the ability of CFA to obtain the near optimal loading pattern. -- Abstract: In this research, a new meta-heuristic optimization strategy, the firefly algorithm, is developed for the nuclear reactor loading pattern optimization problem. The two main goals in reactor core fuel management optimization are maximizing the core multiplication factor (Keff) in order to extract the maximum cycle energy, and minimizing the power peaking factor due to safety constraints. In this work, we define a multi-objective fitness function according to the above goals for core fuel arrangement enhancement. In order to evaluate and demonstrate the ability of the continuous firefly algorithm (CFA) to find the near optimal loading pattern, we developed the CFA nodal expansion code (CFANEC) for the fuel management operation. This code consists of two main modules: a CFA optimization program and a core analysis code implementing the nodal expansion method to calculate with coarse meshes at the dimensions of fuel assemblies. First, CFA is applied to the Foxholes test case with continuous variables in order to validate CFA, and then to a KWU PWR using a decoding strategy for discrete variables. Results indicate the efficiency and relatively fast convergence of CFA in obtaining near optimal loading patterns with respect to the considered fitness function. Finally, our experience with CFA confirms that it is easy to implement and reliable.

  14. Continuous firefly algorithm applied to PWR core pattern enhancement

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Moghaddam, H.K.

    2013-01-01

    Highlights: ► Numerical results indicate the reliability of CFA for the nuclear reactor LPO. ► The major advantages of CFA are its light computational cost and fast convergence. ► Our experiments demonstrate the ability of CFA to obtain the near optimal loading pattern. -- Abstract: In this research, a new meta-heuristic optimization strategy, the firefly algorithm, is developed for the nuclear reactor loading pattern optimization problem. The two main goals in reactor core fuel management optimization are maximizing the core multiplication factor (Keff) in order to extract the maximum cycle energy, and minimizing the power peaking factor due to safety constraints. In this work, we define a multi-objective fitness function according to the above goals for core fuel arrangement enhancement. In order to evaluate and demonstrate the ability of the continuous firefly algorithm (CFA) to find the near optimal loading pattern, we developed the CFA nodal expansion code (CFANEC) for the fuel management operation. This code consists of two main modules: a CFA optimization program and a core analysis code implementing the nodal expansion method to calculate with coarse meshes at the dimensions of fuel assemblies. First, CFA is applied to the Foxholes test case with continuous variables in order to validate CFA, and then to a KWU PWR using a decoding strategy for discrete variables. Results indicate the efficiency and relatively fast convergence of CFA in obtaining near optimal loading patterns with respect to the considered fitness function. Finally, our experience with CFA confirms that it is easy to implement and reliable.
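
    For reference, a minimal continuous firefly loop is sketched below; in a code like CFANEC a loop of this shape would wrap the nodal-expansion core simulator, whose multi-objective fitness combines Keff and the power peaking factor (here `fitness` is a generic placeholder):

```python
import numpy as np

def firefly(fitness, dim, lo, hi, n=25, n_iter=100,
            beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal continuous firefly algorithm (minimization sketch):
    each firefly moves toward every brighter (better) firefly with an
    attractiveness that decays with squared distance, plus a random step."""
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, (n, dim))
    fit = np.array([fitness(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                      # j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness
                    x[i] += (beta * (x[j] - x[i])
                             + alpha * (rng.random(dim) - 0.5))
                    x[i] = np.clip(x[i], lo, hi)
                    fit[i] = fitness(x[i])
    b = fit.argmin()
    return x[b], fit[b]
```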

  15. The digital geometric phase technique applied to the deformation evaluation of MEMS devices

    International Nuclear Information System (INIS)

    Liu, Z W; Xie, H M; Gu, C Z; Meng, Y G

    2009-01-01

    Quantitative evaluation of the structural deformation of microfabricated electromechanical systems is important for the design and functional control of microsystems. In this investigation, a novel digital geometric phase technique was developed to meet the deformation evaluation requirements of microelectromechanical systems (MEMS). The technique operates on regular artificial lattices instead of a natural atomic lattice. Regular artificial lattices with a pitch ranging from micrometers to nanometers are fabricated directly on the measured surface of MEMS devices using a focused ion beam (FIB). Phase information can be obtained from the Bragg-filtered images after fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) of the scanning electron microscope (SEM) images. The in-plane displacement field and the local strain field related to the phase information are then evaluated. The obtained results show that the technique can be applied to deformation measurement with nanometer sensitivity and to stiction force estimation of a MEMS device

  16. Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation

    Science.gov (United States)

    Birge, Brian

    2012-01-01

    The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface, and to examine the concept of directed Emergence among a group of agents with limited artificial intelligence. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound each individual termite does not carry a blueprint of the whole structure; their interactions, based strictly on local desires, create a complex superstructure. Applying this Emergence concept to a simulation of cooperative agents (robots) allows an examination of how well a non-directed group strategy achieves specific results. Specifically, the simulation will be a testbed to evaluate population-based robotic exploration and cooperative strategies while leveraging an evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents act locally with non-local results. The rules by which the simulated robots interact will be optimized to the simplest possible set that yields the desired result, leveraging Emergence. Sensor malfunction and line-of-sight issues are incorporated into the simulation. This approach falls under Swarm Robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm Robotics often draws on biologically inspired approaches, from the observation of social insects as well as data on herding, schooling, and flocking animals. Biomimetic algorithms applied to manned space exploration are the methods under consideration for further study.

  17. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    Science.gov (United States)

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of a human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.

  18. Orientation estimation algorithm applied to high-spin projectiles

    International Nuclear Information System (INIS)

    Long, D F; Lin, J; Zhang, X M; Li, J

    2014-01-01

    High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectile control system. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm designed specifically for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation, performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm. (paper)

  19. Orientation estimation algorithm applied to high-spin projectiles

    Science.gov (United States)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectile control system. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm designed specifically for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation, performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.

  20. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    Science.gov (United States)

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.
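
    The sparse-to-dense interpolation step can be illustrated compactly; the sketch below uses SciPy's thin-plate-spline RBF interpolator to turn landmark displacements into a dense 3D deformation field (a generic illustration of the TPS idea, not the TPS-HAMMER implementation itself):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_dense_field(landmarks, displacements, grid_shape):
    """Sparse-to-dense deformation interpolation with thin-plate splines:
    landmark displacements (n, 3) at landmark positions (n, 3) are
    interpolated to every voxel of a 3D grid in a single solve, giving
    the smooth field that TPS guarantees."""
    tps = RBFInterpolator(landmarks, displacements,
                          kernel='thin_plate_spline')
    voxels = np.indices(grid_shape).reshape(3, -1).T.astype(float)  # (N, 3)
    return tps(voxels).reshape(*grid_shape, 3)   # dense (nx, ny, nz, 3) field
```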

  1. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
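
    A minimal calibration driver of this kind can be written directly on top of SciPy's differential evolution, whose arguments expose exactly the characteristics studied here (evolution strategy, mutation scaling factor, crossover rate); `run_cfd` is a hypothetical wrapper returning the misfit between the CFD model and the weld measurements:

```python
from scipy.optimize import differential_evolution

# `run_cfd(params)` is assumed to execute the FSW CFD model with the
# candidate heat-transfer parameters and return a scalar misfit between
# the simulated and measured weld quantities (e.g. thermal cycles).
def calibrate(run_cfd, bounds):
    return differential_evolution(
        run_cfd, bounds,
        strategy='best1bin',    # the evolution strategy
        mutation=0.7,           # mutation scaling factor F
        recombination=0.9,      # crossover rate CR
        popsize=15, maxiter=50, polish=False, seed=0)
```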

  2. Applying genetic algorithms for programming manufacturing cell tasks

    Directory of Open Access Journals (Sweden)

    Efredy Delgado

    2005-05-01

    Full Text Available This work was aimed at developing computational intelligence for scheduling the tasks of a manufacturing cell, based mainly on genetic algorithms. The manufacturing cell was modelled as a production line; the makespan was calculated using heuristics adapted from several genetic algorithm libraries implemented in C++ Builder. Several problems dealing with small, medium and large lists of jobs and machines were solved. The results were compared with other heuristics. The approach developed here seems promising for future research on scheduling manufacturing cell tasks involving mixed batches.

  3. Optimising a shaft's geometry by applying genetic algorithms

    Directory of Open Access Journals (Sweden)

    María Alejandra Guzmán

    2005-05-01

    Full Text Available Many engineering design tasks involve optimising several conflicting goals; these types of problem are known as Multiobjective Optimisation Problems (MOPs). Evolutionary techniques have proved to be an effective tool for finding solutions to MOPs during the last decade, and variations on the basic genetic algorithm have been proposed by different researchers for rapidly finding optimal solutions to MOPs. The NSGA (Non-dominated Sorting Genetic Algorithm) has been implemented in this paper for finding an optimal design for a shaft subjected to cyclic loads, the conflicting goals being minimum weight and minimum lateral deflection.

  4. Parallel preconditioned conjugate gradient algorithm applied to neutron diffusion problem

    International Nuclear Information System (INIS)

    Majumdar, A.; Martin, W.R.

    1992-01-01

    Numerical solution of the neutron diffusion problem requires solving a linear system of equations Ax = b, where A is an n x n symmetric positive definite (SPD) matrix and x and b are vectors with n components. The preconditioned conjugate gradient (PCG) algorithm is an efficient iterative method for solving such a linear system of equations. In this paper, the authors describe the implementation of a parallel PCG algorithm on a shared memory machine (BBN TC2000) and in a distributed workstation (IBM RS6000) environment created with the Parallel Virtual Machine (PVM) parallelization software
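
    For concreteness, a serial Jacobi-preconditioned CG for an SPD system is sketched below; the parallel versions distribute the matrix-vector product and the dot products across processors, but the iteration itself is unchanged:

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for SPD A (serial sketch)."""
    M_inv = 1.0 / np.diag(A)               # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                          # residual
    z = M_inv * r                          # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p          # conjugate search direction
        rz = rz_new
    return x
```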

  5. Performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Giger, M.L.; Chen, C.T.; Sullivan, B.J.

    1990-01-01

    In this paper the authors evaluate the expectation maximization (EM) algorithm, both qualitatively and quantitatively, as a technique for enhancing radiographic images. Previous studies have qualitatively shown the usefulness of the EM algorithm but have failed to quantify and compare its performance with those of other image processing techniques. Recent studies by Loo et al., Ishida et al., and Giger et al. have explained improvements in image quality quantitatively in terms of a signal-to-noise ratio (SNR) derived from signal detection theory. In this study, we take a similar approach in quantifying the effect of the EM algorithm on the detection of simulated low-contrast square objects superimposed on radiographic mottle. The SNRs of the original and processed images are calculated taking into account both the human visual system response and the screen-film transfer function, as well as a noise component internal to the eye-brain system. The EM algorithm was also implemented on digital screen-film images of test patterns and clinical mammograms
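
    In image restoration, the EM algorithm for Poisson-distributed counts reduces to the classical Richardson-Lucy multiplicative update; a minimal sketch is given below (the PSF is an assumed system blur, not a kernel taken from the paper):

```python
import numpy as np
from scipy.signal import fftconvolve

def em_restore(image, psf, n_iter=30):
    """EM (Richardson-Lucy) restoration for Poisson imaging: each
    multiplicative update increases the Poisson likelihood of the
    estimate under the assumed blur kernel `psf`."""
    image = image.astype(float)
    est = np.full_like(image, image.mean())          # flat initial estimate
    psf_mirror = psf[::-1, ::-1]                     # adjoint of the blur
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode='same') + 1e-12
        est *= fftconvolve(image / blurred, psf_mirror, mode='same')
    return est
```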

  6. The PBIL algorithm applied to a nuclear reactor design optimization

    Energy Technology Data Exchange (ETDEWEB)

    Machado, Marcelo D.; Medeiros, Jose A.C.C.; Lima, Alan M.M. de; Schirru, Roberto [Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ-RJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear. Lab. de Monitoracao de Processos]. E-mails: marcelo@lmp.ufrj.br; canedo@lmp.ufrj.br; alan@lmp.ufrj.br; schirru@lmp.ufrj.br

    2007-07-01

    The Population-Based Incremental Learning (PBIL) algorithm is a method that combines the mechanism of the genetic algorithm with simple competitive learning, creating an important tool for the optimization of numeric functions and combinatorial problems. PBIL works with a set of solutions to the problem, called the population, whose objective is to create a probability vector containing a real value in each position which, when used in a decoding procedure, yields individuals that represent the best solutions of the function to be optimized. In this work a new form of learning for the PBIL algorithm is developed, aimed at reducing the time required for the optimization process. The new algorithm is applied to nuclear reactor design optimization. The optimization problem consists of adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak factor in a 3-enrichment-zone reactor, subject to some restrictions. The computational code HAMMER is used in this optimization, and the results are compared with those of other artificial intelligence optimization methods. (author)

  7. The PBIL algorithm applied to a nuclear reactor design optimization

    International Nuclear Information System (INIS)

    Machado, Marcelo D.; Medeiros, Jose A.C.C.; Lima, Alan M.M. de; Schirru, Roberto

    2007-01-01

    The Population-Based Incremental Learning (PBIL) algorithm is a method that combines the mechanism of the genetic algorithm with simple competitive learning, creating an important tool for the optimization of numeric functions and combinatorial problems. PBIL works with a set of solutions to the problem, called the population, whose objective is to create a probability vector containing a real value in each position which, when used in a decoding procedure, yields individuals that represent the best solutions of the function to be optimized. In this work a new form of learning for the PBIL algorithm is developed, aimed at reducing the time required for the optimization process. The new algorithm is applied to nuclear reactor design optimization. The optimization problem consists of adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak factor in a 3-enrichment-zone reactor, subject to some restrictions. The computational code HAMMER is used in this optimization, and the results are compared with those of other artificial intelligence optimization methods. (author)
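
    The core PBIL loop described above fits in a few lines; below is a minimal sketch for a binary encoding, where the decode-and-evaluate step (which in this application would wrap the HAMMER-based evaluation) is folded into a generic `fitness` callable:

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, n_gen=200):
    """Minimal PBIL loop (maximization sketch): a probability vector is
    sampled to produce a binary population and then nudged toward each
    generation's best individual (the incremental learning step)."""
    rng = np.random.default_rng(0)
    p = np.full(n_bits, 0.5)                        # probability vector
    for _ in range(n_gen):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        scores = np.array([fitness(ind) for ind in pop])
        best = pop[scores.argmax()]
        p = (1 - lr) * p + lr * best                # learn toward the best
    return p
```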

  8. Genetic algorithms applied to the nuclear power plant operation

    International Nuclear Information System (INIS)

    Schirru, R.; Martinez, A.S.; Pereira, C.M.N.A.

    2000-01-01

    Nuclear power plant operation often involves very important human decisions, such as the actions to be taken after a nuclear accident/transient, or finding the best core reload pattern, a complex combinatorial optimization problem which requires expert knowledge. Due to the complexity involved in the decisions to be taken, computerized systems have been intensely explored in order to aid the operator. Following hardware advances, soft computing has improved and, nowadays, intelligent technologies such as genetic algorithms, neural networks and fuzzy systems are being used to support operator decisions. In this chapter two main problems are explored: transient diagnosis and nuclear core refueling. Solutions to these kinds of problems, based on genetic algorithms, are described. A genetic algorithm was designed to optimize the nuclear fuel reload of the Angra-1 nuclear power plant; results compared to those obtained by an expert reveal a gain in the burn-up cycle. Two other genetic algorithm approaches were used to optimize real-time diagnosis systems. The first one learns partitions of the time series that represent the transients, generating a set of classification centroids. The other involves the optimization of an adaptive vector quantization neural network. Results are shown and commented. (orig.)

  9. An Improved Crow Search Algorithm Applied to Energy Problems

    Directory of Open Access Journals (Sweden)

    Primitivo Díaz

    2018-03-01

    Full Text Available The efficient use of energy in electrical systems has become a relevant topic due to its environmental impact. Parameter identification in induction motors and capacitor allocation in distribution networks are two representative problems that have strong implications for the massive use of energy. From an optimization perspective, both problems are considered extremely complex due to their non-linearity, discontinuity, and high multi-modality. These characteristics make them difficult to solve using standard optimization techniques. On the other hand, metaheuristic methods have been widely used as alternative optimization algorithms to solve complex engineering problems. The Crow Search Algorithm (CSA) is a recent metaheuristic method based on the intelligent group behavior of crows. Although CSA presents interesting characteristics, its search strategy has great difficulties when it faces highly multi-modal formulations. In this paper, an improved version of the CSA method is presented to solve complex energy optimization problems. In the new algorithm, two features of the original CSA are modified: (I) the awareness probability (AP) and (II) the random perturbation. With such adaptations, the new approach preserves solution diversity and improves convergence to difficult, highly multi-modal optima. In order to evaluate its performance, the proposed algorithm has been tested on a set of four optimization problems involving induction motors and distribution networks. The results demonstrate the high performance of the proposed method when compared with other popular approaches.

  10. Horn–Schunck optical flow applied to deformation measurement of a birdlike airfoil

    Directory of Open Access Journals (Sweden)

    Gong Xiaoliang

    2015-10-01

    Full Text Available Current deformation measurement techniques suffer from limited spatial resolution. In this work, a highly accurate and high-resolution Horn–Schunck optical flow method is developed and then applied to measuring the static deformation of a birdlike flexible airfoil at a series of angles of attack at a Reynolds number of 100,000 in a low speed, low noise wind tunnel. To allow relatively large displacements, a nonlinear Horn–Schunck model and a coarse-to-fine warping process are adopted. To preserve optical flow discontinuities, a nonquadratic penalization function, multi-cue driven bilateral filtering and a principal component analysis of local image patterns are used. First, the accuracy and convergence of this Horn–Schunck technique are verified on a benchmark. Then, the maximum displacement that can be reliably calculated by this technique is studied on synthetic images. Both studies are compared with the performance of a Lucas–Kanade optical flow method. Finally, the Horn–Schunck technique is used to estimate the 3-D deformation of the birdlike airfoil through a stereoscopic camera setup. The results are compared with those computed by Lucas–Kanade optical flow, image correlation and numerical simulation.
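
    For orientation, the classical (linear, quadratic-penalty) Horn–Schunck iteration is sketched below; the paper's method layers the nonlinear model, coarse-to-fine warping and edge-preserving penalties on top of this basic scheme:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Classical Horn-Schunck optical flow between two grayscale frames:
    alternate between neighborhood-averaged flow and the closed-form
    update that enforces the brightness-constancy constraint."""
    I1, I2 = I1.astype(float), I2.astype(float)
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25         # spatial derivative kernels
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2 - I1, np.ones((2, 2)) * 0.25)   # temporal derivative
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```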

  11. SU-F-J-88: Comparison of Two Deformable Image Registration Algorithms for CT-To-CT Contour Propagation

    Energy Technology Data Exchange (ETDEWEB)

    Gopal, A; Xu, H; Chen, S [University of Maryland School of Medicine, Columbia, MD (United States)

    2016-06-15

    Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the Raystation treatment planning system – the “Hybrid” algorithm, based on image intensities and anatomical information, and the “Biomechanical” algorithm, based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients that underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score (VQS) assigned by physicians and a refinement quality score (RQS) on a 0–1 scale. Results: Both algorithms showed similar ICE (< 1.5 mm), but the hybrid DIR (TRE = 3.2 mm) performed better than the biomechanical DIR (TRE = 4.3 mm) in landmark tracking. Both algorithms had comparable DSC (DSC > 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS < 0.35, RQS > 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR than for the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE, while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in

  12. Applied algorithm in the liner inspection of solid rocket motors

    Science.gov (United States)

    Hoffmann, Luiz Felipe Simões; Bizarria, Francisco Carlos Parquet; Bizarria, José Walter Parquet

    2018-03-01

    In rocket motors, the bonding between the solid propellant and thermal insulation is accomplished by a thin adhesive layer, known as liner. The liner application method involves a complex sequence of tasks, which includes in its final stage, the surface integrity inspection. Nowadays in Brazil, an expert carries out a thorough visual inspection to detect defects on the liner surface that may compromise the propellant interface bonding. Therefore, this paper proposes an algorithm that uses the photometric stereo technique and the K-nearest neighbor (KNN) classifier to assist the expert in the surface inspection. Photometric stereo allows the surface information recovery of the test images, while the KNN method enables image pixels classification into two classes: non-defect and defect. Tests performed on a computer vision based prototype validate the algorithm. The positive results suggest that the algorithm is feasible and when implemented in a real scenario, will be able to help the expert in detecting defective areas on the liner surface.
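
    The final classification step lends itself to a compact illustration; below is a minimal sketch in which per-pixel feature vectors recovered by photometric stereo (e.g. surface-normal components and albedo) are labeled defect / non-defect by a K-nearest-neighbor classifier (names and feature layout are illustrative, not the authors' code):

```python
from sklearn.neighbors import KNeighborsClassifier

def classify_liner(pixel_feats, train_feats, train_labels, k=5):
    """Defect / non-defect pixel classification (sketch): pixel_feats is
    an (h, w, d) array of per-pixel features from photometric stereo;
    train_feats / train_labels hold labeled example pixels."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
    h, w, d = pixel_feats.shape
    # Classify every pixel and restore the image layout
    return knn.predict(pixel_feats.reshape(-1, d)).reshape(h, w)
```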

  13. ANTQ evolutionary algorithm applied to nuclear fuel reload problem

    International Nuclear Information System (INIS)

    Machado, Liana; Schirru, Roberto

    2000-01-01

    Nuclear fuel reload optimization is an NP-complete combinatorial optimization problem whose aim is to find the fuel rod configuration that maximizes burnup or minimizes the power peaking factor. For decades this problem was solved exclusively using an expert's knowledge. Since the eighties, however, there have been efforts to automate fuel reload. The first relevant effort used Simulated Annealing, but more recent publications show the efficiency of Genetic Algorithms (GA) in solving this problem. Following this direction, our aim is to optimize nuclear fuel reload using Ant-Q, a reinforcement learning algorithm based on the Cellular Computing paradigm. Ant-Q's results on the Travelling Salesman Problem, which is conceptually similar to fuel reload, are better than the GA's. Ant-Q was tested on fuel reload by simulating the first-cycle in-out reload of Biblis, a 193-fuel-element PWR. Comparing Ant-Q's results with the GA's, it can be seen that even without a local heuristic, this evolutionary algorithm can be used to solve the nuclear fuel reload problem. (author)

  14. Applying Planning Algorithms to Argue in Cooperative Work

    Science.gov (United States)

    Monteserin, Ariel; Schiaffino, Silvia; Amandi, Analía

    Negotiation is typically used in cooperative work scenarios for solving conflicts. Anticipating possible arguments in this negotiation step is a key factor, since it lets us take decisions about our participation in the cooperation process. In this context, we present a novel application of planning algorithms to argument generation, where the actions of a plan represent the arguments that a person might use during the argumentation process. In this way, we can plan how to persuade the other participants in cooperative work into reaching an expected agreement in terms of our interests. This approach gives us an advantage, since we can test anticipated argumentative solutions in advance.

  15. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS)

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    Full Text Available The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from finite resolution of sampled systems. Experimental control results using the original secondary path model, and a modified secondary path model, for both the previous implementation of EE-FXLMS and the genetic algorithm implementation are compared.

  16. Automated microaneurysm detection algorithms applied to diabetic retinopathy retinal images

    Directory of Open Access Journals (Sweden)

    Akara Sopharak

    2013-07-01

    Full Text Available Diabetic retinopathy is the commonest cause of blindness in working-age people. It is characterised and graded by the development of retinal microaneurysms, haemorrhages and exudates. The damage caused by diabetic retinopathy can be prevented if it is treated in its early stages. Therefore, automated early detection can limit the severity of the disease, improve the follow-up management of diabetic patients and assist ophthalmologists in investigating and treating the disease more efficiently. This review focuses on microaneurysm detection as the earliest clinically localised characteristic of diabetic retinopathy, a frequently observed complication in both Type 1 and Type 2 diabetes. Algorithms used for microaneurysm detection from retinal images are reviewed. A number of features used to extract microaneurysms are summarised. Furthermore, a comparative analysis of reported methods used to automatically detect microaneurysms is presented and discussed. The performance of the methods and their complexity are also discussed.

  17. An improved data integration algorithm to constrain the 3D displacement field induced by fast deformation phenomena tested on the Napa Valley earthquake

    Science.gov (United States)

    Polcari, Marco; Fernández, José; Albano, Matteo; Bignami, Christian; Palano, Mimmo; Stramondo, Salvatore

    2017-12-01

    In this work, we propose an improved algorithm to constrain the 3D ground displacement field induced by fast surface deformations due to earthquakes or landslides. Based on the integration of different data, we estimate the three displacement components by solving a function minimization problem derived from Bayes theory. We exploit the outcomes from SAR Interferometry (InSAR), Global Navigation Satellite System (GNSS) and Multiple Aperture Interferometry (MAI) to retrieve the 3D surface displacement field. Any other source of information can be added to the processing chain in a simple way, the algorithm being computationally efficient. Furthermore, we use intensity Pixel Offset Tracking (POT) to locate the discontinuity produced on the surface by a sudden deformation phenomenon and then improve the GNSS data interpolation. This approach allows the solution to be independent of other information such as in-situ investigations, tectonic studies or knowledge of the data covariance matrix. We applied this method to investigate the ground deformation field related to the 2014 Mw 6.0 Napa Valley earthquake, which occurred a few kilometers from the San Andreas fault system.
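
    A minimal sketch of the data-integration core, under simplifying assumptions: every observation constrains the local 3D displacement d = (dE, dN, dU) along a known unit vector (InSAR line of sight, MAI azimuth, or a GNSS component), and weighted least squares recovers d pixel by pixel. The vectors and sigmas below are illustrative numbers, not the paper's geometry.

```python
import numpy as np

def solve_3d(units, values, sigmas):
    """units: (m, 3) projection vectors; values: (m,) observed displacements;
    sigmas: (m,) standard deviations. Returns (dE, dN, dU)."""
    A = np.asarray(units, float)
    y = np.asarray(values, float)
    w = 1.0 / np.asarray(sigmas, float)       # row weights = 1/sigma
    # weighted least squares via row scaling of the normal system
    d, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return d

# e.g. ascending and descending InSAR LOS, one MAI azimuth, one GNSS up value
units = [[-0.61, 0.11, 0.78], [0.60, 0.12, 0.79], [0.10, 0.99, 0.0], [0, 0, 1]]
d = solve_3d(units, values=[0.012, -0.034, 0.005, 0.018], sigmas=[0.005] * 4)
```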

  18. Apply lightweight recognition algorithms in optical music recognition

    Science.gov (United States)

    Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen-Khac, Tung-Anh; Tran, Minh-Triet

    2015-02-01

    The problems of digitalization and transformation of musical scores into a machine-readable format need to be solved, since doing so helps people to enjoy music, to learn music, to conserve music sheets, and even to assist music composers. However, the results of existing methods still require improvements for higher accuracy. Therefore, the authors propose lightweight algorithms for Optical Music Recognition to help people to recognize and automatically play musical scores. In our proposal, after removing staff lines and extracting symbols, each music symbol is represented as a grid of identical M ∗ N cells, and the features are extracted and classified with multiple lightweight SVM classifiers. Through experiments, the authors find that a size of 10 ∗ 12 cells yields the highest precision value. Experimental results on a dataset consisting of 4929 music symbols taken from 18 modern music sheets in the Synthetic Score Database show that our proposed method is able to classify printed musical scores with accuracy up to 99.56%.
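
    A minimal sketch of the grid feature described above: a segmented symbol image is divided into a 10 ∗ 12 grid and each cell's ink density becomes one feature for an SVM. The 10 ∗ 12 size follows the paper; the use of scikit-learn and the variable names are illustrative assumptions.

```python
import numpy as np
from sklearn import svm

def grid_features(symbol, rows=10, cols=12):
    """symbol: 2D binary array (1 = ink). Returns rows*cols cell densities."""
    h, w = symbol.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            cell = symbol[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            feats.append(cell.mean() if cell.size else 0.0)
    return np.array(feats)

# "multiple lightweight SVM classifiers": a linear SVC fitted on stacked grid
# features handles the multi-class symbol problem one-vs-one internally.
clf = svm.SVC(kernel="linear")
# clf.fit(X_train, y_train); clf.predict(grid_features(symbol)[None, :])
```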

  19. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background. A method to extract medical error factors and to reduce the extraction difficulty is therefore needed. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these were then related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of error-related items, their different levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.

  20. SU-F-J-88: Comparison of Two Deformable Image Registration Algorithms for CT-To-CT Contour Propagation

    International Nuclear Information System (INIS)

    Gopal, A; Xu, H; Chen, S

    2016-01-01

    Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the Raystation treatment planning system – the “Hybrid” algorithm, based on image intensities and anatomical information, and the “Biomechanical” algorithm, based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients who underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score (VQS) assigned by physicians and a refinement quality score (RQS). Results: Both algorithms showed good agreement with physician-drawn contours (DSC > 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR compared to the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE, while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in contour propagation.
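
    For reference, the Dice similarity coefficient used above to score propagated contours against physician-drawn ones; treating the contours as boolean voxel masks is a sketch-level assumption.

```python
import numpy as np

def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```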

  1. PSP SAR interferometry monitoring of ground and structure deformations applied to archaeological sites

    Science.gov (United States)

    Costantini, Mario; Francioni, Elena; Trillo, Francesco; Minati, Federico; Margottini, Claudio; Spizzichino, Daniele; Trigila, Alessandro; Iadanza, Carla

    2017-04-01

    Archaeological sites and cultural heritage are considered critical assets for society, representing not only the history of a region or a culture, but also contributing to the common identity of the people living in a certain region. In this view, it is becoming more and more urgent to preserve them from the effects of climate change and, in general, from degradation. These structures are usually just as precious as they are fragile: remote sensing technology can be useful to monitor these treasures. In this work, we focus on ground deformation measurements obtained by satellite SAR interferometry and on the methodology adopted and implemented in order to use the results operatively for conservation policies at an Italian archaeological site. The analysis is based on the processing of COSMO-SkyMed Himage data by the e-GEOS proprietary Persistent Scatterer Pair (PSP) SAR interferometry technology. The PSP technique is a proven SAR interferometry technology characterized by the fact that it exploits only the relative properties between close points (pairs) in the processing, in order to overcome atmospheric artefacts (one of the main problems of SAR interferometry). Validation analyses [Costantini et al. 2015] established that this technique, applied to COSMO-SkyMed Himage data, is able to retrieve very dense (except of course on vegetated or cultivated areas) millimetric deformation measurements with sub-metric localization. Considering the limitations of all interferometric techniques, in particular the fact that the measurements are along the line of sight (LOS) and subject to geometric distortions, both ascending and descending geometries have been used in order to obtain the maximum information from the interferometric analysis. The ascending analysis allows selecting measurement points over the top and, approximately, the South-West part of the structures, while the descending one covers the top and the South-East part of the structures. The interferometric techniques need

  2. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    Energy Technology Data Exchange (ETDEWEB)

    Dhou, S; Williams, C [Brigham and Women’s Hospital / Harvard Medical School, Boston, MA (United States); Ionascu, D [William Beaumont Hospital, Royal Oak, MI (United States); Lewis, J [University of California at Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors, 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space, and 3) the Euclidean Model Norm (EMN), which is calculated by summing in quadrature the dot products of an eigenvector with the first three eigenvectors from the reference motion model. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm had smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported
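
    A sketch of steps 1)–2) above: stack the per-phase DVFs as rows and run PCA; the leading eigenvectors form the motion model. The array shapes and the use of an SVD are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def motion_model(dvfs, n_modes=3):
    """dvfs: (n_phases, n_voxels*3) displacement fields w.r.t. the reference.
    Returns (mean DVF, first n_modes PCA eigenvectors as rows)."""
    X = np.asarray(dvfs, float)
    mean = X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the PCA eigenvectors
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]

# The paper's dot-product criterion between two models' first modes would be
# e.g. abs(np.dot(model_a[1][0], model_b[1][0])) for unit-norm eigenvectors.
```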

  3. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    International Nuclear Information System (INIS)

    Dhou, S; Williams, C; Ionascu, D; Lewis, J

    2016-01-01

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors, 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space, and 3) the Euclidean Model Norm (EMN), which is calculated by summing in quadrature the dot products of an eigenvector with the first three eigenvectors from the reference motion model. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm had smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported

  4. Added clinical value of applying myocardial deformation imaging to assess right ventricular function.

    Science.gov (United States)

    Sokalskis, Vladislavs; Peluso, Diletta; Jagodzinski, Annika; Sinning, Christoph

    2017-06-01

    Right heart dysfunction has been found to be a strong prognostic factor predicting adverse outcome in various cardiopulmonary diseases. Conventional echocardiographic measurements can be limited by geometrical assumptions and impaired reproducibility. Speckle tracking-derived strain provides a robust quantification of right ventricular function. It explicitly evaluates myocardial deformation, as opposed to tissue Doppler-derived strain, which is computed from tissue velocity gradients. Right ventricular longitudinal strain provides a sensitive tool for detecting right ventricular dysfunction, even at subclinical levels. Moreover, the longitudinal strain can be applied for prognostic stratification of patients with pulmonary hypertension, pulmonary embolism, and congestive heart failure. Speckle tracking-derived right atrial strain, right ventricular longitudinal strain-derived mechanical dyssynchrony, and three-dimensional echocardiography-derived strain are emerging imaging parameters and methods. Their application in research is paving the way for their clinical use. © 2017, Wiley Periodicals, Inc.

  5. Analyses of large quasistatic deformations of inelastic bodies by a new hybrid-stress finite element algorithm

    Science.gov (United States)

    Reed, K. W.; Atluri, S. N.

    1983-01-01

    A new hybrid-stress finite element algorithm, suitable for analyses of large, quasistatic, inelastic deformations, is presented. The algorithm is based upon a generalization of de Veubeke's complementary energy principle. The principal variables in the formulation are the nominal stress rate and spin, and the resulting finite element equations are discrete versions of the equations of compatibility and angular momentum balance. The algorithm produces true rates, i.e., time derivatives, as opposed to 'increments'. The result is a complete separation of the boundary value problem (for stress rate and velocity) from the initial value problem (for total stress and deformation); hence, their numerical treatments are essentially independent. After a fairly comprehensive discussion of the numerical treatment of the boundary value problem, we launch into a detailed examination of the numerical treatment of the initial value problem, covering the topics of efficiency, stability and objectivity. The paper closes with a set of examples, finite homogeneous deformation problems, which serve to bring out important aspects of the algorithm.

  6. A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil

    Science.gov (United States)

    Negri, A. J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The development of a satellite infrared technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall in Amazonia are presented. The Convective-Stratiform Technique, calibrated by coincident, physically retrieved rain rates from the Tropical Rain Measuring Mission (TRMM) Microwave Imager (TMI), is applied during January to April 1999 over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. Results compare well (with a one-hour lag) with the diurnal cycle derived from Tropical Ocean-Global Atmosphere (TOGA) radar-estimated rainfall in Rondonia. The satellite estimates reveal that convective rain constitutes, in the mean, 24% of the rain area while accounting for 67% of the rain volume. The effects of geography (rivers, lakes, coasts) and topography on the diurnal cycle of convection are examined. In particular, the Amazon River, downstream of Manaus, is shown to both enhance early morning rainfall and inhibit afternoon convection. Monthly estimates from this technique, dubbed CST/TMI, are verified over a dense rain gage network in the state of Ceara, in northeast Brazil. The CST/TMI showed a high bias equal to +33% of the gage mean, indicating that the TMI estimates alone may also be high. The root mean square difference (after removal of the bias) equaled 36.6% of the gage mean. The correlation coefficient was 0.77 based on 72 station-months.

  7. Applying controlled non-uniform deformation for in vitro studies of cell mechanobiology.

    Science.gov (United States)

    Balestrini, Jenna L; Skorinko, Jeremy K; Hera, Adriana; Gaudette, Glenn R; Billiar, Kristen L

    2010-06-01

    Cells within connective tissues routinely experience a wide range of non-uniform mechanical loads that regulate many cell behaviors. In this study, we developed an experimental system to produce complex strain patterns for the study of strain magnitude, anisotropy, and gradient effects on cells in culture. A standard equibiaxial cell stretching system was modified by affixing glass coverslips (5, 10, or 15 mm diameter) to the center of 35 mm diameter flexible-bottomed culture wells. Ring inserts were utilized to limit applied strain to different levels in each individual well at a given vacuum pressure thus enabling parallel experiments at different strain levels. Deformation fields were measured using high-density mapping for up to 6% applied strain. The addition of the rigid inclusion creates strong circumferential and radial strain gradients, with a continuous range of stretch anisotropy ranging from strip biaxial to equibiaxial strain and radial strains up to 24% near the inclusion. Dermal fibroblasts seeded within our 2D system (5 mm inclusions; 2% applied strain for 2 days at 0.2 Hz) demonstrated the characteristic orientation perpendicular to the direction of principal strain. Dermal fibroblasts seeded within fibrin gels (5 mm inclusions; 6% applied strain for 8 days at 0.2 Hz) oriented themselves similarly and compacted their surrounding matrix to an increasing extent with local strain magnitude. This study verifies how inhomogeneous strain fields can be produced in a tunable and simply constructed system and demonstrates the potential utility for studying gradients with a continuous spectrum of strain magnitudes and anisotropies.

  8. The Great Deluge Algorithm applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Oliveira, Cassiano R.E. de

    2005-01-01

    The Great Deluge Algorithm (GDA) is a local search algorithm introduced by Dueck. It is an analogy with a flood: the 'water level' rises continuously and the proposed solution must lie above the 'surface' in order to survive. The crucial parameter is the 'rain speed', which controls convergence of the algorithm much as the annealing schedule does in Simulated Annealing. This algorithm is applied to the reactor core design optimization problem, which consists of adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak factor in a 3-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. This problem was previously attacked by the canonical genetic algorithm (GA) and by a Niching Genetic Algorithm (NGA). NGAs were designed to force the genetic algorithm to maintain a heterogeneous population throughout the evolutionary process, avoiding the phenomenon known as genetic drift, where all the individuals converge to a single solution. The results obtained by the Great Deluge Algorithm are compared to those obtained by both algorithms mentioned above. The three algorithms are submitted to the same computational effort and GDA reaches the best results, showing its potential for other applications in the nuclear engineering field such as, for instance, the nuclear core reload optimization problem. One of the great advantages of this algorithm over the GA is that it does not require special operators for discrete optimization. (author)
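
    A minimal Great Deluge sketch, written here for minimization (the water level falls; the abstract's rising-water picture is the maximization mirror image). The neighbor move and cost function stand in for the core-design evaluation and are assumptions.

```python
import random

def great_deluge(x0, cost, neighbor, rain_speed=1e-3, iters=10000):
    x, best = x0, x0
    level = cost(x0)                      # initial water level
    for _ in range(iters):
        cand = neighbor(x)
        if cost(cand) <= level:           # solution must stay "above water"
            x = cand
            if cost(x) < cost(best):
                best = x
        level -= rain_speed               # the level moves steadily downward
    return best

# e.g. great_deluge(x0, cost=peak_factor, neighbor=perturb_cell_parameters),
# where peak_factor and perturb_cell_parameters are problem-specific stubs.
```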

  9. Real-time slicing algorithm for Stereolithography (STL) CAD model applied in additive manufacturing industry

    Science.gov (United States)

    Adnan, F. A.; Romlay, F. R. M.; Shafiq, M.

    2018-04-01

    Owing to the advent of Industry 4.0, the need to further evaluate the processes applied in additive manufacturing, particularly the computational process of slicing, is non-trivial. This paper evaluates a real-time slicing algorithm for slicing an STL-formatted computer-aided design (CAD) model. A line-plane intersection equation is applied to perform the slicing procedure at any given height. The application of this algorithm has been found to provide better computational time regardless of the number of facets in the STL model. The performance of this algorithm is evaluated by comparing the computational times for different geometries.
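
    A sketch of the line-plane intersection step at slice height z: every triangle edge that crosses the plane contributes one interpolated point, and the resulting segments form the slice contour. STL parsing is omitted; `facets` is assumed to be a list of three (x, y, z) vertex tuples.

```python
def slice_at_height(facets, z):
    segments = []
    for tri in facets:
        pts = []
        for (x1, y1, z1), (x2, y2, z2) in ((tri[0], tri[1]),
                                           (tri[1], tri[2]),
                                           (tri[2], tri[0])):
            if (z1 - z) * (z2 - z) < 0:          # edge crosses the plane
                t = (z - z1) / (z2 - z1)         # line-plane intersection
                pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
        if len(pts) == 2:                        # one segment per cut facet
            segments.append(tuple(pts))
    return segments
```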

  10. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    Science.gov (United States)

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high-performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for cancer tissues to be expertly identified and classified in a rapid and timely manner, both to assure fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of their combination. It was found in Phase One that the Particle Swarm Optimization (PSO) algorithm performed best on the colon dataset for feature selection (29 genes selected), and in Phase Two that the Support Vector Machine (SVM) algorithm outperformed the other classifiers, with an accuracy of almost 86%. It was also found in Phase Three that the combined use of PSO and SVM surpassed the other algorithm combinations in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  12. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  13. Power to the People! Meta-algorithmic modelling in applied data science

    NARCIS (Netherlands)

    Spruit, M.; Jagesar, R.

    2016-01-01

    This position paper first defines the research field of applied data science at the intersection of domain expertise, data mining, and engineering capabilities, with particular attention to analytical applications. We then propose a meta-algorithmic approach for applied data science with societal

  14. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    Science.gov (United States)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating the values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems related to solving a Fredholm integral equation of the second kind, it is expedient to use not the mesh, but the projection and projection-mesh randomized algorithms.
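
    For reference (a standard definition, not specific to this paper), a Fredholm integral equation of the second kind has the form

```latex
% the unknown \varphi appears both outside and inside the integral;
% K is the kernel whose point values the mesh methods require
\varphi(x) \;=\; f(x) + \lambda \int_a^b K(x, y)\,\varphi(y)\,\mathrm{d}y
```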

  15. Expeditious 3D Poisson-Vlasov algorithm applied to ion extraction from a plasma

    International Nuclear Information System (INIS)

    Whealton, J.H.; McGaffey, R.W.; Meszaros, P.S.

    1983-01-01

    A new 3D Poisson-Vlasov algorithm is under development which differs from a previous algorithm, referenced in this paper, in two respects: the mesh lines are cartesian, and the Poisson equation is solved iteratively. The resulting algorithm has been used to examine the same boundary value problem as considered with the earlier algorithm, except that the number of nodes is 2 times greater. The same physical results were obtained, except that the computational time was reduced by a factor of 60 and the memory requirement was reduced by a factor of 10. At present, this algorithm restricts Neumann boundary conditions to orthogonal planes lying along mesh lines. No such restriction applies to Dirichlet boundaries. An emittance diagram is shown in which points lying on the y = 0 line start on the axis of symmetry and those near the y = 1 line start near the slot end

  16. PSO-Based Algorithm Applied to Quadcopter Micro Air Vehicle Controller Design

    Directory of Open Access Journals (Sweden)

    Huu-Khoa Tran

    2016-09-01

    Due to the rapid development of science and technology in recent times, many effective controllers have been designed and applied successfully to complicated systems. The significant task in controller design is to determine optimized control gains in a short period of time. With this purpose in mind, a combination of the particle swarm optimization (PSO)-based algorithm and the evolutionary programming (EP) algorithm is introduced in this article. The benefit of this integrated algorithm is the creation of new best parameters for control design schemes. The proposed controller designs are then demonstrated to have the best performance for nonlinear micro air vehicle models.
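
    A sketch of the standard PSO velocity/position update of the kind hybridized with EP above; each particle would encode a vector of controller gains. The inertia and acceleration constants are illustrative, not the paper's settings.

```python
import random

W, C1, C2 = 0.7, 1.5, 1.5   # inertia, cognitive and social weights

def pso_step(particles, fitness):
    """particles: list of dicts with keys 'x', 'v', 'pbest' (lists of equal
    length); fitness: callable to minimize. Returns the global best found."""
    gbest = min((p["pbest"] for p in particles), key=fitness)
    for p in particles:
        for i in range(len(p["x"])):
            r1, r2 = random.random(), random.random()
            p["v"][i] = (W * p["v"][i]
                         + C1 * r1 * (p["pbest"][i] - p["x"][i])
                         + C2 * r2 * (gbest[i] - p["x"][i]))
            p["x"][i] += p["v"][i]
        if fitness(p["x"]) < fitness(p["pbest"]):
            p["pbest"] = list(p["x"])    # remember each particle's best
    return gbest
```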

  17. The gravitational attraction algorithm: a new metaheuristic applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Oliveira, Cassiano R.E. de

    2005-01-01

    A new metaheuristic called the 'Gravitational Attraction Algorithm' (GAA) is introduced in this article. It is an analogy with the gravitational force field, where a body attracts another in proportion to both masses and in inverse proportion to their distance. The GAA is a population-based algorithm where, first of all, the solutions are clustered using the Fuzzy C-Means (FCM) algorithm. Following that, the gravitational forces of the individuals in relation to each cluster are evaluated, and each individual or solution is displaced to the cluster with the greatest attractive force. Once it is inside this cluster, the solution receives small stochastic variations, performing a local exploration. Then the solutions are crossed over and the process starts all over again. The parameters required by the GAA are the 'diversity factor', which is used to create random diversity in a fashion similar to the genetic algorithm's mutation, and the number of clusters for the FCM. GAA is applied to the reactor core design optimization problem, which consists of adjusting several reactor cell parameters in order to minimize the average peak factor in a 3-enrichment-zone reactor, considering operational restrictions. This problem was previously attacked using the canonical genetic algorithm (GA) and a Niching Genetic Algorithm (NGA). The new metaheuristic is then compared to those two algorithms. The three algorithms are submitted to the same computational effort and GAA reaches the best results, showing its potential for other applications in the nuclear engineering field such as, for instance, the nuclear core reload optimization problem. (author)

  18. Wireless Sensor Networks for Heritage Object Deformation Detection and Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Zhijun Xie

    2014-10-01

    Deformation is the direct cause of heritage object collapse. It is important to monitor heritage objects and to signal early warnings of their deformation. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, and cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are better suited to detect the deformation of every small part of a heritage object. Wireless sensor networks need an effective mechanism to reduce both the communication costs and energy consumption in order to monitor heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost of transmitting and collecting the data of the sensor networks. In particular, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms existing methods in terms of network traffic and the precision of deformation detection.

  19. Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data

    Science.gov (United States)

    Chierici, F.; Embriaco, D.; Morucci, S.

    2017-12-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures that can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of occurrence of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by the Tohoku earthquake on March 11th, 2011, using data recorded by several tide gauges scattered all over the Pacific area.

  20. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural network blind equalization algorithm is derived and used in conjunction with Zigzag coding to restore the original image. As a result, the effect of the PSF can be removed by using the proposed algorithm, which helps eliminate intersymbol interference (ISI). In order to obtain an estimate of the original image, the method optimizes a constant modulus blind equalization cost function applied to the grayscale CT image using the conjugate gradient method. Analysis of the convergence performance of the algorithm verifies the feasibility of this method theoretically; meanwhile, simulation results and performance evaluations with recent image quality metrics are provided to assess the effectiveness of the proposed method.

  1. 3D scanning applied in the evaluation of large plastic deformation

    Directory of Open Access Journals (Sweden)

    Márcio Eduardo Silveira

    2012-01-01

    Crash tests are experimental studies demanded by specialized agencies in order to evaluate the performance of a component (or an entire vehicle) when subjected to an impact. The results, often highly destructive, produce large deformations in the product. The use of numerical simulation in the initial stages of a project is essential to reduce costs. One difficulty in validating numerical results involves the correlation between the level and the mode of deformation of the component, since this is a highly nonlinear simulation in which various parameters can affect the results. The main objective of this study was to propose a methodology to correlate the results of crash tests of a fuel tank with numerical simulations, using an optical 3D scanner. The results are promising, and the implemented methodology could be used for any product that involves large deformations.

  2. Self-adaptive global best harmony search algorithm applied to reactor core fuel management optimization

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Valavi, K.

    2013-01-01

    Highlights: • SGHS enhanced the convergence rate of LPO using some improvements in comparison to basic HS and GHS. • The SGHS optimization algorithm obtained better fitness on average than the basic HS and GHS algorithms. • The upshot of the SGHS implementation in the LPO reveals its flexibility, efficiency and reliability. - Abstract: The aim of this work is to apply the newly developed optimization algorithm, Self-adaptive Global best Harmony Search (SGHS), to PWR fuel management optimization. The SGHS algorithm has some modifications in comparison with the basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms, such as dynamic adjustment of parameters. To demonstrate SGHS's ability to find an optimal configuration of fuel assemblies, basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms were also developed and investigated. For this purpose, the Self-adaptive Global best Harmony Search Nodal Expansion package (SGHSNE) was developed, implementing the HS, GHS and SGHS optimization algorithms for the fuel management operation of nuclear reactor cores. This package uses a developed average current nodal expansion code which solves the multigroup diffusion equation by employing first and second orders of the Nodal Expansion Method (NEM) for two-dimensional hexagonal and rectangular geometries, respectively, with one node per FA. Loading pattern optimization was performed using the SGHSNE package for some test cases to present the SGHS algorithm's capability of converging to a near-optimal loading pattern. Results indicate that the convergence rate and reliability of the SGHS method are quite promising and that, practically, SGHS improves the quality of loading pattern optimization results relative to the HS and GHS algorithms. As a result, it has the potential to be used in other nuclear engineering optimization problems
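
    A minimal harmony-search step showing the ingredients SGHS adapts: harmony memory consideration (HMCR), pitch adjustment (PAR) toward the global best, and random re-initialization. The parameter values are illustrative; SGHS would re-sample HMCR/PAR from learned distributions each generation rather than fix them as below.

```python
import random

def improvise(memory, bounds, hmcr=0.9, par=0.3, bw=0.05):
    """memory: list of candidate solutions (lists), assumed sorted so that
    memory[0] is the best; bounds: per-variable (lo, hi) tuples."""
    best = memory[0]
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:         # draw from harmony memory
            v = random.choice(memory)[i]
            if random.random() < par:      # pitch-adjust toward global best
                v = best[i] + random.uniform(-bw, bw) * (hi - lo)
        else:                              # random consideration
            v = random.uniform(lo, hi)
        new.append(min(max(v, lo), hi))    # keep the variable inside bounds
    return new
```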

  3. Towards adaptive radiotherapy for head and neck patients: validation of an in-house deformable registration algorithm

    Science.gov (United States)

    Veiga, C.; McClelland, J.; Moinuddin, S.; Ricketts, K.; Modat, M.; Ourselin, S.; D'Souza, D.; Royle, G.

    2014-03-01

    The purpose of this work is to validate an in-house deformable image registration (DIR) algorithm for adaptive radiotherapy for head and neck patients. We aim to use the registrations to estimate the “dose of the day” and assess the need to replan. NiftyReg is an open-source implementation of the B-splines deformable registration algorithm, developed in our institution. We registered a planning CT to a CBCT acquired midway through treatment for 5 HN patients that required replanning. We investigated 16 different parameter settings that previously showed promising results. To assess the registrations, structures delineated in the CT were warped and compared with contours manually drawn by the same clinical expert on the CBCT. This structure set contained vertebral bodies and soft tissue. The Dice similarity coefficient (DSC), overlap index (OI), centroid position and distance between structure surfaces were calculated for every registration, and a set of parameters that produces good results for all datasets was found. We achieve a median value of 0.845 in DSC, 0.889 in OI, an error smaller than 2 mm in centroid position, and over 90% of the warped surface pixels lie within 2 mm of the manually drawn ones. By using appropriate DIR parameters, we are able to register the planning geometry (pCT) to the daily geometry (CBCT).

  4. Effects of deformable registration algorithms on the creation of statistical maps for preoperative targeting in deep brain stimulation procedures

    Science.gov (United States)

    Liu, Yuan; D'Haese, Pierre-Francois; Dawant, Benoit M.

    2014-03-01

    Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate pre-operative localization of the targets on pre-operative MRI sequences is challenging, as these are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the location of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best results for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we qualitatively and quantitatively compare the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes, using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study involves a total of 1800 deformable registrations, and results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads, with marginal differences in their shape. Among the six methods studied, Diffeomorphic Demons produces the largest spreads and centroids that are, in general, the furthest apart from the others. Among the three atlases, one atlas consistently outperforms the other two with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant, across different algorithms or across different atlases.

  5. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Veiga, Catarina, E-mail: catarina.veiga.11@ucl.ac.uk; Royle, Gary [Radiation Physics Group, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom); Lourenço, Ana Mónica [Radiation Physics Group, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom and Acoustics and Ionizing Radiation Team, National Physical Laboratory, Teddington TW11 0LW (United Kingdom); Mouinuddin, Syed [Department of Radiotherapy, University College London Hospital, London NW1 2BU (United Kingdom); Herk, Marcel van [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam 1066 CX (Netherlands); Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R. [Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom)

    2015-02-15

    Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent in using different algorithms for dose warping. Methods: The authors describe a DIR-based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed by calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of
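
    A sketch of one of the DVF quality checks mentioned above: the determinant of the Jacobian of the transformation x → x + u(x). det J > 0 everywhere means the mapping is locally invertible; det J = 1 means local volume preservation. Unit voxel spacing and the array layout are assumptions.

```python
import numpy as np

def jacobian_determinant(u):
    """u: DVF of shape (3, nz, ny, nx), displacements in voxel units.
    Returns det J at every voxel."""
    grads = [np.gradient(u[c]) for c in range(3)]   # du_c / d(z, y, x)
    J = np.empty(u.shape[1:] + (3, 3))
    for c in range(3):
        for d in range(3):
            J[..., c, d] = grads[c][d]
    J += np.eye(3)              # J = I + grad(u) for x -> x + u(x)
    return np.linalg.det(J)     # batched determinant over the last two axes
```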

  6. An Intuitive Dominant Test Algorithm of CP-nets Applied on Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Liu Zhaowei

    2014-07-01

    A wireless sensor network consists of spatially distributed autonomous sensors, much like a multi-agent system in which each node is a single agent. Conditional Preference networks (CP-nets) are a qualitative tool for representing ceteris paribus (all other things being equal) preference statements, and they have recently been a research hotspot in artificial intelligence. However, the algorithm and complexity of the strong dominance test with respect to binary-valued CP-nets have not been solved, and few researchers have addressed applications to other domains. In this paper, the strong dominance test and application of CP-nets are studied in detail. Firstly, by constructing the induced graph of a CP-net and studying its properties, we conclude that the strong dominance test problem on binary-valued CP-nets is essentially a single-source shortest path problem, so it can be solved by an improved Dijkstra's algorithm. Secondly, we apply the above algorithm to the completeness of wireless sensor networks, and design a completeness judging algorithm based on the strong dominance test. Thirdly, we apply the algorithm to solve the routing problem in wireless sensor networks. In the end, we point out some interesting directions for future work.
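
    A sketch of the shortest-path view described above: outcomes are nodes of the induced flip graph, and an edge joins outcomes differing in one variable whose change is sanctioned by the CP-net; reaching the worse outcome from the better one certifies dominance. Building the flip graph itself is problem-specific and elided here.

```python
import heapq

def shortest_flip_path(graph, better, worse):
    """graph: dict outcome -> iterable of (neighbor, weight) flip edges.
    Returns the flip-path length certifying dominance, or None."""
    dist, pq = {better: 0}, [(0, better)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == worse:
            return d                      # dominance certified by a flip path
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, w in graph.get(node, ()):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return None                           # no sanctioned flip sequence found
```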

  7. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.

  8. Experimental study and analytical model of deformation of magnetostrictive films as applied to mirrors for x-ray space telescopes.

    Science.gov (United States)

    Wang, Xiaoli; Knapp, Peter; Vaynman, S; Graham, M E; Cao, Jian; Ulmer, M P

    2014-09-20

    The desire to continuously gain new knowledge in astronomy has pushed the frontier of engineering methods to deliver lighter, thinner, higher-quality mirrors at an affordable cost for use in an x-ray observatory. To address these needs, we have been investigating the application of magnetic smart materials (MSMs) deposited as a thin film on mirror substrates. MSMs have some interesting properties that make their application to mirror substrates a promising solution for making the next generation of x-ray telescopes. Due to their ability to hold a shape under an impressed permanent magnetic field, MSMs have the potential to be the method used to make lightweight, affordable x-ray telescope mirrors. This paper presents the experimental setup for measuring the deformation of magnetostrictive bimorph specimens under an applied magnetic field, and the analytical and numerical analysis of the deformation. As a first step in the development of tools to predict deflections, we deposited Terfenol-D on glass substrates. We then made measurements that were compared with the results from the analytical and numerical analysis. The surface profiles of the thin-film specimens were measured under an external magnetic field with white light interferometry (WLI). The analytical model provides good predictions of film deformation behavior under various magnetic field strengths. This work establishes a solid foundation for further research to analyze the full three-dimensional deformation behavior of magnetostrictive thin films.

  9. SU-E-J-94: Geometric and Dosimetric Evaluation of Deformation Image Registration Algorithms Using Virtual Phantoms Generated From Patients with Lung Cancer

    International Nuclear Information System (INIS)

    Shen, Z; Greskovich, J; Xia, P; Bzdusek, K

    2015-01-01

    Purpose: To generate virtual phantoms with clinically relevant deformation and use them to objectively evaluate geometric and dosimetric uncertainties of deformable image registration (DIR) algorithms. Methods: Ten lung cancer patients undergoing adaptive 3DCRT planning were selected. For each patient, a pair of planning CT (pCT) and replanning CT (rCT) were used as the basis for virtual phantom generation. Manually adjusted meshes were created for selected ROIs (e.g. PTV, lungs, spinal cord, esophagus, and heart) on pCT and rCT. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was used to deform pCT to generate a simulated replanning CT (srCT) that was closely matched to rCT. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten virtual phantoms. The images, ROIs, and doses were mapped from pCT to srCT using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.85 to 0.96 for Demons, from 0.86 to 0.97 for intensity-based, and from 0.76 to 0.95 for B-Spline. The average Hausdorff distances for selected ROIs were from 2.2 to 5.4 mm for Demons, from 2.3 to 6.8 mm for intensity-based, and from 2.4 to 11.4 mm for B-Spline. The average absolute dose errors for selected ROIs were from 0.2 to 0.6 Gy for Demons, from 0.1 to 0.5 Gy for intensity-based, and from 0.5 to 1.5 Gy for B-Spline. Conclusion: Virtual phantoms were modeled after patients with lung cancer and were clinically relevant for adaptive radiotherapy treatment replanning. Virtual phantoms with known DVFs serve as references and can provide a fair comparison when evaluating different DIRs. Demons and intensity-based DIRs were shown to have smaller geometric and dosimetric uncertainties than B-Spline. Z Shen: None; K Bzdusek: an employee of Philips Healthcare; J Greskovich: None; P Xia

  10. Deformation Cycling of a Ti - Ni Alloy with Superelasticity Effect Applied in Cardiology

    Science.gov (United States)

    Kaputkin, D. E.; Morozova, T. V.

    2014-07-01

    The study concerns the effect of the conditions and force of loading experienced by an implanted Ti - Ni device during its transfer to the working zone, for example, in endoscopic implantation into the coronary sinus of the greater vena cava of the heart. It is shown that preliminary deformation cycling (10 - 15 cycles) stabilizes the set of mechanical properties of the alloy.

  11. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    Science.gov (United States)

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with systems of ordinary differential equations is a sensible approach, but also a very difficult one. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths; estimates its parameters; and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of the basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to that of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  12. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    International Nuclear Information System (INIS)

    Chen Ting; Kim, Sung; Goyal, Sharad; Jabbour, Salma; Zhou Jinghao; Rajagopal, Gunaretnum; Haffty, Bruce; Yue Ning

    2010-01-01

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real-time image-guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also, during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, the prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purposes. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a

  13. A Comparative Study of Improved Artificial Bee Colony Algorithms Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Kanjana Charansiriphaisan

    2013-01-01

    Full Text Available Multilevel thresholding is a highly useful tool for image segmentation. Otsu's method, a common exhaustive search for optimal thresholds, involves a high computational cost. Much recent research has explored meta-heuristic searches for such optimization problems. This paper analyses and discusses a family of artificial bee colony algorithms, namely, the standard ABC, ABC/best/1, ABC/best/2, IABC/best/1, IABC/rand/1, and CABC, together with some particle swarm optimization-based algorithms, for searching multilevel thresholds. The strategy by which an onlooker bee selects an employed bee was modified to serve our purposes. The metrics used to compare the algorithms are the maximum number of function calls, success rate, and success performance. Ranking was performed with Friedman ranks. The experimental results showed that IABC/best/1 outperformed the other techniques when all of them were applied to multilevel image thresholding. Furthermore, the experiments confirmed that IABC/best/1 is a simple, general, and high-performance algorithm.
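
    For reference, the fitness these swarm variants maximize in place of exhaustive search is Otsu's between-class variance over the candidate thresholds; a minimal sketch, assuming a normalized 256-bin histogram:

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu's between-class variance for a set of thresholds; hist is
    a normalized 256-bin histogram. The ABC variants maximize this
    fitness instead of enumerating all threshold combinations."""
    levels = np.arange(len(hist))
    edges = [0, *sorted(int(t) for t in thresholds), len(hist)]
    mu_total = float(np.sum(levels * hist))
    sigma_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = float(hist[lo:hi].sum())                  # class probability
        if w > 0.0:
            mu = float((levels[lo:hi] * hist[lo:hi]).sum()) / w
            sigma_b += w * (mu - mu_total) ** 2
    return sigma_b
```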

  14. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    Science.gov (United States)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

    Layered manufacturing machines use the stereolithography (STL) file format to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometrical distortion and chordal error. Parts manufactured with such a file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is the most suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.

  15. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, as well as to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time.
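
    The multiplicative correction shared by the MART family is compact enough to sketch; a dense-matrix toy version for clarity (production tomo-PIV solvers use sparse weights, and BIMART/SMART reorder or batch these updates):

```python
import numpy as np

def mart_sweep(E, W, I, mu=1.0, eps=1e-12):
    """One MART sweep over all pixels. E: (Nvox,) voxel intensities,
    initialized positive; W: (Npix, Nvox) weight matrix (dense here
    for clarity); I: (Npix,) recorded pixel intensities."""
    for i in range(W.shape[0]):
        proj = W[i] @ E + eps                # forward projection of pixel i
        E *= (I[i] / proj) ** (mu * W[i])    # multiplicative correction
    return E
```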

  16. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms were first tested on simulated data. One specific experimental set-up was also analysed. Simulated and real data have been produced. This set-up demonstrated the benefit of applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.
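
    Among the methods listed, the Wiener-type filter is the simplest to illustrate; a minimal 1D sketch, assuming a known pulse h and a scalar noise-to-signal ratio:

```python
import numpy as np

def wiener_deconvolve(y, h, nsr=1e-2):
    """Frequency-domain Wiener-type deconvolution of trace y by pulse h;
    nsr is a scalar noise-to-signal ratio (a tuning assumption)."""
    n = len(y)
    Y, H = np.fft.rfft(y, n), np.fft.rfft(h, n)
    return np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + nsr), n)
```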

  17. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    International Nuclear Information System (INIS)

    Martins, Fabio J W A; Foucaut, Jean-Marc; Stanislas, Michel; Thomas, Lionel; Azevedo, Luis F A

    2015-01-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, as well as to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time. (paper)

  18. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    International Nuclear Information System (INIS)

    Park, Taehoon; Park, Won-Kwang

    2015-01-01

    Numerical simulations have repeatedly shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application has remained somewhat heuristic. In this contribution, we identify a necessary condition for MUSIC imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation. (paper)
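
    The Bessel-series structure invoked here rests on the standard Jacobi-Anger expansion, reproduced below as the underlying identity (our shorthand, not the paper's full derivation):

```latex
e^{\,ix\cos\theta} \;=\; \sum_{n=-\infty}^{\infty} i^{\,n} J_{n}(x)\, e^{\,in\theta}
```

    with J_n the Bessel function of integer order n of the first kind; under a limited view the angular integration runs over a restricted aperture, so the series no longer collapses to a single Bessel term.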

  19. APPLYING ARTIFICIAL NEURAL NETWORK OPTIMIZED BY FIREWORKS ALGORITHM FOR STOCK PRICE ESTIMATION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-04-01

    Full Text Available Stock prediction is the task of determining the future value of a company stock traded on an exchange. It plays a crucial role in raising the profit gained by firms and investors. Over the past few years, many methods have been developed, with much of the effort focused on machine learning frameworks, which have achieved promising results. In this paper, an approach based on an Artificial Neural Network (ANN) optimized by the Fireworks algorithm, with data preprocessing by the Haar Wavelet, is applied to estimate stock prices. The system was trained and tested with real data of various companies collected from Yahoo Finance. The obtained results are encouraging.

  20. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Taehoon; Park, Won-Kwang

    2015-09-01

    Numerical simulations have repeatedly shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application has remained somewhat heuristic. In this contribution, we identify a necessary condition for MUSIC imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.

  1. Intelligent simulated annealing algorithm applied to the optimization of the main magnet for magnetic resonance imaging machine

    International Nuclear Information System (INIS)

    Sanchez Lopez, Hector

    2001-01-01

    This work describes an alternative Simulated Annealing algorithm applied to the design of the main magnet for a Magnetic Resonance Imaging machine. The algorithm uses a probabilistic radial basis function neural network to classify candidate solutions before the objective function evaluation. This procedure reduces by up to 50% the number of iterations required by simulated annealing to achieve the global maximum, when compared with the plain SA algorithm. The algorithm was applied to design a 0.1050 Tesla four-coil resistive magnet, which produces a magnetic field 2.13 times more uniform than the solution given by SA. (author)
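
    The expensive step being screened is the standard Metropolis acceptance test of simulated annealing; a minimal sketch, with the RBF pre-classification omitted:

```python
import math
import random

def sa_accept(delta, temperature):
    """Metropolis acceptance at the core of simulated annealing: always
    accept improvements (delta < 0), accept worse designs with
    probability exp(-delta/T). The paper's variant screens candidates
    with a probabilistic RBF network before this costly evaluation."""
    return delta < 0 or random.random() < math.exp(-delta / temperature)
```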

  2. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    Full Text Available The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of elderly on the CDT and evaluate inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  3. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  4. Spectral deformation techniques applied to the study of quantum statistical irreversible processes

    International Nuclear Information System (INIS)

    Courbage, M.

    1978-01-01

    A procedure of analytic continuation of the resolvent of Liouville operators for quantum statistical systems is discussed. When applied to the theory of irreversible processes of the Brussels School, this method supports the idea that the restriction to a class of initial conditions is necessary to obtain an irreversible behaviour. The general results are tested on the Friedrichs model. (Auth.)

  5. A methodology for the geometric design of heat recovery steam generators applying genetic algorithms

    International Nuclear Information System (INIS)

    Durán, M. Dolores; Valdés, Manuel; Rovira, Antonio; Rincón, E.

    2013-01-01

    This paper shows how the geometric design of heat recovery steam generators (HRSG) can be achieved. The method calculates the product of the overall heat transfer coefficient (U) by the area of the heat exchange surface (A) as a function of certain thermodynamic design parameters of the HRSG. A genetic algorithm is then applied to determine the best set of geometric parameters which comply with the desired UA product and, at the same time, result in a small heat exchange area and low pressure losses in the HRSG. In order to test this method, the design was applied to the HRSG of an existing plant and the results obtained were compared with the real exchange area of the steam generator. The findings show that the methodology is sound and offers reliable results even for complex HRSG designs. -- Highlights: ► The paper shows a methodology for the geometric design of heat recovery steam generators. ► Calculates product of the overall heat transfer coefficient by heat exchange area as a function of certain HRSG thermodynamic design parameters. ► It is a complement for the thermoeconomic optimization method. ► Genetic algorithms are used for solving the optimization problem
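
    The compromise described above maps naturally onto a scalar GA fitness; the sketch below uses hypothetical stand-in models and weights (compute_ua, compute_area and compute_dp are illustrative toys, not the paper's thermo-hydraulic correlations):

```python
# Illustrative stand-in models; the real U*A, area and pressure-loss
# correlations are those evaluated in the paper, not these toys.
def compute_ua(g):
    return 50.0 * g["n_tubes"] * g["tube_length"]

def compute_area(g):
    return 0.1 * g["n_tubes"] * g["tube_length"]

def compute_dp(g):
    return 0.01 * g["tube_length"] / g["tube_diameter"]

def hrsg_fitness(g, ua_target, w_area=1e-3, w_dp=1e-2):
    """GA fitness (to minimize): UA mismatch plus weighted area and
    pressure-loss penalties, mirroring the compromise described above.
    The weights are illustrative assumptions."""
    mismatch = abs(compute_ua(g) - ua_target) / ua_target
    return mismatch + w_area * compute_area(g) + w_dp * compute_dp(g)

design = {"n_tubes": 120, "tube_length": 8.0, "tube_diameter": 0.04}
print(hrsg_fitness(design, ua_target=50000.0))
```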

  6. Active filtering applied to radiographic images unfolded by the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2011-01-01

    Degradation of images caused by systematic uncertainties can be reduced when one knows the features of the spoiling agent. Typical uncertainties of this kind arise in radiographic images due to the non-zero resolution of the detector used to acquire them, and from the non-punctual character of the source employed in the acquisition, or from the beam divergence when extended sources are used. Both features blur the image, which, instead of a single point, exhibits a spot with a vanishing edge, thus reproducing the point spread function (PSF) of the system. Once this spoiling function is known, an inverse-problem approach, involving the inversion of matrices, can be used to retrieve the original image. As these matrices are generally ill-conditioned, due to statistical fluctuations and truncation errors, iterative procedures must be applied, such as the Richardson-Lucy algorithm. This algorithm has been applied in this work to unfold radiographic images acquired by transmission of thermal neutrons and gamma-rays. After this procedure, the resulting images undergo an active filtering which noticeably improves their final quality at a negligible cost in terms of processing time. The filter ruling the process is based on the matrix of correction factors from the last iteration of the deconvolution procedure. Synthetic images degraded with a known PSF, and subjected to the same treatment, were used as benchmarks to evaluate the soundness of the developed active filtering procedure. The deconvolution and filtering algorithms have been incorporated into a Fortran program written to deal with real images, generate synthetic ones and display both. (author)
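
    The iteration in question is compact; a generic Python version using scipy, in which the ratio array holds the per-pixel correction factors of the kind the active filter described above is derived from:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30, eps=1e-12):
    """Richardson-Lucy deconvolution: multiplicative updates that
    redistribute intensity according to the known PSF."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        ratio = image / blurred                       # correction factors
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```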

  7. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using

  8. Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.

    Science.gov (United States)

    Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin

    2017-08-16

    The objective consensus methodology has recently been applied to consensus finding in several studies of medical decision-making among clinical experts or guidelines. The main advantage of this method is the automated analysis and comparison of the treatment algorithms of the participating centers, which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for the successful collection of decision trees and summarizes important aspects at each point of the analysis.

  9. Aida-CMK multi-algorithm optimization kernel applied to analog IC sizing

    CERN Document Server

    Lourenço, Ricardo; Horta, Nuno

    2015-01-01

    This work addresses the research and development of an innovative optimization kernel applied to analog integrated circuit (IC) design. In particular, it describes the modifications inside the AIDA Framework, an electronic design automation framework fully developed at the Integrated Circuits Group-LX of the Instituto de Telecomunicações, Lisbon. It focuses on AIDA-CMK, enhancing AIDA-C, the circuit optimizer component of AIDA, with a new multi-objective multi-constraint optimization module that constructs a base for multiple algorithm implementations. The proposed solution implements three approaches to multi-objective multi-constraint optimization, namely, an evolutionary approach with NSGAII, a swarm intelligence approach with MOPSO and a stochastic hill climbing approach with MOSA. Moreover, the implemented structure allows easy hybridization between kernels, transforming the previous simple NSGAII optimization module into a more evolved and versatile module supporting multiple s...

  10. An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium

    Science.gov (United States)

    Palmer, Grant

    1988-01-01

    An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.
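
    For reference, a generic Steger-Warming-type flux vector splitting (the standard construction; the paper's exact variant is not spelled out here) splits each characteristic speed by sign and upwinds the split fluxes:

```latex
\lambda^{\pm} = \tfrac{1}{2}\left(\lambda \pm |\lambda|\right), \qquad
F = F^{+} + F^{-}, \qquad
\left.\frac{\partial F}{\partial x}\right|_{i} \approx
\frac{F^{+}_{i} - F^{+}_{i-1}}{\Delta x} + \frac{F^{-}_{i+1} - F^{-}_{i}}{\Delta x}
```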

  11. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  12. Metaheuristic Algorithms Applied to Bioenergy Supply Chain Problems: Theory, Review, Challenges, and Future

    Directory of Open Access Journals (Sweden)

    Krystel K. Castillo-Villar

    2014-11-01

    Full Text Available Bioenergy is a new source of energy that accounts for a substantial portion of the renewable energy production in many countries. The production of bioenergy is expected to increase due to its unique advantages, such as no harmful emissions and abundance. Supply-related problems are the main obstacles precluding the increased use of biomass (which is bulky and has low energy density) to produce bioenergy. To overcome this challenge, large-scale optimization models need to be solved to enable decision makers to plan, design, and manage bioenergy supply chains. Therefore, the use of effective optimization approaches is of great importance. The traditional mathematical methods (such as linear, integer, and mixed-integer programming) frequently fail to find optimal solutions for non-convex and/or large-scale models, whereas metaheuristics are efficient approaches for finding near-optimal solutions that use fewer computational resources. This paper presents a comprehensive review by studying and analyzing the application of metaheuristics to solve bioenergy supply chain models, as well as the challenges exclusive to the mathematical problems arising in the bioenergy supply chain field. The reviewed metaheuristics include: (1) population approaches, such as ant colony optimization (ACO), the genetic algorithm (GA), particle swarm optimization (PSO), and the bee colony algorithm (BCA); and (2) trajectory approaches, such as tabu search (TS) and simulated annealing (SA). Based on the outcomes of this literature review, the integrated design and planning of bioenergy supply chains problem has been solved primarily by implementing the GA. Production process optimization was addressed primarily by using both the GA and PSO. The supply chain network design problem was treated by utilizing the GA and ACO. The truck and task scheduling problem was solved using the SA and the TS, where the trajectory-based methods proved to outperform the population-based ones.

  13. Monte Carlo evaluation of a photon pencil kernel algorithm applied to fast neutron therapy treatment planning

    Science.gov (United States)

    Söderberg, Jonas; Alm Carlsson, Gudrun; Ahnesjö, Anders

    2003-10-01

    When dedicated software is lacking, treatment planning for fast neutron therapy is sometimes performed using dose calculation algorithms designed for photon beam therapy. In this work Monte Carlo derived neutron pencil kernels in water were parametrized using the photon dose algorithm implemented in the Nucletron TMS (treatment management system) treatment planning system. A rectangular fast-neutron fluence spectrum with energies 0-40 MeV (resembling a polyethylene filtered p(41)+ Be spectrum) was used. Central axis depth doses and lateral dose distributions were calculated and compared with the corresponding dose distributions from Monte Carlo calculations for homogeneous water and heterogeneous slab phantoms. All absorbed doses were normalized to the reference dose at 10 cm depth for a field of radius 5.6 cm in a 30 × 40 × 20 cm3 water test phantom. Agreement to within 7% was found in both the lateral and the depth dose distributions. The deviations could be explained as due to differences in size between the test phantom and that used in deriving the pencil kernel (radius 200 cm, thickness 50 cm). In the heterogeneous phantom, the TMS, with a directly applied neutron pencil kernel, and Monte Carlo calculated absorbed doses agree approximately for muscle but show large deviations for media such as adipose or bone. For the latter media, agreement was substantially improved by correcting the absorbed doses calculated in TMS with the neutron kerma factor ratio and the stopping power ratio between tissue and water. The multipurpose Monte Carlo code FLUKA was used both in calculating the pencil kernel and in direct calculations of absorbed dose in the phantom.
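
    Schematically, and in our notation rather than the paper's, a pencil-kernel calculation superposes the kernel over the incident fluence, and the medium correction mentioned above rescales the water dose by the mean neutron kerma factor ratio (with an analogous stopping-power ratio for the charged-particle component):

```latex
D(x,y,z) \;=\; \iint \Phi(x',y')\, k(x-x',\, y-y',\, z)\,\mathrm{d}x'\,\mathrm{d}y',
\qquad
D_{\mathrm{med}} \;\approx\; D_{\mathrm{water}}\,
\frac{\bar F_{n,\mathrm{med}}}{\bar F_{n,\mathrm{water}}}
```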

  14. SU-D-202-04: Validation of Deformable Image Registration Algorithms for Head and Neck Adaptive Radiotherapy in Routine Clinical Setting

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L; Pi, Y; Chen, Z; Xu, X [University of Science and Technology of China, Hefei, Anhui (China); Wang, Z [University of Science and Technology of China, Hefei, Anhui (China); The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Long, T; Luo, W; Wang, F [The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China)

    2016-06-15

    Purpose: To evaluate the ROI contour and accumulated dose differences obtained with different deformable image registration (DIR) algorithms for head and neck (H&N) adaptive radiotherapy. Methods: Eight H&N cancer patients were randomly selected from the affiliated hospital. During the treatment, patients were rescanned every week, with ROIs delineated by a radiation oncologist on each weekly CT. New weekly treatment plans were also re-designed with a consistent dose prescription on the rescanned CTs and executed for one week on a Siemens CT-on-rails accelerator. In the end, six weekly CT scans (CT1 to CT6) and six weekly treatment plans were available for each patient. The primary CT1 was set as the reference CT for DIR with the remaining five weekly CTs, using the ANACONDA and MORFEUS algorithms separately in RayStation, with the external skin ROI set as the controlling ROI in both. All calculated weekly doses were deformed and accumulated on the corresponding reference CT1 according to the deformation vector fields (DVFs) generated by the two DIR algorithms. This yielded both ANACONDA-based and MORFEUS-based accumulated total doses on CT1 for each patient. At the same time, the ROIs on CT1 were mapped to generate the corresponding ROIs on CT6 using the ANACONDA and MORFEUS DIR algorithms. DICE coefficients between the DIR-deformed and the oncologist-delineated ROIs on CT6 were calculated. Results: For the DIR-accumulated dose, PTV D95 and left-eyeball Dmax show significant differences of 67.13 cGy and 109.29 cGy, respectively (Table 1). For the DIR-mapped ROIs, the PTV, spinal cord and left optic nerve show differences of −0.025, −0.127 and −0.124 (Table 2). Conclusion: Even two well-established DIR algorithms can give divergent results for ROI deformation and dose accumulation. As more and more treatment planning systems integrate DIR modules, there is an urgent need to recognize the potential risks of using DIR clinically.
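
    The DICE overlap used for this comparison is straightforward to reproduce; a minimal sketch for binary ROI masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity index for two binary ROI masks:
    DSC = 2*|intersection| / (|A| + |B|); 1.0 means identical ROIs."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```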

  15. Plastic Deformation Induced by Nanoindentation Test Applied on ZrN/Si3N4 Multilayer Coatings

    Directory of Open Access Journals (Sweden)

    Zhengtao Wu

    2017-12-01

    Full Text Available A ZrN/Si3N4 multilayer coating alternating nanocrystalline ZrN and amorphous Si3N4 interlayers was fabricated by reactive magnetron sputtering in an Ar-N2 mixture atmosphere. The thicknesses of the nanocrystalline ZrN and the amorphous Si3N4 interlayers are ~12.5 and 2.5 nm, respectively. The ZrN/Si3N4 coating exhibits an enhanced hardness of 28.6 ± 1.2 GPa when compared to binary ZrN. The microstructure evolution just underneath the nanoindentation impression of the ZrN/Si3N4 multilayer coating has been investigated. The results indicate that both ZrN nanograin rotation and plastic flow of the Si3N4 interlayers contribute to the permanent deformation of the multilayer coating induced by the nanoindentation. In addition, the introduction of the a-Si3N4 interlayers hinders both the initiation and the propagation of microcracks when the multilayer coating is subjected to the scratch test. Deflection of microcrack propagation was observed and attributed to the heterogeneous interfaces, which ultimately account for the hardness enhancement of the multilayer coating.

  16. Matrix product algorithm for stochastic dynamics on networks applied to nonequilibrium Glauber dynamics

    Science.gov (United States)

    Barthel, Thomas; De Bacco, Caterina; Franz, Silvio

    2018-01-01

    We introduce and apply an efficient method for the precise simulation of stochastic dynamical processes on locally treelike graphs. Networks with cycles are treated in the framework of the cavity method. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon ideas from quantum many-body theory, our approach is based on a matrix product approximation of the so-called edge messages—conditional probabilities of vertex variable trajectories. Computation costs and accuracy can be tuned by controlling the matrix dimensions of the matrix product edge messages (MPEM) in truncations. In contrast to Monte Carlo simulations, the algorithm has a better error scaling and works for both single instances as well as the thermodynamic limit. We employ it to examine prototypical nonequilibrium Glauber dynamics in the kinetic Ising model. Because of the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations.

  17. A genetic algorithm applied to a PWR turbine extraction optimization to increase cycle efficiency

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Schirru, Roberto

    2002-01-01

    In nuclear power plants feedwater heaters are used to heat feedwater from its temperature leaving the condenser to final feedwater temperature using steam extracted from various stages of the turbines. The purpose of this process is to increase cycle efficiency. The determination of the optimal fraction of mass flow rate to be extracted from each stage of the turbines is a complex optimization problem. This kind of problem has been efficiently solved by means of evolutionary computation techniques, such as Genetic Algorithms (GAs). GAs, which are systems based upon principles from biological genetics, have been successfully applied to several combinatorial optimization problems in nuclear engineering, as the nuclear fuel reload optimization problem. We introduce the use of GAs in cycle efficiency optimization by finding an optimal combination of turbine extractions. In order to demonstrate the effectiveness of our approach, we have chosen a typical PWR as case study. The secondary side of the PWR was simulated using PEPSE, which is a modeling tool used to perform integrated heat balances for power plants. The results indicate that the GA is a quite promising tool for cycle efficiency optimization. (author)

  18. A non-parametric 2D deformable template classifier

    DEFF Research Database (Denmark)

    Schultz, Nette; Nielsen, Allan Aasbjerg; Conradsen, Knut

    2005-01-01

    feature space the ship-master will be able to interactively define a segmentation map, which is refined and optimized by the deformable template algorithms. The deformable templates are defined as two-dimensional vector-cycles. Local random transformations are applied to the vector-cycles, and stochastic...

  19. Genetic algorithm applied to a Soil-Vegetation-Atmosphere system: Sensitivity and uncertainty analysis

    Science.gov (United States)

    Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk

    2010-05-01

    For the inversion procedure, a genetic algorithm (GA) was used. Specific features such as elitism, a roulette-wheel selection operator and island theory were implemented. Optimization was based on the water content measurements recorded at several depths. Ten scenarios were elaborated and applied to the two lysimeters in order to investigate the impact of the conceptual model, in terms of process description (mechanistic or compartmental) and geometry (number of horizons in the profile description), on calibration accuracy. Calibration leads to good agreement with the measured water contents. The most critical parameters for improving the goodness of fit are the number of horizons and the type of process description. The best fits are found for a mechanistic model with 5 horizons, resulting in absolute differences between observed and simulated water contents of less than 0.02 cm3cm-3 on average. Parameter estimate analysis shows that layer thicknesses are poorly constrained, whereas hydraulic parameters are much better defined.

  20. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  1. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    International Nuclear Information System (INIS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-01-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every constitutive part of the plane. The task is even more delicate when the material used is a composite. In that case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on the consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated in the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm allows estimation of the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  2. Imperialist Competitive Algorithm with Dynamic Parameter Adaptation Using Fuzzy Logic Applied to the Optimization of Mathematical Functions

    Directory of Open Access Journals (Sweden)

    Emer Bernal

    2017-01-01

    Full Text Available In this paper we present a method using fuzzy logic for dynamic parameter adaptation in the imperialist competitive algorithm, usually known by its acronym ICA. The ICA algorithm was initially studied in its original form to find out how it works and which parameters have the most effect upon its results. Based on this study, several designs of fuzzy systems for dynamic adjustment of the ICA parameters are proposed. The experiments were performed by solving complex optimization problems, in particular benchmark mathematical functions. A comparison of the original imperialist competitive algorithm and the proposed fuzzy imperialist competitive algorithm was performed. In addition, the fuzzy ICA was compared with another metaheuristic using a statistical test to measure the advantage of the proposed fuzzy approach for dynamic parameter adaptation.

  3. The Patch-Levy-Based Bees Algorithm Applied to Dynamic Optimization Problems

    Directory of Open Access Journals (Sweden)

    Wasim A. Hussein

    2017-01-01

    Full Text Available Many real-world optimization problems are actually of dynamic nature. These problems change over time in terms of the objective function, decision variables, constraints, and so forth. Therefore, it is very important to study the performance of a metaheuristic algorithm in dynamic environments to assess its robustness in dealing with real-world problems. In addition, it is important to adapt existing metaheuristic algorithms to perform well in dynamic environments. This paper investigates a recently proposed version of the Bees Algorithm, called the Patch-Levy-based Bees Algorithm (PLBA), on solving dynamic problems, and adapts it to deal with such problems. The performance of the PLBA is compared with other BA versions and other state-of-the-art algorithms on a set of dynamic multimodal benchmark problems of different degrees of difficulty. The results of the experiments show that PLBA achieves better results than the other BA variants. The obtained results also indicate that PLBA significantly outperforms some of the other state-of-the-art algorithms and is competitive with others.

  4. Comprehensive evaluation of ten deformable image registration algorithms for contour propagation between CT and cone-beam CT images in adaptive head & neck radiotherapy.

    Science.gov (United States)

    Li, Xin; Zhang, Yuyu; Shi, Yinghua; Wu, Shuyu; Xiao, Yang; Gu, Xuejun; Zhen, Xin; Zhou, Linghong

    2017-01-01

    Deformable image registration (DIR) is a critical technique in adaptive radiotherapy (ART) for propagating contours between planning computerized tomography (CT) images and treatment CT/cone-beam CT (CBCT) images to account for organ deformation for treatment re-planning. To validate the ability and accuracy of DIR algorithms in organ at risk (OAR) contour mapping, ten intensity-based DIR strategies, classified into four categories (optical flow-based, demons-based, level-set-based and spline-based), were tested on planning CT and fractional CBCT images acquired from twenty-one head & neck (H&N) cancer patients who underwent 6~7-week intensity-modulated radiation therapy (IMRT). Three similarity metrics, i.e., the Dice similarity coefficient (DSC), the percentage error (PE) and the Hausdorff distance (HD), were employed to measure the agreement between the propagated contours and the physician-delineated ground truths for four OARs: the vertebra (VTB), the vertebral foramen (VF), the parotid gland (PG) and the submandibular gland (SMG). It was found that the DIRs evaluated in this work did not necessarily outperform rigid registration. DIR performed better for bony structures than soft-tissue organs, and DIR performance tended to vary for different ROIs with different degrees of deformation as the treatment proceeded. Generally, the optical flow-based DIR performed best, while the demons-based DIR usually ranked last, except for a modified demons-based DISC used for CT-CBCT DIR. These experimental results suggest that the choice of a specific DIR algorithm depends on the image modality, anatomic site, magnitude of deformation and application. Therefore, careful examination and modification are required before accepting auto-propagated contours, especially for automatic re-planning ART systems.
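
    Of the three metrics, the Hausdorff distance is the least standard to implement; a minimal sketch for two contour point clouds (a brute-force version; O(N*M) memory is acceptable at contour scale):

```python
import numpy as np

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two contour point sets of
    shape (N, 3) and (M, 3): the worst-case nearest-neighbour distance
    from either set to the other."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```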

  5. Comprehensive evaluation of ten deformable image registration algorithms for contour propagation between CT and cone-beam CT images in adaptive head & neck radiotherapy.

    Directory of Open Access Journals (Sweden)

    Xin Li

    Full Text Available Deformable image registration (DIR) is a critical technique in adaptive radiotherapy (ART) for propagating contours between planning computerized tomography (CT) images and treatment CT/cone-beam CT (CBCT) images to account for organ deformation for treatment re-planning. To validate the ability and accuracy of DIR algorithms in organ at risk (OAR) contour mapping, ten intensity-based DIR strategies, classified into four categories (optical flow-based, demons-based, level-set-based and spline-based), were tested on planning CT and fractional CBCT images acquired from twenty-one head & neck (H&N) cancer patients who underwent 6~7-week intensity-modulated radiation therapy (IMRT). Three similarity metrics, i.e., the Dice similarity coefficient (DSC), the percentage error (PE) and the Hausdorff distance (HD), were employed to measure the agreement between the propagated contours and the physician-delineated ground truths for four OARs: the vertebra (VTB), the vertebral foramen (VF), the parotid gland (PG) and the submandibular gland (SMG). It was found that the DIRs evaluated in this work did not necessarily outperform rigid registration. DIR performed better for bony structures than soft-tissue organs, and DIR performance tended to vary for different ROIs with different degrees of deformation as the treatment proceeded. Generally, the optical flow-based DIR performed best, while the demons-based DIR usually ranked last, except for a modified demons-based DISC used for CT-CBCT DIR. These experimental results suggest that the choice of a specific DIR algorithm depends on the image modality, anatomic site, magnitude of deformation and application. Therefore, careful examination and modification are required before accepting auto-propagated contours, especially for automatic re-planning ART systems.

  6. A quantitative performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.

    1991-01-01

    In this paper, the authors quantitatively evaluate the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratios (SNR) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm with two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering.

  7. Branch-and-Bound algorithm applied to uncertainty quantification of a Boiling Water Reactor Station Blackout

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Joseph, E-mail: joseph.nielsen@inl.gov [Idaho National Laboratory, 1955 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83402 (United States); University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Tokuhiro, Akira [University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Hiromoto, Robert [University of Idaho, Department of Computer Science, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Tu, Lei [University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States)

    2015-12-15

    state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. This paper presents a methodology to address combinatorial explosion using a Branch-and-Bound algorithm applied to Dynamic Event Trees (DET), which utilize LENDIT (L – Length, E – Energy, N – Number, D – Distribution, I – Information, and T – Time) as well as a set theory to describe system, state, resource, and response (S2R2) sets to create bounding functions for the DET. The optimization of the DET in identifying high probability failure branches is extended to create a Phenomenological Identification and Ranking Table (PIRT) methodology to evaluate modeling parameters important to safety of those failure branches that have a high probability of failure. The PIRT can then be used as a tool to identify and evaluate the need for experimental validation of models that have the potential to reduce risk. In order to demonstrate this methodology, a Boiling Water Reactor (BWR) Station Blackout (SBO) case study is presented.

  8. Branch-and-Bound algorithm applied to uncertainty quantification of a Boiling Water Reactor Station Blackout

    International Nuclear Information System (INIS)

    Nielsen, Joseph; Tokuhiro, Akira; Hiromoto, Robert; Tu, Lei

    2015-01-01

    state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. This paper presents a methodology to address combinatorial explosion using a Branch-and-Bound algorithm applied to Dynamic Event Trees (DET), which utilize LENDIT (L – Length, E – Energy, N – Number, D – Distribution, I – Information, and T – Time) as well as a set theory to describe system, state, resource, and response (S2R2) sets to create bounding functions for the DET. The optimization of the DET in identifying high probability failure branches is extended to create a Phenomenological Identification and Ranking Table (PIRT) methodology to evaluate modeling parameters important to safety of those failure branches that have a high probability of failure. The PIRT can then be used as a tool to identify and evaluate the need for experimental validation of models that have the potential to reduce risk. In order to demonstrate this methodology, a Boiling Water Reactor (BWR) Station Blackout (SBO) case study is presented.

  9. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Directory of Open Access Journals (Sweden)

    C. Fernandez-Lozano

    2013-01-01

    Full Text Available Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
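
    The wrapper coupling is easy to sketch: each GA individual is a bit string over the candidate variables, scored by cross-validated SVM accuracy; a minimal illustration with scikit-learn (kernel choice and fold count are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def svm_fitness(bitmask, X, y, folds=5):
    """Wrapper fitness for one GA individual: one bit per variable,
    scored by cross-validated SVM accuracy on the selected columns."""
    selected = np.flatnonzero(bitmask)
    if selected.size == 0:
        return 0.0          # empty variable sets score worst
    model = SVC(kernel="rbf")
    return cross_val_score(model, X[:, selected], y, cv=folds).mean()
```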

  10. SU-E-J-109: Evaluation of Deformable Accumulated Parotid Doses Using Different Registration Algorithms in Adaptive Head and Neck Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Xu, S [Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, 100084 China (China); Chinese PLA General Hospital, Beijing, 100853 China (China); Liu, B [Image processing center, Beihang University, Beijing, 100191 China (China)

    2015-06-15

    Purpose: Three deformable image registration (DIR) algorithms are utilized to perform deformable dose accumulation for head and neck tomotherapy treatment, and the differences in the accumulated doses are evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70 Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from the planning CTs to the daily CTs and accumulate the fractionated dose on the planning CTs. The mean accumulated doses of the parotids were quantitatively compared, and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13 Gy) was slightly higher than that of the contralateral parotids (31.38±3.19 Gy) in the 10 patients. The differences between the accumulated mean doses of the ipsilateral parotids under the B-spline, Demons and MIMvista deformation algorithms (36.40±5.78 Gy, 34.08±6.72 Gy and 33.72±2.63 Gy) were statistically significant (B-spline vs Demons, p < 0.0001; B-spline vs MIMvista, p = 0.002). The differences between those of the contralateral parotids (34.08±4.82 Gy, 32.42±4.80 Gy and 33.92±4.65 Gy) were also significant (B-spline vs Demons, p = 0.009; B-spline vs MIMvista, p = 0.074). For the DSI analysis, the scores of the B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76. Conclusion: Shrinkage of the parotid volumes results in a dose increase to the parotid glands in adaptive head and neck radiotherapy. The accumulated doses of the parotids show significant differences when using the different DIR algorithms between kVCT and MVCT. Therefore, the volume-based criterion (i.e. DSI) as a quantitative evaluation of

  11. New approaches of the potential field for QPSO algorithm applied to nuclear reactor reload problem

    International Nuclear Information System (INIS)

    Nicolau, Andressa dos Santos; Schirru, Roberto

    2015-01-01

    Recently, a quantum-inspired version of the Particle Swarm Optimization (PSO) algorithm, Quantum Particle Swarm Optimization (QPSO), was proposed. The QPSO algorithm permits all particles to have quantum behavior, where some sort of 'quantum motion' is imposed on the search process. When tested against a set of benchmark functions, QPSO showed superior performance compared to classical PSO: it outperforms the classical algorithm most of the time in convergence speed and achieves better fitness levels. The great advantage of the QPSO algorithm is that it uses only one control parameter. The critical step of the QPSO algorithm is the choice of a suitable attractive potential field that can guarantee bound states for the particles moving in the quantum environment. In this article, one version of the QPSO algorithm was tested with two types of potential well: the delta potential well and the harmonic oscillator. The main goal of this study is to show which potential field is the most suitable for use in QPSO in a solution of the Nuclear Reactor Reload Optimization Problem, especially for cycle 7 of a Brazilian Nuclear Power Plant. All results were compared with the performance of the classical counterpart from the literature and show that the QPSO algorithm is well situated among the best alternatives for dealing with hard optimization problems, such as the NRROP. (author)
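
    The delta-potential-well case referred to above has a standard closed-form sampling rule; a sketch of the usual QPSO position update (generic textbook form, with beta as the lone control parameter):

```python
import numpy as np

def qpso_position(x, pbest, gbest, mbest, beta=0.75, rng=np.random):
    """Textbook QPSO update under the delta potential well: sample the
    new position around the local attractor p, with a spread set by
    the distance to the mean best position of the swarm."""
    phi = rng.random(x.shape)
    p = phi * pbest + (1.0 - phi) * gbest           # local attractor
    u = rng.random(x.shape)
    sign = np.where(rng.random(x.shape) < 0.5, -1.0, 1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```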

  12. SVC control enhancement applying self-learning fuzzy algorithm for islanded microgrid

    Directory of Open Access Journals (Sweden)

    Hossam Gabbar

    2016-03-01

    Full Text Available Maintaining voltage stability within acceptable levels for islanded Microgrids (MGs) is a challenge due to the limited exchange of power between generation and loads. This paper proposes an algorithm to enhance the dynamic performance of islanded MGs in the presence of load disturbance using a Static VAR Compensator (SVC) with a Fuzzy Model Reference Learning Controller (FMRLC). The proposed algorithm compensates MG nonlinearity via fuzzy membership functions and an inference mechanism embedded in both the controller and the inverse model. Hence, the MG keeps the desired performance at any operating condition. Furthermore, the self-learning capability of the proposed control algorithm compensates for grid parameter variations even with inadequate information about load dynamics. A reference model was designed to reject bus voltage disturbance with performance achievable by the proposed fuzzy controller. Three simulation scenarios were presented to investigate the effectiveness of the proposed control algorithm in improving the steady-state and transient performance of islanded MGs. The first scenario was conducted without SVC, the second with SVC using a PID controller, and the third using the FMRLC algorithm. A comparison of the results shows the ability of the proposed control algorithm to enhance disturbance rejection thanks to the learning process.

  13. New approaches of the potential field for QPSO algorithm applied to nuclear reactor reload problem

    Energy Technology Data Exchange (ETDEWEB)

    Nicolau, Andressa dos Santos; Schirru, Roberto, E-mail: andressa@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    Recently, a quantum-inspired version of the Particle Swarm Optimization (PSO) algorithm, Quantum Particle Swarm Optimization (QPSO), was proposed. The QPSO algorithm permits all particles to have quantum behavior, where some sort of 'quantum motion' is imposed on the search process. When tested against a set of benchmark functions, QPSO showed superior performance compared to classical PSO: it outperforms the classical algorithm most of the time in convergence speed and achieves better fitness levels. The great advantage of the QPSO algorithm is that it uses only one control parameter. The critical step of the QPSO algorithm is the choice of a suitable attractive potential field that can guarantee bound states for the particles moving in the quantum environment. In this article, one version of the QPSO algorithm was tested with two types of potential well: the delta potential well and the harmonic oscillator. The main goal of this study is to show which potential field is the most suitable for use in QPSO in a solution of the Nuclear Reactor Reload Optimization Problem, especially for cycle 7 of a Brazilian Nuclear Power Plant. All results were compared with the performance of the classical counterpart from the literature and show that the QPSO algorithm is well situated among the best alternatives for dealing with hard optimization problems, such as the NRROP. (author)

  14. Bladder dose accumulation based on a biomechanical deformable image registration algorithm in volumetric modulated arc therapy for prostate cancer

    DEFF Research Database (Denmark)

    Andersen, E S; Muren, L P; Sørensen, T S

    2012-01-01

    Variations in bladder position, shape and volume cause uncertainties in the doses delivered to this organ during a course of radiotherapy for pelvic tumors. The purpose of this study was to evaluate the potential of dose accumulation based on repeat imaging and deformable image registration (DIR)...

  15. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  16. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  17. The fuzzy clearing approach for a niching genetic algorithm applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Machado, Marcelo D.; Pereira, Claudio M.N.A.; Schirru, Roberto

    2004-01-01

    This article extends previous efforts on genetic algorithms (GAs) applied to a core design optimization problem. We introduce the application of a new Niching Genetic Algorithm (NGA) to this problem and compare its performance to these previous works. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. After exhaustive experiments we observed that our new niching method performs better than the conventional GA due to a greater exploration of the search space

  18. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    Science.gov (United States)

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking the orientation responses of path operators (RORPO), the regularized Perona-Malik approach (RPM), vessel enhanced diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS, proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS, ranked at the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED, show a decrease in accuracy when considering patients with an AVM.
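
    For readers who want to try one of the compared filters, a minimal multiscale Frangi (MSF-style) sketch using scikit-image is shown below; the sigmas, threshold and random stand-in slice are our assumptions, not the parameters optimized in the study:

```python
import numpy as np
from skimage.filters import frangi

def enhance_vessels(slice_2d, sigmas=(1, 2, 3)):
    """Multiscale Frangi vesselness; bright vessels on a dark background."""
    return frangi(slice_2d, sigmas=sigmas, black_ridges=False)

img = np.random.rand(128, 128)       # stand-in for one TOF-MRA slice
vesselness = enhance_vessels(img)
mask = vesselness > 0.05             # naive global threshold as a toy segmentation
```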

  19. Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows

    Science.gov (United States)

    Palmer, Grant; Venkatapathy, Ethiraj

    1995-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  20. An Encoding Technique for Multiobjective Evolutionary Algorithms Applied to Power Distribution System Reconfiguration

    Directory of Open Access Journals (Sweden)

    J. L. Guardado

    2014-01-01

    Full Text Available Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  1. An encoding technique for multiobjective evolutionary algorithms applied to power distribution system reconfiguration.

    Science.gov (United States)

    Guardado, J L; Rivas-Davalos, F; Torres, J; Maximov, S; Melgoza, E

    2014-01-01

    Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  2. Making Deformable Template Models Operational

    DEFF Research Database (Denmark)

    Fisker, Rune

    2000-01-01

    Deformable template models are a very popular and powerful tool within the field of image processing and computer vision. This thesis treats this type of models extensively with special focus on handling their common difficulties, i.e. model parameter selection, initialization and optimization. A proper handling of the common difficulties is essential for making the models operational by a non-expert user, which is a requirement for intensifying and commercializing the use of deformable template models. The thesis is organized as a collection of the most important articles, which has been... One contribution is a method for estimation of the model parameters, which applies a combination of a maximum likelihood and minimum distance criterion. Another contribution is a very fast search based initialization algorithm using a filter interpretation of the likelihood model. These two methods can be applied to most deformable template...

  3. New algorithm using only one variable measurement applied to a maximum power point tracker

    Energy Technology Data Exchange (ETDEWEB)

    Salas, V.; Olias, E.; Lazaro, A.; Barrado, A. [University Carlos III de Madrid (Spain). Dept. of Electronic Technology

    2005-05-01

    A novel algorithm for seeking the maximum power point of a photovoltaic (PV) array for any temperature and solar irradiation level, needing only the PV current value, is proposed. Satisfactory theoretical and experimental results are presented, obtained when the algorithm was included in a 100 W 24 V PV buck converter prototype using an inexpensive microcontroller. The load of the system was a battery and a resistance. The main advantage of this new maximum power point tracker (MPPT), when compared with others, is that it uses only the measurement of the photovoltaic current, I{sub PV}. (author)
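
    A hedged sketch of a current-only hill-climbing MPPT loop in the spirit described is given below; the hardware callbacks `read_ipv` and `set_duty` are hypothetical placeholders, and this is not the authors' exact algorithm:

```python
def mppt_current_only(read_ipv, set_duty, d0=0.5, step=0.01, iters=100):
    """Hill-climbing MPPT driven only by the PV current. read_ipv and set_duty
    are hypothetical hardware callbacks; with a battery-clamped buck converter,
    maximizing the relevant current approximates tracking the maximum power point."""
    d = d0
    set_duty(d)
    last_i = read_ipv()
    direction = 1
    for _ in range(iters):
        d = min(max(d + direction * step, 0.0), 1.0)  # perturb the duty cycle
        set_duty(d)
        i = read_ipv()
        if i < last_i:            # current fell: reverse the perturbation direction
            direction = -direction
        last_i = i
    return d
```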

  4. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); (N0 + 1)th disk is moved from A to C directly ...
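
    The snippet above sketches the classic Tower of Hanoi recursion; as a runnable illustration (our own function names, not the article's Logo-like notation), a minimal Python version is:

```python
def move_disks(n, src, dst, aux):
    """Move n disks from src to dst using aux as the auxiliary rod."""
    if n == 0:
        return
    move_disks(n - 1, src, aux, dst)   # clear the top n-1 disks onto the auxiliary rod
    print(f"move disk {n}: {src} -> {dst}")
    move_disks(n - 1, aux, dst, src)   # bring them back on top of the moved disk

move_disks(3, "A", "C", "B")
```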

  5. Lagrangian and hamiltonian algorithms applied to the enlarged DGL model

    International Nuclear Information System (INIS)

    Batlle, C.; Roman-Roy, N.

    1988-01-01

    We analyse a model of two interacting relativistic particles which is useful to illustrate the equivalence between the Dirac-Bergmann and the geometrical presymplectic constraint algorithms. Both the lagrangian and hamiltonian formalisms are analysed in depth, and we also find and discuss the equations of motion. (author)

  6. Estimation of the soil temperature from the AVHRR-NOAA satellite data applying split window algorithms

    International Nuclear Information System (INIS)

    Parra, J.C.; Acevedo, P.S.; Sobrino, J.A.; Morales, L.J.

    2006-01-01

    Four algorithms based on the split-window technique are applied to estimate the land surface temperature from data provided by the Advanced Very High Resolution Radiometer (AVHRR) sensor, on board the series of satellites of the National Oceanic and Atmospheric Administration (NOAA). These algorithms include corrections for atmospheric characteristics and for the emissivity of the different land surfaces. Fourteen AVHRR-NOAA images corresponding to October 2003 and January 2004 were used. Simultaneously, soil temperature measurements were collected at the Carillanca hydro-meteorological station in the Region of La Araucania, Chile (38 deg 41 min S; 72 deg 25 min W). Of all the algorithms used, the best results correspond to the model proposed by Sobrino and Raissouni (2000), with a mean and standard deviation of the difference between the soil temperature measured in situ and that estimated by the algorithm of -0.06 and 2.11 K, respectively. (author)
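
    As a hedged illustration of the general structure such split-window algorithms share, the sketch below encodes the usual quadratic-difference form with emissivity and water-vapour corrections; all coefficients are placeholders, not the published values of any of the four evaluated models:

```python
def split_window_lst(t4, t5, eps, deps, w, c=(0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0)):
    """Generic split-window land surface temperature estimate.

    t4, t5: brightness temperatures of AVHRR channels 4 and 5 (K)
    eps, deps: mean emissivity and emissivity difference of the two channels
    w: column water vapour; c: placeholder coefficients (NOT published values).
    """
    c0, c1, c2, c3, c4, c5, c6 = c
    dt = t4 - t5
    return (t4 + c1 * dt + c2 * dt**2 + c0
            + (c3 + c4 * w) * (1.0 - eps)
            + (c5 + c6 * w) * deps)
```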

  7. Searching dependency between algebraic equations: An algorithm applied to automated reasoning

    International Nuclear Information System (INIS)

    Yang Lu; Zhang Jingzhong

    1990-01-01

    An efficient computer algorithm is given to decide how many branches of the solution to a system of algebraic equations also solve another equation. As one of its applications, this can be used in practice to verify a conjecture whose hypotheses and conclusion are expressed by algebraic equations, whether reducible or irreducible. (author). 10 refs

  8. A computationally efficient depression-filling algorithm for digital elevation models, applied to proglacial lake drainage

    NARCIS (Netherlands)

    Berends, Constantijn J.; Van De Wal, Roderik S W

    2016-01-01

    Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice into lakes that temporarily surround the ice sheet. In order to capture this process, a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a
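
    As a minimal sketch of the flood-fill idea (our own toy DEM and names, not the paper's optimized algorithm), a breadth-first fill that marks all cells connected to a seed below a given lake level could look like:

```python
from collections import deque

def fill_from(dem, seed, lake_level):
    """BFS flood fill: mark all DEM cells connected to `seed` below lake_level."""
    rows, cols = len(dem), len(dem[0])
    flooded = [[False] * cols for _ in range(rows)]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue                                  # off the grid
        if flooded[r][c] or dem[r][c] >= lake_level:
            continue                                  # already filled, or dry land
        flooded[r][c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

dem = [[5, 4, 5], [5, 1, 5], [5, 2, 5]]               # toy elevation grid
print(fill_from(dem, (1, 1), lake_level=3))
```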

  9. A hybrid niched-island genetic algorithm applied to a nuclear core optimization problem

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.

    2005-01-01

    Diversity maintenance is a key feature of most genetic-based optimization processes. The quest for this characteristic has motivated improvements to the original genetic algorithm (GA). The use of multiple populations (called islands) has been demonstrated to increase diversity, delaying the genetic drift. Island Genetic Algorithms (IGA) lead to better results; however, the drift is only delayed, not avoided. An important advantage of this approach is its simplicity and efficiency for parallel processing. Diversity can also be improved by the use of niching techniques. Niched Genetic Algorithms (NGA) are able to avoid genetic drift by containing evolution in niches of a single-population GA, but at increased computational cost. This work investigates the use of a hybrid Niched-Island Genetic Algorithm (NIGA) in a nuclear core optimization problem found in the literature. Computational experiments demonstrate that it is possible to take advantage of both performance enhancement due to parallelism and drift avoidance due to the use of niches. Comparative results show that the proposed NIGA is more efficient and robust than an IGA and an NGA for solving the proposed optimization problem. (author)

  10. An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2017-04-01

    Full Text Available The adaptive multi-rate wideband (AMR-WB speech codec is widely used in modern mobile communication systems for high speech quality in handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ of immittance spectral frequency (ISF coefficients takes a considerable computational load in the AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. This search algorithm is done through a fast locating technique combined with lookup tables, such that an input vector is efficiently assigned to a subspace where relatively few codeword searches are required to be executed. In terms of overall search performance, this work is experimentally validated as a superior search algorithm relative to a multiple triangular inequality elimination (MTIE, a TIE with dynamic and intersection mechanisms (DI-TIE, and an equal-average equal-variance equal-norm nearest neighbor search (EEENNS approach. With a full search algorithm as a benchmark for overall search load comparison, this work provides an 87% search load reduction at a threshold of quantization accuracy of 0.96, a figure far beyond 55% in the MTIE, 76% in the EEENNS approach, and 83% in the DI-TIE approach.

  11. Analysis of stress and deformation in non-stationary creep

    International Nuclear Information System (INIS)

    Feijoo, R.A.; Taroco, E.; Guerreiro, J.N.C.

    1980-12-01

    A variational method and its algorithm are presented; they permit the analysis of stress and deformation in non-stationary creep. This algorithm is applied to an infinite cylinder subjected to internal pressure. The solution obtained is compared with the solution of non-stationary creep problems.

  12. An algorithm for applying flagged Sysmex XE-2100 absolute neutrophil counts in clinical practice

    DEFF Research Database (Denmark)

    Friis-Hansen, Lennart; Saelsen, Lone; Abildstrøm, Steen Z

    2008-01-01

    BACKGROUND: Even though most differential leukocyte counts are performed by automated hematology platforms, turn-around time is often prolonged as flagging of test results triggers additional confirmatory manual procedures. However, frequently only the absolute neutrophil count (ANC) is needed. We therefore examined if an algorithm could be developed to identify samples in which the automated ANC is valid despite flagged test results. METHODS: During a 3-wk period, a training set consisting of 1448 consecutive flagged test results from the Sysmex XE-2100 system and associated manual differential counts was collected. The training set was used to determine which alarms were associated with valid ANCs. The algorithm was then tested on a new set of 1371 test results collected during a later 3-wk period. RESULTS: Analysis of the training set data revealed that the ANC from test results flagged...

  13. Algorithm applied in dialogue with stakeholders: a case study in the business tourism sector

    Directory of Open Access Journals (Sweden)

    Ana María Gil Lafuente

    2010-12-01

    Full Text Available According to numerous scientific studies, one of the most important points in the area of business sustainability is the dialogue with stakeholders. Based on Stakeholder Theory, we analyze corporate sustainability and the process by which a company in the tourism sector prepares a report in accordance with the guidelines of the G3 guide of the Global Reporting Initiative. Through an empirical study we seek to understand the expectations of stakeholders regarding the contents of the sustainability report. To achieve the proposed aim we use the «Expertons Method», an algorithm that allows the aggregation of the opinions of various experts on the subject and represents an important extension of fuzzy subsets for aggregation processes. At the end of our study, we present the results of using this algorithm, the contributions and future research.

  14. The particle swarm optimization algorithm applied to nuclear systems surveillance test planning

    International Nuclear Information System (INIS)

    Siqueira, Newton Norat

    2006-12-01

    This work presents a new approach to solve availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach uses Particle Swarm Optimization (PSO), an optimization tool developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved by the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments performed, PSO was able to obtain results comparable to or even slightly better than those obtained by the GA, while being simpler and converging faster, indicating that PSO is a good alternative for solving this kind of problem. (author)

  15. Transfusion algorithms and how they apply to blood conservation: the high-risk cardiac surgical patient.

    Science.gov (United States)

    Steiner, Marie E; Despotis, George John

    2007-02-01

    Considerable blood product support is administered to the cardiac surgery population. Due to the multifactorial etiology of bleeding in the cardiac bypass patient, blood products are frequently and empirically infused to correct bleeding, with varying success. Several studies have demonstrated the benefit of algorithm-guided transfusion in reducing blood loss, transfusion exposure, and the rate of surgical re-exploration for bleeding. Some transfusion algorithms also incorporate laboratory-based decision points in their guidelines. Despite published success with standardized transfusion practices, a generalized change in blood use has not been realized, and it is evident that current laboratory-guided hemostasis measures are inadequate to define and address the bleeding etiology in these patients.

  16. Energy loss optimization of run-off-road wheels applying imperialist competitive algorithm

    Directory of Open Access Journals (Sweden)

    Hamid Taghavifar

    2014-08-01

    Full Text Available The novel imperialist competitive algorithm (ICA) has shown outstanding performance on various optimization problems, and the application of meta-heuristics has been an active research interest in reliability optimization. The application of a meta-heuristic evolutionary optimization method, the imperialist competitive algorithm (ICA), to the minimization of energy loss due to wheel rolling resistance in a soil bin facility equipped with a single-wheel tester is discussed. The required data were collected through various designed experiments in the controlled soil bin environment. Local and global searching of the search space indicated that the energy loss could be reduced to a minimum of 15.46 J at the optimized input variable configuration of a wheel load of 1.2 kN, a tire inflation pressure of 296 kPa and a velocity of 2 m/s. Meanwhile, genetic algorithm (GA), particle swarm optimization (PSO) and hybridized GA-PSO approaches were benchmarked among the broad spectrum of meta-heuristics to find the best-performing approach. Based on the obtained results, ICA achieved the optimum configuration with superior accuracy in less computational time.

  17. Multiple Harmonics Fitting Algorithms Applied to Periodic Signals Based on Hilbert-Huang Transform

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2013-01-01

    Full Text Available A new generation of multipurpose measurement equipment is transforming the role of computers in instrumentation. The new features involve mixed devices, such as various kinds of sensors, analog-to-digital and digital-to-analog converters, and digital signal processing techniques, that are able to substitute typical discrete instruments like multimeters and analyzers. Signal-processing applications frequently use least-squares (LS) sine-fitting algorithms. Periodic signals may be interpreted as a sum of sine waves with multiple frequencies: the Fourier series. This paper describes a new sine-fitting algorithm that is able to fit a multiharmonic acquired periodic signal. By means of a "sinusoidal wave" whose amplitude and phase are both transient, the "triangular wave" can be reconstructed on the basis of the Hilbert-Huang transform (HHT). This method can be used to test the effective number of bits (ENOB) of an analog-to-digital converter (ADC), avoiding the trouble of selecting initial parameter values and solving nonlinear equations. The simulation results show that the algorithm is precise and efficient. Given enough sampling points, even for a low-resolution signal with harmonic distortion present, the root mean square (RMS) error between the sampled data of the original "triangular wave" and the corresponding points of the fitted "sinusoidal wave" is remarkably small. This suggests that, for any periodic signal, the ENOB of a high-resolution ADC can be tested accurately.
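
    For illustration, a minimal three-parameter LS sine fit at a known frequency, the usual building block of such fitting procedures, can be written as below; the synthetic signal and names are ours, not the paper's HHT-based method:

```python
import numpy as np

def sine_fit_3param(t, y, freq):
    """Three-parameter LS fit at known frequency: y ~ a*cos(wt) + b*sin(wt) + c."""
    w = 2.0 * np.pi * freq
    M = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(M, y, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(a, b)          # y ~ amplitude * sin(w*t + phase) + c
    return amplitude, phase, c

t = np.linspace(0.0, 1.0, 1000)
y = 1.5 * np.sin(2 * np.pi * 5 * t + 0.3) + 0.1 * np.random.randn(t.size)
print(sine_fit_3param(t, y, freq=5.0))   # approximately (1.5, 0.3, 0.0)
```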

  18. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    Science.gov (United States)

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-05-21

    This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had limited performance because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of the smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.

  19. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm

    Directory of Open Access Journals (Sweden)

    Serge Thomas Mickala Bourobou

    2015-05-01

    Full Text Available This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had limited performance because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of the smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.

  20. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    International Nuclear Information System (INIS)

    Castellini, P; Cecchini, S; Stroppa, L; Paone, N

    2015-01-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, hence of the diagnosis of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes. (paper)

  1. Non-extensive statistical physics applied to fracture-induced electric signals during triaxial deformation of Carrara marble

    Science.gov (United States)

    Cartwright-Taylor, Alexis; Vallianatos, Filippos; Sammonds, Peter

    2014-05-01

    We have conducted room-temperature triaxial compression experiments on samples of Carrara marble, concurrently recording the acoustic and electric current signals emitted during the deformation process as well as mechanical loading information and ultrasonic wave velocities. Our results reveal that in a dry non-piezoelectric rock under simulated crustal pressure conditions, a measurable electric current (nA) is generated within the stressed sample. The current is detected only in the region beyond (quasi-)linear elastic deformation, i.e. in the region of permanent deformation beyond the yield point of the material and in the presence of microcracking. Our results extend to shallow crustal conditions previous observations of electric current signals in quartz-free rocks undergoing uniaxial deformation and support the idea of a universal electrification mechanism related to deformation. Confining pressure conditions of our slow strain rate (10^-6 s^-1) experiments range from the purely brittle regime (10 MPa) to the semi-brittle transition (30-100 MPa) where cataclastic flow is the dominant deformation mechanism. Electric current is generated under all confining pressures, implying the existence of a current-producing mechanism during both microfracture and frictional sliding. Some differences are seen in the current evolution between these two regimes, possibly related to crack localisation. In all cases, the measured electric current exhibits episodes of strong fluctuations over short timescales; calm periods punctuated by bursts of strong activity. For the analysis, we adopt an entropy-based statistical physics approach (Tsallis, 1988), particularly suited to the study of fracture-related phenomena. We find that the probability distribution of normalised electric current fluctuations over short time intervals (0.5 s) can be well described by a q-Gaussian distribution of a form similar to that which describes turbulent flows. This approach yields different entropic
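
    For reference, the q-Gaussian family referred to here is usually written in the Tsallis form below (notation assumed, since the record truncates before any formula):

```latex
% Tsallis q-Gaussian for the normalized fluctuations X; q -> 1 recovers the Gaussian
P(X) = \frac{1}{Z_q}\left[1 - (1-q)\,\beta_q X^{2}\right]^{\frac{1}{1-q}},
\qquad
\lim_{q \to 1} P(X) = \frac{1}{Z_1}\, e^{-\beta_1 X^{2}}
```

    Here Z_q is a normalization constant, beta_q sets the width, and q > 1 produces the heavy tails characteristic of intermittent, turbulence-like signals.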

  2. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    Directory of Open Access Journals (Sweden)

    Lobo Pereira Fernando

    2010-02-01

    Full Text Available Background: The combination of robotic tools with assistance technology determines a slightly explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or user's preference learning from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow the environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods: In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners -concave and convex- of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behavior strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results: The entire system was tested in a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how

  3. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

    2018-03-01

    Both conventional and deep machine learning have been used to develop decision-support tools applied in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study aims to investigate the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to the bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighbor (KNN) algorithm based machine learning classifier using the LPP-generated new feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 (p < 0.05) in predicting breast cancer detected in the next subsequent mammography screening.

  4. Continuous grasp algorithm applied to economic dispatch problem of thermal units

    Energy Technology Data Exchange (ETDEWEB)

    Vianna Neto, Julio Xavier [Pontifical Catholic University of Parana - PUCPR, Curitiba, PR (Brazil). Undergraduate Program at Mechatronics Engineering; Bernert, Diego Luis de Andrade; Coelho, Leandro dos Santos [Pontifical Catholic University of Parana - PUCPR, Curitiba, PR (Brazil). Industrial and Systems Engineering Graduate Program, LAS/PPGEPS], e-mail: leandro.coelho@pucpr.br

    2010-07-01

    The economic dispatch problem (EDP) is one of the fundamental issues in power systems for obtaining benefits in stability, reliability and security. Its objective is to allocate the power demand among committed generators in the most economical manner, while all physical and operational constraints are satisfied. The cost of power generation, particularly in fossil fuel plants, is very high, and economic dispatch helps in saving a significant amount of revenue. Recently, as an alternative to the conventional mathematical approaches, modern heuristic optimization techniques such as simulated annealing, evolutionary algorithms, neural networks, ant colony, and tabu search have been given much attention by many researchers due to their ability to find an almost global optimal solution in EDPs. On the other hand, continuous GRASP (C-GRASP) is a stochastic local search meta-heuristic for finding cost-efficient solutions to continuous global optimization problems subject to box constraints. Like a greedy randomized adaptive search procedure (GRASP), a C-GRASP is a multi-start procedure where a starting solution for local improvement is constructed in a greedy randomized fashion. The C-GRASP algorithm is validated on a test system consisting of fifteen units that takes into account spinning reserve and prohibited operating zone constraints. (author)

  5. Coarse-grained parallel genetic algorithm applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Lapa, Celso M.F.

    2003-01-01

    This work extends the research related to genetic algorithms (GA) in core design optimization problems, whose basic investigations were presented in previous work. Here we explore the use of the Island Genetic Algorithm (IGA), a coarse-grained parallel GA model, comparing its performance to that obtained by the application of a traditional non-parallel GA. The optimization problem consists of adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a 3-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. Our IGA implementation runs as a distributed application on a conventional local area network (LAN), avoiding the use of expensive parallel computers or architectures. After exhaustive experiments, taking more than 1500 h on 550 MHz personal computers, we have observed that the IGA provided gains not only in terms of computational time, but also in the optimization outcome. Besides, we have also realized that, for this kind of problem, whose fitness evaluation is itself time-consuming, the communication overhead of the IGA on a LAN is practically imperceptible, leading to the conclusion that the use of expensive parallel computers or architectures can be avoided.
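
    As an illustration of the coarse-grained island model, the toy sketch below implements ring migration between subpopulations; the names, migration size and the (fitness, genome) representation are our assumptions, not the paper's implementation:

```python
import random

def migrate_ring(islands, n_migrants=2):
    """Ring migration: each island sends copies of its best individuals to the
    next island, replacing that island's worst. Individuals are (fitness, genome)
    tuples, with lower fitness meaning better."""
    k = len(islands)
    for i in range(k):
        src, dst = islands[i], islands[(i + 1) % k]
        best = sorted(src)[:n_migrants]   # copies of the source island's best
        dst.sort()
        dst[-n_migrants:] = best          # overwrite the destination's worst

# toy setup: 4 islands of 10 individuals each
islands = [[(random.random(), (i, j)) for j in range(10)] for i in range(4)]
migrate_ring(islands)
```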

  6. Azcaxalli: A system based on Ant Colony Optimization algorithms, applied to fuel reloads design in a Boiling Water Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Esquivel-Estrada, Jaime, E-mail: jaime.esquivel@fi.uaemex.m [Facultad de Ingenieria, Universidad Autonoma del Estado de Mexico, Cerro de Coatepec S/N, Toluca de Lerdo, Estado de Mexico 50000 (Mexico); Instituto Nacional de Investigaciones Nucleares, Carr. Mexico Toluca S/N, Ocoyoacac, Estado de Mexico 52750 (Mexico); Ortiz-Servin, Juan Jose, E-mail: juanjose.ortiz@inin.gob.m [Instituto Nacional de Investigaciones Nucleares, Carr. Mexico Toluca S/N, Ocoyoacac, Estado de Mexico 52750 (Mexico); Castillo, Jose Alejandro; Perusquia, Raul [Instituto Nacional de Investigaciones Nucleares, Carr. Mexico Toluca S/N, Ocoyoacac, Estado de Mexico 52750 (Mexico)

    2011-01-15

    This paper presents some results of the implementation of several optimization algorithms based on ant colonies, applied to fuel reload design in a Boiling Water Reactor. The system, called Azcaxalli, is built from the following algorithms: Ant Colony System, Ant System, Best-Worst Ant System and MAX-MIN Ant System. Azcaxalli starts with a random fuel reload. Ants move through the reactor core channels according to the State Transition Rule, selecting two fuel assemblies within one-eighth of the reactor core and swapping their positions. This rule takes into account pheromone trails and acquired knowledge; the acquired knowledge is obtained from the load cycle values of the fuel assemblies. Azcaxalli aims to maximize the cycle length while taking into account several safety parameters. Azcaxalli's objective function involves thermal limits at the end of the cycle, the cold shutdown margin at the beginning of the cycle and the effective neutron multiplication factor for a given cycle exposure. Those parameters are calculated by the CM-PRESTO code. The end of the cycle can be calculated through the Haling principle. This system was applied to an 18-month equilibrium cycle of the Laguna Verde Nuclear Power Plant in Mexico. The results show that the system obtains fuel reloads with longer cycle lengths than the original fuel reload. Azcaxalli's results are compared with genetic algorithm, tabu search and neural network results.
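
    For context, here is a minimal sketch of the ACS-style state transition rule mentioned above, in generic Python; the `tau` pheromone and `eta` heuristic-knowledge tables are assumed toy inputs, not Azcaxalli's actual data structures:

```python
import random

def ant_choose(candidates, tau, eta, q0=0.9, beta=2.0):
    """ACS-style state transition: with probability q0 exploit the candidate
    maximizing tau * eta**beta, otherwise sample proportionally (exploration)."""
    scores = {c: tau[c] * (eta[c] ** beta) for c in candidates}
    if random.random() < q0:
        return max(scores, key=scores.get)            # exploitation
    r = random.random() * sum(scores.values())        # roulette-wheel exploration
    acc = 0.0
    for c, s in scores.items():
        acc += s
        if acc >= r:
            return c
    return c                                          # floating-point safety net

tau = {"a": 1.0, "b": 2.0, "c": 0.5}                  # pheromone trails
eta = {"a": 0.8, "b": 0.4, "c": 1.0}                  # heuristic knowledge
print(ant_choose(["a", "b", "c"], tau, eta))
```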

  7. Development of a multi-objective PBIL evolutionary algorithm applied to a nuclear reactor core reload optimization problem

    International Nuclear Information System (INIS)

    Machado, Marcelo D.; Schirru, Roberto

    2005-01-01

    The nuclear reactor core reload optimization problem consists in finding a pattern of partially burned-up and fresh fuels that optimizes the plant's next operation cycle. This optimization problem has been traditionally solved using an expert's knowledge, but recently artificial intelligence techniques have also been applied successfully. The artificial intelligence optimization techniques generally have a single objective. However, most real-world engineering problems, including nuclear core reload optimization, have more than one objective (multi-objective) and these objectives are usually conflicting. The aim of this work is to develop a tool to solve multi-objective problems based on the Population-Based Incremental Learning (PBIL) algorithm. The new tool is applied to solve the Angra 1 PWR core reload optimization problem with the purpose of creating a Pareto surface, so that a pattern selected from this surface can be applied for the plant's next operation cycle. (author)
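
    As a hedged sketch of the PBIL core such a tool builds on, a minimal single-objective version is shown below; the multi-objective Pareto extension and the Angra 1 specifics are beyond this toy example:

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, iters=100):
    """Single-objective PBIL core: sample a binary population from a probability
    vector, then shift the vector toward the best sample."""
    p = np.full(n_bits, 0.5)
    for _ in range(iters):
        samples = (np.random.rand(pop_size, n_bits) < p).astype(int)
        best = samples[np.argmax([fitness(s) for s in samples])]
        p = (1.0 - lr) * p + lr * best        # incremental learning step
    return p

p = pbil(lambda s: int(s.sum()), n_bits=16)   # OneMax toy objective
```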

  8. A niching genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Sacco, W.F.; Lapa, Celso M.F.; Pereira, C.M.N.A.; Oliveira, C.R.E. de

    2006-01-01

    This article extends previous efforts on genetic algorithms (GAs) applied to a nuclear power plant (NPP) auxiliary feedwater system (AFWS) surveillance tests policy optimization. We introduce the application of a niching genetic algorithm (NGA) to this problem and compare its performance to previous results. The NGA maintains populational diversity during the search process, thus promoting a greater exploration of the search space. The optimization problem consists of maximizing the system's average availability for a given period of time, considering realistic features such as: (i) aging effects on standby components during the tests; (ii) revealing failures in the tests implies corrective maintenance, increasing outage times; (iii) components have distinct test parameters (outage time, aging factors, etc.); and (iv) tests are not necessarily periodic. We find that the NGA performs better than the conventional GA and the island GA due to a greater exploration of the search space.

  9. A New Missing Data Imputation Algorithm Applied to Electrical Data Loggers

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2015-12-01

    Full Text Available Nowadays, data collection is a key process in the study of electrical power networks when searching for harmonics and a lack of balance among phases. In this context, the lack of data for any of the main electrical variables (phase-to-neutral voltage, phase-to-phase voltage, current in each phase, and power factor) adversely affects any time series study performed. When this occurs, a data imputation process must be accomplished in order to substitute estimated values for the missing data. This paper presents a novel missing data imputation method based on multivariate adaptive regression splines (MARS) and compares it with the well-known technique called multivariate imputation by chained equations (MICE). The results obtained demonstrate how the proposed method outperforms the MICE algorithm.
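
    For a runnable taste of the MICE-style baseline the paper compares against (not the proposed MARS method), scikit-learn's chained-equations imputer can be used as follows; the toy logger matrix is our own:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy logger matrix: columns = electrical variables, NaN = missing samples.
X = np.array([[230.1, 398.0, 12.3],
              [229.8, np.nan, 12.1],
              [np.nan, 399.2, 12.4],
              [230.4, 398.7, np.nan]])

imputer = IterativeImputer(max_iter=10, random_state=0)  # chained-equations style
X_filled = imputer.fit_transform(X)
```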

  10. A simulator-independent optimization tool based on genetic algorithm applied to nuclear reactor design

    International Nuclear Information System (INIS)

    Abreu Pereira, Claudio Marcio Nascimento do; Schirru, Roberto; Martinez, Aquilino Senra

    1999-01-01

    Presented here is an engineering optimization tool based on a genetic algorithm, implemented according to the method proposed in recent work, which demonstrated the feasibility of using this technique in nuclear reactor core design. The tool is simulator-independent in the sense that it can be customized to use most simulators that read input parameters from formatted text files and write outputs to text files. As nuclear reactor simulators generally use this kind of interface, the proposed tool plays an important role in nuclear reactor design. Research reactors often use non-conventional design approaches, leading to situations where the nuclear engineer faces new optimization problems. In such cases, a good optimization technique, together with its customizing facility and a friendly man-machine interface, can be very useful. Here, the tool is described and some of its advantages are outlined. (author)

  11. Double-Stage Delay Multiply and Sum Beamforming Algorithm Applied to Ultrasound Medical Imaging.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Sadeghi, Masume; Mahloojifar, Ali; Orooji, Mahdi

    2018-03-01

    In ultrasound (US) imaging, delay and sum (DAS) is the most common beamformer, but it leads to low-quality images. Delay multiply and sum (DMAS) was introduced to address this problem. However, the images reconstructed using DMAS still suffer from high side-lobe levels and low noise suppression. Here, a novel beamforming algorithm is introduced based on an expansion of the DMAS formula. We found that there is a DAS algebra inside the expansion, and we propose using DMAS in place of that DAS algebra. The introduced method, namely double-stage DMAS (DS-DMAS), is evaluated numerically and experimentally. The quantitative results indicate that DS-DMAS results in an approximately 25% lower level of side lobes compared with DMAS. Moreover, the introduced method leads to 23%, 22% and 43% improvements in signal-to-noise ratio, full width at half-maximum and contrast ratio, respectively, compared with the DMAS beamformer. Copyright © 2018. Published by Elsevier Inc.
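
    A minimal sketch of the underlying DMAS combination for one output sample is given below (our own implementation of the standard formula, not the authors' DS-DMAS code):

```python
import numpy as np

def dmas(delayed_samples):
    """DMAS for one output sample: the sum over i<j of the signed square-root
    pairwise products, computed via the square-of-sums identity."""
    s = np.sign(delayed_samples) * np.sqrt(np.abs(delayed_samples))
    return 0.5 * (s.sum() ** 2 - (s ** 2).sum())

channels = np.array([0.9, 1.1, 0.95, 1.05])   # toy delayed RF samples
print(dmas(channels))
```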

  12. Comparison of the inversion algorithms applied to the ozone vertical profile retrieval from SCIAMACHY limb measurements

    Directory of Open Access Journals (Sweden)

    A. Rozanov

    2007-09-01

    Full Text Available This paper is devoted to an intercomparison of ozone vertical profiles retrieved from the measurements of scattered solar radiation performed by the SCIAMACHY instrument in the limb viewing geometry. Three different inversion algorithms including the prototype of the operational Level 1 to 2 processor to be operated by the European Space Agency are considered. Unlike usual validation studies, this comparison removes the uncertainties arising when comparing measurements made by different instruments probing slightly different air masses and focuses on the uncertainties specific to the modeling-retrieval problem only. The intercomparison was performed for 5 selected orbits of SCIAMACHY showing a good overall agreement of the results in the middle stratosphere, whereas considerable discrepancies were identified in the lower stratosphere and upper troposphere altitude region. Additionally, comparisons with ground-based lidar measurements are shown for selected profiles demonstrating an overall correctness of the retrievals.

  13. Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery

    Science.gov (United States)

    Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.

    2016-05-01

    Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging is a potential method to detect blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. The potential complexity of scenes in which such violence occurs can be high when the range of scene material types and conditions containing blood stains at a crime scene is considered. Some stains are hard to detect by the unaided eye, especially if a conscious effort has been made to clean the scene (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400 nm - 700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed-Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
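
    As a self-contained illustration of the Reed-Xiaoli (RX) idea underlying SRXD (a global-statistics simplification, not the subspace variant used in the paper), an anomaly-score map can be computed as:

```python
import numpy as np

def rx_scores(cube):
    """Global RX: Mahalanobis distance of each pixel spectrum from the scene
    mean; high scores flag spectral anomalies (e.g., possible blood stains)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    d = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)

cube = np.random.rand(64, 64, 31)   # toy 31-band visible-range cube
score_map = rx_scores(cube)
```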

  14. A modified firefly algorithm applied to the nuclear reload problem of a pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Iona Maghali Santos de; Schirru, Roberto, E-mail: ioliveira@con.ufrj.b, E-mail: schirru@lmp.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2011-07-01

    The Nuclear Reactor Reload Problem (NRRP) is an issue of great importance and concern in nuclear engineering. It is the problem related to the periodic operation of replacing part of the fuel of a nuclear reactor. Traditionally, this procedure occurs after a period of operation called a cycle, or whenever the nuclear power plant is unable to continue operating at its nominal power. Studied for more than 40 years, the NRRP still remains a challenge for many optimization techniques due to its multiple objectives concerning economics, safety and reactor physics calculations. Characteristics such as non-linearity, multimodality and high dimensionality also make the NRRP a very complex optimization problem. In broad terms, it aims at finding the arrangement of fuel in the nuclear reactor core that maximizes the operating time. The primary goal is to design fuel loading patterns (LPs) so that the core produces the required energy output in an economical way, without violating safety limits. Since multiple feasible solutions can be obtained for this problem, judicious optimization is required in order to identify the most economical among them. In this sense, this paper presents a new contribution in this area and introduces a modified firefly algorithm (FA) to perform LP optimization for a pressurized water reactor. Based on the original FA introduced by Xin-She Yang in 2008, the proposed methodology proves very promising as an optimizer for the NRRP. The experiments performed and the comparisons with some well-known, best-performing algorithms from the literature confirm this statement. (author)

  15. A modified firefly algorithm applied to the nuclear reload problem of a pressurized water reactor

    International Nuclear Information System (INIS)

    Oliveira, Iona Maghali Santos de; Schirru, Roberto

    2011-01-01

    The Nuclear Reactor Reload Problem (NRRP) is an issue of great importance and concern in nuclear engineering. It is the problem related to the periodic operation of replacing part of the fuel of a nuclear reactor. Traditionally, this procedure occurs after a period of operation called a cycle, or whenever the nuclear power plant is unable to continue operating at its nominal power. Studied for more than 40 years, the NRRP still remains a challenge for many optimization techniques due to its multiple objectives concerning economics, safety and reactor physics calculations. Characteristics such as non-linearity, multimodality and high dimensionality also make the NRRP a very complex optimization problem. In broad terms, it aims at finding the arrangement of fuel in the nuclear reactor core that maximizes the operating time. The primary goal is to design fuel loading patterns (LPs) so that the core produces the required energy output in an economical way, without violating safety limits. Since multiple feasible solutions can be obtained for this problem, judicious optimization is required in order to identify the most economical among them. In this sense, this paper presents a new contribution in this area and introduces a modified firefly algorithm (FA) to perform LP optimization for a pressurized water reactor. Based on the original FA introduced by Xin-She Yang in 2008, the proposed methodology proves very promising as an optimizer for the NRRP. The experiments performed and the comparisons with some well-known, best-performing algorithms from the literature confirm this statement. (author)
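
    For orientation, one movement pass of the standard firefly algorithm that such a modified FA builds on might look like the sketch below; the parameter names and values are textbook defaults, not the paper's modification:

```python
import numpy as np

def firefly_step(x, intensity, alpha=0.2, beta0=1.0, gamma=1.0, rng=None):
    """One pass of the standard firefly movement: each firefly moves toward
    every brighter one with distance-attenuated attractiveness plus noise."""
    if rng is None:
        rng = np.random.default_rng()
    n, dim = x.shape
    x_new = x.copy()
    for i in range(n):
        for j in range(n):
            if intensity[j] > intensity[i]:            # j is brighter: i moves
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)     # attractiveness decay
                x_new[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
    return x_new
```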

  16. Monitoring of surface deformation and microseismicity applied to radioactive waste disposal through hydraulic fracturing at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Stow, S.H.; Haase, C.S.; Switek, J.; Holzhausen, G.R.; Majer, E.

    1985-01-01

    Low-level liquid nuclear wastes are disposed of at Oak Ridge National Laboratory by the hydrofracture process. Wastes are mixed with cement and other additives to form a slurry that is injected into shale of low permeability at 300 m depth. The slurry spreads radially along bedding plane fractures before setting as a grout. Different methods for monitoring the location and behavior of the fractures have been investigated. Radioactive grout sheets can be located by gamma-ray logging of cased observation wells. Two other methods are based on the fact that the ground surface is deformed by the injection. The first entails surface leveling of a series of benchmarks; uplift up to 2.5 cm occurs. The second method involves use of tiltmeters that are sensitive and measure ground deformation in real time during an injection. Both methods show subsidence during the weeks following an injection. Interpretive models for the tiltmeter data are based on the elastic response of isotropic and anisotropic media to the inflation of a fluid-filled fracture. A fourth monitoring method is based on microseismicity. Geophone arrays were used to characterize the fracture process and to provide initial assessment of the feasibility of using seismic measurements to map the fractures as they form. An evaluation of each method is presented. 8 refs., 6 figs

  17. Monitoring of surface deformation and microseismicity applied to radioactive waste disposal through hydraulic fracturing at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Stow, S.H.; Haase, C.S.; Switek, J.; Holzhausen, G.R.; Majer, E.; Applied Geomechanics, Inc., Santa Cruz, CA; Lawrence Berkeley Lab., CA)

    1985-01-01

    Low-level liquid nuclear wastes are disposed of at Oak Ridge National Laboratory by the hydrofracture process. Wastes are mixed with cement and other additives to form a slurry that is injected into shale of low permeability at 300 m depth. The slurry spreads radially along bedding plane fractures before setting as a grout. Different methods for monitoring the location and behavior of the fractures have been investigated. Radioactive grout sheets can be located by gamma-ray logging of cased observation wells. Two other methods are based on the fact that the ground surface is deformed by the injection. The first entails surface leveling of a series of benchmarks; uplift up to 2.5 cm occurs. The second method involves use of tiltmeters that are sensitive and measure ground deformation in real time during an injection. Both methods show subsidence during the weeks following an injection. Interpretive models for the tiltmeter data are based on the elastic response of isotropic and anisotropic media to the inflation of a fluid-filled fracture. A fourth monitoring method is based on microseismicity. Geophone arrays were used to characterize the fracture process and to provide initial assessment of the feasibility of using seismic measurements to map the fractures as they form. An evaluation of each method is presented

  18. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general-purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer... of a solution and to investigate the feasibility of elements in such a neighborhood. The hybrid heuristic framework is applied to the multi-item capacitated lot sizing problem with dynamic lot sizes, where experiments have been conducted on a series of instances from the literature. On average the heuristic...

  19. Robust algorithms and system theory applied to the reconstruction of primary and secondary vertices

    International Nuclear Information System (INIS)

    Fruehwirth, R.; Liko, D.; Mitaroff, W.; Regler, M.

    1990-01-01

    Filter techniques from system theory have recently been applied to the estimation of track and vertex parameters. In this paper, vertex fitting by the Kalman filter method is discussed. These techniques have been applied to the identification of short-lived decay vertices in the case of high multiplicities as expected at LEP (Monte Carlo data in the DELPHI detector). The need for further robustification of the Kalman filter method in this context is then discussed. Finally, results of an application with real data from a heavy-ion experiment (NA36) are presented. Here the vertex fit is used to select the interaction point among possible targets.
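
    As a compact illustration of the Kalman filter building block used in such vertex fits (a generic linear measurement update, not the robustified tracker itself):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update: state x (covariance P) absorbs
    measurement z with model z = H x + noise(R)."""
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 2-D vertex position refined with one track's measured point:
x, P = np.zeros(2), np.eye(2) * 10.0
z, H, R = np.array([0.8, -0.3]), np.eye(2), np.eye(2) * 0.5
x, P = kalman_update(x, P, z, H, R)
```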

  20. Continuous Recording and Interobserver Agreement Algorithms Reported in The Journal of Applied Behavior Analysis (1995–2005)

    Science.gov (United States)

    Mudford, Oliver C; Taylor, Sarah Ann; Martin, Neil T

    2009-01-01

    We reviewed all research articles in 10 recent volumes of the Journal of Applied Behavior Analysis (JABA): Vol. 28(3), 1995, through Vol. 38(2), 2005. Continuous recording was used in the majority (55%) of the 168 articles reporting data on free-operant human behaviors. Three methods for reporting interobserver agreement (exact agreement, block-by-block agreement, and time-window analysis) were employed in more than 10 of the articles that reported continuous recording. Having identified these currently popular agreement computation algorithms, we explain them to assist researchers, software writers, and other consumers of JABA articles. PMID:19721737

  1. Continuous recording and interobserver agreement algorithms reported in the Journal of Applied Behavior Analysis (1995-2005).

    Science.gov (United States)

    Mudford, Oliver C; Taylor, Sarah Ann; Martin, Neil T

    2009-01-01

    We reviewed all research articles in 10 recent volumes of the Journal of Applied Behavior Analysis (JABA): Vol. 28(3), 1995, through Vol. 38(2), 2005. Continuous recording was used in the majority (55%) of the 168 articles reporting data on free-operant human behaviors. Three methods for reporting interobserver agreement (exact agreement, block-by-block agreement, and time-window analysis) were employed in more than 10 of the articles that reported continuous recording. Having identified these currently popular agreement computation algorithms, we explain them to assist researchers, software writers, and other consumers of JABA articles.
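
    As commonly defined for interval-based continuous recording, the first two agreement statistics operate on per-interval response counts from two observers; a minimal sketch (illustrative names, not the journal's software):

        def exact_agreement(counts_a, counts_b):
            """Percentage of intervals in which both observers recorded
            exactly the same count."""
            agree = sum(a == b for a, b in zip(counts_a, counts_b))
            return 100.0 * agree / len(counts_a)

        def block_by_block(counts_a, counts_b):
            """Mean per-interval ratio of smaller to larger count; an
            interval where both counts are zero scores full agreement."""
            ratios = [1.0 if a == b == 0 else min(a, b) / max(a, b)
                      for a, b in zip(counts_a, counts_b)]
            return 100.0 * sum(ratios) / len(ratios)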

  2. Statistical methods applied to gamma-ray spectroscopy algorithms in nuclear security missions.

    Science.gov (United States)

    Fagan, Deborah K; Robinson, Sean M; Runkle, Robert C

    2012-10-01

    Gamma-ray spectroscopy is a critical research and development priority for a range of nuclear security missions, specifically the interdiction of special nuclear material involving the detection and identification of gamma-ray sources. We categorize existing methods by the statistical methods on which they rely and identify methods that have yet to be considered. Current methods estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, which may be significantly more complex. Thus, significantly improving algorithm performance may require greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, could reduce decision uncertainty by rigorously and comprehensively incorporating all sources of uncertainty. Application of such methods should further meet the needs of nuclear security missions by improving upon the existing numerical infrastructure for which these analyses have not been conducted. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Investigation of high burnup structures in uranium dioxide applying cellular automata: algorithms and codes

    International Nuclear Information System (INIS)

    Akishina, E.P.; Kostenko, B.F.; Ivanov, V.V.

    2003-01-01

    A new method is suggested for studying the spatial structures that result from uranium dioxide burnup in the nuclear reactors of modern power plants. The method is based on representing images of these structures as the working field of a cellular automaton (CA). First, this has allowed some important quantitative characteristics of the structures to be extracted directly from micrographs of the uranium fuel surface. Second, the CA representation has been found to allow the dynamics of the evolution of the studied structures to be formulated easily in terms of such micrograph elements as spots, spot boundaries, cracks, etc. A relation has been found between these dynamics and some exactly solvable models of cellular automata theory, in particular the Ising model and the voter model. This investigation gives a detailed description of some CA algorithms which allow the fuel surface image processing to be performed and its evolution caused by burnup or chemical etching to be modeled. (author)
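
    For illustration, one asynchronous update of a voter-model cellular automaton on a binary working field of the kind described (a hypothetical sketch, with grid cells standing in for micrograph pixels):

        import numpy as np

        rng = np.random.default_rng(0)
        NEIGHBORS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

        def voter_step(grid):
            """A randomly chosen cell copies the state of a random
            von Neumann neighbor (periodic boundaries)."""
            n, m = grid.shape
            i, j = rng.integers(n), rng.integers(m)
            di, dj = NEIGHBORS[rng.integers(4)]
            grid[i, j] = grid[(i + di) % n, (j + dj) % m]
            return grid

        # usage: grid = rng.integers(0, 2, size=(64, 64)); voter_step(grid)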

  4. Globally Consistent Indoor Mapping via a Decoupling Rotation and Translation Algorithm Applied to RGB-D Camera Output

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2017-10-01

    Full Text Available This paper presents a novel RGB-D 3D reconstruction algorithm for the indoor environment. The method can produce globally-consistent 3D maps for potential GIS applications. As consumer RGB-D cameras provide noisy depth images, the proposed algorithm decouples the rotation and translation for a more robust camera pose estimation, which makes full use of the available information while preventing inaccuracies caused by noisy depth measurements. The uncertainty in the image depth is related not only to the camera device but also to the environment; hence, a novel uncertainty model for depth measurements was developed using a Gaussian mixture applied to multiple windows. The plane features in the indoor environment contain valuable information about the global structure, which can guide the convergence of camera pose solutions, so plane and feature point constraints are incorporated in the proposed optimization framework. The proposed method was validated using publicly-available RGB-D benchmarks and obtained good-quality trajectories and 3D models, which are difficult for traditional 3D reconstruction algorithms.

  5. Parallel island genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Lapa, Celso M.F.

    2003-01-01

    In this work, we focus on the application of an Island Genetic Algorithm (IGA), a coarse-grained parallel genetic algorithm (PGA) model, to a Nuclear Power Plant (NPP) Auxiliary Feedwater System (AFWS) surveillance tests policy optimization. Here, the main objective is to outline, by means of comparisons, the advantages of the IGA over the simple (non-parallel) genetic algorithm (GA), which has been successfully applied to the solution of this kind of problem. The goal of the optimization is to maximize the system's average availability for a given period of time, considering realistic features such as: i) aging effects on standby components during the tests; ii) revealing failures in the tests implies corrective maintenance, increasing outage times; iii) components have distinct test parameters (outage time, aging factors, etc.) and iv) tests are not necessarily periodic. In our experiments, which were made on a cluster of eight 1-GHz personal computers, we could clearly observe gains not only in computational time, which decreased linearly with the number of computers, but also in the optimization outcome
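
    The island model itself is compact: sub-populations evolve independently (the step that parallelizes across the cluster nodes) and periodically exchange their best individuals in a ring. A simplified sketch with binary chromosomes (not the authors' implementation):

        import random

        def evolve(pop, fitness, n_gen=50):
            """Tiny GA: tournament selection, one-point crossover, bit-flip."""
            for _ in range(n_gen):
                new = []
                while len(new) < len(pop):
                    a, b = (max(random.sample(pop, 3), key=fitness)
                            for _ in range(2))
                    cut = random.randrange(1, len(a))
                    child = a[:cut] + b[cut:]
                    if random.random() < 0.05:          # mutation
                        i = random.randrange(len(child))
                        child[i] = 1 - child[i]
                    new.append(child)
                pop = new
            return pop

        def island_ga(islands, fitness, n_rounds=10, n_migrants=2):
            """Evolve islands independently, then migrate the best
            individuals to the next island in a ring."""
            for _ in range(n_rounds):
                islands = [evolve(pop, fitness) for pop in islands]
                for k, pop in enumerate(islands):
                    best = sorted(pop, key=fitness, reverse=True)[:n_migrants]
                    dest = islands[(k + 1) % len(islands)]
                    dest.sort(key=fitness)              # worst first
                    dest[:n_migrants] = [ind[:] for ind in best]
            return islands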

  6. Bladder dose accumulation based on a biomechanical deformable image registration algorithm in volumetric modulated arc therapy for prostate cancer

    International Nuclear Information System (INIS)

    Andersen, E S; Muren, L P; Thor, M; Petersen, J B; Tanderup, K; Sørensen, T S; Noe, K Ø; Høyer, M; Bentzen, L

    2012-01-01

    Variations in bladder position, shape and volume cause uncertainties in the doses delivered to this organ during a course of radiotherapy for pelvic tumors. The purpose of this study was to evaluate the potential of dose accumulation based on repeat imaging and deformable image registration (DIR) to improve the accuracy of bladder dose assessment. For each of nine prostate cancer patients, the initial treatment plan was re-calculated on eight to nine repeat computed tomography (CT) scans. The planned bladder dose–volume histogram (DVH) parameters were compared to corresponding parameters derived from DIR-based accumulations as well as DVH summation based on dose re-calculations. It was found that the deviations between the DIR-based accumulations and the planned treatment were substantial, ranging from −0.5 to 2.3 Gy for D2% and from −9.4 to 13.5 Gy for Dmean, whereas the deviations between DIR-based accumulations and DVH summation were small and well within 1 Gy. For the investigated treatment scenario, DIR-based bladder dose accumulation did not result in substantial improvement of dose estimation as compared to the straightforward DVH summation. Large variations were found in individual patients between the doses from the initial treatment plan and the accumulated bladder doses. Hence, the use of repeat imaging has a potential for improved accuracy in treatment dose reporting. (paper)
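
    The two reported DVH parameters are simple functionals of the per-voxel bladder dose; a minimal sketch, taking D2% as the 98th dose percentile (the near-maximum dose covering the hottest 2% of the organ volume):

        import numpy as np

        def dvh_parameters(dose_voxels):
            """D2% and Dmean from a flat array of per-voxel doses in Gy."""
            d = np.asarray(dose_voxels, dtype=float)
            return {"D2%": np.percentile(d, 98.0), "Dmean": d.mean()}

    The reported deviations are then differences of these values between the planned and the accumulated (or summed) dose distributions.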

  7. A multi-institution evaluation of deformable image registration algorithms for automatic organ delineation in adaptive head and neck radiotherapy

    International Nuclear Information System (INIS)

    Hardcastle, Nicholas; Kumar, Prashant; Oechsner, Markus; Richter, Anne; Song, Shiyu; Myers, Michael; Polat, Bülent; Bzdusek, Karl; Tomé, Wolfgang A; Cannon, Donald M; Brouwer, Charlotte L; Wittendorp, Paul WH; Dogan, Nesrin; Guckenberger, Matthias; Allaire, Stéphane; Mallya, Yogish

    2012-01-01

    Adaptive Radiotherapy aims to identify anatomical deviations during a radiotherapy course and modify the treatment plan to maintain treatment objectives. This requires regions of interest (ROIs) to be defined using the most recent imaging data. This study investigates the clinical utility of using deformable image registration (DIR) to automatically propagate ROIs. Target (GTV) and organ-at-risk (OAR) ROIs were non-rigidly propagated from a planning CT scan to a per-treatment CT scan for 22 patients. Propagated ROIs were quantitatively compared with expert physician-drawn ROIs on the per-treatment scan using Dice scores and mean slicewise Hausdorff distances, and center of mass distances for GTVs. The propagated ROIs were qualitatively examined by experts and scored based on their clinical utility. Good agreement between the DIR-propagated ROIs and expert-drawn ROIs was observed based on the metrics used. 94% of all ROIs generated using DIR were scored as being clinically useful, requiring minimal or no edits. However, 27% (12/44) of the GTVs required major edits. DIR was successfully used on 22 patients to propagate target and OAR structures for ART with good anatomical agreement for OARs. It is recommended that propagated target structures be thoroughly reviewed by the treating physician
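
    The overlap and distance metrics used are standard; a minimal sketch of the Dice score and the center-of-mass distance for binary ROI masks (illustrative code, not the study's software):

        import numpy as np

        def dice_score(mask_a, mask_b):
            """Dice coefficient 2|A∩B| / (|A| + |B|) of two binary masks."""
            a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        def com_distance(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
            """Distance between mask centers of mass, scaled by voxel size."""
            ca = np.array(np.nonzero(mask_a)).mean(axis=1) * spacing
            cb = np.array(np.nonzero(mask_b)).mean(axis=1) * spacing
            return float(np.linalg.norm(ca - cb))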

  8. Dynamic Water Surface Detection Algorithm Applied on PROBA-V Multispectral Data

    Directory of Open Access Journals (Sweden)

    Luc Bertels

    2016-12-01

    Full Text Available Water body detection worldwide using spaceborne remote sensing is a challenging task. A global scale multi-temporal and multi-spectral image analysis method for water body detection was developed. The PROBA-V microsatellite has been fully operational since December 2013 and delivers daily near-global syntheses at spatial resolutions of 1 km and 333 m. The Red, Near-InfRared (NIR) and Short Wave InfRared (SWIR) bands of the atmospherically corrected 10-day synthesis images are first Hue, Saturation and Value (HSV) color transformed and subsequently used in a decision tree classification for water body detection. To minimize commission errors four additional data layers are used: the Normalized Difference Vegetation Index (NDVI), Water Body Potential Mask (WBPM), Permanent Glacier Mask (PGM) and Volcanic Soil Mask (VSM). Threshold values on the hue and value bands, expressed by a parabolic function, are used to detect the water bodies. Beside the water bodies layer, a quality layer, based on the water body occurrences, is available in the output product. The performance of the Water Bodies Detection Algorithm (WBDA) was assessed using Landsat 8 scenes over 15 regions selected worldwide. A mean Commission Error (CE) of 1.5% was obtained while a mean Omission Error (OE) of 15.4% was obtained for minimum Water Surface Ratio (WSR) = 0.5, dropping to 9.8% for minimum WSR = 0.6. Here, WSR is defined as the fraction of the PROBA-V pixel covered by water as derived from high spatial resolution images, e.g., Landsat 8. Both the CE = 1.5% and OE = 9.8% (WSR = 0.6) fall within the user requirement of 15%. The WBDA is fully operational in the Copernicus Global Land Service and products are freely available.

  9. Enhancing State-of-the-art Multi-objective Optimization Algorithms by Applying Domain Specific Operators

    DEFF Research Database (Denmark)

    Ghoreishi, Newsha; Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2015-01-01

    optimization problems where the environment does not change dynamically. For that reason, the requirement for convergence in static optimization problems is not as time-critical as for dynamic optimization problems. Most MOEAs use generic variables and operators that scale to static multi-objective optimization...... problem. The domain specific operators only encode existing knowledge about the environment. A comprehensive comparative study is provided to evaluate the results of applying the CONTROLEUM-GA compared to NSGAII, e-NSGAII and e-MOEA. Experimental results demonstrate clear improvements in convergence time...

  10. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    Science.gov (United States)

    Dai, Wensheng

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain stores. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740

  11. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    Science.gov (United States)

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain stores. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting.

  12. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    Directory of Open Access Journals (Sweden)

    Wensheng Dai

    2014-01-01

    Full Text Available Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain stores. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting.
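
    A minimal sketch of the ICA-plus-SVR pipeline from the preceding records, using scikit-learn on placeholder data (the spatial and spatiotemporal variants differ mainly in how the sales matrix is transposed or stacked before the decomposition):

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.svm import SVR

        sales = np.random.rand(120, 8)       # placeholder: weeks x branches
        target = sales.sum(axis=1)[1:]       # e.g. next week's total sales

        ica = FastICA(n_components=4, random_state=0)
        features = ica.fit_transform(sales)[:-1]   # ICA features feed the SVR

        model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(features, target)
        forecast = model.predict(features[-1:])    # one-step-ahead forecast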

  13. Model-based testing with UML applied to a roaming algorithm for bluetooth devices.

    Science.gov (United States)

    Dai, Zhen Ru; Grabowski, Jens; Neukirchen, Helmut; Pals, Holger

    2004-11-01

    In late 2001, the Object Management Group issued a Request for Proposal to develop a testing profile for UML 2.0. In June 2003, the work on the UML 2.0 Testing Profile was finally adopted by the OMG. Since March 2004, it has become an official standard of the OMG. The UML 2.0 Testing Profile provides support for UML based model-driven testing. This paper introduces a methodology on how to use the testing profile in order to modify and extend an existing UML design model for test issues. The application of the methodology will be explained by applying it to an existing UML Model for a Bluetooth device.

  14. An iterative algorithm in computerized tomography applied to non-destructive testing

    International Nuclear Information System (INIS)

    Santos, C.A.C.

    1982-10-01

    In the present work, a mathematical model has been developed for two-dimensional image reconstruction in computerized tomography applied to non-destructive testing. The method used is the Algebraic Reconstruction Technique (ART) with additive corrections. This model consists of a discrete system formed by an NxN array of cells (pixels). The attenuation in the object of a collimated beam of gamma rays has been determined for various positions and angles of incidence (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to beam attenuation was determined using the weight function wij. Simulated tests using standard objects, carried out with attenuation coefficients in the range 0.2 to 0.7 cm⁻¹, were made using cell arrays of up to 25x25. Experiments were made using a gamma radiation source (241Am), a table with translational and rotational movements and a gamma radiation detection system. Results indicate that the convergence obtained in the iterative calculations is a function of the distribution of attenuation coefficients in the pixels, of the number of angular projections and of the number of iterations. (author) [pt
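
    The additive-correction scheme described is the classical Kaczmarz sweep: each ray measurement corrects the current pixel estimate in proportion to its residual. A compact sketch, with A holding the weight functions wij row by row and b the measured projections:

        import numpy as np

        def art(A, b, n_iter=20, relax=1.0):
            """Additive ART: cycle over the rays, projecting the pixel
            vector x toward consistency with one measurement at a time."""
            A, b = np.asarray(A, float), np.asarray(b, float)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    ai = A[i]
                    norm2 = ai @ ai
                    if norm2 > 0.0:
                        x += relax * (b[i] - ai @ x) / norm2 * ai
            return x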

  15. SU-E-J-115: Correlation of Displacement Vector Fields Calculated by Deformable Image Registration Algorithms with Motion Parameters of CT Images with Well-Defined Targets and Controlled-Motion

    Energy Technology Data Exchange (ETDEWEB)

    Jaskowiak, J; Ahmad, S; Ali, I [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Alsbou, N [Ohio Northern University, Ada, OH (United States)

    2015-06-15

    Purpose: To investigate the correlation of displacement vector fields (DVF) calculated by deformable image registration algorithms with motion parameters in helical, axial and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom with well-defined targets of different sizes, made from water-equivalent material and inserted in foam to simulate lung lesions, was imaged with helical, axial and cone-beam CT. The phantom was moved with a cyclic motion with different amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms including demons, fast demons, Horn-Schunck and iterative optical flow from the DIRART software were used to deform the CT images of the phantom with different motion patterns. The CT images of the mobile phantom were deformed to the CT images of the stationary phantom. Results: The displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude, where large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm or 20 mm) at the interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while the DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edge, with shifts nearly equal to the motion amplitude. Conclusions: The DVFs from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased

  16. 3D deformation field throughout the interior of materials.

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Huiqing; Lu, Wei-Yang

    2013-09-01

    This report contains the one-year feasibility study for our three-year LDRD proposal, which is aimed at developing an experimental technique to measure the 3D deformation fields inside a material body. In this feasibility study, we first apply the Digital Volume Correlation (DVC) algorithm to pre-existing in-situ X-ray Computed Tomography (XCT) image sets with pure rigid body translation. The calculated displacement field has very large random errors and low precision that are unacceptable. We then enhance the tomography images by thresholding the intensity of each slice. The DVC algorithm is able to obtain accurate deformation fields from these enhanced image sets, and the deformation fields are consistent with the global mechanical loading applied to the specimen. Through this study, we prove that the internal markers inside the pre-existing tomography images of aluminum alloy can be enhanced and are suitable for DVC to calculate the deformation field throughout the material body.
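
    At its core, DVC matches a small reference subvolume against shifted candidates in the deformed volume and keeps the shift with the highest correlation. A brute-force integer-voxel sketch (real DVC adds subvoxel interpolation; the center must lie at least half + search voxels from the volume edges):

        import numpy as np

        def ncc(a, b):
            """Zero-normalized cross-correlation of two subvolumes."""
            a, b = a - a.mean(), b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom else 0.0

        def dvc_displacement(ref, deformed, center, half=8, search=4):
            """Integer-voxel displacement at one point of the 3D field."""
            zc, yc, xc = center
            sub = ref[zc-half:zc+half, yc-half:yc+half, xc-half:xc+half]
            best, shift = -2.0, (0, 0, 0)
            for dz in range(-search, search + 1):
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        cand = deformed[zc+dz-half:zc+dz+half,
                                        yc+dy-half:yc+dy+half,
                                        xc+dx-half:xc+dx+half]
                        score = ncc(sub, cand)
                        if score > best:
                            best, shift = score, (dz, dy, dx)
            return shift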

  17. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  18. MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS APPLIED TO MICROSTRIP ANTENNAS DESIGN

    Directory of Open Access Journals (Sweden)

    Juliano Rodrigues Brianeze

    2009-12-01

    Full Text Available This work presents three of the main evolutionary algorithms: Genetic Algorithm, Evolution Strategy and Evolutionary Programming, applied to microstrip antenna design. Efficiency tests were performed, considering the analysis of key physical and geometrical parameters, evolution type, the effects of numerical random generators, evolution operators and selection criteria. These algorithms were validated through the design of microstrip antennas based on the Resonant Cavity Method, and allow multiobjective optimizations considering bandwidth, standing wave ratio and relative material permittivity. The optimal results obtained with these optimization processes were confirmed by the CST Microwave Studio commercial package.

  19. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    Energy Technology Data Exchange (ETDEWEB)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim; Alite, Fiori; Block, Alec M.; Choi, Mehee; Emami, Bahman; Harkenrider, Matthew M.; Solanki, Abhishek A.; Roeske, John C., E-mail: jroeske@lumc.edu

    2016-11-15

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.

  20. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    Directory of Open Access Journals (Sweden)

    E. Osaba

    2014-01-01

    Full Text Available Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test.

  1. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    Science.gov (United States)

    Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; de la Iglesia, I.; Perallos, A.

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test. PMID:25165731
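
    The two operator families being compared are easy to illustrate for permutation-encoded problems such as the TSP: a blind crossover recombines two parents without any problem knowledge, while the mutation-only strategies rely on moves like a random swap. A minimal sketch of one common blind operator, order crossover, alongside swap mutation (not the paper's exact code):

        import random

        def order_crossover(p1, p2):
            """Blind OX: copy a random slice from p1, fill the remaining
            positions with the missing genes in p2's order."""
            n = len(p1)
            i, j = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[i:j] = p1[i:j]
            fill = [g for g in p2 if g not in child]
            for k in range(n):
                if child[k] is None:
                    child[k] = fill.pop(0)
            return child

        def swap_mutation(perm):
            """Mutation-only move: exchange two random positions."""
            p = perm[:]
            i, j = random.sample(range(len(p)), 2)
            p[i], p[j] = p[j], p[i]
            return p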

  2. A new formulation of the pseudocontinuous synthesis algorithm applied to the calculation of neutronic flux in PWR reactors

    International Nuclear Information System (INIS)

    Silva, C.F. da.

    1979-09-01

    A new formulation of the pseudocontinuous synthesis algorithm is applied to solve the static three-dimensional two-group diffusion equations. The new method avoids ambiguities regarding interface conditions, which are inherent in the differential formulation, by resorting to the finite difference version of the differential equations involved. A considerable number of input/output options, possible core configurations and control rod positionings are implemented, resulting in a very flexible as well as economical code to compute 3D fluxes, power density and reactivities of PWR reactors with partially inserted control rods. The performance of this new code is checked against the IAEA 3D benchmark problem, and results show that SINT3D yields comparable accuracy with much less computing time and memory than conventional 3D finite difference codes. (Author) [pt

  3. History Matching and Parameter Estimation of Surface Deformation Data for a CO2 Sequestration Field Project Using Ensemble-Based Algorithms

    Science.gov (United States)

    Tavakoli, Reza; Srinivasan, Sanjay; Wheeler, Mary

    2015-04-01

    The application of ensemble-based algorithms for history matching reservoir models has been steadily increasing over the past decade. However, the majority of implementations in reservoir engineering have dealt only with production history matching. During geologic sequestration, the injection of large quantities of CO2 into the subsurface may alter the stress/strain field, which in turn can lead to surface uplift or subsidence. Therefore, it is essential to couple multiphase flow and geomechanical response in order to predict and quantify the uncertainty of CO2 plume movement for long-term, large-scale CO2 sequestration projects. In this work, we simulate and estimate the properties of a reservoir that is being used to store CO2 as part of the In Salah Capture and Storage project in Algeria. The CO2 is separated from produced natural gas and is re-injected into the downdip aquifer portion of the field from three long horizontal wells. The field observation data include ground surface deformations (uplift) measured using satellite-based radar (InSAR), injection well locations and CO2 injection rate histories provided by the operators. We implement variations of the ensemble Kalman filter and ensemble smoother algorithms for assimilating both injection rate data and geomechanical observations (surface uplift) into the reservoir model. The preliminary estimation results for horizontal permeability and material properties such as Young's modulus and Poisson's ratio are consistent with available measurements and previous studies of this field. Moreover, the existence of high-permeability channels (fractures) within the reservoir, especially in the regions around the injection wells, is confirmed. These estimation results can be used to accurately and efficiently predict and quantify the uncertainty in the movement of the CO2 plume.

  4. History matching and parameter estimation of surface deformation data for a CO2 sequestration field project using ensemble-based algorithm

    Science.gov (United States)

    Ping, J.; Tavakoli, R.; Min, B.; Srinivasan, S.; Wheeler, M. F.

    2015-12-01

    Optimal management of subsurface processes requires the characterization of the uncertainty in reservoir description and reservoir performance prediction. The application of ensemble-based algorithms for history matching reservoir models has been steadily increasing over the past decade. However, the majority of implementations in reservoir engineering have dealt only with production history matching. During geologic sequestration, the injection of large quantities of CO2 into the subsurface may alter the stress/strain field, which in turn can lead to surface uplift or subsidence. Therefore, it is essential to couple multiphase flow and geomechanical response in order to predict and quantify the uncertainty of CO2 plume movement for long-term, large-scale CO2 sequestration projects. In this work, we simulate and estimate the properties of a reservoir that is being used to store CO2 as part of the In Salah Capture and Storage project in Algeria. The CO2 is separated from produced natural gas and is re-injected into the downdip aquifer portion of the field from three long horizontal wells. The field observation data include ground surface deformations (uplift) measured using satellite-based radar (InSAR), injection well locations and CO2 injection rate histories provided by the operators. We implement ensemble-based algorithms for assimilating both injection rate data and geomechanical observations (surface uplift) into the reservoir model. The preliminary estimation results for horizontal permeability and material properties such as Young's modulus and Poisson's ratio are consistent with available measurements and previous studies of this field. Moreover, the existence of high-permeability channels/fractures within the reservoir, especially in the regions around the injection wells, is confirmed. These estimation results can be used to accurately and efficiently predict and monitor the movement of the CO2 plume.
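
    The assimilation step shared by these ensemble methods is a Kalman-type update built from sample covariances; a compact stochastic-EnKF sketch, with obs_op standing in for the coupled flow-geomechanics simulator (the expensive part in practice):

        import numpy as np

        def enkf_update(ensemble, obs, obs_op, obs_cov,
                        rng=np.random.default_rng(0)):
            """ensemble: (n_params, n_members) parameter realizations;
            obs: data vector (e.g. InSAR uplift plus injection rates);
            obs_op: maps one parameter vector to predicted data."""
            n_par, n_mem = ensemble.shape
            pred = np.column_stack([obs_op(ensemble[:, k])
                                    for k in range(n_mem)])
            dX = ensemble - ensemble.mean(axis=1, keepdims=True)
            dY = pred - pred.mean(axis=1, keepdims=True)
            # gain = Cxy (Cyy + R)^-1 with the 1/(n_mem - 1) factors folded in
            gain = (dX @ dY.T) @ np.linalg.inv(dY @ dY.T
                                               + (n_mem - 1) * obs_cov)
            noisy_obs = np.asarray(obs)[:, None] + rng.multivariate_normal(
                np.zeros(len(obs)), obs_cov, n_mem).T
            return ensemble + gain @ (noisy_obs - pred)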

  5. The Moyal momentum algebra applied to θ-deformed 2d conformal models and KdV-hierarchies

    International Nuclear Information System (INIS)

    Boulahoual, A.; Sedra, M.B.

    2002-08-01

    The properties of the Das-Popowicz Moyal momentum algebra that we introduced in hep-th/0207242 are reexamined in detail and used to discuss some aspects of integrable models and 2d conformal field theories. Among the results presented, we set up some useful notational conventions which lead to some non-trivial properties of the Moyal momentum algebra. We use the particular sub-algebra $sl_n$-$\widetilde{\Sigma}_n^{(0,n)}$ to construct the $sl_2$-Liouville conformal model $\partial\bar{\partial}\Phi = \frac{2}{\theta}e^{-\frac{1}{\theta}\Phi}$ and its $sl_3$-Toda extension $\partial\bar{\partial}\Phi_1 = A e^{-\frac{1}{2\theta}(\Phi_1 + \frac{1}{2}\Phi_2)}$ and $\partial\bar{\partial}\Phi_2 = B e^{-\frac{1}{2\theta}(\Phi_1 + 2\Phi_2)}$. We also show that the central charge, a la Feigin-Fuchs, associated to the spin-2 conformal current of the $\theta$-Liouville model is given by $c_\theta = (1 + 24\theta^2)$. Moreover, the results obtained for the Das-Popowicz Mm algebra are applied to study systematically some properties of the Moyal KdV and Boussinesq hierarchies, generalizing some known results. We also discuss the primarity condition of conformal $w_\theta$-currents and interpret this condition as a dressing gauge symmetry in the Moyal momentum space. Some computations related to the dressing gauge group are explicitly presented. (author)

  6. A multilevel search algorithm for the maximization of submodular functions applied to the quadratic cost partition problem

    NARCIS (Netherlands)

    Goldengorin, B.; Ghosh, D.

    Maximization of submodular functions on a ground set is an NP-hard combinatorial optimization problem. Data correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams, data correcting algorithms use

  7. On the performance of an artificial bee colony optimization algorithm applied to the accident diagnosis in a PWR nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Iona Maghali S. de; Schirru, Roberto; Medeiros, Jose A.C.C., E-mail: maghali@lmp.ufrj.b, E-mail: schirru@lmp.ufrj.b, E-mail: canedo@lmp.ufrj.b [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2009-07-01

    The swarm-based algorithm described in this paper is a new search algorithm capable of locating good solutions efficiently and within a reasonable running time. The work presents a population-based search algorithm that mimics the food foraging behavior of honey bee swarms and can be regarded as belonging to the category of intelligent optimization tools. In its basic version, the algorithm performs a kind of random search combined with neighborhood search and can be used for solving multi-dimensional numeric problems. Following a description of the algorithm, this paper presents a new event classification system based exclusively on the ability of the algorithm to find the best centroid positions that correctly identify an accident in a PWR nuclear power plant, thus maximizing the number of correct classifications of transients. The simulation results show that the performance of the proposed algorithm is comparable to other population-based algorithms when applied to the same problem, with the advantage of employing fewer control parameters. (author)

  8. On the performance of an artificial bee colony optimization algorithm applied to the accident diagnosis in a PWR nuclear power plant

    International Nuclear Information System (INIS)

    Oliveira, Iona Maghali S. de; Schirru, Roberto; Medeiros, Jose A.C.C.

    2009-01-01

    The swarm-based algorithm described in this paper is a new search algorithm capable of locating good solutions efficiently and within a reasonable running time. The work presents a population-based search algorithm that mimics the food foraging behavior of honey bee swarms and can be regarded as belonging to the category of intelligent optimization tools. In its basic version, the algorithm performs a kind of random search combined with neighborhood search and can be used for solving multi-dimensional numeric problems. Following a description of the algorithm, this paper presents a new event classification system based exclusively on the ability of the algorithm to find the best centroid positions that correctly identify an accident in a PWR nuclear power plant, thus maximizing the number of correct classifications of transients. The simulation results show that the performance of the proposed algorithm is comparable to other population-based algorithms when applied to the same problem, with the advantage of employing fewer control parameters. (author)

  9. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    Science.gov (United States)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with the Velocity and MIM deformation algorithms.
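
    The deformation-error metric described is just the voxelwise magnitude of the vector difference between the applied and the algorithm-predicted displacement fields; a minimal sketch:

        import numpy as np

        def deformation_error(dvf_known, dvf_predicted):
            """Per-voxel error magnitude; both fields shaped (..., 3), mm."""
            diff = np.asarray(dvf_predicted) - np.asarray(dvf_known)
            return np.linalg.norm(diff, axis=-1)

        # err = deformation_error(known, predicted); err.mean() is the
        # quantity whose stabilization with gray levels is reported above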

  10. q-Deformed nonlinear maps

    Indian Academy of Sciences (India)

    Keywords: nonlinear dynamics; logistic map; q-deformation; Tsallis statistics. As a specific example, a q-deformation procedure is applied to the logistic map. Compared ...

  11. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

    Full Text Available Abstract Background The huge amount of data generated by DNA chips is a powerful basis to classify various pathologies. However, constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class belonging based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results After a call-based data reduction step to filter out non class-discriminative probe sets, the gene list obtained was reduced to a predictor with correction for multiple testing by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group, classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with

  12. Feasible Initial Population with Genetic Diversity for a Population-Based Algorithm Applied to the Vehicle Routing Problem with Time Windows

    Directory of Open Access Journals (Sweden)

    Marco Antonio Cruz-Chávez

    2016-01-01

    Full Text Available A stochastic algorithm for obtaining feasible initial populations to the Vehicle Routing Problem with Time Windows is presented. The theoretical formulation for the Vehicle Routing Problem with Time Windows is explained. The proposed method is primarily divided into a clustering algorithm and a two-phase algorithm. The first step is the application of a modified k-means clustering algorithm which is proposed in this paper. The two-phase algorithm evaluates a partial solution to transform it into a feasible individual. The two-phase algorithm consists of a hybridization of four kinds of insertions which interact randomly to obtain feasible individuals. It has been proven that different kinds of insertions impact the diversity among individuals in initial populations, which is crucial for population-based algorithm behavior. A modification to the Hamming distance method is applied to the populations generated for the Vehicle Routing Problem with Time Windows to evaluate their diversity. Experimental tests were performed based on the Solomon benchmarking. Experimental results show that the proposed method facilitates generation of highly diverse populations, which vary according to the type and distribution of the instances.

  13. Evaluation of a wavelet-based compression algorithm applied to the silicon drift detectors data of the ALICE experiment at CERN

    International Nuclear Information System (INIS)

    Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

    2004-01-01

    This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful even on a wide range of other data reduction problems. The design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain
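
    Generic lossy wavelet compression of a detector signal keeps only the largest coefficients; a sketch with PyWavelets, shown as an illustration of this class of technique rather than the CARLOS chip algorithm:

        import numpy as np
        import pywt  # PyWavelets

        def compress(signal, wavelet="db4", level=4, keep=0.05):
            """Transform, zero all but the largest fraction `keep` of
            coefficients, reconstruct (lossy)."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            flat = np.concatenate([c.ravel() for c in coeffs])
            thresh = np.quantile(np.abs(flat), 1.0 - keep)
            coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
            return pywt.waverec(coeffs, wavelet)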

  14. SU-E-T-33: A Feasibility-Seeking Algorithm Applied to Planning of Intensity Modulated Proton Therapy: A Proof of Principle Study

    International Nuclear Information System (INIS)

    Penfold, S; Casiraghi, M; Dou, T; Schulte, R; Censor, Y

    2015-01-01

    Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Feasibility of constraints only, as opposed to optimization of a merit function, is less demanding algorithmically and holds a promise of parallel computation capability with non-cyclic orthogonal projection algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix $A$ whose elements $a_{ij}$ correspond to the dose delivered to pixel $i$ by a unit-intensity pencil beam $j$. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: $D_{OAR} < 54$ Gy and $57$ Gy $< D_{PTV} < 64.2$ Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as 'ART for inequalities'. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of the OAR pixels violating their dose constraints, respectively. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for a meningioma abutting the brainstem, for example. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm
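
    The Agmon-Motzkin-Schoenberg step projects the current intensity vector onto one violated half-space at a time; a minimal sketch for constraints written as A x <= b (lower dose bounds enter as negated rows, and intensity nonnegativity is itself a set of half-spaces):

        import numpy as np

        def cyclic_projections(A, b, x0, n_sweeps=100, relax=1.0):
            """Feasibility seeking for A x <= b by cyclic orthogonal
            projection onto each violated half-space."""
            x = np.array(x0, float)
            for _ in range(n_sweeps):
                for ai, bi in zip(A, b):
                    resid = ai @ x - bi
                    if resid > 0.0:                    # constraint violated
                        x -= relax * resid / (ai @ ai) * ai
                x = np.maximum(x, 0.0)                 # pencil beams >= 0
            return x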

  15. Remote sensing image stitch using modified structure deformation

    Science.gov (United States)

    Pan, Ke-cheng; Chen, Jin-wei; Chen, Yueting; Feng, Huajun

    2012-10-01

    To stitch remote sensing images seamlessly without producing the visual artifacts caused by severe intensity discrepancy and structure misalignment, we modify the original structure-deformation-based stitching algorithm, which has two main problems. First, using the Poisson equation to propagate deformation vectors changes the topological relationship between the key points and their surrounding pixels, which may introduce wrong image characteristics. Second, the diffusion area of the sparse matrix is too limited to rectify the global intensity discrepancy. To solve the first problem, we adopt a spring-mass model and introduce an external force to keep the topological relationship between key points and their surrounding pixels. To solve the second problem, we also apply a tensor voting algorithm to obtain the global intensity correspondence curve of the two images. Both simulated and experimental results show that our algorithm is faster and reaches better results than the original algorithm.

  16. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    Science.gov (United States)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, a neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability $P_c$ and mutation probability $P_m$ are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particularly great advantages in restoring simply connected flow regimes and the shape of the object. In addition, a CT experiment on two-phase flow phantoms was conducted at the accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.

  17. Qualitative and quantitative evaluation of rigid and deformable motion correction algorithms using dual-energy CT images in view of application to CT perfusion measurements in abdominal organs affected by breathing motion.

    Science.gov (United States)

    Skornitzke, S; Fritz, F; Klauss, M; Pahn, G; Hansen, J; Hirsch, J; Grenacher, L; Kauczor, H-U; Stiller, W

    2015-02-01

    To compare six different scenarios for correcting for breathing motion in abdominal dual-energy CT (DECT) perfusion measurements. Rigid [RRComm(80 kVp)] and non-rigid [NRComm(80 kVp)] registration of commercially available CT perfusion software, custom non-rigid registration [NRCustom(80 kVp), demons algorithm] and a control group [CG(80 kVp)] without motion correction were evaluated using 80 kVp images. Additionally, NRCustom was applied to dual-energy (DE)-blended [NRCustom(DE)] and virtual non-contrast [NRCustom(VNC)] images, yielding six evaluated scenarios. After motion correction, perfusion maps were calculated using a combined maximum slope/Patlak model. For qualitative evaluation, three blinded radiologists independently rated motion correction quality and resulting perfusion maps on a four-point scale (4 = best, 1 = worst). For quantitative evaluation, relative changes in metric values, R² and residuals of perfusion model fits were calculated. For motion-corrected images, mean ratings differed significantly [NRCustom(80 kVp) and NRCustom(DE), 3.3; NRComm(80 kVp), 3.1; NRCustom(VNC), 2.9; RRComm(80 kVp), 2.7; CG(80 kVp), 2.7; all p VNC), 22.8%; RRComm(80 kVp), 0.6%; CG(80 kVp), 0%]. Regarding perfusion maps, NRCustom(80 kVp) and NRCustom(DE) were rated highest [NRCustom(80 kVp), 3.1; NRCustom(DE), 3.0; NRComm(80 kVp), 2.8; NRCustom(VNC), 2.6; CG(80 kVp), 2.5; RRComm(80 kVp), 2.4] and had significantly higher R² and lower residuals. Correlation between qualitative and quantitative evaluation was low to moderate. Non-rigid motion correction improves spatial alignment of the target region and fit of CT perfusion models. Using DE-blended and DE-VNC images for deformable registration offers no significant improvement. Non-rigid algorithms improve the quality of abdominal CT perfusion measurements but do not benefit from DECT post-processing.

  18. Soft real-time EPICS extensions for fast control: A case study applied to a TCV equilibrium algorithm

    International Nuclear Information System (INIS)

    Castro, R.; Romero, J.A.; Vega, J.; Nieto, J.; Ruiz, M.; Sanz, D.; Barrera, E.; De Arcas, G.

    2014-01-01

    Highlights: • Implementation of a soft real-time control system based on EPICS technology. • High data throughput control system implementation. • GPU technology applied to fast control. • EPICS-based fast control solution. • Fast control and data acquisition in Linux. - Abstract: For new control system development, ITER distributes the CODAC Core System, a software package based on Linux RedHat that includes EPICS (Experimental Physics and Industrial Control System) as its software control solution. EPICS technology is widely used for implementing control systems in research experiments and is very well tested, but it has significant shortcomings when it comes to meeting fast control requirements. To manage and process massive amounts of acquired data, EPICS requires additional functions such as data block oriented transmissions, links with speed-optimized data buffers, and synchronization mechanisms not based on system interrupts. This limitation of EPICS became clear during the development of the Fast Plant System Controller prototype for ITER based on the PXIe platform. In this work, we present a solution that, on the one hand, is completely compatible with and based on EPICS technology and, on the other hand, extends EPICS for implementing high-performance fast control systems with soft real-time characteristics. This development includes components for data acquisition, processing, monitoring, data archiving, and data streaming (via network and shared memory). Additionally, the system is compatible with multiple Graphics Processing Units (GPUs) and is able to integrate MatLab code through MatLab engine connections. It preserves EPICS modularity, enabling system modification or extension with a simple change of configuration, and it enables parallelization based on distributing data to different processing components. With the objective of illustrating the presented solution in an actual

  19. Applying the ACSM Preparticipation Screening Algorithm to U.S. Adults: National Health and Nutrition Examination Survey 2001-2004.

    Science.gov (United States)

    Whitfield, Geoffrey P; Riebe, Deborah; Magal, Meir; Liguori, Gary

    2017-10-01

    For most people, the benefits of physical activity far outweigh the risks. Research has suggested that exercise preparticipation questionnaires might refer an unwarranted number of adults for medical evaluation before exercise initiation, creating a potential barrier to adoption. The new American College of Sports Medicine (ACSM) prescreening algorithm relies on current exercise participation; history and symptoms of cardiovascular, metabolic, or renal disease; and desired exercise intensity to determine referral status. Our purpose was to compare the referral proportion of the ACSM algorithm to that of previous screening tools using a representative sample of U.S. adults. On the basis of responses to health questionnaires from the 2001-2004 National Health and Nutrition Examination Survey, we calculated the proportion of adults 40 yr or older who would be referred for medical clearance before exercise participation based on the ACSM algorithm. Results were stratified by age and sex and compared with previous results for the ACSM/American Heart Association Preparticipation Questionnaire and the Physical Activity Readiness Questionnaire. On the basis of the ACSM algorithm, 2.6% of adults would be referred only before beginning vigorous exercise and 54.2% of respondents would be referred before beginning any exercise. Men were more frequently referred before vigorous exercise, and women were more frequently referred before any exercise. Referral was more common with increasing age. The ACSM algorithm referred a smaller proportion of adults for preparticipation medical clearance than the previously examined questionnaires. Although additional validation is needed to determine whether the algorithm correctly identifies those at risk for cardiovascular complications, the revised ACSM algorithm referred fewer respondents than other screening tools. A lower referral proportion may mitigate an important barrier of medical clearance from exercise participation.
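
    A compact way to see how the three inputs interact is a decision function. The sketch below is a simplified, hedged rendering of the branching described in the abstract; the exact ACSM flow chart has more branches and definitions than are reproduced here.

        def acsm_referral(currently_exercises, has_cmr_disease, symptomatic,
                          desired_intensity):
            # Simplified reading of the algorithm: symptoms always trigger
            # referral; known cardiovascular/metabolic/renal (CMR) disease
            # triggers referral unless the person already exercises and stays
            # at moderate intensity. Not the verbatim ACSM chart.
            if symptomatic:
                return "refer before any exercise"
            if has_cmr_disease:
                if not currently_exercises:
                    return "refer before any exercise"
                if desired_intensity == "vigorous":
                    return "refer before vigorous exercise"
            return "no referral needed"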

  20. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    Science.gov (United States)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time required by the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40, the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  1. Algorithmic analysis of relational learning processes in instructional technology: Some implications for basic, translational, and applied research.

    Science.gov (United States)

    McIlvane, William J; Kledaras, Joanne B; Gerard, Christophe J; Wilde, Lorin; Smelson, David

    2018-07-01

    A few noteworthy exceptions notwithstanding, quantitative analyses of relational learning are most often simple descriptive measures of study outcomes. For example, studies of stimulus equivalence have made much progress using measures such as percentage consistent with equivalence relations, discrimination ratio, and response latency. Although procedures may have ad hoc variations, they remain fairly similar across studies. Comparison studies of training variables that lead to different outcomes are few. Yet to be developed are tools designed specifically for dynamic and/or parametric analyses of relational learning processes. This paper focuses on recent studies to develop (1) quality computer-based programmed instruction for supporting relational learning in children with autism spectrum disorders and intellectual disabilities and (2) formal algorithms that permit ongoing, dynamic assessment of learner performance and procedure changes to optimize instructional efficacy and efficiency. Because these algorithms have a strong basis in evidence and in theories of stimulus control, they may also have utility for basic and translational research. We present an overview of the research program, details of algorithm features, and summary results that illustrate their possible benefits. We also argue that such algorithm development may encourage parametric research, help integrate new research findings, and support in-depth quantitative analyses of stimulus control processes in relational learning. Such algorithms may also serve to model the control of basic behavioral processes, which is important to the design of effective programmed instruction for human learners with and without functional disabilities. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Applying a New Adaptive Genetic Algorithm to Study the Layout of Drilling Equipment in Semisubmersible Drilling Platforms

    Directory of Open Access Journals (Sweden)

    Wensheng Xiao

    2015-01-01

    Full Text Available This study proposes a new selection method, called trisection population selection, for genetic algorithm selection operations. In this new algorithm, the fittest 2N/3 individuals of the population are genetically manipulated as parents to reproduce offspring. This selection method ensures a high rate of effective population evolution and overcomes the tendency of the population to fall into local optima. Rastrigin's test function was selected to verify the superiority of the method. Based on the characteristics of the arc tangent function, adaptive methods for the genetic algorithm's crossover and mutation probabilities were proposed. These allow individuals close to the average fitness to undergo crossover and mutation with greater probability, while individuals close to the maximum fitness are not easily destroyed. This study also analyzed the equipment layout constraints and objective functions of deep-water semisubmersible drilling platforms. The improved genetic algorithm was used to solve the layout plan. Optimization results demonstrate the effectiveness of the improved algorithm and the suitability of the resulting layout plans.
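
    The two mechanisms, trisection selection and arc-tangent-shaped adaptive rates, can be sketched compactly in Python. The constants and the exact curve below are illustrative assumptions; the paper's tuned values are not reproduced.

        import numpy as np

        def trisection_select(pop, fitness, rng):
            # Keep the fittest 2N/3 individuals as the parent pool, then sample
            # parents from it with replacement (simplified trisection selection).
            order = np.argsort(fitness)[::-1]
            parents = order[: 2 * len(pop) // 3]
            return pop[rng.choice(parents, size=len(pop))]

        def arctan_rate(f, f_avg, f_max, lo=0.01, hi=0.10, k=4.0):
            # Assumed arc-tangent schedule: near-average individuals get the
            # highest crossover/mutation probability, near-best ones the lowest.
            if f <= f_avg:
                return hi
            x = (f - f_avg) / max(f_max - f_avg, 1e-12)   # 0 at average, 1 at best
            return lo + (hi - lo) * (1.0 - 2.0 * np.arctan(k * x) / np.pi)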

  3. Initialization of a fractional order identification algorithm applied for Lithium-ion battery modeling in time domain

    Science.gov (United States)

    Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry

    2018-06-01

    This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a Lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modeling is described using a fractional-order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm. To ensure the algorithm's convergence to the physical parameters, an initialization method is proposed in this paper, taking into account previously acquired information about the static and dynamic system behavior. The method is validated using noisy voltage responses, and the precision of the final identification results is evaluated using a Monte-Carlo method.
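
    The identification step can be imitated with an ordinary least-squares Levenberg-Marquardt fit. In this sketch the fractional-order diffusion term is replaced by a single integer-order RC branch and all parameter values are synthetic, so it only illustrates the initialize-then-fit workflow, not the paper's fractional model.

        import numpy as np
        from scipy.optimize import least_squares

        def randles_step(theta, t, i_dis, ocv):
            # Voltage response to a constant-current discharge step for a
            # series resistance R0 plus one RC branch (Rp, Cp).
            r0, rp, cp = theta
            return ocv - i_dis * (r0 + rp * (1.0 - np.exp(-t / (rp * cp))))

        t = np.linspace(0.0, 600.0, 400)
        true_theta = np.array([0.05, 0.03, 2000.0])      # R0 [ohm], Rp [ohm], Cp [F]
        v_meas = randles_step(true_theta, t, 2.0, 3.7)
        v_meas += np.random.default_rng(0).normal(0.0, 1e-4, t.size)

        # theta0 plays the role of the physically informed initialization.
        fit = least_squares(lambda th: randles_step(th, t, 2.0, 3.7) - v_meas,
                            x0=[0.01, 0.01, 1000.0], method="lm")
        print(fit.x)   # close to true_theta when the initialization is sensible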

  4. Comparison of Dose Distributions With TG-43 and Collapsed Cone Convolution Algorithms Applied to Accelerated Partial Breast Irradiation Patient Plans

    Energy Technology Data Exchange (ETDEWEB)

    Thrower, Sara L., E-mail: slloupot@mdanderson.org [The University of Texas Graduate School of Biomedical Sciences at Houston, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Shaitelman, Simona F.; Bloom, Elizabeth [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Salehpour, Mohammad; Gifford, Kent [Department of Radiation Physics, The University of Texas Graduate School of Biomedical Sciences at Houston, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2016-08-01

    Purpose: To compare the treatment plans for accelerated partial breast irradiation calculated by the new commercially available collapsed cone convolution (CCC) and current standard TG-43–based algorithms for 50 patients treated at our institution with either a Strut-Adjusted Volume Implant (SAVI) or Contura device. Methods and Materials: We recalculated target coverage, volume of highly dosed normal tissue, and dose to organs at risk (ribs, skin, and lung) with each algorithm. For 1 case an artificial air pocket was added to simulate 10% nonconformance. We performed a Wilcoxon signed rank test to determine the median differences in the clinical indices V90, V95, V100, V150, V200, and highest-dosed 0.1 cm{sup 3} and 1.0 cm{sup 3} of rib, skin, and lung between the two algorithms. Results: The CCC algorithm calculated lower values on average for all dose-volume histogram parameters. Across the entire patient cohort, the median difference in the clinical indices calculated by the 2 algorithms was <10% for dose to organs at risk, <5% for target volume coverage (V90, V95, and V100), and <4 cm{sup 3} for dose to normal breast tissue (V150 and V200). No discernable difference was seen in the nonconformance case. Conclusions: We found that on average over our patient population CCC calculated (<10%) lower doses than TG-43. These results should inform clinicians as they prepare for the transition to heterogeneous dose calculation algorithms and determine whether clinical tolerance limits warrant modification.

  5. Analysis of the moderate resolution imaging spectroradiometer contextual algorithm for small fire detection, Journal of Applied Remote Sensing Vol.3

    Science.gov (United States)

    W. Wang; J.J. Qu; X. Hao; Y. Liu

    2009-01-01

    In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remote-sensed characteristics of these fires have to be...

  6. Optimization the Initial Weights of Artificial Neural Networks via Genetic Algorithm Applied to Hip Bone Fracture Prediction

    Directory of Open Access Journals (Sweden)

    Yu-Tzu Chang

    2012-01-01

    Full Text Available This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with a first low-trauma hip fracture and 215 patients without hip fracture, all of whom answered a 78-item questionnaire. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expiratory flow rate) for building artificial neural networks to predict the probabilities of hip fractures. Three-layer (one hidden layer) ANN models with back-propagation training algorithms were adopted. The purpose of this paper is to find the optimal initial weights of neural networks via genetic algorithm to improve the predictability. Area under the ROC curve (AUC) was used to assess the performance of the neural networks. The results showed that the genetic algorithm obtained an AUC of 0.858±0.00493 on modeling data and 0.802±0.03318 on testing data. These were slightly better than the results of our previous study (0.868±0.00387 and 0.796±0.02559, respectively). Thus, this preliminary study shows that even a simple GA is effective for improving the accuracy of artificial neural networks.

  7. Assessment of two aerosol optical thickness retrieval algorithms applied to MODIS Aqua and Terra measurements in Europe

    Directory of Open Access Journals (Sweden)

    P. Glantz

    2012-07-01

    Full Text Available The aim of the present study is to validate AOT (aerosol optical thickness) and the Ångström exponent (α), obtained from MODIS (MODerate resolution Imaging Spectroradiometer) Aqua and Terra calibrated level 1 data (1 km horizontal resolution at ground) with the SAER (Satellite AErosol Retrieval) algorithm and with MODIS Collection 5 (c005) standard product retrievals (10 km horizontal resolution), against AERONET (AErosol RObotic NETwork) sun photometer observations over land surfaces in Europe. An inter-comparison of AOT at 0.469 µm obtained with the two algorithms has also been performed. The time periods investigated were chosen to enable a validation of the findings of the two algorithms for the largest possible variation in sun elevation. The satellite retrievals were also performed with a significant variation in the satellite-viewing geometry, since Aqua and Terra passed the investigation area twice a day for several of the cases analyzed. The validation with AERONET shows that the AOT at 0.469 and 0.555 µm obtained with MODIS c005 is within the expected uncertainty of one standard deviation of the MODIS c005 retrievals (ΔAOT = ±0.05 ± 0.15 · AOT). The AOT at 0.443 µm retrieved with SAER, but with a much finer spatial resolution, also agreed reasonably well with AERONET measurements. The majority of the SAER AOT values are within the MODIS c005 expected uncertainty range, although a somewhat larger average absolute deviation occurs compared with the results obtained with the MODIS c005 algorithm. The discrepancy between AOT from SAER and AERONET is, however, substantially larger for the wavelength 488 nm. This means that the values are, to a larger extent, outside the expected MODIS uncertainty range. In addition, both satellite retrieval algorithms are unable to estimate α accurately, although the MODIS c005 algorithm performs better. Based on the inter-comparison of the SAER and MODIS c005 algorithms, it was found that SAER on the whole is

  8. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among the various wavefront control algorithms for adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltage of each actuator is obtained through iteration, giving great advantages in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n²) to O(n³) for the direct gradient wavefront control algorithm, while it is about O(n) to O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage of the iterative wavefront control algorithm becomes. (paper)
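
    The contrast between the two control schemes can be seen in a few lines of linear algebra. Below, the direct scheme applies a dense pre-computed reconstructor, while the iterative scheme solves the normal equations with conjugate gradients; CG is used here as a generic stand-in for the paper's iteration, not its published scheme.

        import numpy as np

        def direct_control(R_pinv, slopes):
            # Direct gradient control: one dense matrix-vector product per
            # frame (the O(n^2) cost; building R_pinv offline is O(n^3)).
            return R_pinv @ slopes

        def iterative_control(R, slopes, n_iter=50):
            # Conjugate gradient on R^T R v = R^T s. In practice R is sparse,
            # so the products below can be applied matrix-free; they are shown
            # dense here only for brevity.
            A, b = R.T @ R, R.T @ slopes
            v = np.zeros_like(b)
            r = b - A @ v
            p = r.copy()
            for _ in range(n_iter):
                Ap = A @ p
                alpha = (r @ r) / (p @ Ap)
                v = v + alpha * p
                r_new = r - alpha * Ap
                beta = (r_new @ r_new) / (r @ r)
                p, r = r_new + beta * p, r_new
            return v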

  9. A novel three-dimensional mesh deformation method based on sphere relaxation

    International Nuclear Information System (INIS)

    Zhou, Xuan; Li, Shuixiang

    2015-01-01

    In our previous work (2013) [19], we developed a disk-relaxation-based method for two-dimensional mesh deformation. In this paper, the idea of disk relaxation is extended to sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure that applies initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent elements from acquiring negative volume and to further improve the mesh quality. Three-dimensional numerical applications, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere-relaxation-based approach generates deformed meshes of high quality, especially for complex boundaries and large deformations.

  10. A novel three-dimensional mesh deformation method based on sphere relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xuan [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China); Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China); Li, Shuixiang, E-mail: lsx@pku.edu.cn [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China)

    2015-10-01

    In our previous work (2013) [19], we developed a disk-relaxation-based method for two-dimensional mesh deformation. In this paper, the idea of disk relaxation is extended to sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure that applies initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent elements from acquiring negative volume and to further improve the mesh quality. Three-dimensional numerical applications, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere-relaxation-based approach generates deformed meshes of high quality, especially for complex boundaries and large deformations.

  11. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    Science.gov (United States)

    2015-12-24

    manufacturing today (namely, the 14nm FinFET silicon CMOS technology). The JPEG algorithm is selected as a motivational example since it is widely...TIFF images of a U.S. Air Force F-16 aircraft provided by the University of Southern California Signal and Image Processing Institute (SIPI) image...silicon CMOS technology currently in high volume manufacturing today (the 14 nm FinFET silicon CMOS technology). The main contribution of this

  12. Optimization the initial weights of artificial neural networks via genetic algorithm applied to hip bone fracture prediction

    OpenAIRE

    Chang, Y-T; Lin, J; Shieh, J-S; Abbod, MF

    2012-01-01

    This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with a first low-trauma hip fracture and 215 patients without hip fracture, all of whom answered a 78-item questionnaire. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expirat...

  13. Flat Knitting Loop Deformation Simulation Based on Interlacing Point Model

    Directory of Open Access Journals (Sweden)

    Jiang Gaoming

    2017-12-01

    Full Text Available In order to create realistic loop primitives suitable for faster CAD of flat-knitted fabric, we have studied the loop model as well as the variation of the loop surface. This paper proposes an interlacing-point-based model for the loop center curve and uses cubic Bezier curves to fit the central curves of the regular loop, elongated loop, transfer loop, and irregular deformed loop. In this way, a general model for the central curve of the deformed loop is obtained. The model is then used to perform texture mapping, texture interpolation, and brightness processing, simulating a clearly structured and lifelike deformed loop. The computer program LOOP was developed using this algorithm. The deformed loop is simulated with different yarns and applied to the design of a cable stitch, demonstrating the feasibility of the proposed algorithm. This paper provides a loop primitive simulation method characterized by lifelikeness, yarn material variability, and deformation flexibility, and facilitates loop-based fast computer-aided design (CAD) of knitted fabric.

  14. Improved image registration by sparse patch-based deformation estimation.

    Science.gov (United States)

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Lee, Seong-Whan; Shen, Dinggang

    2015-01-15

    Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, so that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for a new test subject: (1) we pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from the training images, where each dictionary atom consists of an image intensity patch and its respective local deformation; (3) a small set of training image patches in the coupled dictionary is selected to represent the image patch at each subject key point by sparse representation, and we can then predict the initial deformation for each subject key point by propagating the pre-estimated deformations of the selected training patches with the same sparse representation coefficients; and (4) we
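
    Steps (2)-(4) can be sketched with a greedy sparse-coding routine. The OMP-style loop below is a simplified stand-in for the sparse representation used in the paper, and all array names are illustrative.

        import numpy as np

        def sparse_code(patch, intensity_atoms, k=5):
            # Greedy (OMP-style) selection of k dictionary atoms whose
            # intensity patches best represent the subject patch.
            residual, idx = patch.copy(), []
            coef = np.zeros(0)
            for _ in range(k):
                scores = np.abs(intensity_atoms.T @ residual)
                scores[idx] = -np.inf                # do not reselect an atom
                idx.append(int(np.argmax(scores)))
                coef, *_ = np.linalg.lstsq(intensity_atoms[:, idx], patch, rcond=None)
                residual = patch - intensity_atoms[:, idx] @ coef
            return idx, coef

        def predict_deformation(patch, intensity_atoms, deformation_atoms, k=5):
            # Propagate the stored deformations of the selected training
            # patches with the same sparse coefficients (steps 3-4 above).
            idx, coef = sparse_code(patch, intensity_atoms, k)
            return deformation_atoms[:, idx] @ coef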

  15. A Comparison of FFD-based Nonrigid Registration and AAMs Applied to Myocardial Perfusion MRI

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Stegmann, Mikkel Bille; Ersbøll, Bjarne Kjær

    2006-01-01

    -form deformations (FFDs). AAMs are known to be much faster than nonrigid registration algorithms. On the other hand nonrigid registration algorithms are independent of a training set as required to build an AAM. To obtain a further comparison of the two methods, they are both applied to automatically register multi...

  16. Applying Advances in GPM Radiometer Intercalibration and Algorithm Development to a Long-Term TRMM/GPM Global Precipitation Dataset

    Science.gov (United States)

    Berg, W. K.

    2016-12-01

    The Global Precipitation Mission (GPM) Core Observatory, which was launched in February of 2014, provides a number of advances for satellite monitoring of precipitation including a dual-frequency radar, high frequency channels on the GPM Microwave Imager (GMI), and coverage over middle and high latitudes. The GPM concept, however, is about producing unified precipitation retrievals from a constellation of microwave radiometers to provide approximately 3-hourly global sampling. This involves intercalibration of the input brightness temperatures from the constellation radiometers, development of an a priori precipitation database using observations from the state-of-the-art GPM radiometer and radars, and accounting for sensor differences in the retrieval algorithm in a physically consistent way. The efforts of the GPM inter-satellite calibration working group, or XCAL team, and the radiometer algorithm team to create unified precipitation retrievals from the GPM radiometer constellation are fully implemented in the current version 4 GPM precipitation products. These include precipitation estimates from a total of seven conical-scanning and six cross-track scanning radiometers as well as high spatial and temporal resolution global level 3 gridded products. Work is now underway to extend this unified constellation-based approach to the combined TRMM/GPM data record starting in late 1997. The goal is to create a long-term global precipitation dataset employing these state-of-the-art calibration and retrieval algorithm approaches. This new long-term global precipitation dataset will incorporate the physics provided by the combined GPM GMI and DPR sensors into the a priori database, extend prior TRMM constellation observations to high latitudes, and expand the available TRMM precipitation data to the full constellation of available conical and cross-track scanning radiometers. This combined TRMM/GPM precipitation data record will thus provide a high-quality high

  17. Improved quantum-inspired evolutionary algorithm with diversity information applied to economic dispatch problem with prohibited operating zones

    International Nuclear Information System (INIS)

    Vianna Neto, Julio Xavier; Andrade Bernert, Diego Luis de; Santos Coelho, Leandro dos

    2011-01-01

    The objective of the economic dispatch problem (EDP) of electric power generation, whose characteristics are complex and highly nonlinear, is to schedule the committed generating unit outputs so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. Recently, as an alternative to conventional mathematical approaches, modern meta-heuristic optimization techniques have received much attention from researchers due to their ability to find near-global optimal solutions to EDPs. Research on merging evolutionary computation and quantum computation has been under way since the late 1990s. Inspired by quantum computation, this paper presents an improved quantum-inspired evolutionary algorithm (IQEA) based on population diversity information. A classical quantum-inspired evolutionary algorithm (QEA) and the IQEA were implemented and validated on a benchmark EDP with 15 thermal generators with prohibited operating zones. The results for the benchmark problem show that the proposed IQEA approach provides promising results compared with various methods available in the literature.
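
    One EDP-specific detail worth making concrete is the prohibited-operating-zone constraint. The repair routine below shows a common way an evolutionary algorithm keeps candidate unit outputs feasible; it is a generic sketch, not the constraint handling actually used in the IQEA paper, and the example unit data are hypothetical.

        def repair_output(p, p_min, p_max, prohibited_zones):
            # Clamp a unit's scheduled output to its limits, then snap it out
            # of any prohibited operating zone to the nearer zone edge.
            p = min(max(p, p_min), p_max)
            for lo, hi in prohibited_zones:
                if lo < p < hi:
                    p = lo if (p - lo) <= (hi - p) else hi
                    break
            return p

        # Hypothetical unit: 150-455 MW with two prohibited zones.
        print(repair_output(310.0, 150.0, 455.0, [(185, 225), (305, 335)]))  # -> 305.0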

  18. Improved quantum-inspired evolutionary algorithm with diversity information applied to economic dispatch problem with prohibited operating zones

    Energy Technology Data Exchange (ETDEWEB)

    Vianna Neto, Julio Xavier, E-mail: julio.neto@onda.com.b [Pontifical Catholic University of Parana, PUCPR, Undergraduate Program at Mechatronics Engineering, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil); Andrade Bernert, Diego Luis de, E-mail: dbernert@gmail.co [Pontifical Catholic University of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil); Santos Coelho, Leandro dos, E-mail: leandro.coelho@pucpr.b [Pontifical Catholic University of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil)

    2011-01-15

    The objective of the economic dispatch problem (EDP) of electric power generation, whose characteristics are complex and highly nonlinear, is to schedule the committed generating unit outputs so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. Recently, as an alternative to conventional mathematical approaches, modern meta-heuristic optimization techniques have received much attention from researchers due to their ability to find near-global optimal solutions to EDPs. Research on merging evolutionary computation and quantum computation has been under way since the late 1990s. Inspired by quantum computation, this paper presents an improved quantum-inspired evolutionary algorithm (IQEA) based on population diversity information. A classical quantum-inspired evolutionary algorithm (QEA) and the IQEA were implemented and validated on a benchmark EDP with 15 thermal generators with prohibited operating zones. The results for the benchmark problem show that the proposed IQEA approach provides promising results compared with various methods available in the literature.

  19. Improved quantum-inspired evolutionary algorithm with diversity information applied to economic dispatch problem with prohibited operating zones

    Energy Technology Data Exchange (ETDEWEB)

    Neto, Julio Xavier Vianna [Pontifical Catholic University of Parana, PUCPR, Undergraduate Program at Mechatronics Engineering, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil); Bernert, Diego Luis de Andrade; Coelho, Leandro dos Santos [Pontifical Catholic University of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil)

    2011-01-15

    The objective of the economic dispatch problem (EDP) of electric power generation, whose characteristics are complex and highly nonlinear, is to schedule the committed generating unit outputs so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. Recently, as an alternative to conventional mathematical approaches, modern meta-heuristic optimization techniques have received much attention from researchers due to their ability to find near-global optimal solutions to EDPs. Research on merging evolutionary computation and quantum computation has been under way since the late 1990s. Inspired by quantum computation, this paper presents an improved quantum-inspired evolutionary algorithm (IQEA) based on population diversity information. A classical quantum-inspired evolutionary algorithm (QEA) and the IQEA were implemented and validated on a benchmark EDP with 15 thermal generators with prohibited operating zones. The results for the benchmark problem show that the proposed IQEA approach provides promising results compared with various methods available in the literature. (author)

  20. A neural network based implementation of an MPC algorithm applied in the control systems of electromechanical plants

    Science.gov (United States)

    Marusak, Piotr M.; Kuntanapreeda, Suwat

    2018-01-01

    The paper considers the application of a neural network based implementation of a model predictive control (MPC) algorithm to electromechanical plants. The properties of such plants imply that a relatively short sampling time should be used. In such a case, however, finding the control value numerically may be too time-consuming. Therefore, the current paper tests a solution based on transforming the MPC optimization problem into a set of differential equations whose solution is the same as that of the original optimization problem. This set of differential equations can be interpreted as a dynamic neural network. In this approach, constraints can be introduced into the optimization problem with relative ease. Moreover, the solution of the optimization problem can be obtained faster than with a standard numerical quadratic programming routine, although very careful tuning of the algorithm is needed to achieve this. A DC motor and an electrohydraulic actuator are taken as illustrative examples. The feasibility and effectiveness of the proposed approach are demonstrated through numerical simulations.

  1. A Semiautomated Multilayer Picking Algorithm for Ice-sheet Radar Echograms Applied to Ground-Based Near-Surface Data

    Science.gov (United States)

    Onana, Vincent De Paul; Koenig, Lora Suzanne; Ruth, Julia; Studinger, Michael; Harbeck, Jeremy P.

    2014-01-01

    Snow accumulation over an ice sheet is the sole mass input, making it a primary measurement for understanding past, present, and future mass balance. Near-surface frequency-modulated continuous-wave (FMCW) radars image isochronous firn layers that record accumulation histories. The Semiautomated Multilayer Picking Algorithm (SAMPA) was designed and developed to trace annual accumulation layers in polar firn from both airborne and ground-based radars. SAMPA is based on the Radon transform (RT), computed by blocks and angular orientations over a radar echogram. For each echogram block, the RT maps segmented firn-layer features into peaks, which are picked using amplitude and width thresholds. A backward RT is then computed for each corresponding block, mapping the peaks back into picked segmented layers. The segmented layers are then connected and smoothed to achieve a final layer pick across the echogram. Once its input parameters are trained, SAMPA operates autonomously and can process hundreds of kilometers of radar data, picking more than 40 layers. SAMPA's final picks still require a cursory manual adjustment to remove discontinuous picks, which are likely not annual, and to correct inconsistencies in layer numbering. Despite the manual effort to train and check SAMPA results, it is an efficient tool for picking multiple accumulation layers in polar firn, reducing the time required compared with manual digitizing. More than 90% of well-detected layers can be tracked.
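
    The block-wise Radon picking step can be sketched with standard library routines. The angle window, thresholds and detrending below are illustrative guesses, not the trained SAMPA parameters.

        import numpy as np
        from scipy.signal import find_peaks
        from skimage.transform import radon

        def pick_block_layers(block, angles=np.linspace(80.0, 100.0, 21),
                              min_height=None, min_width=2):
            # Near-horizontal firn layers map to peaks of the sinogram; pick
            # them along the dominant-angle column with amplitude/width
            # thresholds, as in the SAMPA description above (simplified).
            sino = radon(block - block.mean(), theta=angles, circle=False)
            best_col = int(np.argmax(sino.max(axis=0)))
            peaks, _ = find_peaks(sino[:, best_col], height=min_height, width=min_width)
            return peaks, angles[best_col]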

  2. A novel methodology for 3D deformable dosimetry.

    Science.gov (United States)

    Yeo, U J; Taylor, M L; Dunn, L; Kron, T; Smith, R L; Franich, R D

    2012-04-01

    a result of the change in shape of the target between irradiations, even for a relatively simple deformation. Discrepancies of up to 30% of the maximum dose were evident from dose difference maps for three orthogonal planes taken through the isocenter of a stereotactic field. This paper describes the first use of a tissue-equivalent, 3D dose-integrating deformable phantom that yields integrated or redistributed dosimetric information. The proposed methodology readily yields three-dimensional (3D) dosimetric data from radiation delivery to the DEFGEL phantom in deformed and undeformed states. The impacts of deformation on dose distributions were readily seen in the isodose contours and line profiles from the three arrangements. It is demonstrated that the system is potentially capable of reproducibly emulating the physical deformation of an organ, and therefore can be used to evaluate absorbed doses to deformable targets and organs at risk in three dimensions and to validate deformation algorithms applied to dose distributions.

  3. A novel methodology for 3D deformable dosimetry

    International Nuclear Information System (INIS)

    Yeo, U. J.; Taylor, M. L.; Dunn, L.; Kron, T.; Smith, R. L.; Franich, R. D.

    2012-01-01

    three dimensions occurring as a result of the change in shape of the target between irradiations, even for a relatively simple deformation. Discrepancies of up to 30% of the maximum dose were evident from dose difference maps for three orthogonal planes taken through the isocenter of a stereotactic field. Conclusions: This paper describes the first use of a tissue-equivalent, 3D dose-integrating deformable phantom that yields integrated or redistributed dosimetric information. The proposed methodology readily yields three-dimensional (3D) dosimetric data from radiation delivery to the DEFGEL phantom in deformed and undeformed states. The impacts of deformation on dose distributions were readily seen in the isodose contours and line profiles from the three arrangements. It is demonstrated that the system is potentially capable of reproducibly emulating the physical deformation of an organ, and therefore can be used to evaluate absorbed doses to deformable targets and organs at risk in three dimensions and to validate deformation algorithms applied to dose distributions.

  4. Applying an animal model to quantify the uncertainties of an image-based 4D-CT algorithm

    International Nuclear Information System (INIS)

    Pierce, Greg; Battista, Jerry; Wang, Kevin; Lee, Ting-Yim

    2012-01-01

    The purpose of this paper is to use an animal model to quantify the spatial displacement uncertainties and test the fundamental assumptions of an image-based 4D-CT algorithm in vivo. Six female Landrace cross pigs were ventilated and imaged using a 64-slice CT scanner (GE Healthcare) operating in axial cine mode. The breathing amplitude pattern of the pigs was varied by periodically crimping the ventilator gas return tube during image acquisition. The image data were used to determine the displacement uncertainties that result from matching CT images at the same respiratory phase using normalized cross correlation (NCC) as the matching criterion. Additionally, the ability to match the respiratory phase of a 4.0 cm subvolume of the thorax to a reference subvolume using only a single overlapping 2D slice from the two subvolumes was tested by varying the location of the overlapping matching image within the subvolume and examining the effect this had on the displacement relative to the reference volume. The displacement uncertainty resulting from matching two respiratory images using NCC ranged from 0.54 ± 0.10 mm per match to 0.32 ± 0.16 mm per match in the lung of the animal. The uncertainty was found to propagate in quadrature, increasing with the number of NCC matches performed. In comparison, the minimum displacement achievable if two respiratory images were matched perfectly in phase ranged from 0.77 ± 0.06 to 0.93 ± 0.06 mm in the lung. The assumption that subvolumes from separate cine scans can be matched by matching a single overlapping 2D image between the two subvolumes was validated. An in vivo animal model was developed to test an image-based 4D-CT algorithm. The uncertainties associated with using NCC to match the respiratory phase of two images were quantified, and the assumption that a 4.0 cm 3D subvolume can be matched in respiratory phase by matching a single 2D image from the 3D subvolume was validated. The work in this paper shows the image-based 4D
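
    The matching criterion itself is simple to state in code. Below is a minimal NCC phase matcher, with no claims about the paper's exact windowing or preprocessing.

        import numpy as np

        def ncc(a, b):
            # Normalized cross correlation of two equally sized images.
            a = a.ravel() - a.mean()
            b = b.ravel() - b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def match_phase(reference_slice, cine_frames):
            # Return the cine frame index whose NCC with the reference slice
            # is highest, i.e. the frame judged to share its respiratory phase.
            scores = [ncc(reference_slice, f) for f in cine_frames]
            return int(np.argmax(scores)), max(scores)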

  5. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

  6. Applying network analysis and Nebula (neighbor-edges based and unbiased leverage algorithm) to ToxCast data.

    Science.gov (United States)

    Ye, Hao; Luo, Heng; Ng, Hui Wen; Meehan, Joe; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2016-01-01

    ToxCast data have been used to develop models for predicting in vivo toxicity. To predict the in vivo toxicity of a new chemical using a ToxCast-based model, its ToxCast bioactivity data are needed but not normally available. The capability of predicting ToxCast bioactivity data is therefore necessary to fully utilize ToxCast data in the risk assessment of chemicals. We aimed to understand and elucidate the relationships between the chemicals and the bioactivity data of the assays in ToxCast and to develop a network analysis based method for predicting ToxCast bioactivity data. We conducted modularity analysis on a quantitative network constructed from ToxCast data to explore the relationships between the assays and chemicals. We further developed Nebula (neighbor-edges based and unbiased leverage algorithm) for predicting ToxCast bioactivity data. Modularity analysis on the network constructed from ToxCast data yielded seven modules. Assays and chemicals in the seven modules were distinct. Leave-one-out cross-validation yielded a Q² of 0.5416, indicating that ToxCast bioactivity data can be predicted by Nebula. Prediction domain analysis showed that some types of ToxCast assay data could be more reliably predicted by Nebula than others. Network analysis is a promising approach to understanding ToxCast data. Nebula is an effective algorithm for predicting ToxCast bioactivity data, helping fully utilize ToxCast data in the risk assessment of chemicals. Published by Elsevier Ltd.

  7. 3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters

    Science.gov (United States)

    Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2011-03-01

    The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as pitch, reconstruction filter (soft, standard and bone) and reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and 2D and 3D NPS were then computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with lobular shape was observed, while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared with the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric, although the impact of non-stationary noise effects may need further investigation.
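
    For reference, a common estimator for a local 2D NPS from uniform-phantom ROIs looks like the following; this is a textbook form, not necessarily the exact processing used in the study.

        import numpy as np

        def nps_2d(rois, dx, dy):
            # Ensemble-averaged 2D NPS: detrend each ROI, take |FFT|^2 and
            # scale by the pixel area over the ROI size.
            ny, nx = rois[0].shape
            acc = np.zeros((ny, nx))
            for roi in rois:
                noise = roi - roi.mean()   # a fitted 2D polynomial is often subtracted
                acc += np.abs(np.fft.fft2(noise)) ** 2
            return (dx * dy) / (nx * ny) * acc / len(rois)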

  8. Continuous Recording and Interobserver Agreement Algorithms Reported in the "Journal of Applied Behavior Analysis" (1995-2005)

    Science.gov (United States)

    Mudford, Oliver C.; Taylor, Sarah Ann; Martin, Neil T.

    2009-01-01

    We reviewed all research articles in 10 recent volumes of the "Journal of Applied Behavior Analysis (JABA)": Vol. 28(3), 1995, through Vol. 38(2), 2005. Continuous recording was used in the majority (55%) of the 168 articles reporting data on free-operant human behaviors. Three methods for reporting interobserver agreement (exact agreement,…

  9. Fast free-form deformable registration via calculus of variations

    International Nuclear Information System (INIS)

    Lu Weiguo; Chen Mingli; Olivera, Gustavo H; Ruchala, Kenneth J; Mackie, Thomas R

    2004-01-01

    In this paper, we present a fully automatic, fast and accurate deformable registration technique. This technique deals with free-form deformation. It minimizes an energy functional that combines both similarity and smoothness measures. Using calculus of variations, the minimization problem is represented as a set of nonlinear elliptic partial differential equations (PDEs). A Gauss-Seidel finite difference scheme is used to iteratively solve the PDEs. The registration is refined by a multi-resolution approach. The whole process is fully automatic. It takes less than 3 min to register two three-dimensional (3D) image sets of size 256 × 256 × 61 using a single 933 MHz personal computer. Extensive experiments are presented, including simulations, phantom studies and clinical image studies. The experimental results show that our model and algorithm are suited to the registration of temporal images of a deformable body. The registration of inspiration and expiration phases of lung images shows that the method is able to deal with large deformations. When applied to daily CT images of a prostate patient, the results show that registration based on iterative refinement of the displacement field is appropriate to describe the local deformations in the prostate and the rectum. Similarity measures improved significantly after registration. The target application of this paper is radiotherapy treatment planning and evaluation that incorporates internal organ deformation throughout the course of radiation therapy. The registration method could equally be applied in diagnostic radiology.
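
    An impression of such an iterative variational update can be given with a demons-style step, in which Gaussian smoothing of the displacement field stands in for the smoothness term of the energy functional. This is a simplified cousin of the Gauss-Seidel PDE scheme, not the authors' implementation.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_step(fixed, moving, ux, uy, sigma=1.5):
            # Warp the moving image with the current field, compute a force
            # along the fixed image's gradient, then smooth the updated field.
            gy, gx = np.gradient(fixed)
            yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
            warped = map_coordinates(moving, [yy + uy, xx + ux], order=1)
            diff = warped - fixed
            denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
            ux = gaussian_filter(ux - diff * gx / denom, sigma)
            uy = gaussian_filter(uy - diff * gy / denom, sigma)
            return ux, uy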

  10. Intelligent simulated annealing algorithm applied to the optimization of the main magnet for magnetic resonance imaging machine; Algoritmo simulated annealing inteligente aplicado a la optimizacion del iman principal de una maquina de resonancia magnetica de imagenes

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Lopez, Hector [Universidad de Oriente, Santiago de Cuba (Cuba). Centro de Biofisica Medica]. E-mail: hsanchez@cbm.uo.edu.cu

    2001-08-01

    This work describes an alternative Simulated Annealing algorithm applied to the design of the main magnet of a Magnetic Resonance Imaging machine. The algorithm uses a probabilistic radial basis neural network to classify candidate solutions before the objective function is evaluated. This procedure reduces by up to 50% the number of iterations required by simulated annealing to reach the global maximum, compared with the standard SA algorithm. The algorithm was applied to design a 0.1050 T four-coil resistive magnet, which produces a magnetic field 2.13 times more uniform than the solution given by SA. (author)
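
    The structure of the method, a cheap classifier filtering candidate solutions before the expensive field computation, can be sketched as follows. The surrogate here is an arbitrary callable standing in for the paper's probabilistic network, and the acceptance rule is plain SA.

        import math
        import random

        def surrogate_sa(x0, objective, surrogate, neighbor,
                         t0=1.0, cooling=0.95, steps=2000):
            # Maximizing SA loop: candidates the surrogate scores worse than
            # the current solution are skipped without evaluating the
            # expensive objective (e.g. magnet field-uniformity computation).
            x, fx, t = x0, objective(x0), t0
            for _ in range(steps):
                cand = neighbor(x)
                if surrogate(cand) < surrogate(x):
                    t *= cooling
                    continue
                fc = objective(cand)
                if fc > fx or random.random() < math.exp((fc - fx) / t):
                    x, fx = cand, fc
                t *= cooling
            return x, fx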

  11. Applying Probability Theory for the Quality Assessment of a Wildfire Spread Prediction Framework Based on Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Andrés Cencerrado

    2013-01-01

    Full Text Available This work presents a framework for assessing how the constraints existing at the time of attending an ongoing forest fire affect simulation results, both in terms of the quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the principles of probability theory. Thus, a thorough statistical study is presented, oriented towards characterizing the adjustment technique in order to help operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region of Spain that is among the most prone to forest fires: El Cap de Creus.

  12. An observation planning algorithm applied to multi-objective astronomical observations and its simulation in COSMOS field

    Science.gov (United States)

    Jin, Yi; Gu, Yonggang; Zhai, Chao

    2012-09-01

    Multi-Object Fiber Spectroscopic sky surveys are now booming; examples include LAMOST, already built by China, the BIGBOSS project put forward by the U.S. Lawrence Berkeley National Lab, and the GTC (Gran Telescopio Canarias) telescope developed by the United States, Mexico and Spain. They all use or will use this approach, in which each fiber can be moved within a certain area to reach one astronomical target, so observation planning is particularly important for these sky surveys. An observation planning algorithm for multi-objective astronomical observations is developed here. It avoids collision and interference between the fiber positioning units in the focal plane during observation of one field of view, so that the objects of interest can be observed in a limited number of rounds with maximum efficiency. Observation simulations can also be made for a wide field of view through multi-FOV observation. After the observation plan is built, a simulation is run on the COSMOS field using the GTC telescope. Galaxies, stars and high-redshift LBG galaxies of interest are selected after removal of the mask area, which may contain bright stars. A nine-FOV simulation is then completed, and the observation efficiency and fiber utilization ratio for every round are given. Finally, allocating a certain number of fibers to the background sky, giving different weights to different objects, and moving the FOV to improve the overall observation efficiency are discussed.

  13. Decision making based on data analysis and optimization algorithm applied for cogeneration systems integration into a grid

    Science.gov (United States)

    Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan

    2018-05-01

    Cogeneration and trigeneration systems can contribute to reducing primary energy consumption and greenhouse gas emissions in the residential and tertiary sectors, by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) and lower emissions compared with conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced that is divided into two parts: the first is a strategy based on a multi-objective optimization tool with data analysis, and the second is based on an optimization algorithm. The power dispatching of the Lebanese electricity grid is then simulated and considered as a case study in order to demonstrate the suitability of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique is shown to be compatible with the thermal demand for district heating.

  14. Thermal-economic optimisation of a CHP gas turbine system by applying a fit-problem genetic algorithm

    Science.gov (United States)

    Ferreira, Ana C. M.; Teixeira, Senhorinha F. C. F.; Silva, Rui G.; Silva, Ângela M.

    2018-04-01

    Cogeneration allows the optimal use of primary energy sources and significant reductions in carbon emissions. Its use has great potential for applications in the residential sector. This study aims to develop a methodology for the thermal-economic optimisation of a small-scale micro-gas turbine for cogeneration purposes, able to fulfil domestic energy needs with a thermal power output of 125 kW. A constrained non-linear optimisation model was built. The objective function is the maximisation of the annual worth of the combined heat and power system, representing the balance between annual incomes and expenditures, subject to physical and economic constraints. A genetic algorithm coded in the Java programming language was developed. An optimal micro-gas turbine able to produce 103.5 kW of electrical power with a positive annual profit (i.e. 11,925 €/year) was identified. The investment can be recovered in 4 years and 9 months, less than half of the system's expected lifetime.

  15. Plastic deformation

    NARCIS (Netherlands)

    Sitter, de L.U.

    1937-01-01

    § 1. Plastic deformation of solid matter under high confining pressures has been insufficiently studied. Jeffreys [1] devotes a few paragraphs to deformation of solid matter as a preface to his chapter on the isostasy problem. He distinguishes two properties of solid matter with regard to its

  16. Periodic modulation-based stochastic resonance algorithm applied to quantitative analysis for weak liquid chromatography-mass spectrometry signal of granisetron in plasma

    Science.gov (United States)

    Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei

    2007-05-01

    The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, stochastic resonance (SR) is achieved by introducing an external periodic force into the nonlinear system. The parameters were optimized in two steps, giving attention to both the signal-to-noise ratio (S/N) and the peak shape of the output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of the LC-MS peak was enhanced significantly, and the distorted peak shape that often appears in the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, the method lowered the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL, respectively, to 0.01 and 0.02 ng/mL, and exhibited good linearity, accuracy and precision, ensuring accurate determination of the target analyte.
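
    The record describes SR as passing the chromatographic signal through a nonlinear system driven by an added periodic force. A minimal sketch of that idea, assuming the standard bistable double-well model dx/dt = ax - bx^3 + s(t) + A*cos(2*pi*f*t); the parameter values are hypothetical placeholders, not the paper's tuned system.

```python
import numpy as np

def periodic_sr(signal, dt, a=1.0, b=1.0, amp=0.3, f_mod=0.05):
    """Pass a noisy signal through a bistable system with an added
    periodic force (illustrative parameters; in practice they are tuned
    for both S/N and output peak shape, as the study describes)."""
    t = np.arange(len(signal)) * dt
    force = amp * np.cos(2 * np.pi * f_mod * t)  # external periodic force
    x = np.zeros(len(signal))
    for i in range(1, len(signal)):
        # Euler step of dx/dt = a*x - b*x**3 + input + periodic force
        drift = a * x[i-1] - b * x[i-1]**3 + signal[i-1] + force[i-1]
        x[i] = x[i-1] + drift * dt
    return x
```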

  17. Recent infection testing algorithm (RITA) applied to new HIV diagnoses in England, Wales and Northern Ireland, 2009 to 2011.

    Science.gov (United States)

    Aghaizu, A; Murphy, G; Tosswill, J; DeAngelis, D; Charlett, A; Gill, O N; Ward, H; Lattimore, S; Simmons, Rd; Delpech, V

    2014-01-16

    In 2009, Public Health England (PHE) introduced the routine application of a recent infection testing algorithm (RITA) to new HIV diagnoses, where a positive RITA result indicates likely acquisition of infection in the previous six months. Laboratories submit serum specimens to PHE for testing using the HIV 1/2gO AxSYM assay modified for the determination of HIV antibody avidity. Results are classified according to avidity index and data on CD₄ count, antiretroviral treatment and the presence of an AIDS-defining illness. Between 2009 and 2011, 38.4% (6,966/18,134) of new HIV diagnoses in England, Wales and Northern Ireland were tested. Demographic characteristics of those tested were similar to those of all persons with diagnosed HIV. Overall, the proportion with recent infection was 14.7% (1,022/6,966), and it was higher among men who have sex with men (MSM) (22.3%, 720/3,223) than among heterosexual men and women (7.8%, 247/3,164). Higher proportions were observed among persons aged 15-24 years compared with those ≥50 years (MSM: 31.2% (139/445) vs 13.6% (42/308); heterosexual men and women: 17.3% (43/249) vs 6.2% (31/501)). Among heterosexual men and women, black Africans were least likely to have recent infection compared with whites (4.8%, 90/1,892 vs 13.3%, 97/728; adjusted odds ratio: 0.6; 95% CI: 0.4-0.9). Our results indicate ongoing HIV transmission during the study period, particularly among MSM.

  18. Deriving causes of child mortality by re–analyzing national verbal autopsy data applying a standardized computer algorithm in Uganda, Rwanda and Ghana

    Directory of Open Access Journals (Sweden)

    Li Liu

    2015-06-01

    Background: To accelerate progress toward Millennium Development Goal 4, reliable information on causes of child mortality is critical. With more national verbal autopsy (VA) studies becoming available, how to improve the consistency of national VA-derived child causes of death should be considered for the purpose of global comparison. We aimed to adapt a standardized computer algorithm to re-analyze national child VA studies conducted recently in Uganda, Rwanda and Ghana, and to compare our results with those derived from physician review, to explore issues surrounding the application of the standardized algorithm in place of physician review. Methods and Findings: We adapted the standardized computer algorithm considering the disease profile in Uganda, Rwanda and Ghana. We then derived cause-specific mortality fractions applying the adapted algorithm and compared the results with those ascertained by physician review, examining the individual- and population-level agreement. Our results showed that the leading causes of child mortality in Uganda, Rwanda and Ghana were pneumonia (16.5-21.1%) and malaria (16.8-25.6%) among children below five years, and intrapartum-related complications (6.4-10.7%) and preterm birth complications (4.5-6.3%) among neonates. The individual-level agreement was poor to substantial across causes (kappa statistics: -0.03 to 0.83), with moderate to substantial agreement observed for injury, congenital malformation, preterm birth complications, malaria and measles. At the population level, despite fairly different cause-specific mortality fractions, the ranking of the leading causes was largely similar. Conclusions: The standardized computer algorithm produced an internally consistent distribution of causes of child mortality. The results were also qualitatively comparable to those based on physician review from the perspective of public health policy. The standardized computer algorithm has the advantage of

  19. Natural speech algorithm applied to baseline interview data can predict which patients will respond to psilocybin for treatment-resistant depression.

    Science.gov (United States)

    Carrillo, Facundo; Sigman, Mariano; Fernández Slezak, Diego; Ashton, Philip; Fitzgerald, Lily; Stroud, Jack; Nutt, David J; Carhart-Harris, Robin L

    2018-04-01

    Natural speech analytics has seen improvements over recent years, and this has opened a window for objective and quantitative diagnosis in psychiatry. Here, we used a machine learning algorithm applied to natural speech to ask whether language properties measured before psilocybin treatment for treatment-resistant depression can predict for which patients it will be effective and for which it will not. A baseline autobiographical memory interview was conducted and transcribed. Patients with treatment-resistant depression received two doses of psilocybin, 10 mg and 25 mg, 7 days apart. Psychological support was provided before, during and after all dosing sessions. Quantitative speech measures were applied to the interview data from 17 patients and 18 untreated age-matched healthy control subjects. A machine learning algorithm was used to classify between controls and patients and predict treatment response. Speech analytics and machine learning successfully differentiated depressed patients from healthy controls and identified treatment responders from non-responders with a statistically significant accuracy of 85% (75% precision). Automatic natural language analysis was used to predict effective response to treatment with psilocybin, suggesting that these tools offer a highly cost-effective facility for screening individuals for treatment suitability and sensitivity. The sample size was small, and replication is required to strengthen inferences from these results.

  20. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm have been used to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of medicine to be applied at that moment. Feed-forward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data have been recorded with a 22-channel Nihon Kohden 9200 EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in placing the recording electrodes. EEG data have been sampled once every 2 milliseconds. The artificial neural network has been designed with 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The network inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the current EEG segment in that range to the total PSD of an EEG segment taken prior to anesthesia.
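
    For orientation, a sketch of the forward pass of an Elman network, whose defining feature is the feedback of the previous hidden state through recurrent weights; the sigmoid hidden and output layers match the record, but the 60-30-1 weights here are placeholders and the conjugate gradient training step is not shown.

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out, b_h, b_o):
    """Forward pass of a simple Elman network: the hidden state from the
    previous time step is fed back through the recurrent weights W_rec.
    x_seq: (T, n_in) input sequence; returns (T, n_out) outputs."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x in x_seq:
        h = sigmoid(W_in @ x + W_rec @ h + b_h)   # context-layer feedback
        outputs.append(sigmoid(W_out @ h + b_o))  # sigmoid output layer
    return np.array(outputs)
```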

  1. Two-step algorithm of generalized PAPA method applied to linear programming solution of dynamic matrix control

    International Nuclear Information System (INIS)

    Shimizu, Yoshiaki

    1991-01-01

    In recent complicated nuclear systems, there are increasing demands for developing highly advanced procedures for solving various problems. Among these, keen interest has been paid to man-machine communication to improve both safety and economy. Many optimization methods are well suited to this purpose. In this preliminary note, we are concerned with the application of linear programming (LP) to it. First we present a new, superior version of the generalized PAPA method (GEPAPA) for solving LP problems. We then examine its effectiveness when applied to derive dynamic matrix control (DMC) as the LP solution. The approach aims at the above goal through quality control of processes that appear in the system. (author)

  2. Strong discontinuity with cam clay under large deformations

    DEFF Research Database (Denmark)

    Katic, Natasa; Hededal, Ole

    2008-01-01

    The work shows the simultaneous implementation of the Strong Discontinuity Approach (SDA), by means of Enhanced Assumed Strain (EAS), and Critical State Soil Mechanics (CSSM) in the large-strain regime. The numerical model is based on an additive decomposition of the displacement gradient into a conforming and an enhanced part. The localized deformations are approximated by means of a discontinuous displacement field. The applied algorithm leads to a predictor/corrector procedure which is formally identical to the return-mapping algorithm of the classical (local and continuous) Cam clay model.

  3. CCS Site Optimization by Applying a Multi-objective Evolutionary Algorithm to Semi-Analytical Leakage Models

    Science.gov (United States)

    Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.

    2011-12-01

    Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass
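
    The trade-off sets mentioned above are Pareto fronts over (total cost, injected CO2 mass). As a small illustration of the selection step inside such a multi-objective evolutionary algorithm, the sketch below filters a set of candidate well configurations down to the non-dominated ones; the objective columns are assumed to be arranged so that every column is minimized (e.g. cost and negated injected mass).

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset for minimization of all columns.
    For the CCS case the columns could be (total cost, -injected CO2 mass)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]
```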

  4. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    Science.gov (United States)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the recent decade, analyzing remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, namely bagged CART, stochastic gradient boosting and neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with a linear kernel. Each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross validation, independent validation and validation with the full training data. Moreover, the statistical significance of differences between the classification methods was assessed using ANOVA and Tukey tests. In general, the results showed that random forest, with a marginal difference compared to bagged CART and stochastic gradient boosting, is the best performing method, whereas based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that neural network with feature extraction and the linear support vector machine had better processing speed than the others.

  5. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    Science.gov (United States)

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.

  6. High resolution, large deformation 3D traction force microscopy.

    Directory of Open Access Journals (Sweden)

    Jennet Toyjanova

    Traction Force Microscopy (TFM) is a powerful approach for quantifying cell-material interactions that over the last two decades has contributed significantly to our understanding of cellular mechanosensing and mechanotransduction. In addition, recent advances in three-dimensional (3D) imaging and traction force analysis (3D TFM) have highlighted the significance of the third dimension in influencing various cellular processes. Yet irrespective of dimensionality, almost all TFM approaches have relied on a linear elastic theory framework to calculate cell surface tractions. Here we present a new high resolution 3D TFM algorithm which utilizes a large deformation formulation to quantify cellular displacement fields with unprecedented resolution. The results feature some of the first experimental evidence that cells are indeed capable of exerting large material deformations, which require the formulation of a new theoretical TFM framework to accurately calculate the traction forces. Based on our previous 3D TFM technique, we reformulate our approach to accurately account for large material deformation and quantitatively contrast and compare both linear and large deformation frameworks as a function of the applied cell deformation. Particular attention is paid in estimating the accuracy penalty associated with utilizing a traditional linear elastic approach in the presence of large deformation gradients.

  7. Semiautomated four-dimensional computed tomography segmentation using deformable models

    International Nuclear Information System (INIS)

    Ragan, Dustin; Starkschall, George; McNutt, Todd; Kaus, Michael; Guerrero, Thomas; Stevens, Craig W.

    2005-01-01

    The purpose of this work is to demonstrate the feasibility of applying a commercial prototype deformable model algorithm to the delineation of anatomic structures on four-dimensional (4D) computed tomography (CT) image data sets. We acquired a 4D CT image data set of a patient's thorax that consisted of three-dimensional (3D) image data sets from eight phases in the respiratory cycle. The contours of the right and left lungs, cord, heart, and esophagus were manually delineated on the end-inspiration data set. An interactive deformable model algorithm, originally intended for deforming an atlas-based model surface to a 3D CT image data set, was applied in an automated fashion. Triangulations based on the contours generated on each phase were deformed to the CT data set on the succeeding phase to generate the contours on that phase. Deformation was propagated through the eight phases, and the contours obtained on the end-inspiration data set were compared with the original manually delineated contours. Structures defined by high-density gradients, such as lungs, cord, and heart, were accurately reproduced, except in regions where other gradient boundaries may have confused the algorithm, such as near bronchi. The algorithm failed to accurately contour the esophagus, a soft-tissue structure completely surrounded by tissue of similar density, without manual interaction. This technique has the potential to facilitate contour delineation in 4D CT image data sets, and future evolution of the software is expected to improve the process.

  8. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage

    Energy Technology Data Exchange (ETDEWEB)

    Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias [University of Erlangen-Nuremberg, Department of Neuroradiology, Erlangen (Germany); Scholz, Bernhard [Siemens Healthcare GmbH, Forchheim (Germany); Royalty, Kevin [Siemens Medical Solutions, USA, Inc., Hoffman Estates, IL (United States)

    2017-01-15

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)

  9. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage

    International Nuclear Information System (INIS)

    Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias; Scholz, Bernhard; Royalty, Kevin

    2017-01-01

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)

  10. Applying Genetic Algorithms and RIA technologies to the development of Complex-VRP Tools in real-world distribution of petroleum products

    Directory of Open Access Journals (Sweden)

    Antonio Moratilla Ocaña

    2014-12-01

    Distribution problems have generated a large body of research and development covering the VRP problem and its many variants, but few investigations examine it as an information system, and far fewer consider how it should be addressed from a development and implementation point of view. This paper describes the characteristics of a real information system for fuel distribution problems at country scale, joining VRP research and development work using genetic algorithms with the design of a web-based information system. A view of the traditional workflow in this area is given, along with the new approach on which the proposed system is based. Taking into account all the constraints in the field, the authors have developed a web-based VRP solution using genetic algorithms with multiple web frameworks for each architecture layer, focusing on functionality and usability in order to minimize human error and maximize productivity. To achieve these goals, the authors used SmartGWT as a powerful web-based RIA SPA framework with Java integration, together with multiple server frameworks and OSS-based solutions, applied to the development of a very complex VRP system for a logistics operator of petroleum products.

  11. Fractality of the deformation relief of polycrystalline aluminum

    Directory of Open Access Journals (Sweden)

    М.В. Карускевич

    2006-02-01

    The possibility of applying the fractal geometry method to the analysis of surface deformation structures under cyclic loading is presented. It is shown that the deformation relief of alclad aluminium alloys meets the criteria of fractality. For the fractal dimension estimation, the "box-counting" method can be applied.
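
    As a concrete illustration of the box-counting estimate mentioned above, the following sketch computes the fractal dimension of a binarized image of the deformation relief. The power-of-two image size, the binarization itself and the box sizes are assumptions for the sketch, not details from the record.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a non-empty binary image of the
    deformation relief by box counting (assumes a square image whose side
    is a power of two)."""
    n = mask.shape[0]
    sizes = [2**k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        # count boxes of side s containing at least one relief pixel
        m = n // s * s
        boxes = mask[:m, :m].reshape(m // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # slope of log(count) versus log(1/size) gives the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```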

  12. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate for the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman iteration and the operator splitting method. The algorithm is simple, efficient, and can easily be generalized for different scenarios involving non-rigid deformations.

  13. Deformation microstructures

    DEFF Research Database (Denmark)

    Hansen, N.; Huang, X.; Hughes, D.A.

    2004-01-01

    Microstructural characterization and modeling have shown that a variety of metals deformed by different thermomechanical processes follow a general path of grain subdivision, by dislocation boundaries and high-angle boundaries. This subdivision has been observed down to very small structural scales, of the order of 10 nm, produced by deformation under large sliding loads. Limits to the evolution of microstructural parameters during monotonic loading have been investigated based on characterization by transmission electron microscopy. Such limits have been observed at an equivalent strain of about 10.

  14. Developing a Reading Concentration Monitoring System by Applying an Artificial Bee Colony Algorithm to E-Books in an Intelligent Classroom

    Directory of Open Access Journals (Sweden)

    Yueh-Min Huang

    2012-10-01

    A growing number of educational studies apply sensors to improve student learning in real classroom settings. However, how can sensors be integrated into classrooms to help instructors find out students' reading concentration rates and thus better increase learning effectiveness? The aim of the current study was to develop a reading concentration monitoring system for use with e-books in an intelligent classroom and to help instructors find out the students' reading concentration rates. The proposed system uses three types of sensor technologies, namely a webcam, heartbeat sensor, and blood oxygen sensor to detect the learning behaviors of students by capturing various physiological signals. An artificial bee colony (ABC) optimization approach is applied to the data gathered from these sensors to help instructors understand their students' reading concentration rates in a classroom learning environment. The results show that the use of the ABC algorithm in the proposed system can effectively obtain near-optimal solutions. The system has a user-friendly graphical interface, making it easy for instructors to clearly understand the reading status of their students.

  15. Developing a reading concentration monitoring system by applying an artificial bee colony algorithm to e-books in an intelligent classroom.

    Science.gov (United States)

    Hsu, Chia-Cheng; Chen, Hsin-Chin; Su, Yen-Ning; Huang, Kuo-Kuang; Huang, Yueh-Min

    2012-10-22

    A growing number of educational studies apply sensors to improve student learning in real classroom settings. However, how can sensors be integrated into classrooms to help instructors find out students' reading concentration rates and thus better increase learning effectiveness? The aim of the current study was to develop a reading concentration monitoring system for use with e-books in an intelligent classroom and to help instructors find out the students' reading concentration rates. The proposed system uses three types of sensor technologies, namely a webcam, heartbeat sensor, and blood oxygen sensor to detect the learning behaviors of students by capturing various physiological signals. An artificial bee colony (ABC) optimization approach is applied to the data gathered from these sensors to help instructors understand their students' reading concentration rates in a classroom learning environment. The results show that the use of the ABC algorithm in the proposed system can effectively obtain near-optimal solutions. The system has a user-friendly graphical interface, making it easy for instructors to clearly understand the reading status of their students.
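
    For illustration, a minimal artificial bee colony optimizer in the spirit of the system described above. The objective function, bounds and control parameters are placeholders, and the employed and onlooker phases are merged for brevity; a full ABC implementation selects onlooker sources by fitness-proportional probability.

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony sketch for a box-constrained objective.
    'f' maps a parameter vector to a cost, e.g. the misfit between sensor
    features and a concentration model (hypothetical usage)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    foods = rng.uniform(lo, hi, (n_food, dim))
    costs = np.array([f(x) for x in foods])
    trials = np.zeros(n_food)
    for _ in range(iters):
        for i in range(n_food):
            k = rng.integers(n_food - 1)
            k += k >= i                    # random partner food source != i
            j = rng.integers(dim)
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            cand = np.clip(cand, lo, hi)
            c = f(cand)
            if c < costs[i]:               # greedy selection
                foods[i], costs[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
            if trials[i] > limit:          # scout phase: abandon the source
                foods[i] = rng.uniform(lo, hi)
                costs[i], trials[i] = f(foods[i]), 0
    best = costs.argmin()
    return foods[best], costs[best]
```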

  16. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    Science.gov (United States)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the various wavefront control algorithms for adaptive optics (AO) systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of AO systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, giving great advantages in calculation and storage. For an AO system with thousands of actuators, the computational complexity of the direct gradient wavefront control algorithm is about O(n²) to O(n³), while that of the iterative wavefront control algorithm is about O(n) to O(n^(3/2)), where n is the number of actuators of the AO system. The greater the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
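
    A compact way to see the complexity argument: the direct method applies a precomputed reconstructor matrix each frame, while the iterative method solves the slope equations anew with a few cheap iterations. The sketch below contrasts the two under the simplest possible assumptions (dense matrices, plain conjugate gradient on the normal equations); a real AO system would exploit the sparsity of the interaction matrix and warm-start from the previous frame.

```python
import numpy as np

def direct_control(D_pinv, s):
    """Direct gradient control: one product with a reconstructor that was
    precomputed offline, e.g. D_pinv = np.linalg.pinv(D)."""
    return D_pinv @ s

def iterative_control(D, s, n_iter=30):
    """Plain conjugate gradient on the normal equations D^T D v = D^T s.
    Each iteration costs a few matrix-vector products; with a sparse D the
    per-frame cost grows far more slowly than for the dense reconstructor."""
    A, b = D.T @ D, D.T @ s
    v = np.zeros(A.shape[0])
    r = b.copy()
    p, rs = r.copy(), r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        v += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v
```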

  17. A general scheme for training and optimization of the Grenander deformable template model

    DEFF Research Database (Denmark)

    Fisker, Rune; Schultz, Nette; Duta, N.

    2000-01-01

    We propose a general scheme for applying the deformable template model proposed by Grenander et al. (1991) to a new problem with minimal manual interaction, besides supplying a training set, which can be done by a non-expert user. The main contributions compared to previous work are a supervised learning scheme for the model parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criterion. The fast initialization algorithm is based on a search approach using...

  18. Improving oncoplastic breast tumor bed localization for radiotherapy planning using image registration algorithms

    Science.gov (United States)

    Wodzinski, Marek; Skalski, Andrzej; Ciepiela, Izabela; Kuszewski, Tomasz; Kedzierawski, Piotr; Gajda, Janusz

    2018-02-01

    Knowledge about tumor bed localization and its shape analysis is a crucial factor for preventing irradiation of healthy tissues during supportive radiotherapy and, as a result, cancer recurrence. The localization process is especially hard for tumors placed near soft tissues, which undergo complex, nonrigid deformations. Among them, breast cancer can be considered the most representative example. A natural approach to improving tumor bed localization is the use of image registration algorithms. However, this involves two unusual aspects which are not common in typical medical image registration: the real deformation field is discontinuous, and there is no direct correspondence between the cancer and its bed in the source and the target 3D images, respectively. The tumor no longer exists during radiotherapy planning. Therefore, a traditional evaluation approach based on known, smooth deformations and target registration error is not directly applicable. In this work, we propose alternative artificial deformations which model the tumor bed creation process. We perform a comprehensive evaluation of the most commonly used deformable registration algorithms: B-Splines free-form deformations (B-Splines FFD), different variants of the Demons, and TV-L1 optical flow. The evaluation procedure includes quantitative assessment of the dedicated artificial deformations, target registration error calculation, 3D contour propagation, and medical experts' visual judgment. The results demonstrate that the image registration methods currently applied in practice (rigid registration and B-Splines FFD) are not able to correctly reconstruct discontinuous deformation fields. We show that the symmetric Demons provide the most accurate soft-tissue alignment in terms of the ability to reconstruct the deformation field, target registration error, and relative tumor volume change, while B-Splines FFD and TV-L1 optical flow are not an appropriate choice for the breast tumor bed localization problem.
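
    The symmetric Demons variant singled out above is available in common toolkits. A minimal sketch using SimpleITK's symmetric-forces Demons filter; the file names, iteration count and smoothing level are placeholders, and this generic pipeline is not the authors' exact evaluation setup.

```python
import SimpleITK as sitk

# Hypothetical file names; any two 3D scalar volumes of the breast would do.
fixed = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)

demons = sitk.SymmetricForcesDemonsRegistrationFilter()
demons.SetNumberOfIterations(200)     # assumption: enough for convergence
demons.SetStandardDeviations(1.5)     # Gaussian smoothing of the field

# The filter returns a dense displacement field, which can be wrapped in
# a transform and used to resample the moving image onto the fixed grid.
displacement = demons.Execute(fixed, moving)
transform = sitk.DisplacementFieldTransform(
    sitk.Cast(displacement, sitk.sitkVectorFloat64))
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```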

  19. Mean template for tensor-based morphometry using deformation tensors.

    Science.gov (United States)

    Leporé, Natasha; Brun, Caroline; Pennec, Xavier; Chou, Yi-Yu; Lopez, Oscar L; Aizenstein, Howard J; Becker, James T; Toga, Arthur W; Thompson, Paul M

    2007-01-01

    Tensor-based morphometry (TBM) studies anatomical differences between brain images statistically, to identify regions that differ between groups, over time, or correlate with cognitive or clinical measures. Using a nonlinear registration algorithm, all images are mapped to a common space, and statistics are most commonly performed on the Jacobian determinant (local expansion factor) of the deformation fields. In previous work, it was shown that the detection sensitivity of the standard TBM approach could be increased by using the full deformation tensors in a multivariate statistical analysis. Here we set out to improve the common space itself, by choosing the shape that minimizes a natural metric on the deformation tensors from that space to the population of control subjects. This method avoids statistical bias and should ease nonlinear registration of new subjects' data to a template that is 'closest' to all subjects' anatomies. As deformation tensors are symmetric positive-definite matrices and do not form a vector space, all computations are performed in the log-Euclidean framework. The control brain B that is already the closest to 'average' is found. A gradient descent algorithm is then used to perform the minimization that iteratively deforms this template and obtains the mean shape. We apply our method to map the profile of anatomical differences in a dataset of 26 HIV/AIDS patients and 14 controls, via a log-Euclidean Hotelling's T2 test on the deformation tensors. These results are compared to the ones found using the 'best' control, B. Statistics on both shapes are evaluated using cumulative distribution functions of the p-values in maps of inter-group differences.
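
    The log-Euclidean computations mentioned above reduce to averaging matrix logarithms. A minimal sketch for the mean of symmetric positive-definite deformation tensors, assuming small dense matrices where scipy's logm/expm are adequate; the template optimization itself is not shown.

```python
import numpy as np
from scipy.linalg import logm, expm

def log_euclidean_mean(tensors):
    """Log-Euclidean mean of symmetric positive-definite deformation
    tensors: average in the matrix-logarithm domain, then map back."""
    logs = [np.real(logm(t)) for t in tensors]   # SPD input -> real log
    mean_log = np.mean(logs, axis=0)
    mean_log = (mean_log + mean_log.T) / 2       # guard against round-off
    return expm(mean_log)
```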

  20. Final Report- "An Algorithmic and Software Framework for Applied Partial Differential Equations (APDEC): A DOE SciDAC Integrated Software Infrastructure Center (ISIC)

    Energy Technology Data Exchange (ETDEWEB)

    Elbridge Gerry Puckett

    2008-05-13

    All of the work conducted under the auspices of DE-FC02-01ER25473 was characterized by exceptionally close collaboration with researchers at the Lawrence Berkeley National Laboratory (LBNL). This included having one of my graduate students, Sarah Williams, spend the summer working with Dr. Ann Almgren, a staff scientist in the Center for Computational Sciences and Engineering (CCSE), which is part of the National Energy Research Supercomputer Center (NERSC) at LBNL. As a result of this visit, Sarah decided to work on a problem suggested by Dr. John Bell, the head of CCSE, for her PhD thesis, which she finished in June 2007. Writing a PhD thesis while working at one of the University of California (UC) managed DOE laboratories is a long-established tradition at the University of California, and I have always encouraged my students to consider doing this. For example, in 2000 one of my graduate students, Matthew Williams, finished his PhD thesis while working with Dr. Douglas Kothe at the Los Alamos National Laboratory (LANL). Matt is now a staff scientist in the Diagnostic Applications Group in the Applied Physics Division at LANL. Another of my graduate students, Christopher Algieri, who was partially supported with funds from DE-FC02-01ER25473, wrote an MS thesis that analyzed and extended work published by Dr. Phil Colella and his colleagues in 1998. Dr. Colella is the head of the Applied Numerical Algorithms Group (ANAG) in the National Energy Research Supercomputer Center at LBNL and is the lead PI for the APDEC ISIC, which was comprised of several National Laboratory research groups and at least five university PIs at five different universities. Chris Algieri is now employed as a staff member in Dr. Bill Collins' research group at LBNL, developing computational models for climate change research. Bill Collins was recently hired at LBNL to start and head the Climate Science Department in the Earth Sciences Division at LBNL. Prior to

  1. Nonrigid registration with tissue-dependent filtering of the deformation field

    International Nuclear Information System (INIS)

    Staring, Marius; Klein, Stefan; Pluim, Josien P W

    2007-01-01

    In present-day medical practice it is often necessary to nonrigidly align image data. Current registration algorithms do not generally take the characteristics of tissue into account. Consequently, rigid tissue, such as bone, can be deformed elastically, growth of tumours may be concealed, and contrast-enhanced structures may be reduced in volume. We propose a method to locally adapt the deformation field at structures that must be kept rigid, using a tissue-dependent filtering technique. This adaptive filtering of the deformation field results in locally linear transformations without scaling or shearing. The degree of filtering is related to tissue stiffness: more filtering is applied at stiff tissue locations, less at parts of the image containing nonrigid tissue. The tissue-dependent filter is incorporated in a commonly used registration algorithm, using mutual information as a similarity measure and cubic B-splines to model the deformation field. The new registration algorithm is compared with this popular method. Evaluation of the proposed tissue-dependent filtering is performed on 3D computed tomography (CT) data of the thorax and on 2D digital subtraction angiography (DSA) images. The results show that tissue-dependent filtering of the deformation field leads to improved registration results: tumour volumes and vessel widths are preserved rather than affected.
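
    As an illustration of the idea, not the authors' exact filter: smooth each component of the deformation field and blend the smoothed and raw fields voxel-wise according to a stiffness map, so stiff tissue receives more filtering. The stiffness map, its [0, 1] scaling and the single Gaussian kernel are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tissue_dependent_filter(field, stiffness, sigma=5.0):
    """Blend a heavily smoothed deformation field with the raw one so that
    stiff voxels (stiffness ~ 1, e.g. bone) receive the filtered, locally
    rigid-like field while soft voxels (stiffness ~ 0) keep the raw field.
    field: (..., ndim) deformation; stiffness: matching scalar map in [0, 1]."""
    smoothed = np.stack(
        [gaussian_filter(field[..., d], sigma) for d in range(field.shape[-1])],
        axis=-1)
    w = stiffness[..., None]
    return w * smoothed + (1.0 - w) * field
```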

  2. Using a Novel Evolutionary Algorithm to More Effectively Apply Community-Driven EcoHealth Interventions in Big Data with Application to Chagas Disease

    Science.gov (United States)

    Rizzo, D. M.; Hanley, J.; Monroy, C.; Rodas, A.; Stevens, L.; Dorn, P.

    2016-12-01

    algorithm to efficiently search for higher-order interactions in a T. dimidiata infestation dataset that contains 1,132 houses and 61 risk factors (both nominal and ordinal), with 16% of the data missing. Our goal is to determine the risk factors that are most commonly associated with infestation, to more efficiently apply EcoHealth interventions.

  3. Sensitivity study of voxel-based PET image comparison to image registration algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Aerts, Hugo J. W. L. [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 and Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2014-11-01

    Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of the treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration, as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent pre- and post-treatment computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. The tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Specifically, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess variability of DSI between individual tumors. The root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80% for progressor, stable-disease, and responder, respectively. Median DSI_deformable was 63%-84%, 65%-81%, and 82%-89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms, with CV > 10% for all subvolumes. Tumor deformation had moderate to significant impact on DSI for progressor
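
    The voxel classification and overlap measures used here are simple to state. A sketch under the stated thresholds (±30% SUV change) and the usual Dice definition; array alignment and registration are assumed to be handled upstream.

```python
import numpy as np

def classify_response(suv_pre, suv_post):
    """Label voxels by relative SUV change: responder (< -30%),
    progressor (> +30%), otherwise stable disease."""
    change = (suv_post - suv_pre) / suv_pre
    labels = np.full(suv_pre.shape, "stable", dtype=object)
    labels[change > 0.30] = "progressor"
    labels[change < -0.30] = "responder"
    return labels

def dice(a, b):
    """Dice similarity index between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```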

  4. Cracking and Deformation Modelling of Tensile RC Members Using Stress Transfer Approach

    Directory of Open Access Journals (Sweden)

    Ronaldas Jakubovskis

    2016-12-01

    The paper presents a modeling technique for bond, cracking and deformation analysis of RC members. The proposed modeling technique is not restricted by the geometrical dimensions of the analyzed member and may be applied for various loading conditions. Tensile as well as bending RC members may be analyzed using the proposed technique. Adequacy of the modeling strategy was evaluated with the developed numerical discrete crack algorithm, which allows modeling the deformation and cracking behavior of tensile RC members. Comparison of experimental and numerical results proved the applicability of the proposed modeling strategy.

  5. Infinitesimal Deformations of a Formal Symplectic Groupoid

    Science.gov (United States)

    Karabegov, Alexander

    2011-09-01

    Given a formal symplectic groupoid G over a Poisson manifold (M, π₀), we define a new object, an infinitesimal deformation of G, which can be thought of as a formal symplectic groupoid over the manifold M equipped with an infinitesimal deformation π₀ + επ₁ of the Poisson bivector field π₀. To any pair of natural star products (∗, ∗̃) having the same formal symplectic groupoid G we relate an infinitesimal deformation of G. We call it the deformation groupoid of the pair (∗, ∗̃). To each star product with separation of variables ∗ on a Kähler-Poisson manifold M we relate another star product with separation of variables ∗̂ on M. We build an algorithm for calculating the principal symbols of the components of the logarithm of the formal Berezin transform of a star product with separation of variables ∗. This algorithm is based upon the deformation groupoid of the pair (∗, ∗̂).

  6. Computing layouts with deformable templates

    KAUST Repository

    Peng, Chi-Han

    2014-07-22

    In this paper, we tackle the problem of tiling a domain with a set of deformable templates. A valid solution to this problem completely covers the domain with templates such that the templates do not overlap. We generalize existing specialized solutions and formulate a general layout problem by modeling important constraints and admissible template deformations. Our main idea is to break the layout algorithm into two steps: a discrete step to lay out the approximate template positions and a continuous step to refine the template shapes. Our approach is suitable for a large class of applications, including floorplans, urban layouts, and arts and design. Copyright © ACM.

  7. Computing layouts with deformable templates

    KAUST Repository

    Peng, Chi-Han; Yang, Yongliang; Wonka, Peter

    2014-01-01

    In this paper, we tackle the problem of tiling a domain with a set of deformable templates. A valid solution to this problem completely covers the domain with templates such that the templates do not overlap. We generalize existing specialized solutions and formulate a general layout problem by modeling important constraints and admissible template deformations. Our main idea is to break the layout algorithm into two steps: a discrete step to lay out the approximate template positions and a continuous step to refine the template shapes. Our approach is suitable for a large class of applications, including floorplans, urban layouts, and arts and design. Copyright © ACM.

  8. Can the same edge-detection algorithm be applied to on-line and off-line analysis systems? Validation of a new cinefilm-based geometric coronary measurement software

    NARCIS (Netherlands)

    J. Haase (Jürgen); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); M.M.J.M. van der Linden (Mark); D.P. Foley (David); W.J. van der Giessen (Wim)

    1993-01-01

    In the Cardiovascular Measurement System (CMS) the edge-detection algorithm, which was primarily designed for the Philips digital cardiac imaging system (DCI), is applied to cinefilms. Comparative validation of CMS and DCI was performed in vitro and in vivo with intracoronary insertion

  9. Deformation compensation in dynamic tomography; Compensation de deformations en tomographie dynamique

    Energy Technology Data Exchange (ETDEWEB)

    Desbat, L. [Universite Joseph Fourier, UMR CNRS 5525, 38 - Grenoble (France); Roux, S. [Universite Joseph Fourier, TIMC-IMAG, In3S, Faculte de Medecine, 38 - Grenoble (France)]|[CEA Grenoble, Lab. d' Electronique et de Technologie de l' Informatique (LETI), 38 (France); Grangeat, P. [CEA Grenoble, Lab. d' Electronique et de Technologie de l' Informatique (LETI), 38 (France)

    2005-07-01

    This work is a contribution to motion compensation in tomography. New classes of deformation are proposed that can be compensated analytically by an FBP-type reconstruction algorithm. This work generalises the known results for affine deformations, in parallel and fan-beam geometry, to deformation classes of infinite dimension able to include strong nonlinearities. (N.C.)

  10. Thorax deformity, joint hypermobility and anxiety disorder

    International Nuclear Information System (INIS)

    Gulsun, M.; Dumlu, K.; Erbas, M.; Yilmaz, Mehmet B.; Pinar, M.; Tonbul, M.; Celik, C.; Ozdemir, B.

    2007-01-01

    The objective was to evaluate the association between thorax deformities, panic disorder and joint hypermobility. The study included 52 males diagnosed with thorax deformity and 40 healthy male controls without thorax deformity, in Tatvan, Bitlis and Isparta, Turkey. The study was carried out from 2004 to 2006. The teleradiographic and thoracic lateral images of the subjects were evaluated to obtain Beighton scores; the subjects' psychiatric conditions were evaluated using the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I), and the Hamilton Anxiety Scale (HAM-A) was applied in order to determine anxiety levels. Subjects and controls were compared on sociodemographic characteristics, anxiety levels and joint mobility. In addition, the group with thorax deformity was compared to the group with thorax deformity without joint hypermobility. A significant difference in HAM-A scores was found between the groups with and without thorax deformity. According to Beighton scoring, 21 subjects in the group with thorax deformity and 7 subjects in the group without thorax deformity met the joint hypermobility criteria; the Beighton scores of subjects with thorax deformity were significantly different from those of the group without deformity. Additionally, the anxiety scores of males with thorax deformity and joint hypermobility were found to be higher than those of males with thorax deformity without joint hypermobility. Anxiety disorders, particularly panic disorder, have a significantly higher distribution in male subjects with thorax deformity compared to the healthy control group. In addition, the anxiety level of males with thorax deformity and joint hypermobility is higher than that of males with thorax deformity without joint hypermobility. (author)

  11. Bunionette deformity.

    Science.gov (United States)

    Cohen, Bruce E; Nicholson, Christopher W

    2007-05-01

    The bunionette, or tailor's bunion, is a lateral prominence of the fifth metatarsal head. Most commonly, bunionettes are the result of a widened 4-5 intermetatarsal angle with associated varus of the metatarsophalangeal joint. When symptomatic, these deformities often respond to nonsurgical treatment methods, such as wider shoes and padding techniques. When these methods are unsuccessful, surgical treatment is based on preoperative radiographs and associated lesions, such as hyperkeratoses. In rare situations, a simple lateral eminence resection is appropriate; however, the risk of recurrence or overresection is high with this technique. Patients with a lateral bow to the fifth metatarsal are treated with a distal chevron-type osteotomy. A widened 4-5 intermetatarsal angle often requires a diaphyseal osteotomy for correction.

  12. Improved Data Reduction Algorithm for the Needle Probe Method Applied to In-Situ Thermal Conductivity Measurements of Lunar and Planetary Regoliths

    Science.gov (United States)

    Nagihara, S.; Hedlund, M.; Zacny, K.; Taylor, P. T.

    2013-01-01

    The needle probe method (also known as the 'hot wire' or 'line heat source' method) is widely used for in-situ thermal conductivity measurements on soils and marine sediments on the earth. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extraterrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum conditions on lunar and planetary surfaces, the measurement method used on the earth cannot simply be duplicated, because the thermal conductivity of the regolith can be approximately two orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with robotic deployment on extraterrestrial bodies. All of these factors mean that the planetary probes require a much longer measurement time, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 minutes. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects the accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method, shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on JSC-1A lunar regolith simulant. The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the
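
    The improvement hinges on using the exact line-source solution rather than its large-time logarithmic approximation. A sketch of both temperature-rise models, with q the heat input per unit probe length, k the conductivity, alpha the diffusivity and r the radial distance; these are the standard line-source symbols, and the study's extra probe parameters are not reproduced here.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def line_source_exact(t, q, k, alpha, r):
    """Exact infinite line-source temperature rise, valid at all times:
    dT = (q / (4 pi k)) * E1(r^2 / (4 alpha t))."""
    return q / (4 * np.pi * k) * exp1(r**2 / (4 * alpha * t))

def line_source_approx(t, q, k, alpha, r):
    """Conventional large-time approximation used in terrestrial practice:
    dT ~ (q / (4 pi k)) * (ln(4 alpha t / r^2) - gamma)."""
    return q / (4 * np.pi * k) * (np.log(4 * alpha * t / r**2) - GAMMA)
```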

  13. Quantifying the Erlenmeyer flask deformity

    Science.gov (United States)

    Carter, A; Rajan, P S; Deegan, P; Cox, T M; Bearcroft, P

    2012-01-01

    Objective: Erlenmeyer flask deformity is a common radiological finding in patients with Gaucher's disease; however, no definition of this deformity exists and the reported prevalence of the deformity varies widely. To devise an easily applied definition of this deformity, we investigated a cohort of knee radiographs in which there was consensus between three experienced radiologists as to the presence or absence of Erlenmeyer flask morphology. Methods: Using the presence or absence of Erlenmeyer flask morphology as a benchmark, we measured the diameter of the femur at the level of the physeal scar and serially at defined intervals along the metadiaphysis. Results: A measured ratio in excess of 0.57 between the diameter of the femoral shaft 4 cm from the physis and the diameter of the physeal baseline itself on a frontal radiograph of the knee predicted the Erlenmeyer flask deformity with 95.6% sensitivity and 100% specificity in our series of 43 independently diagnosed adults with Gaucher's disease. Application of this method to the distal femur detected the Erlenmeyer flask deformity reproducibly and was simple to carry out. Conclusion: Unlike diagnostic assignments based on subjective review, our simple procedure for identifying the modelling deformity is based on robust quantitative measurement: it should facilitate comparative studies between different groups of patients, and may allow more rigorous exploration of the pathogenesis of the complex osseous manifestations of Gaucher's disease to be undertaken. PMID:22010032
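
    The proposed criterion reduces to a single ratio measurement, which can be encoded directly from the numbers in the abstract (function and argument names are ours):

```python
def erlenmeyer_flask(shaft_diameter_4cm, physeal_baseline_diameter):
    """Apply the reported criterion: a shaft-to-baseline diameter ratio
    above 0.57, measured 4 cm from the physis on a frontal knee film,
    predicts Erlenmeyer flask deformity."""
    return shaft_diameter_4cm / physeal_baseline_diameter > 0.57
```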

  14. Nonlinear Deformable-body Dynamics

    CERN Document Server

    Luo, Albert C J

    2010-01-01

    "Nonlinear Deformable-body Dynamics" mainly consists in a mathematical treatise of approximate theories for thin deformable bodies, including cables, beams, rods, webs, membranes, plates, and shells. The intent of the book is to stimulate more research in the area of nonlinear deformable-body dynamics not only because of the unsolved theoretical puzzles it presents but also because of its wide spectrum of applications. For instance, the theories for soft webs and rod-reinforced soft structures can be applied to biomechanics for DNA and living tissues, and the nonlinear theory of deformable bodies, based on the Kirchhoff assumptions, is a special case discussed. This book can serve as a reference work for researchers and a textbook for senior and postgraduate students in physics, mathematics, engineering and biophysics. Dr. Albert C.J. Luo is a Professor of Mechanical Engineering at Southern Illinois University, Edwardsville, IL, USA. Professor Luo is an internationally recognized scientist in the field of non...

  15. Detection of boiling by Piety's on-line PSD-pattern recognition algorithm applied to neutron noise signals in the SAPHIR reactor

    International Nuclear Information System (INIS)

    Spiekerman, G.

    1988-09-01

    A partial blockage of the cooling channels of a fuel element in a swimming pool reactor could lead to vapour generation and to burn-out. To detect such anomalies, a pattern recognition algorithm based on power spectral density (PSD) proposed by Piety was further developed and implemented on a PDP 11/23 for on-line applications. This algorithm identifies anomalies by measuring the PSD of the process signal and comparing it with a standard baseline formed previously. Up to 8 decision discriminants help to recognize spectral changes due to anomalies. In our application, to detect boiling as quickly as possible with sufficient sensitivity, Piety's algorithm was modified using overlapped Fast-Fourier-Transform processing and the averaging of the PSDs over a large sample of preceding instantaneous PSDs. This processing allows high sensitivity in detecting weak disturbances without reducing response time. The algorithm was tested with simulated-boiling experiments in which nitrogen was injected into a cooling channel of a fuel element mock-up. Void fractions higher than 30% in the channel can be detected. In the case of actual boiling, this limit is believed to be lower, because collapsing bubbles could give rise to stronger fluctuations. The algorithm was also tested with a boiling experiment in which the reactor coolant flow was actually reduced. The results showed that the discriminant D5 of Piety's algorithm, applied to neutron noise obtained from the existing neutron chambers of the reactor control system, could sensitively recognize boiling. The detection time amounts to 7-30 s depending on the strength of the disturbances. Other events which arise during a normal reactor run, such as scrams, removal of isotope elements without scramming, or control rod movements, and which could lead to false alarms, can be distinguished from boiling. 49 refs., 104 figs., 5 tabs
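
    The core signal-processing step, overlapped-FFT (Welch) averaging of PSDs compared against a baseline, can be sketched as below with SciPy. The simple mean-ratio discriminant stands in for, but is not, Piety's D5, and all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.signal import welch

    def psd_discriminant(signal, baseline_psd, fs=1000.0, nperseg=1024):
        """Estimate an overlapped-FFT averaged PSD and compare it with a
        previously formed baseline; values well above 1 flag a spectral change."""
        _, pxx = welch(signal, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
        return float(np.mean(pxx / baseline_psd))

    rng = np.random.default_rng(0)
    quiet = rng.normal(size=100_000)                        # baseline noise record
    _, baseline = welch(quiet, fs=1000.0, nperseg=1024, noverlap=512)
    disturbed = quiet + 0.5 * rng.normal(size=quiet.size)   # simulated disturbance
    print(psd_discriminant(disturbed, baseline))            # > 1 signals a change
    ```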

  16. Design of problem-specific evolutionary algorithm/mixed-integer programming hybrids: two-stage stochastic integer programming applied to chemical batch scheduling

    Science.gov (United States)

    Urselmann, Maren; Emmerich, Michael T. M.; Till, Jochen; Sand, Guido; Engell, Sebastian

    2007-07-01

    Engineering optimization often deals with large, mixed-integer search spaces with a rigid structure due to the presence of a large number of constraints. Metaheuristics, such as evolutionary algorithms (EAs), are frequently suggested as solution algorithms in such cases. In order to exploit the full potential of these algorithms, it is important to choose an adequate representation of the search space and to integrate expert knowledge into the stochastic search operators, without adding unnecessary bias to the search. Moreover, hybridisation with mathematical programming techniques such as mixed-integer programming (MIP) based on a problem decomposition can be considered for improving algorithmic performance. In order to design problem-specific EAs it is desirable to have a set of design guidelines that specify properties of search operators and representations. Recently, a set of guidelines has been proposed that gives rise to so-called metric-based EAs (MBEAs). Extended by the minimal-moves mutation, they allow for a generalization of EAs with self-adaptive mutation strength in discrete search spaces. In this article, a problem-specific EA for a process engineering task is designed, following the MBEA guidelines and minimal-moves mutation. Against the background of the application, the usefulness of the design framework is discussed, and further extensions and corrections are proposed. As a case study, a two-stage stochastic programming problem in chemical batch process scheduling is considered. The algorithm design problem can be viewed as the choice of a hierarchical decision structure, where on different layers of the decision process symmetries and similarities can be exploited for the design of minimal moves. After a discussion of the design approach and its instantiation for the case study, the resulting problem-specific EA/MIP is compared to a straightforward application of a canonical EA/MIP and to a monolithic mathematical programming algorithm. In view of the

  17. Plastic Deformation of Metal Surfaces

    DEFF Research Database (Denmark)

    Hansen, Niels; Zhang, Xiaodan; Huang, Xiaoxu

    2013-01-01

    of metal components. An optimization of processes and material parameters must be based on a quantification of stress and strain gradients at the surface and in the near-surface layer, where the structural scale can reach a few tens of nanometers. For such fine structures it is suggested to quantify structural...... parameters by TEM and EBSD and apply strength-structure relationships established for the bulk metal deformed to high strains. This technique has been applied to steel deformed by high-energy shot peening, and a calculated stress gradient at or near the surface has been successfully validated by hardness...

  18. Fast Detection of Material Deformation through Structural Dissimilarity

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela; Perciano, Talita; Parkinson, Dilworth

    2015-10-29

    Designing materials that are resistant to extreme temperatures and brittleness relies on assessing structural dynamics of samples. Algorithms are critically important to characterize material deformation under stress conditions. Here, we report on our design of coarse-grain parallel algorithms for image quality assessment based on structural information and on crack detection of gigabyte-scale experimental datasets. We show how key steps can be decomposed into distinct processing flows, one based on structural similarity (SSIM) quality measure, and another on spectral content. These algorithms act upon image blocks that fit into memory, and can execute independently. We discuss the scientific relevance of the problem, key developments, and decomposition of complementary tasks into separate executions. We show how to apply SSIM to detect material degradation, and illustrate how this metric can be allied to spectral analysis for structure probing, while using tiled multi-resolution pyramids stored in HDF5 chunked multi-dimensional arrays. Results show that the proposed experimental data representation supports an average compression rate of 10X, and data compression scales linearly with the data size. We also illustrate how to correlate SSIM to crack formation, and how to use our numerical schemes to enable fast detection of deformation from 3D datasets evolving in time.
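
    A minimal sketch of the block-wise SSIM step, assuming scikit-image and two registered images of equal size; the block size and the 1 - SSIM dissimilarity score are illustrative choices, not the authors' parallel HDF5-backed implementation.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    def block_dissimilarity(ref, img, block=256):
        """Tile two registered radiographs into independent blocks and return
        1 - SSIM per block; high values flag candidate degradation or cracks."""
        scores = {}
        data_range = float(ref.max() - ref.min())
        for i in range(0, ref.shape[0] - block + 1, block):
            for j in range(0, ref.shape[1] - block + 1, block):
                s = structural_similarity(ref[i:i + block, j:j + block],
                                          img[i:i + block, j:j + block],
                                          data_range=data_range)
                scores[(i, j)] = 1.0 - s
        return scores

    rng = np.random.default_rng(1)
    a = rng.random((512, 512))
    b = a.copy()
    b[256:, 256:] += 0.3 * rng.random((256, 256))   # synthetic local "damage"
    print(max(block_dissimilarity(a, b).items(), key=lambda kv: kv[1]))
    ```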

  19. A novel deformation mechanism for superplastic deformation

    Energy Technology Data Exchange (ETDEWEB)

    Muto, H.; Sakai, M. (Toyohashi Univ. of Technology (Japan). Dept. of Materials Science)

    1999-01-01

    Uniaxial compressive creep tests with strain values up to -0.1 for a β-spodumene glass ceramic are conducted at 1060 °C. From the observation of microstructural changes before and after the creep deformation, it is shown that grain-boundary sliding takes place via cooperative movement of groups of grains, rather than individual grains, under large-scale deformation. The deformation process and the surface technique used in this work are applicable not only to explaining the deformation and flow of two-phase ceramics but also to superplastic deformation. (orig.) 12 refs.

  20. Evaluation of geometric changes of parotid glands during head and neck cancer radiotherapy using daily MVCT and automatic deformable registration

    International Nuclear Information System (INIS)

    Lee, Choonik; Langen, Katja M.; Lu, Weiguo; Haimerl, Jason; Schnarr, Eric; Ruchala, Kenneth J.; Olivera, Gustavo H.; Meeks, Sanford L.; Kupelian, Patrick A.; Shellenberger, Thomas D.; Manon, Rafael R.

    2008-01-01

    Background and purpose: To assess and evaluate geometrical changes in parotid glands using deformable image registration and megavoltage CT (MVCT) images. Methods: A deformable registration algorithm was applied to 330 daily MVCT images (10 patients) to create deformed parotid contours. The accuracy and robustness of the algorithm were evaluated through visual review, comparison with manual contours, and precision analysis. Temporal changes in the parotid gland geometry were observed. Results: The deformed parotid contours were qualitatively judged to be acceptable. Compared with manual contours, the uncertainties of the automatically deformed contours were similar with regard to geometry and dosimetric endpoints. The day-to-day variations (1 standard deviation of errors) in the center-of-mass distance and volume were 1.61 mm and 4.36%, respectively. The volumes tended to decrease, with a median total loss of 21.3% (6.7-31.5%) and a median change rate of 0.7%/day (0.4-1.3%/day). Parotids migrated toward the patient center, with a median total distance change of -5.26 mm (0.00 to -16.35 mm) and a median change rate of -0.22 mm/day (0.02 to -0.56 mm/day). Conclusion: Deformable image registration and daily MVCT images provide an efficient and reliable assessment of parotid changes over the course of radiation therapy

  1. The particle swarm optimization algorithm applied to nuclear systems surveillance test planning; Otimizacao aplicada ao planejamento de politicas de testes em sistemas nucleares por enxame de particulas

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, Newton Norat

    2006-12-15

    This work presents a new approach to solving availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach uses Particle Swarm Optimization (PSO), an optimization tool developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved by the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments performed, PSO was able to obtain results comparable to, or even slightly better than, those obtained by the GA. Moreover, the PSO algorithm is simpler and converges faster, indicating that PSO is a good alternative for solving this kind of problem. (author)
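
    For reference, the canonical PSO velocity/position update that such studies build on is sketched below; the coefficients and the toy objective are illustrative, and this is not the author's integration with the probabilistic safety model.

    ```python
    import numpy as np

    def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer (minimization) using the standard
        inertia-weight velocity update v = w*v + c1*r1*(pbest-x) + c2*r2*(g-x)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, (n, dim))
        v = np.zeros((n, dim))
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            val = np.array([f(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, float(pval.min())

    # Toy stand-in for an availability objective
    best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=5)
    ```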

  2. Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) Applied in Optimization of Radiation Pattern Control of Phased-Array Radars for Rocket Tracking Systems

    Science.gov (United States)

    Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.

    2014-01-01

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling is subject to various combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to calculate problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator takes a different approach from the conventional one, because it performs crossover between the fittest individuals and the least fit individuals in order to enhance genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence. PMID:25196013
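
    The distinguishing idea, crossing the fittest individuals with the least fit to preserve diversity, can be sketched as follows (assuming higher fitness is better). This illustrates the pairing strategy only, not the published GA-MMC operator with its differentiated coding.

    ```python
    import numpy as np

    def max_min_crossover(pop, fitness, rng):
        """Pair each of the fittest individuals with one of the least fit
        (instead of fit-with-fit) and apply uniform crossover."""
        order = np.argsort(fitness)                 # ascending fitness
        half = len(order) // 2
        children = []
        for weak, strong in zip(order[:half], order[::-1][:half]):
            mask = rng.random(pop.shape[1]) < 0.5
            children.append(np.where(mask, pop[strong], pop[weak]))
        return np.array(children)

    rng = np.random.default_rng(2)
    pop = rng.random((8, 11))         # e.g. 11 excitation genes per individual
    offspring = max_min_crossover(pop, pop.sum(axis=1), rng)
    ```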

  3. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  4. Registration of deformed multimodality medical images

    International Nuclear Information System (INIS)

    Moshfeghi, M.; Naidich, D.

    1989-01-01

    The registration and combination of images from different modalities have several potential applications, such as functional and anatomic studies, 3D radiation treatment planning, surgical planning, and retrospective studies. Image registration algorithms should correct for any local deformations caused by respiration, heart beat, imaging device distortions, and so forth. This paper reports on an elastic matching technique for registering deformed multimodality images. Correspondences between contours in the two images are used to stretch the deformed image toward its goal image. This process is repeated a number of times, with decreasing image stiffness. As the iterations continue, the stretched image better approximates its goal image

  5. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    Science.gov (United States)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation of material processing allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and can therefore save a lot of money; but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An evolutionary algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners were selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools that demonstrate this in practice. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space from the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions, which corresponds to the best possible compromises between the different objectives, is then computed in the same way. The population-based approach allows using the parallel capabilities of the utilized computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. The presented examples

  6. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  7. Prediction of Endocrine System Affectation in Fisher 344 Rats by Food Intake Exposed with Malathion, Applying Naïve Bayes Classifier and Genetic Algorithms.

    Science.gov (United States)

    Mora, Juan David Sandino; Hurtado, Darío Amaya; Sandoval, Olga Lucía Ramos

    2016-01-01

    Reported cases of uncontrolled use of pesticides, and the effects produced by direct or indirect exposure, represent a high risk for human health. This paper therefore shows the results of the development and execution of an algorithm that predicts the possible effects on the endocrine system of Fisher 344 (F344) rats caused by ingestion of malathion. The ToxRefDB database, in which different case studies of F344 rats exposed to malathion are collected, was used as the reference. The experimental data were processed using a Naïve Bayes (NB) machine learning classifier, which was subsequently optimized using genetic algorithms (GAs). The model was executed in an application with a graphical user interface programmed in C#. Larger alterations tended to occur as increased levels in the parathyroid gland at dosages between 4 and 5 mg/kg/day, in contrast to the thyroid gland at doses between 739 and 868 mg/kg/day. Females showed greater resistance to effects on the endocrine system from the ingestion of malathion, but were more susceptible to alterations in the pituitary gland at exposure times between 3 and 6 months. The prediction model based on NB classifiers allowed all possible combinations of the studied variables to be analyzed, and its accuracy was improved using GAs. Excepting the pituitary gland, females demonstrated better resistance to effects of increasing levels on the rest of the endocrine system glands.

  8. New Methodology for Optimal Flight Control Using Differential Evolution Algorithms Applied on the Cessna Citation X Business Aircraft – Part 1. Design and Optimization

    Directory of Open Access Journals (Sweden)

    Yamina BOUGHARI

    2017-06-01

    Setting the appropriate controllers for aircraft stability and control augmentation systems is a complicated and time-consuming task: in the Linear Quadratic Regulator method, gains are found by selecting appropriate weights, and in Proportional Integral Derivative control, by tuning gains. A trial-and-error process is usually employed for the determination of weighting matrices, which is normally a time-consuming procedure. Flight control laws were designed and optimized by combining the Differential Evolution algorithm, the Linear Quadratic Regulator method, and the Proportional Integral controller. The optimal controllers were used to reach satisfactory aircraft dynamics and safe flight operations with respect to the augmentation systems' handling qualities and design requirements for different flight conditions. Furthermore, the design and the clearance of the controllers over the flight envelope were automated using a Graphical User Interface, which offers the designer the flexibility to change the design requirements. With the aim of reducing the time and costs of flight control law design, one fitness function was used for both optimizations, with design requirements as constraints. Consequently, the complexity of the flight control law design process was reduced by using the meta-heuristic algorithm.
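
    For context, the classic DE/rand/1/bin loop underlying such gain-tuning studies is sketched below; the objective f would score a candidate gain vector against the design requirements, and all constants here are generic defaults rather than the paper's settings.

    ```python
    import numpy as np

    def de_rand_1_bin(f, lo, hi, pop=20, iters=300, F=0.8, CR=0.9, seed=1):
        """Classic Differential Evolution (DE/rand/1/bin) minimization loop:
        mutate with a scaled difference vector, binomial crossover, greedy select."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (pop, len(lo)))
        fx = np.array([f(p) for p in x])
        for _ in range(iters):
            for i in range(pop):
                others = [j for j in range(pop) if j != i]
                a, b, c = x[rng.choice(others, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)
                cross = rng.random(len(lo)) < CR
                trial = np.where(cross, mutant, x[i])
                ft = f(trial)
                if ft < fx[i]:
                    x[i], fx[i] = trial, ft
        return x[fx.argmin()], float(fx.min())

    # Toy stand-in objective: drive a 4-gain vector toward 0.3
    best, val = de_rand_1_bin(lambda g: float(np.sum((g - 0.3) ** 2)),
                              np.zeros(4), np.ones(4))
    ```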

  9. Deformation Models Tracking, Animation and Applications

    CERN Document Server

    Torres, Arnau; Gómez, Javier

    2013-01-01

    The computational modelling of deformations has been actively studied for the last thirty years. This is mainly due to its large range of applications, which include computer animation, medical imaging, shape estimation, face deformation as well as other parts of the human body, and object tracking. In addition, these advances have been supported by the evolution of computer processing capabilities, enabling realism in a more sophisticated way. This book encompasses relevant works of expert researchers in the field of deformation models and their applications. The book is divided into two main parts. The first part presents recent object deformation techniques from the point of view of computer graphics and computer animation. The second part of this book presents six works that study deformations from a computer vision point of view with a common characteristic: deformations are applied in real world applications. The primary audience for this work is researchers from different multidisciplinary fields, s...

  10. Central composite design and genetic algorithm applied for the optimization of ultrasonic-assisted removal of malachite green by ZnO Nanorod-loaded activated carbon

    Science.gov (United States)

    Ghaedi, M.; Azad, F. Nasiri; Dashtian, K.; Hajati, S.; Goudarzi, A.; Soylak, M.

    2016-10-01

    Maximum malachite green (MG) adsorption onto ZnO nanorod-loaded activated carbon (ZnO-NR-AC) was achieved following the optimization of conditions, while the mass transfer was accelerated by ultrasound. Central composite design (CCD) and a genetic algorithm (GA) were used to estimate the effect of individual variables and their mutual interactions on MG adsorption as the response, and to optimize the adsorption process. The ZnO-NR-AC surface morphology and its properties were identified via FESEM, XRD and FTIR. Investigation of the adsorption equilibrium isotherm and kinetic models revealed that the experimental data were well fitted by the Langmuir isotherm and the pseudo-second-order kinetic model, respectively. It was shown that a small amount of ZnO-NR-AC (with an adsorption capacity of 20 mg g⁻¹) is sufficient for the rapid removal of a high amount of MG dye in a short time (3.99 min).

  11. A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery.

    Science.gov (United States)

    Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong

    2017-07-01

    Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms-artificial neural networks (ANNs) and support vector regression (SVR)-are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, surpassing the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue.
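
    A minimal sketch of the regression step, assuming scikit-learn: fit an RBF-kernel SVR on (load descriptor → node displacement) pairs pre-computed by FEM. The random arrays below are stand-ins for the FEM database, and the hyper-parameters are illustrative, not the paper's.

    ```python
    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    # Each row: load descriptor (position, magnitude, angles) from offline FEM runs
    X = rng.random((500, 5))
    # Each row: one mesh node's displacement vector (mm) for that load
    Y = rng.random((500, 3)) * 0.3

    model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01)).fit(X, Y)
    pred = model.predict(X[:1])      # instantaneous prediction at operation time
    print(pred.shape)                # (1, 3)
    ```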

  12. Close coupling of pre- and post-processing vision stations using inexact algorithms

    Science.gov (United States)

    Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.

    1996-02-01

    Work has been reported on using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod, together with the Piecewise Error Compensation Algorithm incorporating inexact algorithms (fuzzy logic, neural networks and neuro-fuzzy techniques), was developed. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system can produce excellent results, much better than a human operator.

  13. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  14. Algoritmo genético aplicado a la programación en talleres de maquinado//Genetic algorithm applied to scheduling in machine shops

    Directory of Open Access Journals (Sweden)

    José Eduardo Márquez-Delgado

    2012-09-01

    In this paper, the metaheuristic known as the genetic algorithm is used for two typical variants of scheduling problems present in a machine shop: job shop and flow shop. The minimization of the completion time of all jobs, known as the makespan, is selected as the objective to optimize in a schedule. This problem is considered hard to solve and is typical of combinatorial optimization. The results demonstrate the quality of the solutions found, relative to the computation time used, when compared with classic problems reported by other authors. The proposed representation of each chromosome generates the complete universe of feasible solutions, where it is possible to find globally optimal solutions, and it fulfills the restrictions of the problem. Key words: genetic algorithm, chromosomes, flow shop, job shop, scheduling, makespan.
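
    As an illustration of the problem class, a tiny permutation GA minimizing flow-shop makespan is sketched below. The makespan recursion is standard; the operators (tournament selection, swap mutation) are generic stand-ins for the paper's chromosome representation.

    ```python
    import random

    def flowshop_makespan(seq, proc):
        """Completion-time recursion for a permutation flow shop:
        C[j][m] = max(C[j-1][m], C[j][m-1]) + p[job][m]."""
        m = len(proc[0])
        finish = [0.0] * m
        for job in seq:
            for k in range(m):
                finish[k] = max(finish[k], finish[k - 1] if k else 0.0) + proc[job][k]
        return finish[-1]

    def ga_flowshop(proc, pop=60, gens=300, seed=0):
        """Tiny permutation GA (binary tournament + swap mutation) on makespan."""
        rng = random.Random(seed)
        n = len(proc)
        population = [rng.sample(range(n), n) for _ in range(pop)]
        for _ in range(gens):
            nxt = []
            for _ in range(pop):
                a, b = rng.sample(population, 2)
                child = list(min(a, b, key=lambda s: flowshop_makespan(s, proc)))
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]    # swap mutation
                nxt.append(child)
            population = nxt
        return min(population, key=lambda s: flowshop_makespan(s, proc))

    proc = [[3, 2, 4], [1, 4, 2], [2, 3, 3], [4, 1, 2]]    # 4 jobs x 3 machines
    best = ga_flowshop(proc)
    print(best, flowshop_makespan(best, proc))
    ```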

  15. Applying genetic algorithms for calibrating a hexagonal cellular automata model for the simulation of debris flows characterised by strong inertial effects

    Science.gov (United States)

    Iovine, G.; D'Ambrosio, D.; Di Gregorio, S.

    2005-03-01

    In modelling complex a-centric phenomena which evolve through local interactions within a discrete time-space, cellular automata (CA) represent a valid alternative to standard solution methods based on differential equations. Flow-type phenomena (such as lava flows, pyroclastic flows, earth flows, and debris flows) can be viewed as a-centric dynamical systems, and they can therefore be properly investigated in CA terms. SCIDDICA S4a is the latest release of a two-dimensional hexagonal CA model for simulating debris flows characterised by strong inertial effects. S4a has been obtained by progressively enriching an initial simplified model, originally derived for simulating very simple cases of slow-moving flow-type landslides. Using an empirical strategy, in S4a, the inertial character of the flowing mass is translated into CA terms by means of local rules. In particular, in the transition function of the model, the distribution of landslide debris among the cells is obtained through a double cycle of computation. In the first phase, the inertial character of the landslide debris is taken into account by considering indicators of momentum. In the second phase, any remaining debris in the central cell is distributed among the adjacent cells, according to the principle of maximum possible equilibrium. The complexity of the model and of the phenomena to be simulated suggested the need for an automated evaluation technique to determine the best set of global parameters. Accordingly, the model is calibrated using a genetic algorithm and by considering the May 1998 Curti-Sarno (Southern Italy) debris flow. The boundaries of the area affected by the debris flow are simulated well with the model. Errors computed by comparing the simulations with the mapped areal extent of the actual landslide are smaller than those previously obtained without genetic algorithms. As the experiments have been realised in a sequential computing environment, they could be

  16. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  17. Canny edge-based deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-07

    This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.
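
    The sparse-to-dense interpolation step can be sketched with a thin-plate-spline RBF, assuming SciPy and that edge points have already been detected (e.g. with Canny) and matched between time points; the point coordinates below are hypothetical.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def dense_field_from_sparse_matches(pts_fixed, pts_moving, shape):
        """Interpolate sparse edge-point displacements into a dense deformation
        field with a thin-plate-spline radial basis function."""
        disp = pts_moving - pts_fixed
        interp = RBFInterpolator(pts_fixed, disp, kernel="thin_plate_spline")
        yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
        coords = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
        return interp(coords).reshape(shape[0], shape[1], 2)

    # Hypothetical matches: five edge points shifted 3 px to the right
    pf = np.array([[10., 10.], [40., 12.], [25., 30.], [12., 44.], [44., 40.]])
    field = dense_field_from_sparse_matches(pf, pf + [3.0, 0.0], (50, 50))
    print(field.mean(axis=(0, 1)))   # ~[3, 0] everywhere for this toy case
    ```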

  18. An On-Chip RBC Deformability Checker Significantly Improves Velocity-Deformation Correlation

    Directory of Open Access Journals (Sweden)

    Chia-Hung Dylan Tsai

    2016-10-01

    An on-chip deformability checker is proposed to improve the velocity-deformation correlation for red blood cell (RBC) evaluation. RBC deformability has been found to be related to human diseases, and can be evaluated from RBC velocity through a microfluidic constriction, as in conventional approaches. The correlation between transit velocity and amount of deformation provides statistical information on RBC deformability. However, such correlations are usually only moderate, or even weak, in practical evaluations, due to the limited range of RBC deformation. To solve this issue, we implemented three constrictions of different widths in the proposed checker, so that three different deformation regions can be applied to the RBCs. By considering cell responses from the three regions as a whole, we effectively extend the range of cell deformation in the evaluation and could resolve the issue of the limited range of RBC deformation. RBCs from five volunteer subjects were tested using the proposed checker. The results show that the correlation between cell deformation and transit velocity is significantly improved by the proposed deformability checker: the absolute values of the correlation coefficients increased from an average of 0.54 to 0.92. The effects of cell size, shape and orientation on the evaluation are discussed in light of the experimental results. The proposed checker is expected to be useful for RBC evaluation in medical practice.

  19. Applied geodesy

    International Nuclear Information System (INIS)

    Turner, S.

    1987-01-01

    This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc

  20. Evaluating the diagnostic utility of applying a machine learning algorithm to diffusion tensor MRI measures in individuals with major depressive disorder.

    Science.gov (United States)

    Schnyer, David M; Clasen, Peter C; Gonzalez, Christopher; Beevers, Christopher G

    2017-06-30

    Using MRI to diagnose mental disorders has been a long-term goal. Despite this, the vast majority of prior neuroimaging work has been descriptive rather than predictive. The current study applies support vector machine (SVM) learning to MRI measures of brain white matter to classify adults with Major Depressive Disorder (MDD) and healthy controls. In a precisely matched group of individuals with MDD (n = 25) and healthy controls (n = 25), SVM learning accurately (74%) classified patients and controls across a brain map of white matter fractional anisotropy (FA) values. The study revealed three main findings: 1) SVM applied to DTI-derived FA maps can accurately classify MDD vs. healthy controls; 2) prediction is strongest when only right hemisphere white matter is examined; and 3) removing FA values from a region identified by univariate contrast as significantly different between MDD and healthy controls does not change the SVM accuracy. These results indicate that SVM learning applied to neuroimaging data can classify the presence versus absence of MDD and that predictive information is distributed across brain networks rather than being highly localized. Finally, MDD group differences revealed through typical univariate contrasts do not necessarily reveal patterns that provide accurate predictive information.
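
    A minimal sketch of such a classification pipeline, assuming scikit-learn: a linear SVM cross-validated on per-subject FA feature vectors. The random arrays stand in for real DTI data, so the printed score should hover near chance.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    fa = rng.random((50, 2000))                 # one row of FA values per subject
    labels = np.array([1] * 25 + [0] * 25)      # 25 MDD, 25 controls

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    print(cross_val_score(clf, fa, labels, cv=5).mean())   # ~0.5 on noise
    ```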

  1. CTC-ask: a new algorithm for conversion of CT numbers to tissue parameters for Monte Carlo dose calculations applying DICOM RS knowledge

    International Nuclear Information System (INIS)

    Ottosson, Rickard O; Behrens, Claus F

    2011-01-01

    One of the building blocks in Monte Carlo (MC) treatment planning is to convert patient CT data to MC compatible phantoms, consisting of density and media matrices. The resulting dose distribution is highly influenced by the accuracy of the conversion. Two major contributing factors are precise conversion of CT number to density and proper differentiation between air and lung. Existing tools do not address this issue specifically. Moreover, their density conversion may depend on the number of media used. Differentiation between air and lung is an important task in MC treatment planning and misassignment may lead to local dose errors on the order of 10%. A novel algorithm, CTC-ask, is presented in this study. It enables locally confined constraints for the media assignment and is independent of the number of media used for the conversion of CT number to density. MC compatible phantoms were generated for two clinical cases using a CT-conversion scheme implemented in both CTC-ask and the DICOM-RT toolbox. Full MC dose calculation was subsequently conducted and the resulting dose distributions were compared. The DICOM-RT toolbox inaccurately assigned lung in 9.9% and 12.2% of the voxels located outside of the lungs for the two cases studied, respectively. This was completely avoided by CTC-ask. CTC-ask is able to reduce anatomically irrational media assignment. The CTC-ask source code can be made available upon request to the authors. (note)
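
    The two conversion ingredients, a density ramp independent of the media list and a locally confined media constraint, can be sketched as below; the breakpoints and thresholds are illustrative, not CTC-ask's calibration.

    ```python
    import numpy as np

    def hu_to_density(hu):
        """Piecewise-linear CT-number-to-density ramp, independent of how many
        media are used downstream (illustrative breakpoints)."""
        return np.interp(hu, [-1000, 0, 1000, 3000], [0.001, 1.0, 1.5, 2.8])

    def assign_media(hu, lung_mask):
        """Differentiate air from lung with a locally confined constraint:
        low-density voxels become 'lung' only inside a lung contour mask
        (e.g. taken from the DICOM RS structure set)."""
        media = np.full(hu.shape, "tissue", dtype=object)
        low = hu < -700
        media[low & lung_mask] = "lung"
        media[low & ~lung_mask] = "air"
        media[hu > 300] = "bone"
        return media

    hu = np.array([[-950.0, -750.0], [40.0, 800.0]])
    lung = np.array([[False, True], [False, False]])
    print(hu_to_density(hu))
    print(assign_media(hu, lung))   # air / lung / tissue / bone
    ```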

  2. Research on Innovating, Applying Multiple Paths Routing Technique Based on Fuzzy Logic and Genetic Algorithm for Routing Messages in Service - Oriented Routing

    Directory of Open Access Journals (Sweden)

    Nguyen Thanh Long

    2015-02-01

    MANET (short for Mobile Ad-Hoc Network) consists of a set of mobile network nodes, and the network configuration changes very quickly. In content-based routing, data is transferred from a source node to requesting nodes without relying on destination addresses. It is therefore very flexible and reliable, because the source node does not need to know the destination nodes. If multiple paths satisfying the bandwidth requirement can be found, the original message is split into multiple smaller messages transmitted concurrently on these paths; at the destination nodes, the separated messages are combined into the original message. This allows better utilization of network resources, yielding higher data transfer rates, load balancing, and failover. Service-oriented routing is inherited from the content-based routing (CBR) model, combined with several advanced techniques such as multicast, multiple-path routing, and genetic algorithms to increase the data rate, and data encryption to ensure information security. Fuzzy logic is a field of logic that evaluates the accuracy of results based on approximations of the components involved, making decisions based on many factors of relative accuracy grounded in experiment or mathematical proof. This article presents some techniques to support multiple-path routing from one network node to a set of nodes with guaranteed quality of service. Using these techniques can decrease network load and congestion and use network resources efficiently.
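
    Schematically, the multipath transmission idea amounts to splitting a payload across discovered paths in proportion to their bandwidth and reassembling by index at the destination; the sketch below is an illustration of this idea only, not the article's routing protocol.

    ```python
    def split_message(payload: bytes, bandwidths):
        """Split a message into chunks proportional to each path's bandwidth,
        for concurrent transmission over multiple paths."""
        total = sum(bandwidths)
        chunks, start = [], 0
        for i, bw in enumerate(bandwidths):
            end = (len(payload) if i == len(bandwidths) - 1
                   else start + round(len(payload) * bw / total))
            chunks.append((i, payload[start:end]))   # tag each chunk with its index
            start = end
        return chunks

    def reassemble(chunks):
        """Destination side: reorder chunks by index and concatenate."""
        return b"".join(part for _, part in sorted(chunks))

    chunks = split_message(b"HELLOWORLD", [2.0, 1.0, 1.0])
    print(reassemble(chunks) == b"HELLOWORLD")   # True
    ```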

  3. Physics-based deformable organisms for medical image analysis

    Science.gov (United States)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  4. Interactive deformation registration of endorectal prostate MRI using ITK thin plate splines.

    Science.gov (United States)

    Cheung, M Rex; Krishnan, Karthik

    2009-03-01

    Magnetic resonance imaging with an endorectal coil allows high-resolution imaging of prostate cancer and the surrounding normal organs. These anatomic details can be used to direct radiotherapy. However, organ deformation introduced by the endorectal coil makes it difficult to register magnetic resonance images for treatment planning. In this study, plug-ins for the volume visualization software VolView were implemented on the basis of algorithms from the National Library of Medicine's Insight Segmentation and Registration Toolkit (ITK). Magnetic resonance images of a phantom simulating human pelvic structures were obtained with and without the endorectal coil balloon inflated. The prostate not deformed by the endorectal balloon was registered to the deformed prostate using an ITK thin plate spline (TPS). This plug-in allows the use of crop planes to limit the deformable registration to the region of interest around the prostate. These crop planes restricted the support of the TPS to the area around the prostate, where most of the deformation occurred; the region outside the crop planes was anchored by grid points. The TPS was more accurate in registering the local deformation of the prostate than a TPS variant, the elastic body spline. The TPS was also applied to register an in vivo T2-weighted endorectal magnetic resonance image; the intraprostatic tumor was accurately registered, which could potentially guide the boosting of intraprostatic targets. The source and target landmarks were placed graphically. This TPS plug-in allows the registration to be undone; landmarks can be added, removed, and adjusted in real time and in three dimensions between repeated registrations. This interactive TPS plug-in allows a user to efficiently reach a level of accuracy satisfactory for a specific application. Because it is open-source software, the imaging community will be able to validate and improve the algorithm.

  5. Measurement of deforming mode of lattice truss structures under impact loading

    Directory of Open Access Journals (Sweden)

    Zhao H.

    2012-08-01

    Lattice truss structures, which are used as core material in sandwich panels, have been widely investigated experimentally and theoretically. However, explanations of the deforming mechanism based on reliable experimental results are rarely reported, particularly for the dynamic deforming mechanism. The present work aims at measuring the deforming mode of lattice truss structures. Quasi-static and Split Hopkinson Pressure Bar (SHPB) tests have been performed on tetrahedral truss core structures made of aluminum 3003-O. Global values such as crushing forces and displacements between the loading platens are obtained. However, in order to understand the deforming mechanism and to explain the impact strength enhancement observed in the experiments, images of the truss core element during the tests are recorded. A method based on an edge detection algorithm is developed and applied to these images. The deforming profiles of one beam are extracted, allowing the length of the beam to be calculated. It is found that this length diminishes to a critical value (due to compression) and remains constant afterwards (because of significant bending). The comparison between quasi-static and impact tests shows that the beams were much more compressed under impact loading, which can be understood as the lateral inertia effect in dynamic buckling. Therefore, the impact strength enhancement of the tetrahedral truss core sandwich panel can be explained by the delayed buckling of the beams under impact (more compression reached), together with the strain hardening of the base material.
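
    The image-measurement step can be sketched as follows, assuming OpenCV and a grayscale uint8 frame: detect the beam profile with Canny and sum the polyline length of the longest contour. The thresholds are illustrative, not the authors' calibrated values.

    ```python
    import cv2
    import numpy as np

    def beam_arc_length(frame, t1=50, t2=150):
        """Extract the deforming beam's profile with Canny edge detection and
        return the polyline length of the longest detected contour (pixels)."""
        edges = cv2.Canny(frame, t1, t2)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return 0.0
        longest = max(contours, key=len)
        return cv2.arcLength(longest, False)   # open curve, not closed

    # Synthetic frame with a bright diagonal stripe standing in for the beam
    frame = np.zeros((200, 200), dtype=np.uint8)
    cv2.line(frame, (20, 180), (180, 20), 255, 3)
    print(beam_arc_length(frame))
    ```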

  6. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  7. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Science.gov (United States)

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates.
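
    The chromosome-evaluation step of such a simultaneous optimisation can be sketched as below, assuming scikit-learn and SciPy and a deliberately reduced pre-processing menu (Savitzky-Golay smoothing/derivative only); GENOPT-SVM's actual encoding and pre-processing set are richer.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def fitness(chrom, X, y):
        """Decode one GA chromosome into (pre-processing choice, SVM
        hyper-parameters) and score it by cross-validated accuracy."""
        use_savgol, deriv, logC, logG = chrom
        Xp = savgol_filter(X, 11, 3, deriv=int(deriv)) if use_savgol > 0.5 else X
        clf = make_pipeline(StandardScaler(),
                            SVC(kernel="rbf", C=10.0 ** logC, gamma=10.0 ** logG))
        return cross_val_score(clf, Xp, y, cv=5).mean()

    rng = np.random.default_rng(3)
    X = rng.random((40, 200))              # 40 spectra x 200 wavelengths (toy)
    y = np.array([0, 1] * 20)
    print(fitness((1, 1, 1.0, -2.0), X, y))   # ~0.5 on random data
    ```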

  8. Algoritmos genéticos aplicados a la optimización de antenas Yagi-Uda Genetic algorithms applied to Yagi-Uda antenna optimization

    Directory of Open Access Journals (Sweden)

    Edgardo César De La Asunción López

    2009-07-01

    This paper describes an optimization process implemented using genetic algorithms. The initial population of the GA is composed of 128 chromosomes with 11 genes per chromosome. The chromosomes of the GA are composed of the lengths and separations of the elements of the Yagi-Uda antenna; the ranges of these genes were chosen following design standards for such antennas. All genes undergo an analysis process to assess every antenna of each generation of the GA and to assign the fitness of the individuals. In order to verify the obtained results, various tests were carried out, among them the construction of an optimized Yagi-Uda antenna, whose electromagnetic characteristics were measured and verified.

  9. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    Science.gov (United States)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function and the true remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when considering prognostics in making critical decisions.

  10. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
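
    For concreteness, the simplest member of this family, recursive least squares with a uniform exponential forgetting factor, is sketched below; selective (non-uniform) forgetting modifies how the covariance P is discounted, which this sketch does not implement.

    ```python
    import numpy as np

    def rls_forgetting(phi, y, theta, P, lam=0.98):
        """One update of recursive least squares with exponential forgetting
        factor lam (0 < lam <= 1); lam < 1 discounts old data uniformly."""
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (y - phi @ theta)    # parameter update
        P = (P - np.outer(k, phi @ P)) / lam     # covariance update
        return theta, P

    theta, P = np.zeros(2), 1e3 * np.eye(2)
    for t in range(200):                          # track the line y = 2 + 0.5*u
        phi = np.array([1.0, t / 100.0])
        theta, P = rls_forgetting(phi, 2.0 + 0.5 * t / 100.0, theta, P)
    print(theta)                                  # approaches [2.0, 0.5]
    ```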

  11. Deformable image registration in radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Seung Jong; Kim, Si Yong [Dept. of Radiation Oncology, Virginia Commonwealth University, Richmond (United States)

    2017-06-15

    The number of imaging data sets acquired during radiation treatment has increased significantly since the introduction of a diverse range of advanced techniques into the field of radiation oncology. As a consequence, there have been many studies proposing meaningful applications of these imaging data sets. Such applications commonly require a method to align the data sets to a reference. Deformable image registration (DIR) is a process which satisfies this requirement by locally registering image data sets to a reference image set. DIR identifies the spatial correspondence in order to minimize the differences between two, or among multiple, sets of images. This article describes clinical applications, validation, and algorithms of DIR techniques. Applications of DIR in radiation treatment include dose accumulation, mathematical modeling, automatic segmentation, and functional imaging. The validation methods discussed are based on anatomical landmarks, physical phantoms, digital phantoms, and the purpose of each application. DIR algorithms are also briefly reviewed with respect to two algorithmic components: the similarity index and the deformation model.

  12. Quantifying Damage Accumulation During Ductile Plastic Deformation Using Synchrotron Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Suter, Robert M. [Carnegie Mellon Univ., Pittsburgh, PA (United States); Rollett, Anthony D. [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    2015-08-15

    Under this grant, we have developed and demonstrated the ability of near-field High Energy Diffraction Microscopy (nf-HEDM) to map crystal orientation fields over three dimensions in deformed polycrystalline materials. Experimental work was performed at the Advanced Photon Source (APS) at beamline 1-ID. Applications of this new capability to ductile deformation of copper and zirconium samples were demonstrated as was the comparison of the experimental observations to computational plasticity models using a fast Fourier transform based algorithm that is able to handle the large experimental data sets. No such spatially resolved, direct comparison between measured and computed microstructure evolutions had previously been possible. The impact of this work is reflected in numerous publications and presentations as well as in the investments by DOE and DOD laboratories of millions of dollars in applying the technique, developing sophisticated new hardware that allows the technique to be applied to a wide variety of materials and materials problems, and in the use of the technique by other researchers. In essence, the grant facilitated the development of a new form of three dimensional microscopy and its application to technologically critical states of polycrystalline materials that are used throughout the U.S. and world economies. On-going collaborative work is further optimizing experimental and computational facilities at the APS and is pursuing expanded facilities.

  13. Trends in causes of death among children under 5 in Bangladesh, 1993-2004: an exercise applying a standardized computer algorithm to assign causes of death using verbal autopsy data

    Directory of Open Access Journals (Sweden)

    Walker Neff

    2011-08-01

    Background Trends in the causes of child mortality serve as important global health information to guide efforts to improve child survival. With child mortality declining in Bangladesh, the distribution of causes of death also changes. The three verbal autopsy (VA) studies conducted with the Bangladesh Demographic and Health Surveys provide a unique opportunity to study these changes in child causes of death. Methods To ensure comparability of these trends, we developed a standardized algorithm to assign causes of death using symptoms collected through the VA studies. The algorithms originally applied were systematically reviewed, and key differences in cause categorization, hierarchy, case definition, and the amount of data collected were compared to inform the development of the standardized algorithm. Based primarily on the 2004 cause categorization and hierarchy, the standardized algorithm guarantees comparability of the trends by including only symptom data commonly available across all three studies. Results Between 1993 and 2004, pneumonia remained the leading cause of death in Bangladesh, contributing to 24% to 33% of deaths among children under 5. The proportion of neonatal mortality increased significantly from 36% (uncertainty range [UR]: 31%-41%) to 56% (49%-62%) during the same period. The cause-specific mortality fractions due to birth asphyxia/birth injury and prematurity/low birth weight (LBW) increased steadily, rising from 3% (2%-5%) to 13% (10%-17%) and from 10% (7%-15%), respectively. The cause-specific mortality rates decreased significantly for neonatal tetanus and several postneonatal causes (tetanus: from 7 [4-11] to 2 [0.4-4] per 1,000 live births (LB); pneumonia: from 26 [20-33] to 15 [11-20] per 1,000 LB; diarrhea: from 12 [8-17] to 4 [2-7] per 1,000 LB; measles: from 5 [2-8] to 0.2 [0-0.7] per 1,000 LB; injury: from 11 [7-17] to 3 [1-5] per 1,000 LB; and malnutrition: from 9 [6-13] to 5 [2-7]). Conclusions
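
    Schematically, a standardized VA algorithm of this kind walks a fixed cause hierarchy and assigns the first cause whose case definition the reported symptoms satisfy. The hierarchy and case definitions below are illustrative toys, not the study's definitions.

    ```python
    def assign_cause(symptoms, hierarchy):
        """Assign a single cause of death: return the first cause in the fixed
        hierarchy whose case definition matches the VA symptoms."""
        for cause, definition in hierarchy:
            if definition(symptoms):
                return cause
        return "unspecified"

    hierarchy = [
        ("neonatal tetanus", lambda s: s["age_days"] <= 28 and s["spasms"]),
        ("birth asphyxia",  lambda s: s["age_days"] <= 7 and not s["cried_at_birth"]),
        ("pneumonia",       lambda s: s["cough"] and s["fast_breathing"]),
        ("diarrhea",        lambda s: s["loose_stools"] >= 3),
    ]
    case = {"age_days": 200, "spasms": False, "cried_at_birth": True,
            "cough": True, "fast_breathing": True, "loose_stools": 0}
    print(assign_cause(case, hierarchy))   # -> pneumonia
    ```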

  14. Exactly marginal deformations from exceptional generalised geometry

    Energy Technology Data Exchange (ETDEWEB)

    Ashmore, Anthony [Merton College, University of Oxford,Merton Street, Oxford, OX1 4JD (United Kingdom); Mathematical Institute, University of Oxford,Andrew Wiles Building, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Gabella, Maxime [Institute for Advanced Study,Einstein Drive, Princeton, NJ 08540 (United States); Graña, Mariana [Institut de Physique Théorique, CEA/Saclay,91191 Gif-sur-Yvette (France); Petrini, Michela [Sorbonne Université, UPMC Paris 05, UMR 7589, LPTHE,75005 Paris (France); Waldram, Daniel [Department of Physics, Imperial College London,Prince Consort Road, London, SW7 2AZ (United Kingdom)

    2017-01-27

    We apply exceptional generalised geometry to the study of exactly marginal deformations of N=1 SCFTs that are dual to generic AdS{sub 5} flux backgrounds in type IIB or eleven-dimensional supergravity. In the gauge theory, marginal deformations are parametrised by the space of chiral primary operators of conformal dimension three, while exactly marginal deformations correspond to quotienting this space by the complexified global symmetry group. We show how the supergravity analysis gives a geometric interpretation of the gauge theory results. The marginal deformations arise from deformations of generalised structures that solve moment maps for the generalised diffeomorphism group and have the correct charge under the generalised Reeb vector, generating the R-symmetry. If this is the only symmetry of the background, all marginal deformations are exactly marginal. If the background possesses extra isometries, there are obstructions that come from fixed points of the moment maps. The exactly marginal deformations are then given by a further quotient by these extra isometries. Our analysis holds for any N=2 AdS{sub 5} flux background. Focussing on the particular case of type IIB Sasaki-Einstein backgrounds we recover the result that marginal deformations correspond to perturbing the solution by three-form flux at first order. In various explicit examples, we show that our expression for the three-form flux matches those in the literature and the obstruction conditions match the one-loop beta functions of the dual SCFT.

  15. Frequency of foot deformity in preschool girls

    Directory of Open Access Journals (Sweden)

    Mihajlović Ilona

    2010-01-01

    Full Text Available Background/Aim. In order to determine the moment of creation of postural disorders, regardless of the causes of this problem, it is necessary to examine the moment of entry of children into a new environment, i.e. into kindergarten or school. There is weak evidence about the age period when foot deformities occur, and about the type of these deformities. The aim of this study was to establish the relationship between the occurrence of foot deformities and age characteristics of girls. Methods. The research was conducted in the preschools 'Radosno detinjstvo' in the region of Novi Sad, using the method of random selection, on a sample of 272 girls, 4-7 years of age, classified into four strata according to the year of birth. To determine the foot deformities, a measurement technique using computerized digitized pedography (CDP) was applied. Results. In the preschool girl population, the deformities pes transversoplanus and calcanei valga occurred in a very high percentage (over 90%). A disturbed longitudinal instep, i.e. flat feet, also appeared in a high percentage, but we noted an improvement of this deformity with increasing age. Namely, there was a statistically significant correlation between age and this deformity: as a child grows older, the deformity becomes less pronounced. Conclusion. This study confirmed that the formation of foot arches probably does not end at the age of 3-4 years but lasts until school age.

  16. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    International Nuclear Information System (INIS)

    Neylon, J.; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A.; Staton, R.; Pukala, J.; Manon, R.

    2015-01-01

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may
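
    As a toy illustration of the model class described above, the following Python sketch integrates a one-dimensional chain of mass-spring-damper points with a semi-implicit Euler step. The authors' GPU implementation uses an implicit Euler scheme with separate substeps for muscles and remaining soft tissue; all coefficients below are illustrative only.

    import numpy as np

    # Minimal 1D mass-spring-damper chain, semi-implicit Euler integration.
    n = 10                          # mass points
    k, c, m, dt = 50.0, 0.5, 1.0, 1e-3
    rest = 1.0                      # rest length between neighbours
    x = np.arange(n, dtype=float)   # positions
    v = np.zeros(n)                 # velocities

    def step(x, v):
        f = np.zeros(n)
        # spring + damper forces between neighbouring mass points
        for i in range(n - 1):
            stretch = (x[i + 1] - x[i]) - rest
            rel_vel = v[i + 1] - v[i]
            fs = k * stretch + c * rel_vel
            f[i] += fs
            f[i + 1] -= fs
        f[0] = 0.0                  # first point pinned (skeletal attachment)
        v_new = v + dt * f / m
        v_new[0] = 0.0
        return x + dt * v_new, v_new

    for _ in range(1000):
        x, v = step(x, v)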

  17. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    Energy Technology Data Exchange (ETDEWEB)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A. [Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California 90095 (United States); Staton, R.; Pukala, J.; Manon, R. [Department of Radiation Oncology, M.D. Anderson Cancer Center, Orlando, 1440 South Orange Avenue, Orlando, Florida 32808 (United States)

    2015-01-15

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may

  18. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  19. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
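
    A minimal sketch of the basic loop (selection, crossover, mutation) is given below in Python, maximizing the classic "OneMax" toy fitness (the number of ones in a bitstring); all operator choices and parameters are illustrative.

    import random

    def fitness(bits):
        return sum(bits)

    def evolve(pop_size=40, length=32, generations=100, p_mut=0.01):
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            def pick():                      # tournament selection of size 2
                a, b = random.sample(pop, 2)
                return a if fitness(a) >= fitness(b) else b
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = pick(), pick()
                cut = random.randrange(1, length)            # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [b ^ (random.random() < p_mut) for b in child]  # bit-flip mutation
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best))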

  20. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
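
    The portfolio effect can be illustrated classically: running several independent copies of a stochastic algorithm and keeping the first to finish reduces both the expected completion time and its spread. The heavy-tailed run-time model in the Python sketch below is invented for illustration and is not the paper's model.

    import random, statistics

    def single_run_time():
        # illustrative heavy-tailed completion time (Pareto-like)
        return random.paretovariate(1.5)

    def portfolio_time(k):
        # k independent copies run in parallel; stop at the first completion
        return min(single_run_time() for _ in range(k))

    for k in (1, 2, 4, 8):
        times = [portfolio_time(k) for _ in range(10_000)]
        print(k, round(statistics.mean(times), 3), round(statistics.pstdev(times), 3))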

  1. Magnetic Barkhausen emission in lightly deformed AISI 1070 steel

    Energy Technology Data Exchange (ETDEWEB)

    Capo Sanchez, J., E-mail: jcapo@cnt.uo.edu.cu [Departamento de Fisica, Facultad de Ciencias Naturales, Universidad de Oriente, Av. Patricio Lumumba s/n, 90500 Santiago de Cuba (Cuba); Campos, M.F. de [EEIMVR-Universidade Federal Fluminense, Av. dos Trabalhadores 420, Vila Santa Cecilia, 27255-125 Volta Redonda, RJ (Brazil); Padovese, L.R. [Departamento de Engenharia Mecanica, Escola Politecnica, Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231, 05508-900 Sao Paulo (Brazil)

    2012-01-15

    The Magnetic Barkhausen Noise (MBN) technique can evaluate both micro- and macro-residual stresses, and provides an indication of the relative contribution of these different stress components. MBN measurements were performed on AISI 1070 steel sheet samples to which different strains were applied. The Barkhausen emission is also analyzed when two different sheets, deformed and non-deformed, are evaluated together. This study is useful for understanding the effect of a deformed region near the surface on MBN. The low permeability of the deformed region affects MBN, and if the deformed region is below the surface the magnetic Barkhausen signal increases. - Highlights: > Residual stresses evaluated by the magnetic Barkhausen technique. > Indication of the relevance of micro- and macro-stress components. > Magnetic Barkhausen measurements were carried out on AISI 1070 steel sheet samples. > Two different sheets, deformed and non-deformed, are evaluated together. > The magnetic Barkhausen signal increases when the deformed region is below the surface.

  2. A q-deformed nonlinear map

    International Nuclear Information System (INIS)

    Jaganathan, Ramaswamy; Sinha, Sudeshna

    2005-01-01

    A scheme of q-deformation of nonlinear maps is introduced. As a specific example, a q-deformation procedure related to the Tsallis q-exponential function is applied to the logistic map. Compared to the canonical logistic map, the resulting family of q-logistic maps is shown to have a wider spectrum of interesting behaviours, including the co-existence of attractors, a phenomenon rare in one-dimensional maps
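
    The Python sketch below iterates a q-deformed logistic map in the spirit of the paper: the state is first q-deformed and then fed to the logistic map. The specific deformation used here is one common Tsallis-type choice (reducing to the identity as q -> 1) and is meant as an illustration, not as the paper's exact definition.

    # Toy q-logistic map; deformation and parameters are illustrative.
    def q_deform(x, q):
        # reduces to x in the limit q -> 1
        return x / (1.0 + (1.0 - q) * (1.0 - x))

    def q_logistic(x, a=4.0, q=0.9):
        xq = q_deform(x, q)
        return a * xq * (1.0 - xq)

    x = 0.3
    orbit = []
    for _ in range(200):
        x = q_logistic(x)
        orbit.append(x)
    print(orbit[-5:])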

  3. Deformations of superconformal theories

    Energy Technology Data Exchange (ETDEWEB)

    Córdova, Clay [School of Natural Sciences, Institute for Advanced Study,1 Einstein Drive, Princeton, NJ 08540 (United States); Dumitrescu, Thomas T. [Department of Physics, Harvard University,17 Oxford Street, Cambridge, MA 02138 (United States); Intriligator, Kenneth [Department of Physics, University of California,9500 Gilman Drive, San Diego, La Jolla, CA 92093 (United States)

    2016-11-22

    We classify possible supersymmetry-preserving relevant, marginal, and irrelevant deformations of unitary superconformal theories in d≥3 dimensions. Our method only relies on symmetries and unitarity. Hence, the results are model independent and do not require a Lagrangian description. Two unifying themes emerge: first, many theories admit deformations that reside in multiplets together with conserved currents. Such deformations can lead to modifications of the supersymmetry algebra by central and non-central charges. Second, many theories with a sufficient amount of supersymmetry do not admit relevant or marginal deformations, and some admit neither. The classification is complicated by the fact that short superconformal multiplets display a rich variety of sporadic phenomena, including supersymmetric deformations that reside in the middle of a multiplet. We illustrate our results with examples in diverse dimensions. In particular, we explain how the classification of irrelevant supersymmetric deformations can be used to derive known and new constraints on moduli-space effective actions.

  4. Algorithm for mapping net income applied in irrigated agriculture planning

    Directory of Open Access Journals (Sweden)

    Wilson A. Silva

    2008-03-01

    Full Text Available The objective of this work was to develop an algorithm in the MATLAB computational language, for application in geographical information systems, to map the maximized net income of irrigated crops and thereby support irrigated agriculture planning. The study was developed for the crops of passion fruit, sugarcane, pineapple and papaya, in an area of approximately 2,500 ha at Campos dos Goytacazes, in the north of the State of Rio de Janeiro, Brazil. The algorithm input data were information on soil and climate, crop water response functions, the geographical location of the area, and economic indexes referring to the cost of the productive process. The results allowed concluding that the developed algorithm is efficient for mapping the net income of irrigated crops, being able to locate areas that present larger economic returns.

  5. Quantum deformed magnon kinematics

    OpenAIRE

    Gómez, César; Hernández Redondo, Rafael

    2007-01-01

    The dispersion relation for planar N=4 supersymmetric Yang-Mills is identified with the Casimir of a quantum deformed two-dimensional kinematical symmetry, E_q(1,1). The quantum deformed symmetry algebra is generated by the momentum, energy and boost, with deformation parameter q=e^{2\\pi i/\\lambda}. Representing the boost as the infinitesimal generator for translations on the rapidity space leads to an elliptic uniformization with crossing transformations implemented through translations by t...

  6. Mechanics of deformable bodies

    CERN Document Server

    Sommerfeld, Arnold Johannes Wilhelm

    1950-01-01

    Mechanics of Deformable Bodies: Lectures on Theoretical Physics, Volume II covers topics on the mechanics of deformable bodies. The book discusses the kinematics, statics, and dynamics of deformable bodies; the vortex theory; as well as the theory of waves. The text also describes the flow with given boundaries. Supplementary notes on selected hydrodynamic problems and supplements to the theory of elasticity are provided. Physicists, mathematicians, and students taking related courses will find the book useful.

  7. TO THE MODELING ISSUES OF LIFE CYCLE OF DEFORMATION WORK OF THE RAILWAY TRACK ELEMENTS

    Directory of Open Access Journals (Sweden)

    I. O. Bondarenko

    2014-12-01

    Full Text Available Purpose. This article highlights the operational cycle modeling of railway track elements for the study of deformability development processes, as the basis for creating a regulatory framework for the track while ensuring the reliability of railways. Methodology. The basic theory of the wave propagation process in describing the interaction of track and rolling stock is used to achieve this goal. Findings. The basic provisions concerning the concept of «the operational cycle of track deformation» were proposed and formulated. A method was established, on the basis of which an algorithm for determining the dynamic effects of the rolling stock on the track was obtained. The basic principles for the calculation schemes of railway track components for evaluating the deformability of the track were formulated. An algorithm was developed which allows obtaining the field values of stresses, strains and displacements of all points of the track design elements. Based on the stress-strain state fields of the track, an algorithm was created to establish the dependence between the deformability process and the amount of energy expended on the deformability of the track in operation. Originality. The research of track reliability motivates the development of new models and provides an opportunity to consider it for further developments. There is a need to define the criteria by which it is possible to assess and forecast changes in track states in the course of operation. The paper proposed the basic principles, methods, algorithms, and terms relating to the conduct of the study of track reliability questions. Practical value. Analytical models, used to determine the parameters of strength and stability of tracks, fully meet their objectives, but cannot be applied to determine the parameters of track reliability. One of the main factors making it impossible to apply these models is the quasi-dynamic approach. Therefore, as a rule, not only one dynamic

  8. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
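
    The basic compression primitive behind these algorithms is easy to state: a reversible majority-vote step on three spins of equal polarization eps boosts one of them to (3*eps - eps**3)/2. The Python sketch below iterates this idealized step, assuming fresh, equally polarized spins at every round; actual PAC/SOPAC cycle schedules and spin budgets differ from this simplification.

    # Idealized polarization recursion for 3-spin compression in algorithmic cooling.
    def compress(eps: float) -> float:
        """Polarization of the target spin after one 3-bit majority step,
        assuming all three spins enter the step with polarization eps."""
        return (3.0 * eps - eps ** 3) / 2.0

    def cool(eps0: float, rounds: int) -> float:
        eps = eps0
        for _ in range(rounds):
            eps = compress(eps)   # assumes ancillas pre-cooled to the current level
        return eps

    for rounds in (1, 2, 4, 8):
        print(rounds, cool(0.01, rounds))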

  9. Performance through Deformation and Instability

    Science.gov (United States)

    Bertoldi, Katia

    2015-03-01

    Materials capable of undergoing large deformations like elastomers and gels are ubiquitous in daily life and nature. An exciting field of engineering is emerging that uses these compliant materials to design active devices, such as actuators, adaptive optical systems and self-regulating fluidics. Compliant structures may significantly change their architecture in response to diverse stimuli. When excessive deformation is applied, they may eventually become unstable. Traditionally, mechanical instabilities have been viewed as an inconvenience, with research focusing on how to avoid them. Here, I will demonstrate that these instabilities can be exploited to design materials with novel, switchable functionalities. The abrupt changes introduced into the architecture of soft materials by instabilities will be used to change their shape in a sudden, but controlled manner. Possible and exciting applications include materials with unusual properties such as negative Poisson's ratio, phononic crystals with tunable low-frequency acoustic band gaps and reversible encapsulation systems.

  10. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  11. AIRBORNE LIGHT DETECTION AND RANGING (LIDAR) DERIVED DEFORMATION FROM THE MW 6.0 24 AUGUST, 2014 SOUTH NAPA EARTHQUAKE ESTIMATED BY TWO AND THREE DIMENSIONAL POINT CLOUD CHANGE DETECTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    A. W. Lyda

    2016-06-01

    Full Text Available Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment arrays). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) data sets of the 2014 South Napa Earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point based registration algorithm that can iteratively acquire three dimensional deformations from airborne LiDAR data sets. By employing a newly proposed partition scheme, the “moving window,” to handle the large spatial scale point cloud over the earthquake rupture area, the ICP process applies a rigid registration of data sets within an overlapped window to enhance the change detection results of the local, spatially varying surface deformation near-fault. The other algorithm, PIV, is a well-established, two dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely deformation between two areas. Both the PIV process and the ICP algorithm are further benefited by a presented, novel use of urban geodetic markers. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud dataset to assist the change detection
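
    As an illustration of the moving-window idea, the Python sketch below runs a standard rigid ICP (nearest neighbours via a k-d tree, SVD-based transform estimation) independently on local windows of two point clouds, so that spatially varying near-fault deformation is approximated window by window. The window size, iteration count and minimum point count are illustrative choices, not the paper's settings.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping src onto dst (SVD method)."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(src, dst, iters=30):
        tree = cKDTree(dst)
        cur = src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)      # closest-point correspondences
            R, t = best_rigid_transform(cur, dst[idx])
            cur = cur @ R.T + t
        return cur - src                  # per-point displacement estimate

    def windowed_icp(pre, post, window=100.0):
        disp = np.zeros_like(pre)
        gx = np.floor(pre[:, 0] / window)
        gy = np.floor(pre[:, 1] / window)
        for key in set(zip(gx, gy)):      # one rigid ICP per spatial window
            sel = (gx == key[0]) & (gy == key[1])
            if sel.sum() >= 10:
                disp[sel] = icp(pre[sel], post)
        return disp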

  12. TU-H-CAMPUS-JeP1-04: Deformable Image Registration Performances in Pelvis Patients: Impact of CBCT Image Quality

    International Nuclear Information System (INIS)

    Fusella, M; Loi, G; Fiandra, C; Lanzi, E

    2016-01-01

    Purpose: To investigate the accuracy and robustness, against image noise and artifacts (typical of CBCT images), of a commercial algorithm for deformable image registration (DIR) in propagating regions of interest (ROIs) in computational phantoms based on real prostate patient images. Methods: The Anaconda DIR algorithm, implemented in RayStation, was tested. Two specific Deformation Vector Fields (DVFs) were applied to the reference data set (CTref) using the ImSimQA software, obtaining two deformed CTs. For each dataset, twenty-four different levels of noise and/or capping artifacts were applied to simulate CBCT images. DIR was performed between CTref and each of the deformed CTs and CBCTs. In order to investigate the relationship between image quality parameters and the DIR results (expressed by a logit transform of the Dice index), a bilinear regression was defined. Results: More than 550 DIR-mapped ROIs were analyzed. Statistical analysis showed that deformation strength and artifacts were significant prognostic factors of DIR performance, while noise appeared to have a minor role in the DIR process as implemented in RayStation, as expected from the image similarity metric built into the registration algorithm. Capping artifacts play a determinant role in the accuracy of DIR results. Two optimal values for capping artifacts were found to obtain acceptable DIR results (Dice > 0.75/0.85). Various clinical CBCT acquisition protocols are reported to evaluate the significance of the study. Conclusion: This work illustrates the impact of image quality on DIR performance. Clinical issues like Adaptive Radiation Therapy (ART) and Dose Accumulation need accurate and robust DIR software. The RayStation DIR algorithm proved robust against noise, but sensitive to image artifacts. This result highlights the need for quality assurance of robustness against image noise and artifacts in the commissioning of a commercial DIR system and underlines the importance of adopting optimized protocols
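
    The analysis step described above can be sketched in a few lines of Python: the Dice index of each propagated ROI is logit-transformed and regressed on the image-quality factors. The variable names and the toy data below are illustrative, not the study's measurements.

    import numpy as np

    def logit(d, eps=1e-6):
        d = np.clip(d, eps, 1 - eps)
        return np.log(d / (1 - d))

    # columns: deformation strength, capping-artifact level, noise level (toy data)
    X = np.array([[1.0, 0.0, 0.1], [1.0, 0.5, 0.2], [2.0, 0.0, 0.1], [2.0, 0.8, 0.3]])
    dice = np.array([0.92, 0.85, 0.88, 0.74])

    A = np.column_stack([np.ones(len(X)), X])          # add intercept column
    coef, *_ = np.linalg.lstsq(A, logit(dice), rcond=None)
    print(dict(zip(["intercept", "deformation", "artifact", "noise"], coef)))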

  13. TU-H-CAMPUS-JeP1-04: Deformable Image Registration Performances in Pelvis Patients: Impact of CBCT Image Quality

    Energy Technology Data Exchange (ETDEWEB)

    Fusella, M [I.O.V. - Istituto Oncologico Veneto - I.R.C.C.S., Padova (Italy); Loi, G [University Hospital Maggiore della Carita, Novara, Italy, Novara (Italy); Fiandra, C [University of Torino, Turin, Italy, Torino (Italy); Lanzi, E [Tecnologie Avanzate Srl, Turin, Italy, Torino (Italy)

    2016-06-15

    Purpose: To investigate the accuracy and robustness, against image noise and artifacts (typical of CBCT images), of a commercial algorithm for deformable image registration (DIR) in propagating regions of interest (ROIs) in computational phantoms based on real prostate patient images. Methods: The Anaconda DIR algorithm, implemented in RayStation, was tested. Two specific Deformation Vector Fields (DVFs) were applied to the reference data set (CTref) using the ImSimQA software, obtaining two deformed CTs. For each dataset, twenty-four different levels of noise and/or capping artifacts were applied to simulate CBCT images. DIR was performed between CTref and each of the deformed CTs and CBCTs. In order to investigate the relationship between image quality parameters and the DIR results (expressed by a logit transform of the Dice index), a bilinear regression was defined. Results: More than 550 DIR-mapped ROIs were analyzed. Statistical analysis showed that deformation strength and artifacts were significant prognostic factors of DIR performance, while noise appeared to have a minor role in the DIR process as implemented in RayStation, as expected from the image similarity metric built into the registration algorithm. Capping artifacts play a determinant role in the accuracy of DIR results. Two optimal values for capping artifacts were found to obtain acceptable DIR results (Dice > 0.75/0.85). Various clinical CBCT acquisition protocols are reported to evaluate the significance of the study. Conclusion: This work illustrates the impact of image quality on DIR performance. Clinical issues like Adaptive Radiation Therapy (ART) and Dose Accumulation need accurate and robust DIR software. The RayStation DIR algorithm proved robust against noise, but sensitive to image artifacts. This result highlights the need for quality assurance of robustness against image noise and artifacts in the commissioning of a commercial DIR system and underlines the importance of adopting optimized protocols

  14. Accelerated Deformable Registration of Repetitive MRI during Radiotherapy in Cervical Cancer

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Tanderup, Kari; Kiritsis, Christian

    2006-01-01

    Tumour regression and organ deformations during radiotherapy (RT) of cervical cancer represent major challenges regarding accurate conformation and calculation of dose when using image-guided adaptive radiotherapy. Deformable registration algorithms are able to handle organ deformations, which can be useful with advanced tools such as auto-segmentation of organs and dynamic adaptation of radiotherapy. The aim of this study was to accelerate and validate deformable registration in MRI-based image-guided radiotherapy of cervical cancer.

  15. Intracrystalline deformation of calcite

    NARCIS (Netherlands)

    Bresser, J.H.P. de

    1991-01-01

    It is well established from observations on natural calcite tectonites that intracrystalline plastic mechanisms are important during the deformation of calcite rocks in nature. In this thesis, new data are presented on fundamental aspects of deformation behaviour of calcite under conditions where

  16. The Spherical Deformation Model

    DEFF Research Database (Denmark)

    Hobolth, Asgar

    2003-01-01

    Miller et al. (1994) describe a model for representing spatial objects with no obvious landmarks. Each object is represented by a global translation and a normal deformation of a sphere. The normal deformation is defined via the orthonormal spherical-harmonic basis. In this paper we analyse the s...

  17. Applied algebra codes, ciphers and discrete algorithms

    CERN Document Server

    Hardy, Darel W; Walker, Carol L

    2009-01-01

    "This book attempts to show the power of algebra in a relatively simple setting." -Mathematical Reviews, 2010. "The book supports learning by doing. In each section we can find many examples which clarify the mathematics introduced in the section, and each section is followed by a series of exercises, of which approximately half are solved at the end of the book. Additionally, the book comes with a CD-ROM containing an interactive version of the book powered by the computer algebra system Scientific Notebook. ... the mathematics in the book is developed as needed and the focus of the book lies clearly o

  18. Comparison of Small Baseline Interferometric SAR Processors for Estimating Ground Deformation

    Directory of Open Access Journals (Sweden)

    Wenyu Gong

    2016-04-01

    Full Text Available The Small Baseline Synthetic Aperture Radar (SAR) Interferometry (SBI) technique has been widely and successfully applied in various ground deformation monitoring applications. Over the last decade, a variety of SBI algorithms have been developed based on the same fundamental concepts. Recently developed SBI toolboxes provide an open environment for researchers to apply different SBI methods for various purposes. However, there has been no thorough discussion comparing the particular characteristics of different SBI methods and their corresponding performance in ground deformation reconstruction. Thus, two SBI toolboxes that implement a total of four SBI algorithms were selected for comparison. This study discusses and summarizes the main differences, pros and cons of these four SBI implementations, which could help users to choose a suitable SBI method for their specific application. The study focuses on exploring the suitability of each SBI module under various data set conditions, including a small/large number of interferograms, the presence or absence of large time gaps, urban/vegetation ground coverage, and temporally regular/irregular ground displacement at multiple spatial scales. Within this paper we discuss the corresponding theoretical background of each SBI method. We present a performance analysis of these SBI modules based on two real data sets characterized by different environmental and surface deformation conditions. The study shows that all four SBI processors are capable of generating similar ground deformation results when the data set has sufficient temporal sampling and a stable ground backscatter mechanism, as in urban areas. Strengths and limitations of the different SBI processors were analyzed based on data set configuration and environmental conditions and are summarized in this paper to guide future users of SBI techniques.

  19. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  20. Hydrological deformation signals in karst systems: new evidence from the European Alps

    Science.gov (United States)

    Serpelloni, E.; Pintori, F.; Gualandi, A.; Scoccimarro, E.; Cavaliere, A.; Anderlini, L.; Belardinelli, M. E.; Todesco, M.

    2017-12-01

    The influence of rainfall on crustal deformation has been described at local scales, using tilt and strain meters, in several tectonic settings. However, the literature on the spatial extent of rainfall-induced deformation is still scarce. We analyzed 10 years of displacement time-series from 150 continuous GPS stations operating across the broad zone of deformation accommodating the N-S Adria-Eurasia convergence and the E-ward escape of the Eastern Alps toward the Pannonian basin. We applied a blind-source-separation algorithm based on a variational Bayesian Independent Component Analysis method to the de-trended time-series, and were able to characterize the temporal and spatial features of several deformation signals. The most important ones are a common-mode annual signal, with a spatially uniform response in the vertical and horizontal components, and a time-variable, non-cyclic signal characterized by a spatially variable response in the horizontal components, with stations moving (up to 8 mm) in opposite directions, reversing the sense of movement in time. This implies a succession of extensional/compressional strains, with variable amplitudes through time, oriented normal to rock fractures in karst areas. While seasonal displacements in the vertical component (with an average amplitude of 4 mm over the study area) are satisfactorily reproduced by surface hydrological loading, estimated from global assimilation models, the non-seasonal signal is associated with groundwater flow in karst systems, and mainly influences the horizontal component. The temporal evolution of this deformation signal is correlated with cumulated precipitation values over periods of 200-300 days. This horizontal deformation can be explained by pressure changes associated with variable water levels within vertical fractures in the vadose zones of karst systems, and the water level changes required to open or close these fractures are consistent with the fluctuations of precipitation
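
    The study uses a variational Bayesian ICA; as a rough stand-in, the Python sketch below separates a synthetic multi-station displacement matrix into independent temporal sources with scikit-learn's FastICA. The station count, noise level and source shapes are invented for illustration.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 1000)                     # years
    annual = np.sin(2 * np.pi * t)                   # common-mode seasonal signal
    hydro = np.cumsum(rng.normal(size=t.size))       # slowly varying "hydrological" signal
    hydro /= np.abs(hydro).max()

    mixing = rng.normal(size=(150, 2))               # per-station spatial responses
    X = np.column_stack([annual, hydro]) @ mixing.T  # (time x stations) displacement matrix
    X += 0.05 * rng.normal(size=X.shape)             # observation noise

    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(X)                   # recovered temporal sources
    spatial = ica.mixing_                            # per-station response patterns
    print(sources.shape, spatial.shape)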

  1. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  2. 2D vector-cyclic deformable templates

    DEFF Research Database (Denmark)

    Schultz, Nette; Conradsen, Knut

    1998-01-01

    In this paper the theory of deformable templates as a vector cycle in 2D is described. The deformable template model originated in (Grenander, 1983) and was further investigated in (Grenander et al., 1991). A template vector distribution is induced by a parameter distribution from transformation matrices applied to the vector cycle. An approximation of the parameter distribution is introduced. The main advantage of using the deformable template model is the ability to simulate a wide range of objects trained by e.g. their biological variations, and thereby improve restoration, segmentation and probability measurement. The case study concerns estimation of meat percentage in pork carcasses. Given two cross-sectional images - one at the front and one near the ham of the carcass - the areas of lean and fat and a muscle in the lean area are measured automatically by the deformable templates.

  3. Effects of thermal deformation on optical instruments for space application

    Science.gov (United States)

    Segato, E.; Da Deppo, V.; Debei, S.; Cremonese, G.

    2017-11-01

    Optical instruments for space missions work in a hostile environment; it is thus necessary to accurately study the effects of ambient parameter variations on the equipment. Optical instruments are particularly sensitive to ambient conditions, especially temperature. Temperature variations can cause dilatations and misalignments of the optical elements, and can also lead to the rise of dangerous stresses in the optics. The resulting displacements and deformations degrade the quality of the sampled images. In this work a method for studying the effects of temperature variations on the performance of an imaging instrument is presented. The optics and their mountings are modeled and processed by a thermo-mechanical Finite Element Model (FEM) analysis; the output data, which describe the deformations of the optical element surfaces, are then elaborated using an ad hoc MATLAB routine: a non-linear least-squares optimization algorithm is adopted to determine the surface equations (plane, spherical, nth-order polynomial) which best fit the data. The obtained mathematical surface representations are then directly imported into ZEMAX for sequential ray-tracing analysis. The results are the variations of the Spot Diagrams, of the MTF curves and of the Diffraction Ensquared Energy due to simulated thermal loads. This method has been successfully applied to the Stereo Camera for the BepiColombo mission, reproducing expected operative conditions. The results help to design and compare different optical housing systems for a feasible solution and show that it is preferable to use kinematic constraints on prisms and lenses to minimize the variation of the optical performance of the Stereo Camera.
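
    The surface-fitting step can be illustrated with a short Python sketch: a sphere is fitted to deformed FEM node coordinates by non-linear least squares. The synthetic node data and the spherical model are illustrative; the paper also fits planes and polynomial surfaces, and the MATLAB/ZEMAX pipeline itself is not reproduced here.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    R_true, c_true = 200.0, np.array([0.0, 0.0, 200.0])
    theta = rng.uniform(0, 0.2, 500)
    phi = rng.uniform(0, 2 * np.pi, 500)
    # synthetic "deformed FEM node" coordinates on a spherical cap
    pts = c_true + R_true * np.column_stack(
        [np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), -np.cos(theta)]
    )
    pts += 1e-3 * rng.normal(size=pts.shape)        # thermal-deformation "noise"

    def residuals(p):
        cx, cy, cz, R = p
        return np.linalg.norm(pts - [cx, cy, cz], axis=1) - R

    fit = least_squares(residuals, x0=[0, 0, 150, 180])
    print(fit.x)   # recovered centre and radius of the best-fit sphere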

  4. Is nucleon deformed?

    International Nuclear Information System (INIS)

    Abbas, Afsar

    1992-01-01

    The surprising answer to this question is: yes. The evidence comes from a study of the quark model of the single nucleon and of the nucleon when it is found in a nucleus. It turns out that many of the long-standing problems of the Naive Quark Model are taken care of if the nucleon is assumed to be deformed. Only one value of the parameter P_D ~ 1/4 (which specifies deformation) fits g_A (the axial vector coupling constant) for all the semileptonic decays of baryons, the F/D ratio, the pion-nucleon-delta coupling constant f_πNΔ, the double delta coupling constant f_πΔΔ, the M1 transition moment μ_ΔN and g_1^p, the spin structure function of the proton. All this gives a strong hint that both the neutron and the proton are deformed. It is important to look for further signatures of this deformation. When a deformed nucleon finds itself in a nuclear medium its deformation decreases; so much so that in a heavy nucleus the nucleons are actually spherical. We look into the Gamow-Teller strengths, magnetic moments and magnetic transition strengths in nuclei to study this property. (author). 15 refs

  5. Deformed exponentials and portfolio selection

    Science.gov (United States)

    Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro

    In this paper, we present a method for portfolio selection based on deformed exponentials, in order to generalize methods that rely on the Gaussianity of portfolio returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows more accurate behavior in situations where heavy-tailed distributions are necessary to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show the proposed method outperforms the Markowitz portfolio for the cumulated returns, with a good convergence rate of the weights for the assets, which are searched by means of a natural gradient algorithm.
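
    For reference, the deformed exponential in question is the Tsallis q-exponential, which in LaTeX notation reads

        \exp_q(x) = \left[ 1 + (1-q)\, x \right]_{+}^{1/(1-q)}, \qquad \lim_{q \to 1} \exp_q(x) = e^{x},

    where [z]_{+} = max(z, 0). It recovers the ordinary exponential as q tends to 1, and the distributions built from it (q-Gaussians) develop the power-law tails that heavy-tailed return models require for q > 1.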

  6. Shape reconstruction from apparent contours theory and algorithms

    CERN Document Server

    Bellettini, Giovanni; Paolini, Maurizio

    2015-01-01

    Motivated by a variational model concerning the depth of the objects in a picture and the problem of hidden and illusory contours, this book investigates one of the central problems of computer vision: the topological and algorithmic reconstruction of a smooth three dimensional scene starting from the visible part of an apparent contour. The authors focus their attention on the manipulation of apparent contours using a finite set of elementary moves, which correspond to diffeomorphic deformations of three dimensional scenes. A large part of the book is devoted to the algorithmic part, with implementations, experiments, and computed examples. The book is intended also as a user's guide to the software code appcontour, written for the manipulation of apparent contours and their invariants. This book is addressed to theoretical and applied scientists working in the field of mathematical models of image segmentation.

  7. The Application Research of Inverse Finite Element Method for Frame Deformation Estimation

    Directory of Open Access Journals (Sweden)

    Yong Zhao

    2017-01-01

    Full Text Available A frame deformation estimation algorithm is investigated for the purpose of real-time control and health monitoring of flexible lightweight aerospace structures. The inverse finite element method (iFEM) for beam deformation estimation was recently proposed by Gherlone and his collaborators. The methodology uses a least-squares principle involving section strains of Timoshenko theory for stretching, torsion, bending, and transverse shear. The proposed methodology is based on strain-displacement relations only, without invoking force equilibrium. Thus, the displacement fields can be reconstructed without knowledge of the structural mode shapes, material properties, or applied loading. In this paper, the number of locations where the section strains are evaluated in the iFEM is discussed first, and the algorithm is subsequently investigated using a simply supported beam and an experimental aluminum wing-like frame model under end-node force loading. The estimation results from the iFEM are compared with reference displacements from optical measurement and computational analysis, and the accuracy of the algorithm estimation is quantified by the root-mean-square error and the percentage difference error.
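
    The least-squares idea behind iFEM can be sketched in a few lines of Python: choose nodal displacement parameters u so that the section strains predicted by the element model, B @ u, best match the measured strains. The matrix B and the measurements below are invented for illustration; a real iFEM assembles B from Timoshenko beam section-strain relations.

    import numpy as np

    n_dof, n_meas = 6, 12
    rng = np.random.default_rng(2)
    B = rng.normal(size=(n_meas, n_dof))        # toy strain-displacement matrix
    u_true = rng.normal(size=n_dof)
    eps_measured = B @ u_true + 1e-4 * rng.normal(size=n_meas)  # "sensor" strains

    # least-squares reconstruction of the displacement parameters
    u_hat, *_ = np.linalg.lstsq(B, eps_measured, rcond=None)
    print(np.max(np.abs(u_hat - u_true)))       # reconstruction error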

  8. Dynamic skin deformation simulation using musculoskeletal model and soft tissue dynamics

    Institute of Scientific and Technical Information of China (English)

    Akihiko Murai; Q. Youn Hong; Katsu Yamane; Jessica K. Hodgins

    2017-01-01

    Deformation of skin and muscle is essential for bringing an animated character to life. This deformation is difficult to animate in a realistic fashion using traditional techniques because of the subtlety of the skin deformations that must move appropriately for the character design. In this paper, we present an algorithm that generates natural, dynamic, and detailed skin deformation (movement and jiggle) from joint angle data sequences. The algorithm has two steps: identification of parameters for a quasi-static muscle deformation model, and simulation of skin deformation. In the identification step, we identify the model parameters using a musculoskeletal model and a short sequence of skin deformation data captured via a dense marker set. The simulation step first uses the quasi-static muscle deformation model to obtain the quasi-static muscle shape at each frame of the given motion sequence (slow jump). Dynamic skin deformation is then computed by simulating the passive muscle and soft tissue dynamics modeled as a mass–spring–damper system. Having obtained the model parameters, we can simulate dynamic skin deformations for subjects with similar body types from new motion data. We demonstrate our method by creating skin deformations for muscle co-contraction and external impacts from four different behaviors captured as skeletal motion capture data. Experimental results show that the simulated skin deformations are quantitatively and qualitatively similar to measured actual skin deformations.

  9. Dynamic skin deformation simulation using musculoskeletal model and soft tissue dynamics

    Institute of Scientific and Technical Information of China (English)

    Akihiko Murai; Q.Youn Hong; Katsu Yamane; Jessica K.Hodgins

    2017-01-01

    Deformation of skin and muscle is essential for bringing an animated character to life. This deformation is difficult to animate in a realistic fashion using traditional techniques because of the subtlety of the skin deformations that must move appropriately for the character design. In this paper, we present an algorithm that generates natural, dynamic, and detailed skin deformation (movement and jiggle) from joint angle data sequences. The algorithm has two steps: identification of parameters for a quasi-static muscle deformation model, and simulation of skin deformation. In the identification step, we identify the model parameters using a musculoskeletal model and a short sequence of skin deformation data captured via a dense marker set. The simulation step first uses the quasi-static muscle deformation model to obtain the quasi-static muscle shape at each frame of the given motion sequence (slow jump). Dynamic skin deformation is then computed by simulating the passive muscle and soft tissue dynamics modeled as a mass-spring-damper system. Having obtained the model parameters, we can simulate dynamic skin deformations for subjects with similar body types from new motion data. We demonstrate our method by creating skin deformations for muscle co-contraction and external impacts from four different behaviors captured as skeletal motion capture data. Experimental results show that the simulated skin deformations are quantitatively and qualitatively similar to measured actual skin deformations.

  10. Deformable segmentation via sparse representation and dictionary learning.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Pixels Intensity Evolution to Describe the Plastic Films Deformation

    Directory of Open Access Journals (Sweden)

    Juan C. Briñez-De León

    2013-11-01

    Full Text Available This work proposes an approach for mechanical behavior description in the plastic film deformation using techniques for the images analysis, which are based on the intensities evolution of fixed pixels applied to an images sequence acquired through polarizing optical assembly implemented around the platform of the plastic film deformation. The pixels intensities evolution graphs, and mechanical behavior graphic of the deformation has dynamic behaviors zones which could be associated together.

  12. Extremely deformable structures

    CERN Document Server

    2015-01-01

    Recently, a new research stimulus has derived from the observation that soft structures, such as biological systems, but also rubber and gel, may work in a post critical regime, where elastic elements are subject to extreme deformations, though still exhibiting excellent mechanical performances. This is the realm of ‘extreme mechanics’, to which this book is addressed. The possibility of exploiting highly deformable structures opens new and unexpected technological possibilities. In particular, the challenge is the design of deformable and bi-stable mechanisms which can reach superior mechanical performances and can have a strong impact on several high-tech applications, including stretchable electronics, nanotube serpentines, deployable structures for aerospace engineering, cable deployment in the ocean, but also sensors and flexible actuators and vibration absorbers. Readers are introduced to a variety of interrelated topics involving the mechanics of extremely deformable structures, with emphasis on ...

  13. Diffeomorphic Statistical Deformation Models

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Hansen, Mads/Fogtman; Larsen, Rasmus

    2007-01-01

    In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al. The modifications ensure that no boundary restriction has to be enforced on the parameter space to prevent folds or tears in the deformation field. For straightforward statistical analysis, principal component analysis and sparse methods, we assume that the parameters for a class of deformations lie on a linear ... with ground truth in form of manual expert annotations, and compared to Cootes's model. We anticipate applications in unconstrained diffeomorphic synthesis of images, e.g. for tracking, segmentation, registration or classification purposes.

  14. Deformations of symplectic Lie algebroids, deformations of holomorphic symplectic structures, and index theorems

    DEFF Research Database (Denmark)

    Nest, Ryszard; Tsygan, Boris

    2001-01-01

    Recently Kontsevich solved the classification problem for deformation quantizations of all Poisson structures on a manifold. In this paper we study those Poisson structures for which the explicit methods of Fedosov can be applied, namely the Poisson structures coming from symplectic Lie algebroids, as well as holomorphic symplectic structures. For deformations of these structures we prove the classification theorems and a general index theorem.

  15. Interactive collision detection for deformable models using streaming AABBs.

    Science.gov (United States)

    Zhang, Xinyu; Kim, Young J

    2007-01-01

    We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively-parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed results (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4G processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and obtained timings of approximately 30 to 100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a three-fold performance improvement over the earlier approach. We also made comparisons with a SW-based AABB
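
    The core test in AABB-based collision culling is simple enough to show in full. The Python sketch below checks two streams of axis-aligned bounding boxes for pairwise overlap in a single vectorized pass, mimicking (on the CPU, with NumPy broadcasting) the massively parallel test the paper runs on the GPU; boxes are stored as (min_xyz, max_xyz).

    import numpy as np

    def aabb_overlaps(boxes_a, boxes_b):
        """Boolean matrix; entry [i, j] is True if box i of A overlaps box j of B."""
        min_a, max_a = boxes_a[:, None, :3], boxes_a[:, None, 3:]
        min_b, max_b = boxes_b[None, :, :3], boxes_b[None, :, 3:]
        # boxes overlap iff they overlap on every axis
        return np.all((min_a <= max_b) & (min_b <= max_a), axis=-1)

    a = np.array([[0, 0, 0, 1, 1, 1], [2, 2, 2, 3, 3, 3]], dtype=float)
    b = np.array([[0.5, 0.5, 0.5, 1.5, 1.5, 1.5]], dtype=float)
    print(aabb_overlaps(a, b))    # [[ True], [False]]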

  16. The Spherical Deformation Model

    DEFF Research Database (Denmark)

    Hobolth, Asgar

    2003-01-01

    Miller et al. (1994) describe a model for representing spatial objects with no obvious landmarks. Each object is represented by a global translation and a normal deformation of a sphere. The normal deformation is defined via the orthonormal spherical-harmonic basis. In this paper we analyse the s...... a single central section of the object. We use maximum-likelihood-based inference for this purpose and demonstrate the suggested methods on real data....

  17. An electromechanical based deformable model for soft tissue simulation.

    Science.gov (United States)

    Zhong, Yongmin; Shirinzadeh, Bijan; Smith, Julian; Gu, Chengfan

    2009-11-01

    Soft tissue deformation is of great importance to surgery simulation. Although significant research effort has been dedicated to simulating the behaviours of soft tissues, modelling of soft tissue deformation is still a challenging problem. This paper presents a new deformable model for simulation of soft tissue deformation from the electromechanical viewpoint of soft tissues. Soft tissue deformation is formulated as a reaction-diffusion process coupled with a mechanical load. The mechanical load applied to a soft tissue to cause a deformation is incorporated into the reaction-diffusion system, and consequently distributed among the mass points of the soft tissue. Reaction-diffusion of the mechanical load and non-rigid mechanics of motion are combined to govern the simulation dynamics of soft tissue deformation. An improved reaction-diffusion model is developed to describe the distribution of the mechanical load in soft tissues. A three-layer artificial cellular neural network is constructed to solve the reaction-diffusion model for real-time simulation of soft tissue deformation. A gradient-based method is established to derive internal forces from the distribution of the mechanical load. Integration with a haptic device has also been achieved to simulate soft tissue deformation with haptic feedback. The proposed methodology not only predicts the typical behaviours of living tissues but also accommodates both local and large-range deformations. It also accommodates isotropic, anisotropic and inhomogeneous deformations by simple modification of the diffusion coefficients.
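
    A minimal sketch of the load-distribution idea, assuming a regular 2D grid of mass points, an explicit finite-difference update, and illustrative coefficients; the paper's cellular-neural-network solver and exact equations are not reproduced in the record.

    ```python
    import numpy as np

    def diffuse_load(u, D=0.2, decay=0.05, dt=1.0, steps=50):
        """Spread an applied mechanical load u over a 2D grid of mass points
        with an explicit reaction-diffusion update (illustrative coefficients;
        np.roll gives periodic boundaries, a simplification for the sketch)."""
        for _ in range(steps):
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            u = u + dt * (D * lap - decay * u)   # diffusion minus dissipative reaction
        return u

    # A point load at the contact site diffuses to neighbouring mass points;
    # internal forces are then taken along the gradient of the load field.
    load = np.zeros((64, 64))
    load[32, 32] = 1.0
    field = diffuse_load(load)
    fy, fx = np.gradient(-field)                 # gradient-based internal forces
    ```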

  18. Deformable Organic Nanowire Field-Effect Transistors.

    Science.gov (United States)

    Lee, Yeongjun; Oh, Jin Young; Kim, Taeho Roy; Gu, Xiaodan; Kim, Yeongin; Wang, Ging-Ji Nathan; Wu, Hung-Chin; Pfattner, Raphael; To, John W F; Katsumata, Toru; Son, Donghee; Kang, Jiheong; Matthews, James R; Niu, Weijun; He, Mingqian; Sinclair, Robert; Cui, Yi; Tok, Jeffery B-H; Lee, Tae-Woo; Bao, Zhenan

    2018-02-01

    Deformable electronic devices that are impervious to mechanical influence when mounted on surfaces of dynamically changing soft matter have great potential for next-generation implantable bioelectronic devices. Here, deformable field-effect transistors (FETs) composed of single organic nanowires (NWs) as the semiconductor are presented. The NWs are composed of a fused thiophene diketopyrrolopyrrole-based polymer semiconductor and high-molecular-weight polyethylene oxide, which serves as both the molecular binder and deformability enhancer. The obtained transistors show high field-effect mobility >8 cm² V⁻¹ s⁻¹ with a poly(vinylidenefluoride-co-trifluoroethylene) polymer dielectric and can easily be deformed by applied strains (both 100% tensile and compressive strains). The electrical reliability and mechanical durability of the NWs can be significantly enhanced by forming serpentine-like structures of the NWs. Remarkably, the fully deformable NW FETs withstand 3D volume changes (>1700%, reverting back to the original state) of a rubber balloon to whose surface they are attached, with constant current output. The deformable transistors can robustly operate without noticeable degradation on a mechanically dynamic soft-matter surface, e.g., a pulsating balloon (pulse rate: 40 min⁻¹ (0.67 Hz) and 40% volume expansion) that mimics a beating heart, which underscores their potential for future biomedical applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. A Hybrid 3D Colon Segmentation Method Using Modified Geometric Deformable Models

    Directory of Open Access Journals (Sweden)

    S. Falahieh Hamidpour

    2007-06-01

    Full Text Available Introduction: Nowadays virtual colonoscopy has become a reliable and efficient method for detecting primary stages of colon cancer, such as polyp detection. One of the most crucial stages of virtual colonoscopy is colon segmentation, because an incorrect segmentation may lead to a misdiagnosis. Materials and Methods: In this work, a hybrid method based on Geometric Deformable Models (GDM) in combination with advanced region growing and thresholding methods is proposed. GDM are an attractive tool for structure-based image segmentation, particularly for extracting objects with complicated topology. Two main parameters influence the overall performance of the GDM algorithm: the distance between the initial contour and the actual object's contour, and the stopping term which controls the deformation. To overcome these limitations, a two-stage hybrid segmentation method is suggested to extract rough but precise initial contours in the first stage of the segmentation. The extracted boundaries are then smoothed and improved using a modified GDM algorithm whose stopping term is improved based on the gradient value of the image voxels. Results: The proposed algorithm was implemented on forty data sets, each containing 400-480 slices. The results show an improvement in the accuracy and smoothness of the extracted boundaries. The improvement in segmentation accuracy is about 6% compared to methods based on thresholding and region growing only. Discussion and Conclusion: The contours extracted using the modified GDM are smoother and finer. The improvements made to the stopping function of the GDM model, together with the two-stage segmentation of boundaries, greatly improve the computational efficiency of the GDM algorithm while producing smoother and finer colon borders.
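
    As a hedged sketch, the classical gradient-based stopping term that such modifications start from is shown below; the paper's exact modification is not given in the record, so the normalization and exponent here are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def stopping_function(volume, sigma=1.0, p=2):
        """Edge-based stopping term g = 1 / (1 + |grad(G_sigma * I)|^p).

        g is near 1 in flat regions (the contour keeps moving) and tends
        to 0 at strong gradients (the contour stops at the colon wall).
        """
        grad = gaussian_gradient_magnitude(volume.astype(float), sigma)
        return 1.0 / (1.0 + (grad / (grad.mean() + 1e-9)) ** p)
    ```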

  20. Discrete Surface Evolution and Mesh Deformation for Aircraft Icing Applications

    Science.gov (United States)

    Thompson, David; Tong, Xiaoling; Arnoldus, Qiuhan; Collins, Eric; McLaurin, David; Luke, Edward; Bidwell, Colin S.

    2013-01-01

    Robust, automated mesh generation for problems with deforming geometries, such as ice accreting on aerodynamic surfaces, remains a challenging problem. Here we describe a technique to deform a discrete surface as it evolves due to the accretion of ice. The surface evolution algorithm is based on a smoothed, face-offsetting approach. We also describe a fast algebraic technique to propagate the computed surface deformations into the surrounding volume mesh while maintaining geometric mesh quality. Preliminary results presented here demonstrate the efficacy of the approach for a sphere with a prescribed accretion rate, a rime ice accretion, and a more complex glaze ice accretion.
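
    The fast algebraic propagation step can be pictured with the inverse-distance-weighting sketch below; the actual method used in the paper may differ, and the kernel and decay exponent here are assumptions for illustration.

    ```python
    import numpy as np

    def propagate_deformation(volume_nodes, surface_nodes, surface_disp, power=3.0):
        """Spread known surface displacements into the volume mesh algebraically.

        Each interior node receives an inverse-distance-weighted blend of the
        surface displacements, so motion decays smoothly away from the surface.
        (Dense (Nv, Ns) distance matrix: fine for a sketch, not for huge meshes.)
        """
        d = np.linalg.norm(volume_nodes[:, None, :] - surface_nodes[None, :, :],
                           axis=2)                      # (Nv, Ns) distances
        w = 1.0 / (d ** power + 1e-12)                  # inverse-distance weights
        w /= w.sum(axis=1, keepdims=True)
        return volume_nodes + w @ surface_disp          # displaced volume nodes
    ```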

  1. Deformation Monitoring of Motorway Underpasses Using Laser Scanning Data

    Science.gov (United States)

    Puente, I.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2012-07-01

    In the coming years, the Ourense-Celanova motorway will become one of the main roads of inland Galicia (northwest region of Spain), providing a fast connection to the cities of Northern Portugal. This highway is projected as a public-private partnership between the regional government of Xunta de Galicia and the construction companies Copasa SA and Extraco SA. The 19 km of this road are currently under construction and include a number of structures such as viaducts, overpasses and underpasses. The viaducts are part of the main road, allowing passage of vehicles at conventional speed. Overpasses are mainly used to connect the highway with secondary roads, while the underpasses are better suited for the passage of wildlife, persons or agricultural machinery. The arch-shaped underpass structures used for this project consist of two reinforced concrete voussoirs placed on two small concrete walls. For each set of voussoirs there are three joints: two between the walls and the voussoirs and one between the two voussoirs at the top of the structure. These underpasses suffer significant mechanical stress during construction, because asymmetric loads are applied to both sides during the backfilling process. It is therefore very important to monitor the structure using geodetic techniques such as total stations, levels or laser scanners. The underpass selected for this study is located at kilometre point 4.9 of the highway, with a total length of 50.38 m, a maximum span of 13.30 m and a rise of 7.23 m. The voussoirs have a thickness of 0.35 m and a length of 2.52 m. The small lateral walls exhibit a height of 2.35 m and a thickness of 0.85 m. The underpass presents a slope of approximately 4% and the maximum height of the backfill over the top of the structure is 3.80 m. The foundation consists of an arch-shaped concrete slab (curvature opposite to the main arch) with a thickness of 0.7 m. The geodetic technology used for the deformation monitoring

  2. DEFORMATION MONITORING OF MOTORWAY UNDERPASSES USING LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    I. Puente

    2012-07-01

    deformation monitoring is an Optech Lynx mobile LiDAR. This laser scanner is based on time-of-flight technology and presents an accuracy of 6 mm in the determination of geometrical coordinates. This accuracy can be improved to around 1 mm using post-processing fitting techniques, which makes the technology very useful for studies related to deformation monitoring. In comparison with other geodetic techniques such as total stations, the laser scanner allows control of the whole structure, including unexpected deformations. Reflective targets are permanently positioned on the small walls of the structure to allow the 3D orientation of the different scans. Two main scans are made for this study, before and after the backfilling process. Backfilling takes the construction companies about 10 days; each scan requires approximately 12 minutes, and construction works do not need to be interrupted during the scans. The point clouds are then post-processed using QT Modeler software. First, the point cloud is cleaned so that only the data directly related to the structure under study are used. Then, using the target coordinates, both point clouds are moved to the same coordinate system. Finally, the deformation of the underpass is studied using two algorithms specifically developed in Matlab. The first algorithm fits a geometrical surface to the point cloud of the first scan and evaluates the residuals of both scans against this fitted surface; differences in the residuals give the deformation map of the structure. The second algorithm takes a portion of the point cloud from the top of the structure, where the joint between the voussoirs is located. The joint between two voussoirs shows a height step that in an ideal case should tend to zero. Deformations produced by the loading of the structure are measured by comparing the steps before and after the backfilling process. The analysis of the results shows that some deformation occurs in the structure in the joining
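
    A hedged sketch of the first algorithm's residual comparison, assuming a simple least-squares quadric as the fitted reference surface (the record does not specify the surface family actually used):

    ```python
    import numpy as np

    def fit_quadric(xyz):
        """Least-squares quadric z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2."""
        x, y, z = xyz.T
        A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return coeffs

    def residuals(xyz, coeffs):
        """Signed offset of each point from the fitted surface (in z)."""
        x, y, z = xyz.T
        A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        return z - A @ coeffs

    # Deformation map: fit on the first epoch, compare residuals of both epochs.
    # scan1, scan2: (N, 3) point clouds in a common coordinate system, e.g.
    #   ref = fit_quadric(scan1)
    #   deformation = residuals(scan2, ref) - residuals(scan1, ref)
    ```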

  3. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence and restrain premature convergence of the quantum evolutionary algorithm. The proposed algorithm adopts a chaotic initialization method to generate the initial population, which will form a pe...... tests. The presented algorithm is applied to urban traffic signal timing optimization and the effect is satisfactory....

  4. Central limit theorem and deformed exponentials

    International Nuclear Information System (INIS)

    Vignat, C; Plastino, A

    2007-01-01

    The central limit theorem (CLT) can be ranked among the most important ones in probability theory and statistics and plays an essential role in several basic and applied disciplines, notably in statistical thermodynamics. We show that there exists a natural extension of the CLT from exponentials to so-called deformed exponentials (also denoted as q-Gaussians). Our proposal applies exactly in the usual conditions in which the classical CLT is used. (fast track communication)
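
    For reference, the q-deformed exponential and the q-Gaussian it generates can be written as follows (standard Tsallis conventions; a textbook definition rather than a formula quoted from the paper):

    ```latex
    e_q(x) = \bigl[1 + (1-q)\,x\bigr]_{+}^{\frac{1}{1-q}},
    \qquad e_q(x) \to e^{x} \ \text{as} \ q \to 1,
    \qquad G_q(x) \propto e_q\!\left(-\beta x^{2}\right).
    ```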

  5. Autogenous Deformation of Concrete

    DEFF Research Database (Denmark)

    Autogenous deformation of concrete can be defined as the free deformation of sealed concrete at a constant temperature. A number of observed problems with early-age cracking of high-performance concretes can be attributed to this phenomenon. During the last 10 years, this has led to an increased...... focus on autogenous deformation both within concrete practice and concrete research. Since 1996 the interest has been significant enough to hold international, yearly conferences entirely devoted to this subject. The papers in this publication were presented at two consecutive half-day sessions...... at the American Concrete Institute’s Fall Convention in Phoenix, Arizona, October 29, 2002. All papers have been reviewed according to ACI rules. This publication, as well as the sessions, was sponsored by ACI committee 236, Material Science of Concrete. The 12 presentations from 8 different countries indicate...

  6. Interfacial Bubble Deformations

    Science.gov (United States)

    Seymour, Brian; Shabane, Parvis; Cypull, Olivia; Cheng, Shengfeng; Feitosa, Klebert

    Soap bubbles floating at an air-water interface experience deformations as a result of surface tension and hydrostatic forces. In this experiment, we investigate the nature of such deformations by taking cross-sectional images of bubbles of different volumes. The results show that as their volume increases, bubbles transition from a spherical to a hemispherical shape. The deformation of the interface also changes with bubble volume, with the capillary rise converging to the capillary length as volume increases. The profiles of the top and bottom of the bubble and the capillary rise are completely determined by the volume and pressure differences. James Madison University Department of Physics and Astronomy, 4VA Consortium, Research Corporation for Advancement of Science.
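
    The capillary length the profiles converge to is the standard balance of surface tension against gravity (a textbook relation, not taken from the abstract):

    ```latex
    \lambda_c = \sqrt{\frac{\gamma}{\rho g}}
    ```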

  7. Mechanisms of deformation and of recrystallization of imperfect uranium monocrystals

    International Nuclear Information System (INIS)

    Calais, D.

    1960-04-01

    The various means by which plastic deformations by slip, twinning or kinking are produced under tension in imperfect α-uranium single crystals prepared by a β → α phase change have been studied by X-rays and micrographic examination. Depending on the crystallographic orientation with respect to the direction of the applied tension, and on the magnitude of the change in length, the crystals deform either preferentially according to a single mechanism, for example twinning, or simultaneously according to two or three mechanisms. The results of a subsequent annealing of the deformed single crystals in the α phase are studied with respect to the deformation mechanisms. In the case of a deformation due primarily to (010) [100], (011) [100] or (110) [001] slip, recrystallization occurs by crystal-growth selectivity. If the deformation occurs via deformation bands, there is recrystallization by 'oriented nucleation'. The crystals deformed predominantly by twinning give on recrystallization perfect crystals having optimum dimensions and orientational characteristics closely related to those of the original crystal. Finally, some geometric and dynamic criteria are discussed with a view to explaining the occurrence of a given deformation mechanism in a single crystal of a given orientation. In conclusion, this study should help to define the best conditions (crystalline orientation and deformation process) for promoting the growth of large, perfect single crystals. (author) [fr

  8. Study on MPGA-BP of Gravity Dam Deformation Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoyu Wang

    2017-01-01

    Full Text Available Displacement is an important physical quantity in deformation monitoring of hydraulic structures, and its prediction accuracy is a prerequisite for ensuring safe operation. Most existing metaheuristic methods have three problems: (1) falling into local minima easily, (2) slow convergence, and (3) sensitivity to initial values. Resolving these three problems and improving the prediction accuracy necessitate the application of a genetic-algorithm-based backpropagation (GA-BP) neural network and a multiple population genetic algorithm (MPGA). A hybrid multiple population genetic algorithm backpropagation (MPGA-BP) neural network algorithm is put forward to optimize deformation prediction from periodic monitoring surveys of hydraulic structures. This hybrid model is employed to analyse the displacement of a gravity dam in China. The results show the proposed model is superior to an ordinary BP neural network and a statistical regression model in terms of global search, convergence speed, and prediction accuracy.
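
    A minimal sketch of the GA-BP idea, in which a genetic algorithm searches for good initial network weights before ordinary backpropagation fine-tunes them; the network size, fitness, and GA operators here are illustrative assumptions, and the paper's multiple-population variant would run several such populations with migration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mse(w, X, y, h=8):
        """Fitness: mean-squared error of a 1-hidden-layer net with weights w.

        Weight vector layout: W1 (n*h), b1 (h), W2 (h), b2 (1),
        so dim = n*h + 2*h + 1 for n inputs and h hidden units.
        """
        n = X.shape[1]
        W1 = w[:n * h].reshape(n, h); b1 = w[n * h:n * h + h]
        W2 = w[n * h + h:n * h + 2 * h]; b2 = w[-1]
        pred = np.tanh(X @ W1 + b1) @ W2 + b2
        return np.mean((pred - y) ** 2)

    def ga_init_weights(X, y, dim, pop=40, gens=100):
        """GA stage: evolve weight vectors, return the fittest as BP's start point."""
        P = rng.normal(0, 0.5, (pop, dim))
        for _ in range(gens):
            fit = np.array([mse(w, X, y) for w in P])
            elite = P[np.argsort(fit)[:pop // 2]]           # selection
            i, j = rng.integers(0, len(elite), (2, pop - len(elite)))
            kids = 0.5 * (elite[i] + elite[j])              # arithmetic crossover
            kids += rng.normal(0, 0.1, kids.shape)          # mutation
            P = np.vstack([elite, kids])
        return P[np.argmin([mse(w, X, y) for w in P])]      # best individual
    ```

    Backpropagation (e.g. plain gradient descent on the same MSE) would then start from the returned weights rather than from a random initialization.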

  9. Joining by plastic deformation

    DEFF Research Database (Denmark)

    Mori, Ken-ichiro; Bay, Niels; Fratini, Livan

    2013-01-01

    As the scale and complexity of products such as aircraft and cars increase, demand for new functional processes to join mechanical parts grows. The use of plastic deformation for joining parts potentially offers improved accuracy, reliability and environmental safety as well as creating opportuni......

  10. Applying genetic algorithms to set the optimal combination of forest fire related variables and model forest fire susceptibility based on data mining models. The case of Dayu County, China.

    Science.gov (United States)

    Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong

    2018-07-15

    The main objective of the present study was to utilize Genetic Algorithms (GA) in order to obtain the optimal combination of forest fire related variables and apply data mining methods for constructing a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) were used to produce a forest fire susceptibility map for Dayu County, which is located in the southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest fire related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network and distance to road network. The Natural Break and the Certainty Factor methods were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and decide on their usability. The optimal set of variables determined by the GA limited the number of variables to eight, excluding from the analysis aspect, land use, heat load index, distance to river network and mean annual rainfall. The performance of the forest fire models was evaluated using the area under the Receiver Operating Characteristic curve (ROC-AUC) based on the validation dataset. Overall, the RF models gave higher AUC values, and the results showed that the proposed optimized models outperform the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, though higher than the original SVM (0.7148). The study highlights the significance of feature selection techniques in forest fire susceptibility, and data mining methods can be considered a valid approach for forest fire susceptibility modeling.
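
    A hedged sketch of GA-driven feature selection with an AUC fitness, using scikit-learn's random forest; the population size, rates, and bit-mask encoding are illustrative choices, not the study's exact configuration (y is assumed to be binary fire/no-fire labels).

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)

    def fitness(mask, X, y):
        """AUC of a random forest trained on the feature subset encoded by mask."""
        if not mask.any():
            return 0.0
        Xtr, Xva, ytr, yva = train_test_split(X[:, mask], y, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
        return roc_auc_score(yva, clf.predict_proba(Xva)[:, 1])

    def ga_select(X, y, pop=20, gens=30, p_mut=0.1):
        """Evolve binary masks over the candidate variables; keep the best."""
        n = X.shape[1]
        P = rng.random((pop, n)) < 0.5                       # random bit masks
        for _ in range(gens):
            fit = np.array([fitness(m, X, y) for m in P])
            elite = P[np.argsort(fit)[-pop // 2:]]           # selection
            i, j = rng.integers(0, len(elite), (2, pop - len(elite)))
            cut = rng.integers(1, n)
            kids = np.where(np.arange(n) < cut, elite[i], elite[j])  # crossover
            kids ^= rng.random(kids.shape) < p_mut           # bit-flip mutation
            P = np.vstack([elite, kids])
        return P[np.argmax([fitness(m, X, y) for m in P])]   # best feature mask
    ```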

  11. Identification of Pou5f1, Sox2, and Nanog downstream target genes with statistical confidence by applying a novel algorithm to time course microarray and genome-wide chromatin immunoprecipitation data

    Directory of Open Access Journals (Sweden)

    Xin Li

    2008-06-01

    Full Text Available Abstract Background Target genes of the transcription factor (TF) Pou5f1 (Oct3/4 or Oct4), which is essential for pluripotency maintenance and self-renewal of embryonic stem (ES) cells, have previously been identified based on their response to Pou5f1 manipulation and the occurrence of chromatin immunoprecipitation (ChIP) binding sites in promoters. However, many responding genes with binding sites may not be direct targets, because the response may be mediated by other genes and a ChIP binding site may not be functional in terms of transcription regulation. Results To reduce the number of false positives, we propose to separate responding genes into groups according to direction, magnitude, and time of response, and to apply the false discovery rate (FDR) criterion to each group individually. Using this novel algorithm with stringent statistical criteria (FDR) on data from Pou5f1 suppression and published ChIP data, we identified 420 tentative target genes (TTGs) for Pou5f1. The majority of TTGs (372) were down-regulated after Pou5f1 suppression, indicating that Pou5f1 functions as an activator of gene expression when it binds to promoters. Interestingly, many activated genes are potent suppressors of transcription, including polycomb genes, zinc finger TFs, chromatin remodeling factors, and suppressors of signaling. Similar analysis showed that Sox2 and Nanog also function mostly as transcription activators in cooperation with Pou5f1. Conclusion We have identified the most reliable sets of direct target genes for the key pluripotency genes Pou5f1, Sox2, and Nanog, and found that they predominantly function as activators of downstream gene expression. Thus, most genes related to cell differentiation are suppressed indirectly.

  12. Characterizing volumetric deformation behavior of naturally occurring bituminous sand materials

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2009-05-01

    Full Text Available newly proposed hydrostatic compression test procedure. The test procedure applies field loading conditions of off-road construction and mining equipment to closely simulate the volumetric deformation and stiffness behaviour of oil sand materials. Based...

  13. Plate motions and deformations from geologic and geodetic data

    Science.gov (United States)

    Jordan, T. H.

    1986-06-01

    Research effort on behalf of the Crustal Dynamics Project focused on the development of methodologies suitable for the analysis of space-geodetic data sets for the estimation of crustal motions, in conjunction with results derived from land-based geodetic data, neo-tectonic studies, and other geophysical data. These methodologies were used to provide estimates of both global plate motions and intraplate deformation in the western U.S. Results from the satellite ranging experiment for the rate of change of the baseline length between San Diego and Quincy, California indicated that relative motion between the North American and Pacific plates over the course of the observing period from 1972 to 1982 was consistent with estimates calculated from geologic data averaged over the past few million years. This result, when combined with other kinematic constraints on western U.S. deformation derived from land-based geodesy, neo-tectonic studies, and other geophysical data, places limits on the possible extension of the Basin and Range province, and implies that significant deformation is occurring west of the San Andreas fault. A new methodology was developed to analyze vector-position space-geodetic data to provide estimates of relative vector motions of the observing sites. The algorithm is suitable for the reduction of large, inhomogeneous data sets; it takes into account the full position covariances and errors due to poorly resolved Earth orientation parameters and vertical positions, and reduces biases due to inhomogeneous sampling of the data. This methodology was applied to the problem of estimating the rate-scaling parameter of a global plate tectonic model using satellite laser ranging observations over a five-year interval. The results indicate that the mean rate of global plate motions for that interval is consistent with rates averaged over several million years, and is not consistent with quiescent or greatly accelerated plate motions. This methodology was also

  14. Texture and deformation mechanism of yttrium

    International Nuclear Information System (INIS)

    Adamesku, R.A.; Grebenkin, S.V.; Stepanenko, A.V.

    1992-01-01

    X-ray pole figure analysis was applied to study the texture and deformation mechanism of pure and commercial polycrystalline yttrium under cold working. It was found that in cast yttrium the texture manifested itself only weakly, for both pure and commercial metal. Analysis of the data obtained made it possible to assert that cold deformation of pure yttrium in the initial stage occurred mainly by slip, whose role decreased at strains higher than 36%. The texture of heavily deformed commercial yttrium contained two components: an 'ideal' basic orientation and an axial one with an angle of inclination of about 20 deg. A twinning mechanism was revealed to also be possible in commercial yttrium.

  15. Numerical Modeling of Subglacial Sediment Deformation

    DEFF Research Database (Denmark)

    Damsgaard, Anders

    2015-01-01

    may cause mass loss in the near future to exceed current best estimates. Ice flow in larger ice sheets focuses in fast-moving streams due to the mechanical non-linearity of ice. These ice streams often move at velocities several orders of magnitude larger than the surrounding ice and consequently constitute...... glaciers move by deforming their sedimentary beds. Several modern ice streams, in particular, move as plug flows due to basal sediment deformation. An intense and long-winded discussion about the appropriate description of subglacial sediment mechanics followed this discovery, with good reason...... incompatible with commonly accepted till rheology models. Variation in pore-water pressure proves to cause reorganization in the internal stress network and leads to slow creeping deformation. The rate of creep is non-linearly dependent on the applied stresses. Granular creep can explain slow glacial...

  16. Predicting Hot Deformation of AA5182 Sheet

    Science.gov (United States)

    Lee, John T.; Carpenter, Alexander J.; Jodlowski, Jakub P.; Taleff, Eric M.

    Aluminum 5000-series alloy sheet materials exhibit substantial ductilities at hot and warm temperatures, even when grain size is not particularly fine. The relatively high strain-rate sensitivity exhibited by these non-superplastic materials, when deforming under solute-drag creep, is a primary contributor to large tensile ductilities. This active deformation mechanism influences both plastic flow and microstructure evolution across conditions of interest for hot- and warm-forming. Data are presented from uniaxial tensile and biaxial bulge tests of AA5182 sheet material at elevated temperatures. These data are used to construct a material constitutive model for plastic flow, which is applied in finite-element-method (FEM) simulations of plastic deformation under multiaxial stress states. Simulation results are directly compared against experimental data to explore the usefulness of this constitutive model. The effects of temperature and stress state on plastic response and microstructure evolution are discussed.

  17. Seismic anisotropy in deforming salt bodies

    Science.gov (United States)

    Prasse, P.; Wookey, J. M.; Kendall, J. M.; Dutko, M.

    2017-12-01

    Salt is often involved in forming hydrocarbon traps, so studying salt dynamics and deformation processes is important for the exploration industry. We have performed numerical texture simulations of single halite crystals deformed by simple shear and axial extension using the visco-plastic self-consistent (VPSC) approach. A methodology from subduction studies for estimating strain in a geodynamic simulation is applied to a complex high-resolution salt diapir model. The salt diapir deformation is modelled with the ELFEN software of our industrial partner Rockfield, which is based on a finite-element code. High-strain areas at the bottom of the head-like structures of the salt diapir show a high degree of seismic anisotropy due to LPO development of the halite crystals. The results demonstrate that a significant degree of seismic anisotropy can be generated, validating the view that this should be accounted for in the treatment of seismic data in, for example, salt diapir settings.

  18. An Efficient Virtual Trachea Deformation Model

    Directory of Open Access Journals (Sweden)

    Cui Tong

    2016-01-01

    Full Text Available In this paper, we present a virtual tactile model with a physically based skeleton to simulate force and deformation between a rigid tool and a soft organ. When the virtual trachea is handled, a skeleton model suitable for interactive environments is established, consisting of ligament layers, cartilage rings and muscular bars. In this skeleton, the contact force passes through the ligament layer and produces load effects at the joints connecting the ligament layer and the cartilage rings. Because the shape deformation is nonlinear inside the local neighbourhood of a contact region, the RBF method is applied to modify the result of the linear global shape deformation by adding the nonlinear effect inside. Users are able to handle the virtual trachea, and results from examples with the mechanical properties of the human trachea are given to demonstrate the effectiveness of the approach.
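
    A hedged sketch of the RBF correction step, assuming scattered displacement residuals near the contact region are interpolated with SciPy's RBFInterpolator; the paper's kernel and parameters are not stated in the record.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def rbf_local_correction(nodes, linear_disp, contact_pts, contact_residuals):
        """Add a nonlinear local correction on top of a linear global deformation.

        nodes:             (N, 3) mesh node positions
        linear_disp:       (N, 3) displacements from the global linear model
        contact_pts:       (M, 3) sample points near the contact region
        contact_residuals: (M, 3) observed minus linear displacement at samples
        (Needs a handful of well-spread samples for the thin-plate fit.)
        """
        rbf = RBFInterpolator(contact_pts, contact_residuals,
                              kernel='thin_plate_spline')
        return nodes + linear_disp + rbf(nodes)   # corrected node positions
    ```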

  19. Deformations of vector-scalar models

    Science.gov (United States)

    Barnich, Glenn; Boulanger, Nicolas; Henneaux, Marc; Julia, Bernard; Lekeu, Victor; Ranjbar, Arash

    2018-02-01

    Abelian vector fields non-minimally coupled to uncharged scalar fields arise in many contexts. We investigate here through algebraic methods their consistent deformations ("gaugings"), i.e., the deformations that preserve the number (but not necessarily the form or the algebra) of the gauge symmetries. Infinitesimal consistent deformations are given by the BRST cohomology classes at ghost number zero. We parametrize explicitly these classes in terms of various types of global symmetries and corresponding Noether currents through the characteristic cohomology related to antifields and equations of motion. The analysis applies to all ghost numbers and not just ghost number zero. We also provide a systematic discussion of the linear and quadratic constraints on these parameters that follow from higher-order consistency. Our work is relevant to the gaugings of extended supergravities.

  20. a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning

    Science.gov (United States)

    Li, J.; Wan, Y.; Gao, X.

    2012-07-01

    With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other fields. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighbouring points and a surface-fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least-squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighbouring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm. A series of parallel sections obtained from a temporal series of fitted tunnel surfaces were then compared to analyse the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor approach, and the results in the x and y directions were compared with TS; the comparison showed accuracy errors in the x, y and z directions of about 1.5 mm, 2 mm and 1 mm respectively. Therefore the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.

  1. A NEW APPROACH FOR SUBWAY TUNNEL DEFORMATION MONITORING: HIGH-RESOLUTION TERRESTRIAL LASER SCANNING

    Directory of Open Access Journals (Sweden)

    J. Li

    2012-07-01

    Full Text Available With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other fields. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighbouring points and a surface-fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least-squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighbouring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm. A series of parallel sections obtained from a temporal series of fitted tunnel surfaces were then compared to analyse the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor approach, and the results in the x and y directions were compared with TS; the comparison showed accuracy errors in the x, y and z directions of about 1.5 mm, 2 mm and 1 mm respectively. Therefore the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.
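
    A hedged sketch of a moving-least-squares surface fit of the kind described, assuming a local quadric fitted in a Gaussian-weighted neighbourhood; the papers do not give their exact basis or weight function.

    ```python
    import numpy as np

    def mls_height(query_xy, pts_xy, pts_z, h=0.1):
        """Moving least squares: fit a weighted quadric around each query point.

        Weights fall off with distance (Gaussian, bandwidth h), so each fit is
        local; evaluating over a grid of query points reconstructs the surface.
        """
        out = np.empty(len(query_xy))
        for k, q in enumerate(query_xy):
            d2 = ((pts_xy - q) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2 * h * h))                  # local Gaussian weights
            x, y = (pts_xy - q).T                          # centre on the query
            A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
            Aw = A * w[:, None]
            coeffs = np.linalg.lstsq(Aw.T @ A, Aw.T @ pts_z, rcond=None)[0]
            out[k] = coeffs[0]                             # surface height at q
        return out
    ```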

  2. Extended Josephson Relation and Abrikosov lattice deformation

    International Nuclear Information System (INIS)

    Matlock, Peter

    2012-01-01

    From the point of view of time-dependent Ginzburg Landau (TDGL) theory, a Josephson-like relation is derived for an Abrikosov vortex lattice accelerated and deformed by applied fields. Beginning with a review of the Josephson Relation derived from the two ingredients of a lattice-kinematics assumption in TDGL theory and gauge invariance, we extend the construction to accommodate a time-dependent applied magnetic field, a Floating-Kernel formulation of normal current, and finally lattice deformation due to the electric field and inertial effects of vortex-lattice motion. The resulting Josephson-like relation, which we call an Extended Josephson Relation, applies to a much wider set of experimental conditions than the original Josephson Relation, and is explicitly compatible with the considerations of TDGL theory.

  3. Marginally Deformed Starobinsky Gravity

    DEFF Research Database (Denmark)

    Codello, A.; Joergensen, J.; Sannino, Francesco

    2015-01-01

    We show that quantum-induced marginal deformations of the Starobinsky gravitational action of the form $R^{2(1-\alpha)}$, with $R$ the Ricci scalar and $\alpha$ a positive parameter, smaller than one half, can account for the recent experimental observations by BICEP2 of primordial tensor modes....

  4. Transfer involving deformed nuclei

    International Nuclear Information System (INIS)

    Rasmussen, J.O.; Guidry, M.W.; Canto, L.F.

    1985-03-01

    Results are reviewed of 1- and 2-neutron transfer reactions at near-barrier energies for deformed nuclei. Rotational angular momentum and excitation patterns are examined. A strong tendency to populating high spin states within a few MeV of the yrast line is noted, and it is interpreted as preferential transfer to rotation-aligned states. 16 refs., 12 figs

  5. Advanced Curvature Deformable Mirrors

    Science.gov (United States)

    2010-09-01

    Christ Ftaclas, Aglae Kellerer and Mark Chun; Institute for Astronomy, University of Hawaii, 640 North A‘ohoku Place, #209, Hilo, HI 96720-2700.

  6. Deformations of free jets

    Science.gov (United States)

    Paruchuri, Srinivas

    This thesis studies three different problems. First we demonstrate that a flowing liquid jet can be controllably split into two separate subfilaments through the application of a sufficiently strong tangential stress to the surface of the jet. In contrast, normal stresses can never split a liquid jet. We apply these results to observations of uncontrolled splitting of jets in electric fields. The experimental realization of controllable jet splitting would provide an entirely novel route for producing small polymeric fibers. In the second chapter we present an analytical model for the bending of liquid jets and sheets by temperature gradients, as recently observed by Chwalek et al. [Phys. Fluids, 14, L37 (2002)]. The bending arises from a local couple caused by Marangoni forces. The dependence of the bending angle on experimental parameters is presented, in qualitative agreement with reported experiments. The methodology gives a simple framework for understanding the mechanisms of jet and sheet bending. In chapter 4 we address the discrepancy between the hydrodynamic theory of liquid jets and the snap-off of narrow liquid jets observed in molecular dynamics (MD) simulations [23]. This has been previously attributed to the significant role of thermal fluctuations in nanofluidic systems. We argue that the hydrodynamic description of such systems should include corrections to the Laplace pressure which result from the failure of the sharp-interface assumption when the jet diameter becomes small enough. We show that this effect can in principle give rise to jet shapes similar to those observed in MD simulations, even when thermal fluctuations are completely neglected. Finally we summarize an algorithm developed to simulate droplet impact on a smooth surface.

  7. Automated landmark-guided deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-07

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  8. Automated landmark-guided deformable image registration

    International Nuclear Information System (INIS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. (paper)
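
    A hedged sketch of the local small-volume gradient-matching step that pairs each planning-CT landmark with its CBCT counterpart; the window size, search radius, and gradient-correlation score are illustrative assumptions rather than the authors' exact settings.

    ```python
    import numpy as np

    def match_landmark(fixed, moving, center, win=8, search=6):
        """Find the moving-image voxel whose local gradient pattern best matches
        the fixed-image patch around `center` (z, y, x).

        Assumes landmarks lie well inside the volume so patches stay in bounds.
        """
        def patch(vol, c):
            z, y, x = c
            return vol[z-win:z+win, y-win:y+win, x-win:x+win]

        g_fix = np.stack(np.gradient(patch(fixed, center)))
        best, best_score = center, -np.inf
        for dz in range(-search, search + 1):
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    c = (center[0]+dz, center[1]+dy, center[2]+dx)
                    g_mov = np.stack(np.gradient(patch(moving, c)))
                    num = (g_fix * g_mov).sum()
                    den = np.linalg.norm(g_fix) * np.linalg.norm(g_mov) + 1e-9
                    score = num / den              # normalized gradient correlation
                    if score > best_score:
                        best, best_score = c, score
        return best   # matched landmark; used to stabilize the Demons step
    ```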

  9. Deformation Characteristics of Composite Structures

    Directory of Open Access Journals (Sweden)

    Theddeus T. AKANO

    2016-08-01

    Full Text Available Composites provide design flexibility because many of them can be moulded into complex shapes. Carbon fibre-reinforced epoxy composites exhibit excellent fatigue tolerance and high specific strength and stiffness, which have led to numerous advanced applications ranging from military and civil aircraft structures to consumer products. However, modelling beams undergoing arbitrarily large displacements and rotations, but small strains, is a common problem in the application of these engineering composite systems. This paper presents a nonlinear finite element model which is able to estimate the deformations of fibre-reinforced epoxy composite beams. The governing equations are based on the Euler-Bernoulli beam theory (EBBT) with a von Kármán type of kinematic nonlinearity. Anisotropic elasticity is employed for the material model of the composite material. Moreover, the characterization of the mechanical properties of the composite material is achieved through a tensile test, while a simple laboratory experiment is used to validate the model. The results reveal that the composite fibre orientation, the type of applied load and the boundary conditions affect the deformation characteristics of the composite structures. The nonlinearity is an important factor that should be taken into consideration in the analysis of fibre-reinforced epoxy composites.
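
    For context, the von Kármán kinematic nonlinearity referred to here is the standard moderate-rotation axial strain measure (a textbook relation, not reproduced from the paper), with u the axial displacement, w the transverse deflection and z the through-thickness coordinate:

    ```latex
    \varepsilon_{xx} = \frac{\partial u}{\partial x}
      + \frac{1}{2}\left(\frac{\partial w}{\partial x}\right)^{2}
      - z\,\frac{\partial^{2} w}{\partial x^{2}}
    ```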

  10. Dynamics of viscoplastic deformation in amorphous solids

    International Nuclear Information System (INIS)

    Falk, M.L.; Langer, J.S.

    1998-01-01

    We propose a dynamical theory of low-temperature shear deformation in amorphous solids. Our analysis is based on molecular-dynamics simulations of a two-dimensional, two-component noncrystalline system. These numerical simulations reveal behavior typical of metallic glasses and other viscoplastic materials, specifically, reversible elastic deformation at small applied stresses, irreversible plastic deformation at larger stresses, a stress threshold above which unbounded plastic flow occurs, and a strong dependence of the state of the system on the history of past deformations. Microscopic observations suggest that a dynamically complete description of the macroscopic state of this deforming body requires specifying, in addition to stress and strain, certain average features of a population of two-state shear transformation zones. Our introduction of these state variables into the constitutive equations for this system is an extension of earlier models of creep in metallic glasses. In the treatment presented here, we specialize to temperatures far below the glass transition and postulate that irreversible motions are governed by local entropic fluctuations in the volumes of the transformation zones. In most respects, our theory is in good quantitative agreement with the rich variety of phenomena seen in the simulations. copyright 1998 The American Physical Society

  11. Motion and deformation estimation from medical imagery by modeling sub-structure interaction and constraints

    KAUST Repository

    Sundaramoorthi, Ganesh; Hong, Byungwoo; Yezzi, Anthony J.

    2012-01-01

    of cardiac MRI which includes the detection of the left ventricle boundary and its deformation. The experimental results indicate the potential of the algorithm as an assistant tool for the quantitative analysis of cardiac functions in the diagnosis of heart

  12. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear elliptic partial differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
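
    A hedged sketch of the kind of multigrid V-cycle such surveys take as the baseline, here for the 1D Poisson problem with weighted-Jacobi smoothing; the smoother and transfer operators vary by implementation.

    ```python
    import numpy as np

    def v_cycle(u, f, nu=3, omega=2.0 / 3.0):
        """One V-cycle for -u'' = f on a uniform 1D grid with zero boundaries.

        Grid has n = 2^k - 1 interior points; h = 1/(n+1).
        """
        n = len(f)
        h2 = 1.0 / (n + 1) ** 2

        def jacobi(u, m):
            for _ in range(m):                    # weighted-Jacobi smoothing
                nb = np.r_[0.0, u[:-1]] + np.r_[u[1:], 0.0]
                u = (1 - omega) * u + omega * 0.5 * (nb + h2 * f)
            return u

        u = jacobi(u, nu)                         # pre-smooth
        if n > 1:
            nb = np.r_[0.0, u[:-1]] + np.r_[u[1:], 0.0]
            r = f - (2 * u - nb) / h2             # fine-grid residual
            rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full weighting
            ec = v_cycle(np.zeros_like(rc), rc, nu, omega)      # coarse solve
            e = np.empty(n)
            e[1::2] = ec                                        # direct injection
            e[0::2] = 0.5 * (np.r_[0.0, ec] + np.r_[ec, 0.0])   # linear interpolation
            u = u + e                             # coarse-grid correction
        return jacobi(u, nu)                      # post-smooth
    ```

    Repeated V-cycles reduce the residual by a roughly grid-independent factor per cycle, which is what makes the method attractive to parallelize.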

  13. Applied mechanics of solids

    CERN Document Server

    Bower, Allan F

    2009-01-01

    Modern computer simulations make stress analysis easy. As they continue to replace classical mathematical methods of analysis, these software programs require users to have a solid understanding of the fundamental principles on which they are based. Develop Intuitive Ability to Identify and Avoid Physically Meaningless Predictions Applied Mechanics of Solids is a powerful tool for understanding how to take advantage of these revolutionary computer advances in the field of solid mechanics. Beginning with a description of the physical and mathematical laws that govern deformation in solids, the text presents modern constitutive equations, as well as analytical and computational methods of stress analysis and fracture mechanics. It also addresses the nonlinear theory of deformable rods, membranes, plates, and shells, and solutions to important boundary and initial value problems in solid mechanics. The author uses the step-by-step manner of a blackboard lecture to explain problem solving methods, often providing...

  14. A virtual phantom library for the quantification of deformable image registration uncertainties in patients with cancers of the head and neck.

    Science.gov (United States)

    Pukala, Jason; Meeks, Sanford L; Staton, Robert J; Bova, Frank J; Mañon, Rafael R; Langen, Katja M

    2013-11-01

    Deformable image registration (DIR) is being used increasingly in various clinical applications. However, the underlying uncertainties of DIR are not well-understood and a comprehensive methodology has not been developed for assessing a range of interfraction anatomic changes during head and neck cancer radiotherapy. This study describes the development of a library of clinically relevant virtual phantoms for the purpose of aiding clinicians in the QA of DIR software. These phantoms will also be available to the community for the independent study and comparison of other DIR algorithms and processes. Each phantom was derived from a pair of kVCT volumetric image sets. The first images were acquired of head and neck cancer patients prior to the start-of-treatment and the second were acquired near the end-of-treatment. A research algorithm was used to autosegment and deform the start-of-treatment (SOT) images according to a biomechanical model. This algorithm allowed the user to adjust the head position, mandible position, and weight loss in the neck region of the SOT images to resemble the end-of-treatment (EOT) images. A human-guided thin-plate splines algorithm was then used to iteratively apply further deformations to the images with the objective of matching the EOT anatomy as closely as possible. The deformations from each algorithm were combined into a single deformation vector field (DVF) and a simulated end-of-treatment (SEOT) image dataset was generated from that DVF. Artificial noise was added to the SEOT images and these images, along with the original SOT images, created a virtual phantom where the underlying "ground-truth" DVF is known. Images from ten patients were deformed in this fashion to create ten clinically relevant virtual phantoms. The virtual phantoms were evaluated to identify unrealistic DVFs using the normalized cross correlation (NCC) and the determinant of the Jacobian matrix. A commercial deformation algorithm was applied to the virtual
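
    A hedged sketch of the Jacobian-determinant check used to flag unrealistic DVFs (non-positive determinants indicate folding); the finite-difference formulation and voxel-spacing handling here are illustrative assumptions.

    ```python
    import numpy as np

    def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
        """Determinant of the Jacobian of phi(x) = x + u(x) for a 3D DVF.

        dvf: (3, Z, Y, X) displacement components in physical units.
        Values <= 0 flag folding, i.e. physically implausible deformation.
        """
        J = np.empty(dvf.shape[1:] + (3, 3))
        for i in range(3):          # displacement component u_i
            for j in range(3):      # derivative direction x_j
                J[..., i, j] = np.gradient(dvf[i], spacing[j], axis=j)
                if i == j:
                    J[..., i, j] += 1.0      # identity part of phi = x + u
        return np.linalg.det(J)

    # Fraction of voxels where the simulated deformation folds:
    # fold_fraction = (jacobian_determinant(dvf) <= 0).mean()
    ```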

  15. Precise object tracking under deformation

    International Nuclear Information System (INIS)

    Saad, M.H

    2010-01-01

    The precise object tracking is an essential issue in several serious applications such as robot vision, automated surveillance (civil and military), inspection, biomedical image analysis, video coding, motion segmentation, human-machine interfaces, visualization, medical imaging, traffic systems, satellite imaging, etc. This framework focuses on precise object tracking under deformations such as scaling, rotation, noise, blurring and changes of illumination. This research is a trial to solve these serious problems in visual object tracking, by which the quality of the overall system will be improved. A three-dimensional (3D) geometrical model is developed to determine the current pose of an object and predict its future location based on an FIR model learned by the OLS. This framework presents a robust ranging technique to track a visual target instead of the traditional expensive ranging sensors. The presented research work is applied to a real video stream and achieves high-precision results.

  16. Localized deformation of zirconium-liner tube

    International Nuclear Information System (INIS)

    Nagase, Fumihisa; Uchida, Masaaki

    1988-03-01

    Zirconium-liner tube has come to be used in BWRs. The zirconium liner mitigates the localized stress produced by pellet-cladding interaction (PCI). In this study, simulating the ridging, stresses were applied to the inner surfaces of zirconium-liner tubes and Zircaloy-2 tubes, and the behavior of the zirconium liner was examined to investigate the mechanism and the extent of the effect. As a result of the examination, stress was concentrated especially at the edge of the deformed region, where the zirconium liner was highly deformed. Even after high stress was applied, the deformation of the Zircaloy part was small, since almost all of the concentrated stress was mitigated by the deformation of the zirconium liner. In addition, stress and strain distributions in the cross section of the specimen were calculated with the computer code FEMAXI-III. The results also showed that the zirconium liner mitigated the localized stress in the Zircaloy, although the affected zone was restricted to the region near the boundary between the zirconium liner and the Zircaloy. (author)

  17. Precise Object Tracking under Deformation

    International Nuclear Information System (INIS)

    Saad, M.H.

    2010-01-01

    The precise object tracking is an essential issue in several serious applications such as robot vision, automated surveillance (civil and military), inspection, biomedical image analysis, video coding, motion segmentation, human-machine interfaces, visualization, medical imaging, traffic systems, satellite imaging, etc. This framework focuses on precise object tracking under deformations such as scaling, rotation, noise, blurring and changes of illumination. This research is a trial to solve these serious problems in visual object tracking, by which the quality of the overall system will be improved. A three-dimensional (3D) geometrical model is developed to determine the current pose of an object and predict its future location based on an FIR model learned by the OLS. This framework presents a robust ranging technique to track a visual target instead of the traditional expensive ranging sensors. The presented research work is applied to a real video stream and achieves high-precision results.

  18. Adaptive switching gravitational search algorithm: an attempt to ...

    Indian Academy of Sciences (India)

    Nor Azlina Ab Aziz

    An adaptive gravitational search algorithm (GSA) that switches between synchronous and ... genetic algorithm (GA), bat-inspired algorithm (BA) and grey wolf optimizer (GWO). ...... heuristic with applications in applied electromagnetics. Prog.

  19. q-Deformed Kink solutions

    International Nuclear Information System (INIS)

    Lima, A.F. de

    2003-01-01

    The q-deformed kink of the λφ⁴ model is obtained via the normalisable ground-state eigenfunction of a fluctuation operator associated with the q-deformed hyperbolic functions. The kink mass, the bosonic zero-mode and the q-deformed potential in 1+1 dimensions are found. (author)

  20. Cosmetic and Functional Nasal Deformities

    Science.gov (United States)

    ... nasal complaints. Nasal deformity can be categorized as “cosmetic” or “functional.” Cosmetic deformity of the nose results in a less ... taste, nose bleeds and/or recurrent sinusitis. A cosmetic or functional nasal deformity may occur secondary to ...

  1. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updating promise to reduce this growth to V^(4/3).

  2. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  3. [Babies with cranial deformity].

    Science.gov (United States)

    Feijen, Michelle M W; Claessens, Edith A W M Habets; Dovens, Anke J Leenders; Vles, Johannes S; van der Hulst, Rene R W J

    2009-01-01

    Plagiocephaly was diagnosed in a baby aged 4 months and brachycephaly in a baby aged 5 months. Positional or deformational plagio- or brachycephaly is characterized by changes in shape and symmetry of the cranial vault. Treatment options are conservative and may include physiotherapy and helmet therapy. During the last two decades the incidence of positional plagiocephaly has increased in the Netherlands. This increase is due to the recommendation that babies be laid on their backs in order to reduce the risk of sudden infant death syndrome. We suggest the following: in cases of positional preference of the infant, referral to a physiotherapist is indicated. In cases of unacceptable deformity of the cranium at the age of 5 months, moulding helmet therapy is a possible treatment option.

  4. Deformed supersymmetric mechanics

    International Nuclear Information System (INIS)

    Ivanov, E.; Sidorov, S.

    2013-01-01

    Motivated by a recent interest in curved rigid supersymmetries, we construct a new type of N = 4, d = 1 supersymmetric systems by employing superfields defined on the cosets of the supergroup SU(2|1). The relevant worldline supersymmetry is a deformation of the standard N = 4, d = 1 supersymmetry by a mass parameter m. As instructive examples we consider at the classical and quantum levels the models associated with the supermultiplets (1,4,3) and (2,4,2) and find out interesting interrelations with some previous works on nonstandard d = 1 supersymmetry. In particular, the d = 1 systems with 'weak supersymmetry' are naturally reproduced within our SU(2|1) superfield approach as a subclass of the (1,4,3) models. A generalization to the N = 8, d = 1 case implies the supergroup SU(2|2) as a candidate deformed worldline supersymmetry

  5. Deformation Theory ( Lecture Notes )

    Czech Academy of Sciences Publication Activity Database

    Doubek, M.; Markl, Martin; Zima, P.

    2007-01-01

    Roč. 43, č. 5 (2007), s. 333-371 ISSN 0044-8753. [Winter School Geometry and Physics/27./. Srní, 13.01.2007-20.01.2007] R&D Projects: GA ČR GA201/05/2117 Institutional research plan: CEZ:AV0Z10190503 Keywords: deformation * Maurer-Cartan equation * strongly homotopy Lie algebra Subject RIV: BA - General Mathematics

  6. Deformations of fractured rock

    International Nuclear Information System (INIS)

    Stephansson, O.

    1977-09-01

    Results of the DBM and FEM analysis in this study indicate that a suitable rock mass for a repository of radioactive waste should be moderately jointed (about 1 joint/m²) and surrounded by shear zones of the first order. This allows for gentle and flexible deformation under tectonic stresses and prevents the development of large cross-cutting failures in the repository area. (author)

  7. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal

    International Nuclear Information System (INIS)

    Burnett, Stuart S.C.; Starkschall, George; Stevens, Craig W.; Liao Zhongxing

    2004-01-01

    Because of the importance of accurately defining the target in radiation treatment planning, we have developed a deformable-template algorithm for the semi-automatic delineation of normal tissue structures on computed tomography (CT) images. We illustrate the method by applying it to the spinal canal. Segmentation is performed in three steps: (a) partial delineation of the anatomic structure is obtained by wavelet-based edge detection; (b) a deformable-model template is fitted to the edge set by chamfer matching; and (c) the template is relaxed away from its original shape into its final position. Appropriately chosen ranges for the model parameters limit the deformations of the template, accounting for interpatient variability. Our approach differs from those used in other deformable models in that it does not inherently require the modeling of forces. Instead, the spinal canal was modeled using Fourier descriptors derived from four sets of manually drawn contours. Segmentation was carried out, without manual intervention, on five CT data sets and the algorithm's performance was judged subjectively by two radiation oncologists. Two assessments were considered: in the first, segmentation on a random selection of 100 axial CT images was compared with the corresponding contours drawn manually by one of six dosimetrists, also chosen randomly; in the second assessment, the segmentation of each image in the five evaluable CT sets (a total of 557 axial images) was rated as either successful, unsuccessful, or requiring further editing. Contours generated by the algorithm were more likely than manually drawn contours to be considered acceptable by the oncologists. The mean proportions of acceptable contours were 93% (automatic) and 69% (manual). Automatic delineation of the spinal canal was deemed to be successful on 91% of the images, unsuccessful on 2% of the images, and requiring further editing on 7% of the images. Our deformable template algorithm thus gives a robust

  8. Compensation of some time dependent deformations in two dimensional (2D) tomography

    International Nuclear Information System (INIS)

    Desbat, L.; Roux, S.; Grangeat, P.

    2005-01-01

    This work is a contribution to motion compensation in tomography. It has been shown that much more general deformations than affine transforms can be analytically compensated in dynamic tomography. The class of deformations that transform a parallel projection geometry into another parallel projection geometry, or a divergent projection geometry into another divergent geometry, has been considered. Among these deformations, it has been shown that those involving only an affine deformation along each line (this affine deformation can vary from line to line) can be efficiently compensated analytically, i.e., within an F.B.P. algorithm. This class of deformations is much larger than the very small class of affine deformations and allows for more local deformation possibilities. Deformations from this class can be written as a composition of an affine transform and deformations that can be compensated with a weighting and re-binning step; the admissibility conditions and the F.B.P. algorithm are the same as those given. (N.C.)

  9. Compensation of some time dependent deformations in two dimensional (2D) tomography

    Energy Technology Data Exchange (ETDEWEB)

    Desbat, L. [Universite Joseph Fourier, UMR CNRS 5525, 38 - Grenoble (France); Roux, S. [Universite Joseph Fourier, TIMC-IMAG, In3S, Faculte de Medecine, 38 - Grenoble (France)]|[CEA Grenoble, Lab. d' Electronique et de Technologie de l' Informatique (LETI), 38 (France); Grangeat, P. [CEA Grenoble, Lab. d' Electronique et de Technologie de l' Informatique (LETI), 38 (France)

    2005-07-01

    This work is a contribution to motion compensation in tomography. It has been shown that much more general deformations than affine transforms can be analytically compensated in dynamic tomography. The class of deformations that transform a parallel projection geometry into another parallel projection geometry, or a divergent projection geometry into another divergent geometry, has been considered. Among these deformations, it has been shown that those involving only an affine deformation along each line (this affine deformation can vary from line to line) can be efficiently compensated analytically, i.e., within an F.B.P. algorithm. This class of deformations is much larger than the very small class of affine deformations and allows for more local deformation possibilities. Deformations from this class can be written as a composition of an affine transform and deformations that can be compensated with a weighting and re-binning step; the admissibility conditions and the F.B.P. algorithm are the same as those given. (N.C.)

  10. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS vehicle, though it is also applicable to two-wheel-steering (TWS vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
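
    The kinematic step described above reduces to simple geometry once the road's center of curvature is known. A minimal sketch, assuming a 4WS bicycle-model approximation (the function name and axle distances are illustrative, not the authors' implementation):

```python
import math

def four_ws_steering(a, b, R):
    """Front/rear steering angles (rad) that place the kinematic center of
    rotation at lateral distance R (the road curvature radius) from the
    vehicle, for axles at distances a (front) and b (rear) from the CG."""
    delta_front = math.atan2(a, R)   # front wheels steer toward the center
    delta_rear = -math.atan2(b, R)   # rear wheels counter-steer (4WS)
    return delta_front, delta_rear

# Example: wheelbase split 1.2 m / 1.6 m, road curvature radius 50 m
df, dr = four_ws_steering(1.2, 1.6, 50.0)
print(f"front: {math.degrees(df):.2f} deg, rear: {math.degrees(dr):.2f} deg")
```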

  11. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
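
    As an illustration of the wavelet fusion compared above, the following sketch uses the PyWavelets library to fuse two co-registered images by averaging the coarse approximation and keeping the maximum-magnitude detail coefficients; the wavelet family, decomposition level and fusion rules are assumptions, not necessarily the report's exact choices:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two co-registered grayscale images in the wavelet domain."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = [(ca[0] + cb[0]) / 2.0]  # average the coarse approximation
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)
```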

  12. Deformation strain inhomogeneity in columnar grain nickel

    DEFF Research Database (Denmark)

    Wu, G.L.; Godfrey, A.; Juul Jensen, D.

    2005-01-01

    A method is presented for determination of the local deformation strain of individual grains in the bulk of a columnar grain sample. The method, based on measurement of the change in grain area of each grain, is applied to 12% cold rolled nickel. Large variations are observed in the local strain associated with each grain. (c) 2005 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  13. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  14. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  15. Dealing with difficult deformations: Construction of a knowledge-based deformation atlas

    DEFF Research Database (Denmark)

    Thorup, Signe Strann; Darvann, T.A.; Hermann, N.V.

    2010-01-01

    from pre- to post-surgery using thin-plate spline warping. The registration results are convincing and represent a first move towards an automatic registration method for dealing with difficult deformations due to this type of surgery. New or breakthrough work to be presented: The method provides...... was needed. We have previously demonstrated that non-rigid registration using B-splines is able to provide automated determination of point correspondences in populations of infants without cleft lip. However, this type of registration fails when applied to the task of determining the complex deformation...

  16. Mapping ground surface deformation using temporarily coherent point SAR interferometry: Application to Los Angeles Basin

    Science.gov (United States)

    Zhang, L.; Lu, Zhong; Ding, X.; Jung, H.-S.; Feng, G.; Lee, C.-W.

    2012-01-01

    Multi-temporal interferometric synthetic aperture radar (InSAR) is an effective tool to detect long-term seismotectonic motions by reducing the atmospheric artifacts, thereby providing more precise deformation signals. The commonly used approaches such as persistent scatterer InSAR (PSInSAR) and small baseline subset (SBAS) algorithms need to resolve the phase ambiguities in interferogram stacks either by searching a predefined solution space or by sparse phase unwrapping methods; however, the efficiency and the success of phase unwrapping cannot be guaranteed. We present here an alternative approach – temporarily coherent point (TCP) InSAR (TCPInSAR) – to estimate the long-term deformation rate without the need of phase unwrapping. The proposed approach has a series of innovations including TCP identification, the TCP network and the TCP least squares estimator. We apply the proposed method to the Los Angeles Basin in southern California, where structurally active faults are believed capable of generating damaging earthquakes. The analysis is based on 55 interferograms from 32 ERS-1/2 images acquired between Oct. 1995 and Dec. 2000. To evaluate the performance of TCPInSAR on a small set of observations, a test with half of the interferometric pairs is also performed. The retrieved TCPInSAR measurements have been validated by a comparison with GPS observations from the Southern California Integrated GPS Network. Our result presents a similar deformation pattern as shown in past InSAR studies but with a smaller average standard deviation (4.6 mm) compared with GPS observations, indicating that TCPInSAR is a promising alternative for efficiently mapping ground deformation even from a relatively small set of interferograms.

  17. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  18. Study of beryllium microplastic deformation

    International Nuclear Information System (INIS)

    Papirov, I.I.; Ivantsov, V.I.; Nikolaenko, A.A.; Shokurov, V.S.; Tuzov, Yu.V.

    2015-01-01

    Microplastic flow characteristics were systematically studied for different varieties of beryllium. In isostatically pressed beryllium the microplastic flow decreased with increasing particle size of the powder, increasing temperature and increasing purity of the pressed metal. High initial values of the microelasticity and microflow limits are in some cases due to a high level of internal stresses of thermal origin, which can relax slowly over time. During long-term storage of beryllium materials with high initial resistance to microplastic deformation, the microflow limit and the microflow stress decrease markedly, mainly because of the relaxation of thermal microstrains.

  19. Viscoelastic deformation of lipid bilayer vesicles†

    Science.gov (United States)

    Wu, Shao-Hua; Sankhagowit, Shalene; Biswas, Roshni; Wu, Shuyang; Povinelli, Michelle L.

    2015-01-01

    Lipid bilayers form the boundaries of the cell and its organelles. Many physiological processes, such as cell movement and division, involve bending and folding of the bilayer at high curvatures. Currently, bending of the bilayer is treated as an elastic deformation, such that its stress-strain response is independent of the rate at which bending strain is applied. We present here the first direct measurement of viscoelastic response in a lipid bilayer vesicle. We used a dual-beam optical trap (DBOT) to stretch 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) giant unilamellar vesicles (GUVs). Upon application of a step optical force, the vesicle membrane deforms in two regimes: a fast, instantaneous area increase, followed by a much slower stretching to an eventual plateau deformation. From measurements of dozens of GUVs, the average time constant of the slower stretching response was 0.225 ± 0.033 s (standard deviation, SD). Increasing the fluid viscosity did not affect the observed time constant. We performed a set of experiments to rule out heating by laser absorption as a cause of the transient behavior. Thus, we demonstrate here that the bending deformation of lipid bilayer membranes should be treated as viscoelastic. PMID:26268612

  20. Viscoelastic deformation of lipid bilayer vesicles.

    Science.gov (United States)

    Wu, Shao-Hua; Sankhagowit, Shalene; Biswas, Roshni; Wu, Shuyang; Povinelli, Michelle L; Malmstadt, Noah

    2015-10-07

    Lipid bilayers form the boundaries of the cell and its organelles. Many physiological processes, such as cell movement and division, involve bending and folding of the bilayer at high curvatures. Currently, bending of the bilayer is treated as an elastic deformation, such that its stress-strain response is independent of the rate at which bending strain is applied. We present here the first direct measurement of viscoelastic response in a lipid bilayer vesicle. We used a dual-beam optical trap (DBOT) to stretch 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) giant unilamellar vesicles (GUVs). Upon application of a step optical force, the vesicle membrane deforms in two regimes: a fast, instantaneous area increase, followed by a much slower stretching to an eventual plateau deformation. From measurements of dozens of GUVs, the average time constant of the slower stretching response was 0.225 ± 0.033 s (standard deviation, SD). Increasing the fluid viscosity did not affect the observed time constant. We performed a set of experiments to rule out heating by laser absorption as a cause of the transient behavior. Thus, we demonstrate here that the bending deformation of lipid bilayer membranes should be treated as viscoelastic.
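
    The two-regime response reported above lends itself to a simple saturating-exponential fit for the slow time constant. A minimal sketch on synthetic data (the model form and numbers are stand-ins for the DBOT measurements, not the authors' analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit

def creep(t, a_fast, a_slow, tau):
    """Instantaneous area step plus slow exponential approach to a plateau."""
    return a_fast + a_slow * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 2.0, 200)
y = creep(t, 0.01, 0.02, 0.225) + np.random.normal(0.0, 5e-4, t.size)

(a_fast, a_slow, tau), _ = curve_fit(creep, t, y, p0=(0.01, 0.01, 0.1))
print(f"fitted slow time constant: {tau:.3f} s")  # ~0.225 s, as reported
```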

  1. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  2. Applied physics

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    The Physics Division research program that is dedicated primarily to applied research goals involves the interaction of energetic particles with solids. This applied research is carried out in conjunction with the basic research studies from which it evolved

  3. Control of cooperative manipulators in holding deformable objects

    Science.gov (United States)

    Alkathiri, A. A.; Azlan, N. Z.

    2017-11-01

    This paper presents the implementation of a control system for cooperative manipulators holding deformable objects. The aim is to hold the deformable object without prior information on its shape and stiffness. A prototype pair of manipulators has been designed and built to test the controller. A force sensor and a rotary encoder are used to give feedback to the controller, which drives the DC motor actuators accordingly. A position proportional-integral-derivative (PID) control technique has been applied to one of the manipulators and a PID force control technique to the other. Simulations and experimental tests have been conducted on models, and the controller has been implemented on the real plant. Both simulation and test results prove that the implemented control technique successfully provides the desired position and force to hold the deformable object, with maximum experimental errors of 0.34 mm and 50 mN, respectively.
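
    The scheme above pairs a position PID loop on one manipulator with a force PID loop on the other. A minimal discrete PID sketch (the gains and update rate are placeholders, not the authors' tuning):

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical loops: position control on arm 1, force control on arm 2.
position_pid = PID(kp=8.0, ki=1.5, kd=0.2, dt=1e-3)
force_pid = PID(kp=4.0, ki=0.8, kd=0.05, dt=1e-3)
```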

  4. Deformable image registration for cone-beam CT guided transoral robotic base-of-tongue surgery

    International Nuclear Information System (INIS)

    Reaungamornrat, S; Liu, W P; Otake, Y; Uneri, A; Siewerdsen, J H; Taylor, R H; Wang, A S; Nithiananthan, S; Schafer, S; Tryggestad, E; Richmon, J; Sorger, J M

    2013-01-01

    Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base-of-tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam computed tomography (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e. volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC) and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid and Demons steps was 4.6, 2.1 and 1.7 mm, respectively. The respective ECC was 0.57, 0.70 and 0.73, and NPMI was 0.46, 0.57 and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. Since the method does not use image intensities directly, it is suitable to multi-modality registration of preoperative CT or MR with intraoperative CBCT. Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to
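
    For the Demons refinement stage, a generic intensity-based sketch using SimpleITK is given below. The paper applies Demons to distance-map transforms of the GM-registered images, so the file names and parameter values here are illustrative assumptions only:

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("intraop_cbct.nii.gz", sitk.sitkFloat32)       # hypothetical
moving = sitk.ReadImage("preop_ct_after_gm.nii.gz", sitk.sitkFloat32)  # hypothetical

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)  # Gaussian regularization of the field

field = demons.Execute(fixed, moving)  # dense displacement field image
transform = sitk.DisplacementFieldTransform(sitk.Cast(field, sitk.sitkVectorFloat64))
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```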

  5. Toward the development of intrafraction tumor deformation tracking using a dynamic multi-leaf collimator

    Energy Technology Data Exchange (ETDEWEB)

    Ge, Yuanyuan; O’Brien, Ricky T.; Shieh, Chun-Chien; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, University of Sydney, NSW 2006 (Australia); Booth, Jeremy T. [Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, NSW 2065 (Australia)

    2014-06-15

    Purpose: Intrafraction deformation limits targeting accuracy in radiotherapy. Studies show tumor deformation of over 10 mm for both single tumor deformation and system deformation (due to differential motion between primary tumors and involved lymph nodes). Such deformation cannot be adapted to with current radiotherapy methods. The objective of this study was to develop and experimentally investigate the ability of a dynamic multi-leaf collimator (DMLC) tracking system to account for tumor deformation. Methods: To compensate for tumor deformation, the DMLC tracking strategy is to warp the planned beam aperture directly to conform to the new tumor shape based on real-time tumor deformation input. Two deformable phantoms that correspond to a single tumor and a tumor system were developed. The planar deformations derived from the phantom images in beam's eye view were used to guide the aperture warping. In-house deformable image registration software was developed to automatically trigger the registration once a new target image was acquired and send the computed deformation to the DMLC tracking software. Because the registration speed is not fast enough to implement the experiment in a real-time manner, the phantom deformation proceeded to the next position only after registration of the current deformation position was completed. The deformation tracking accuracy was evaluated by a geometric target coverage metric defined as the sum of the area incorrectly outside and inside the ideal aperture. The individual contributions from the deformable registration algorithm and the finite leaf width to the tracking uncertainty were analyzed. A clinical proof-of-principle experiment of deformation tracking using previously acquired MR images of a lung cancer patient was implemented to represent the MRI-Linac environment. Intensity-modulated radiation therapy (IMRT) treatment delivered with enabled deformation tracking was simulated and demonstrated. Results: The first

  6. Complex structure-induced deformations of σ-models

    Energy Technology Data Exchange (ETDEWEB)

    Bykov, Dmitri [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut,Am Mühlenberg 1, D-14476 Potsdam-Golm (Germany); Steklov Mathematical Institute of Russ. Acad. Sci.,Gubkina str. 8, 119991 Moscow (Russian Federation)

    2017-03-24

    We describe a deformation of the principal chiral model (with an even-dimensional target space G) by a B-field proportional to the Kähler form on the target space. The equations of motion of the deformed model admit a zero-curvature representation. As a simplest example, we consider the case of G=S¹×S³. We also apply a variant of the construction to a deformation of the AdS₃×S³×S¹ (super-)σ-model.

  7. New design deforming controlling system of the active stressed lap

    Science.gov (United States)

    Ying, Li; Wang, Daxing

    2008-07-01

    A 450 mm diameter active stressed lap was developed at NIAOT by 2003, and a new lap was designed in 2007. This paper puts emphasis on introducing the new deformation control system of the lap. Aimed at the control characteristics of the lap, a new kind of digital deformation controller is designed. The controller consists of 3 parts: computer signal processing, motor driving and force sensor signal processing. An intelligent numerical PID method is applied in the controller instead of traditional PID. In the end, the results of the new deformation are given.

  8. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
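
    The bang-off-bang parameterization can be illustrated in one dimension: search over the duration of the initial full-thrust burn (fuel grows with burn time) for the shortest burn that builds a safe miss distance by the predicted collision time. A hypothetical sketch of the idea, not the flight algorithm:

```python
import numpy as np

def min_burn_time(a_max, t_collision, d_safe, n=1000):
    """Shortest initial burn (burn, then coast) reaching d_safe in time."""
    for t1 in np.linspace(0.0, t_collision, n):
        # displacement at t_collision after burning for t1, then coasting
        d = 0.5 * a_max * t1**2 + a_max * t1 * (t_collision - t1)
        if d >= d_safe:
            return t1
    return None  # not avoidable within the acceleration limit

print(min_burn_time(a_max=0.1, t_collision=60.0, d_safe=50.0))
```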

  9. Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics

    International Nuclear Information System (INIS)

    Novotny, M.A.

    1995-01-01

    A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms
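
    In its lowest order the method reduces to the rejection-free n-fold way update: select an event with probability proportional to its rate and advance time by an exponentially distributed increment. A minimal sketch with placeholder rates:

```python
import math
import random

def nfold_step(rates, t):
    """One rejection-free update: returns (chosen event index, new time)."""
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):  # pick event i with probability rate/total
        acc += rate
        if r < acc:
            break
    t += -math.log(random.random()) / total  # exponential waiting time
    return i, t
```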

  10. Nuclear fuel deformation phenomena

    International Nuclear Information System (INIS)

    Van Brutzel, L.; Dingreville, R.; Bartel, T.J.

    2015-01-01

    Nuclear fuel encounters severe thermomechanical environments. Its mechanical response is profoundly influenced by an underlying heterogeneous microstructure but also inherently dependent on the temperature and stress level histories. The ability to adequately simulate the response of such microstructures, to elucidate the associated macroscopic response in such extreme environments is crucial for predicting both performance and transient fuel mechanical responses. This chapter discusses key physical phenomena and the status of current modelling techniques to evaluate and predict fuel deformations: creep, swelling, cracking and pellet-clad interaction. This chapter only deals with nuclear fuel; deformations of cladding materials are discussed elsewhere. An obvious need for a multi-physics and multi-scale approach to develop a fundamental understanding of properties of complex nuclear fuel materials is presented. The development of such advanced multi-scale mechanistic frameworks should include either an explicit (domain decomposition, homogenisation, etc.) or implicit (scaling laws, hand-shaking,...) linkage between the different time and length scales involved, in order to accurately predict the fuel thermomechanical response for a wide range of operating conditions and fuel types (including Gen-IV and TRU). (authors)

  11. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy.

    Science.gov (United States)

    Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A

    2013-02-01

    Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight parameters were determined

  12. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    International Nuclear Information System (INIS)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-01-01

    Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors’ unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight

  13. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A. [Department of Radiotherapy, Academic Medical Center, Meiberdreef 9, 1105 AZ Amsterdam (Netherlands); Bondar, L.; Zolnay, A. G.; Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2013-02-15

    Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight
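
    The thin-plate-spline mapping at the core of S-TPS-RPM can be sketched with SciPy's RBF interpolator. This shows only the generic TPS warp under assumed point correspondences, not the weighted symmetric algorithm with soft correspondence estimation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

src = np.random.rand(50, 3)                # source surface points (e.g., bladder)
dst = src + 0.05 * np.random.randn(50, 3)  # corresponding deformed points

# One vector-valued TPS interpolant; `smoothing` acts as a stiffness /
# flexibility control (larger values give a stiffer, smoother deformation).
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=1e-3)

query = np.random.rand(10, 3)              # any points to push through the warp
warped = tps(query)
```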

  14. The strain accommodation in Ti–28Nb–12Ta–5Zr alloy during warm deformation

    International Nuclear Information System (INIS)

    Farghadany, E.; Zarei-Hanzaki, A.; Abedi, H.R.; Dietrich, D.; Lampke, T.

    2014-01-01

    The warm deformation behavior of a β-type Ti alloy composed of Ti–27.96Nb–11.97Ta–5.02Zr wt% (the so-called TNTZ alloy) has been investigated in the present work at warm deformation temperatures. A variety of deformation features are characterized in the material microstructure after the applied warm deformation scheme. The XRD analysis confirms an enhancement in the martensite volume fraction. Electron backscatter diffraction (EBSD) elucidates that the martensite has mainly formed laterally in the vicinity of different types of deformation bands. Both of the well-known twinning systems in the TNTZ series were activated during deformation. Micro-shear bands, defined as regions of highly concentrated plastic strain, are characterized in the deformed microstructure. The micro-shear bands form most severely in the regions that accommodate the largest amount of applied strain.

  15. Deformable image registration for image guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Cassetta, Roberto; Riboldi, Marco; Baroni, Guido; Leandro, Kleber; Novaes, Paulo Eduardo; Goncalves, Vinicius; Sakuraba, Roberto; Fattori, Giovanni

    2016-01-01

    In this study, we present a CT to CBCT deformable registration method based on the ITK library. An algorithm was developed to exploit the soft tissue information of the CT-CBCT images to perform deformable image registration (DIR), making an effort to overcome the poor signal-to-noise ratio and HU calibration issues that limit CBCT use for treatment planning purposes. Warped CT images and contours were generated, and their impact on adaptive radiotherapy was evaluated by DVH analysis for photon and proton treatments. Considerable discrepancies relative to the treatment planning dose distribution may be found due to changes in the patient's anatomy. (author)

  16. A deformable head and neck phantom with in-vivo dosimetry for adaptive radiotherapy quality assurance

    Energy Technology Data Exchange (ETDEWEB)

    Graves, Yan Jiang [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037-0843 and Department of Physics, University of California San Diego, La Jolla, California 92093 (United States); Smith, Arthur-Allen; Mcilvena, David; Manilay, Zherrina; Lai, Yuet Kong [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, California 92093 (United States); Rice, Roger; Mell, Loren; Cerviño, Laura, E-mail: lcervino@ucsd.edu, E-mail: steve.jiang@utsouthwestern.edu [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037-0843 (United States); Jia, Xun; Jiang, Steve B., E-mail: lcervino@ucsd.edu, E-mail: steve.jiang@utsouthwestern.edu [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037-0843 and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas 75235 (United States)

    2015-04-15

    Purpose: Patients’ interfractional anatomic changes can compromise the initial treatment plan quality. To overcome this issue, adaptive radiotherapy (ART) has been introduced. Deformable image registration (DIR) is an important tool for ART and several deformable phantoms have been built to evaluate the algorithms’ accuracy. However, there is a lack of deformable phantoms that can also provide dosimetric information to verify the accuracy of the whole ART process. The goal of this work is to design and construct a deformable head and neck (HN) ART quality assurance (QA) phantom with in vivo dosimetry. Methods: An axial slice of a HN patient is taken as a model for the phantom construction. Six anatomic materials are considered, with HU numbers similar to a real patient. A filled balloon inside the phantom tissue is inserted to simulate tumor. Deflation of the balloon simulates tumor shrinkage. Nonradiopaque surface markers, which do not influence DIR algorithms, provide the deformation ground truth. Fixed and movable holders are built in the phantom to hold a diode for dosimetric measurements. Results: The measured deformations at the surface marker positions can be compared with deformations calculated by a DIR algorithm to evaluate its accuracy. In this study, the authors selected a Demons algorithm as a DIR algorithm example for demonstration purposes. The average error magnitude is 2.1 mm. The point dose measurements from the in vivo diode dosimeters show a good agreement with the calculated doses from the treatment planning system with a maximum difference of 3.1% of prescription dose, when the treatment plans are delivered to the phantom with original or deformed geometry. Conclusions: In this study, the authors have presented the functionality of this deformable HN phantom for testing the accuracy of DIR algorithms and verifying the ART dosimetric accuracy. The authors’ experiments demonstrate the feasibility of this phantom serving as an end

  17. Neutron halo in deformed nuclei

    International Nuclear Information System (INIS)

    Zhou Shangui; Meng Jie; Ring, P.; Zhao Enguang

    2010-01-01

    Halo phenomena in deformed nuclei are investigated within a deformed relativistic Hartree Bogoliubov (DRHB) theory. These weakly bound quantum systems present interesting examples for the study of the interdependence between the deformation of the core and the particles in the halo. Contributions of the halo, deformation effects, and large spatial extensions of these systems are described in a fully self-consistent way by the DRHB equations in a spherical Woods-Saxon basis with the proper asymptotic behavior at a large distance from the nuclear center. Magnesium and neon isotopes are studied and detailed results are presented for the deformed neutron-rich and weakly bound nucleus ⁴⁴Mg. The core of this nucleus is prolate, but the halo has a slightly oblate shape. This indicates a decoupling of the halo orbitals from the deformation of the core. The generic conditions for the occurrence of this decoupling effect are discussed.

  18. Rotary deformity in degenerative spondylolisthesis

    International Nuclear Information System (INIS)

    Kang, Sung Gwon; Kim, Jeong; Kho, Hyen Sim; Yun, Sung Su; Oh, Jae Hee; Byen, Ju Nam; Kim, Young Chul

    1994-01-01

    We conducted this study to determine whether degenerative spondylolisthesis involves rotary deformity in addition to forward displacement. We statistically analyzed the difference in rotary deformity between a study group of 31 patients with symptomatic degenerative spondylolisthesis and a control group of 31 subjects without any symptoms. We also reviewed CT findings in 15 of the study patients. The mean rotary deformity in the study group was 6.1 degrees (standard deviation 5.20), and the mean rotary deformity in the control group was 2.52 degrees (standard deviation 2.16) (p < 0.01). Rotary deformity can accompany degenerative spondylolisthesis. We may consider rotary deformity as a cause of symptomatic degenerative spondylolisthesis in cases where no other cause is detected.
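
    As a quick plausibility check (not taken from the paper), the reported group means and standard deviations can be fed to a two-sample t-test on summary statistics, which indeed yields p < 0.01:

```python
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=6.10, std1=5.20, nobs1=31,  # symptomatic spondylolisthesis group
    mean2=2.52, std2=2.16, nobs2=31,  # asymptomatic control group
    equal_var=False)                  # Welch's t-test for unequal variances
print(f"t = {t_stat:.2f}, p = {p_value:.5f}")
```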

  19. New Methodology for Optimal Flight Control using Differential Evolution Algorithms applied on the Cessna Citation X Business Aircraft – Part 2. Validation on Aircraft Research Flight Level D Simulator

    OpenAIRE

    Yamina BOUGHARI; Georges GHAZI; Ruxandra Mihaela BOTEZ; Florian THEEL

    2017-01-01

    In this paper the Cessna Citation X clearance criteria were evaluated for a new Flight Controller. The Flight Control Laws were optimized and designed for the Cessna Citation X flight envelope by combining the Differential Evolution algorithm, the Linear Quadratic Regulator method, and the Proportional Integral controller during previous research presented in Part 1. The optimal controllers were used to reach satisfactory aircraft dynamics and safe flight operations with respect to the augme...

  20. Neural Networks and Genetic Algorithms Applied for Implementing the Management Model “Triple A” in a Supply Chain. Case: Collection Centers of Raw Milk in the Azuay Province

    Directory of Open Access Journals (Sweden)

    Juan Pablo Bermeo M.

    2016-01-01

    Full Text Available To be successful, companies need a combination of several factors, the most important of which is supply chain management. This paper proposes the use of intelligent systems such as Artificial Neural Networks (ANN) and Genetic Algorithms, together with monitoring indicators, as support systems to implement the Triple A management model, which is focused on Agility-Adaptability-Alignment: “Agility” is the speed of response to changes in demand, “Adaptability” is the ability to tailor the supply chain to market fluctuations, and “Alignment” is the alignment of the chain between consumers and suppliers. The Neural Network was trained to work as a demand predictor and improves the “agility” of the supply chain; the Genetic Algorithm is used to obtain optimal pickup routes from providers, supporting the “alignment” of the suppliers' product with the final customers in the supply chain; together, the Neural Network and the Genetic Algorithm support the “adaptation” of the supply chain to variations in demand and suppliers. However, for the model to succeed, other factors are also needed, such as the use of indicators and the training of staff in the administration of the Triple A management model in the supply chain.

  1. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based in quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow

  2. Comparing Online Algorithms for Bin Packing Problems

    DEFF Research Database (Denmark)

    Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard

    2012-01-01

    The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.

  3. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.

  4. q-deformed Brownian motion

    CERN Document Server

    Man'ko, V I

    1993-01-01

    Brownian motion may be embedded in the Fock space of a bosonic free field in one dimension. Extending this correspondence to a family of creation and annihilation operators satisfying a q-deformed algebra, the notion of q-deformation is carried from the algebra to the domain of stochastic processes. The properties of q-deformed Brownian motion, in particular its non-Gaussian nature and cumulant structure, are established.

  5. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    Science.gov (United States)

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergoes additional extensive tests before a treatment plan is finalized.
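
    The initial-contour step above relies on fuzzy c-means clustering. A compact sketch on plain pixel intensities follows; the paper clusters multiresolution fractal feature vectors, so raw intensities are used here only to keep the example self-contained:

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, eps=1e-5):
    """Fuzzy c-means on 1-D samples x; returns (centers, memberships (c, N))."""
    rng = np.random.default_rng(0)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # fuzzy-weighted cluster centers
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = dist ** (-2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)          # standard FCM membership update
        done = np.abs(u_new - u).max() < eps
        u = u_new
        if done:
            break
    return centers, u

pixels = np.concatenate([np.random.normal(50, 10, 500),    # darker lung field
                         np.random.normal(180, 15, 500)])  # brighter surroundings
centers, u = fuzzy_cmeans(pixels)
labels = u.argmax(axis=0)  # hard labels for an initial contour/mask
```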

  6. The application of 3-dimensional CAT scan reconstruction for maxillofacial deformities

    International Nuclear Information System (INIS)

    Shimbashi, Takeshi; Tomonari, Hiroshi; Ishii, Masahiro; Sakurai, Nobuaki; Kodachi, Ken; Kubo, Eiichi; Tsuchida, Yoshitaka; Takagi, Hiroshi.

    1987-01-01

    Recognizing craniofacial deformities three-dimensionally has proved very useful, as has observing the 3-D CAT scan reconstructions performed by others. Starting in 1985, we therefore developed a 3-D CT system that combines conventional X-ray CAT scan hardware with 3-dimensional display software. In this paper we report on our 3-D CT system, its basic algorithm, and its basic processes, i.e., the threshold process, the perspective process, the shading process and the display. The mixture shading which we have developed makes 3-D displays clearer and more natural. We have also applied our 3-D display to 39 cases of maxillofacial deformities. (author)
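    The threshold and shading processes can be conveyed in a short sketch. The fragment below is a simplification for illustration, not the authors' system: it thresholds a CT volume to isolate high-density tissue and renders a depth-shaded frontal view by finding the first above-threshold voxel along each ray; a real system would add perspective projection and surface-normal shading such as the mixture shading described above.

```python
# Illustrative threshold + depth-shading stand-in for 3-D CT display.

import numpy as np

def depth_shaded_view(volume, threshold):
    """volume: (z, y, x) CT array. Returns a (y, x) image, brighter = closer."""
    mask = volume >= threshold                      # threshold process
    hit = mask.argmax(axis=0)                       # first hit voxel per ray
    any_hit = mask.any(axis=0)
    depth = np.where(any_hit, hit, volume.shape[0])
    return (1.0 - depth / volume.shape[0]) * any_hit  # simple depth shading
```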

  7. q-deformed Minkowski space

    International Nuclear Information System (INIS)

    Ogievetsky, O.; Pillin, M.; Schmidke, W.B.; Wess, J.; Zumino, B.

    1993-01-01

    In this lecture I discuss the algebraic structure of a q-deformed four-vector space. It serves as a good example of quantizing Minkowski space. To give a physical interpretation of such a quantized Minkowski space we construct the Hilbert space representation and find that the relevant time and space operators have a discrete spectrum. Thus the q-deformed Minkowski space has a lattice structure. Nevertheless this lattice structure is compatible with the operation of q-deformed Lorentz transformations. The generators of the q-deformed Lorentz group can be represented as linear operators in the same Hilbert space. (orig.)
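    The simplest example of such a quantized coordinate algebra, useful for intuition although far simpler than the four-dimensional case discussed here, is the Manin q-plane, whose coordinates commute only up to a factor of q:

```latex
% Manin q-plane relation; the q-deformed Minkowski space extends this idea
% to four-vector components, whose discrete operator spectra give the
% lattice structure mentioned above.
\[
  x\,y = q\,y\,x .
\]
```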

  8. Deformable paper origami optoelectronic devices

    KAUST Repository

    He, Jr-Hau

    2017-01-19

    Deformable optoelectronic devices are provided, including photodetectors, photodiodes, and photovoltaic cells. The devices can be made on a variety of paper substrates, and can include a plurality of fold segments in the paper substrate creating a deformable pattern. Thin electrode layers and semiconductor nanowire layers can be attached to the substrate, creating the optoelectronic device. The devices can be highly deformable, e.g. capable of undergoing strains of 500% or more, bending angles of 25° or more, and/or twist angles of 270° or more. Methods of making the deformable optoelectronic devices and methods of using, e.g. as a photodetector, are also provided.

  9. Deformation behaviour of turbine foundations

    International Nuclear Information System (INIS)

    Koch, W.; Klitzing, R.; Pietzonka, R.; Wehr, J.

    1979-01-01

    The effects of foundation deformation on alignment in turbine generator sets have gained significance with the transition to modern units at the limit of design possibilities. It is therefore necessary to obtain clarification about the remaining operational variations of turbine foundations. Static measurement programmes, which cover both deformation processes and individual conditions of deformation, are described in the paper. In order to explain the measured deformations, structural engineering model calculations are being undertaken which indicate the effect of limiting factors. (orig.)

  10. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  11. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper- or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel…
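    The underlying packed-triangular storage is easy to illustrate. The sketch below shows the standard column-packed indexing that such routines build on (the block hybrid format itself adds cache-friendly blocking on top, which is not reproduced here):

```python
# Column-packed lower-triangular storage: element (i, j), i >= j, of an
# n x n lower triangle lives in a flat array of length n*(n+1)//2.

def packed_index(i, j, n):
    """0-based position of (i, j) in the column-packed lower triangle."""
    assert i >= j, "only the lower triangle is stored"
    return j * n - j * (j - 1) // 2 + (i - j)

# Round-trip check: all n*(n+1)/2 slots are hit exactly once.
n = 4
positions = sorted(packed_index(i, j, n) for j in range(n) for i in range(j, n))
assert positions == list(range(n * (n + 1) // 2))
```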

  12. Formulation and integration of constitutive models describing large deformations in thermoplasticity and thermoviscoplasticity

    International Nuclear Information System (INIS)

    Jansohn, W.

    1997-10-01

    This report deals with the formulation and numerical integration of constitutive models in the framework of finite-deformation thermomechanics. Based on the concept of dual variables, plasticity and viscoplasticity models exhibiting nonlinear kinematic hardening as well as nonlinear isotropic hardening rules are presented. Care is taken that the evolution equations governing the hardening response fulfill the intrinsic dissipation inequality in every admissible process. In view of the development of an efficient numerical integration procedure, simplified versions of these constitutive models are considered. In these versions, the thermoelastic strains are assumed to be small and a simplified kinematic hardening rule is adopted. Additionally, in view of an implementation into the ABAQUS finite element code, the elasticity law is approximated by a hypoelasticity law. For the simplified constitutive models, an implicit time-integration algorithm is developed. First, in order to obtain a numerically objective integration scheme, use is made of the Hughes-Winget algorithm. In the resulting system of ordinary differential equations, three differential operators representing different physical effects can be distinguished. The structure of this system of differential equations allows an operator split scheme to be applied, which leads to an efficient integration scheme for the constitutive equations. By linearizing the integration algorithm, the consistent tangent modulus is derived. In this way, the quadratic convergence of Newton's method used to solve the basic finite element equations (i.e. the finite element discretization of the governing thermomechanical field equations) is preserved. The resulting integration scheme is implemented as a user subroutine UMAT in ABAQUS. The properties of the applied algorithm are first examined by test calculations on a single element under tension-compression loading. For demonstrating the capabilities of the constitutive theory…
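    The flavour of such an implicit integration step can be conveyed by a deliberately simplified sketch: a one-dimensional elastic-predictor/plastic-corrector (return mapping) step with linear isotropic hardening. This is generic textbook material for illustration only; the report's models (nonlinear kinematic hardening, thermomechanical coupling, operator splitting over several effects) are substantially richer.

```python
# Generic 1-D return-mapping step: trial elastic stress, then plastic
# correction enforcing the yield condition. Units: stresses in MPa.

def return_mapping_1d(eps_new, eps_p, alpha, E=200e3, H=1e3, sigma_y=250.0):
    """One implicit step; eps_p is plastic strain, alpha the hardening variable."""
    sigma_trial = E * (eps_new - eps_p)              # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)
    if f_trial <= 0.0:                               # step stays elastic
        return sigma_trial, eps_p, alpha
    dgamma = f_trial / (E + H)                       # consistency condition
    sign = 1.0 if sigma_trial > 0 else -1.0
    sigma = sigma_trial - E * dgamma * sign          # plastic corrector
    return sigma, eps_p + dgamma * sign, alpha + dgamma
```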

  13. AC electric field induced droplet deformation in a microfluidic T-junction.

    Science.gov (United States)

    Xi, Heng-Dong; Guo, Wei; Leniart, Michael; Chong, Zhuang Zhi; Tan, Say Hwa

    2016-08-02

    We present, for the first time, an experimental study of droplet deformation induced by an AC electric field in droplet-based microfluidics. The deformation of the droplets becomes stronger with increasing electric field intensity and frequency. The measured dependence of the droplet deformation on electric field intensity is consistent with an early theoretical prediction for stationary droplets. We also propose a simple equivalent circuit model to account for the frequency dependence of the droplet deformation; the model explains our experimental observations well. In addition, we find that the droplets can be deformed repeatedly by applying an amplitude modulation (AM) signal.
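    Droplet deformation in such studies is conventionally quantified by the Taylor deformation parameter (we assume this standard convention here; the paper may use a different measure):

```latex
% Taylor deformation parameter: L and B are the droplet's major and minor
% axes, so D = 0 for an undeformed (circular) droplet and D grows with
% elongation along the field.
\[
  D = \frac{L - B}{L + B} .
\]
```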

  14. Algorithms for boundary detection in radiographic images

    International Nuclear Information System (INIS)

    Gonzaga, Adilson; Franca, Celso Aparecido de

    1996-01-01

    Edge-detection techniques applied to digital radiographic images are discussed. Several algorithms have been implemented, and the results are displayed to enhance boundaries or hide details. An algorithm applied to a preprocessed, contrast-enhanced image is proposed, and the results are discussed.
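    A minimal sketch of such a pipeline follows (assumed details, not the paper's implementation): contrast enhancement by histogram equalization, then boundary detection from the Sobel gradient magnitude.

```python
# Contrast enhancement followed by gradient-based boundary detection.

import numpy as np
from scipy import ndimage

def detect_boundaries(image, threshold=0.2):
    """image: 2-D float array in [0, 1]. Returns a binary edge map."""
    # Contrast enhancement: histogram equalization via the cumulative
    # distribution function of the pixel intensities.
    hist, bins = np.histogram(image.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    enhanced = np.interp(image.ravel(), bins[:-1], cdf).reshape(image.shape)
    # Boundary detection: gradient magnitude from Sobel derivatives.
    gx = ndimage.sobel(enhanced, axis=1)
    gy = ndimage.sobel(enhanced, axis=0)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()
```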

  15. Deformed chiral nucleons

    Energy Technology Data Exchange (ETDEWEB)

    Price, C E; Shepard, J R [Colorado Univ., Boulder (USA). Dept. of Physics

    1991-04-18

    We compute properties of the nucleon in a hybrid chiral model based on the linear σ-model with quark degrees of freedom treated explicitly. In contrast to previous calculations, we do not use the hedgehog ansatz. Instead we solve self-consistently for a state with well-defined spin and isospin projections. We allow this state to be deformed and find that, although d- and g-state admixtures in the predominantly s-state single quark wave functions are not large, they have profound effects on many nucleon properties including magnetic moments and g_A. Our best fit parameters provide excellent agreement with experiment but are much different from those determined in hedgehog calculations. (orig.)

  16. Deformations of surface singularities

    CERN Document Server

    Szilárd, Ágnes

    2013-01-01

    The present publication contains a special collection of research and review articles on deformations of surface singularities that, put together, serve as an introductory survey of the results and methods of the theory, as well as open problems, important examples and connections to other areas of mathematics. The aim is to collect material that will help mathematicians already working or wishing to work in this area to deepen their insight and eliminate the technical barriers in this learning process. This is also supported by review articles providing a global picture and an abundance of examples. Additionally, we introduce some material which emphasizes the newly found relationship with the theory of Stein fillings and symplectic geometry. This links two main areas of mathematics: low-dimensional topology and algebraic geometry. The theory of normal surface singularities is a distinguished part of analytic or algebraic geometry with several important results, its own technical machinery, and several op…

  17. Deformable image registration using convolutional neural networks

    Science.gov (United States)

    Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.

    2018-03-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
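    The architecture can be sketched in a few lines. The PyTorch fragment below is an assumed, minimal stand-in (layer sizes and depth are illustrative, not the paper's): the fixed and moving volumes are stacked as two input channels, and the network emits three output maps, one per spatial component of the transformation grid.

```python
# Minimal 3-D CNN sketch mapping an image pair to x/y/z transformation maps.

import torch
import torch.nn as nn

class RegistrationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 3, kernel_size=3, padding=1),  # x, y, z components
        )

    def forward(self, fixed, moving):
        # Stack the image pair as channels; output has three grid channels.
        return self.net(torch.cat([fixed, moving], dim=1))

# Training pairs can come from random synthetic transformations applied to
# representative images, so no manually annotated ground truth is needed.
net = RegistrationNet()
fixed = torch.randn(1, 1, 32, 32, 32)
grid = net(fixed, torch.randn(1, 1, 32, 32, 32))  # shape (1, 3, 32, 32, 32)
```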

  18. IBA in deformed nuclei

    International Nuclear Information System (INIS)

    Casten, R.F.; Warner, D.D.

    1982-01-01

    The structure, characteristic properties and predictions of the IBA in deformed nuclei are reviewed and compared with experiment, in particular for 168Er. Overall, excellent agreement, with a minimum of free parameters (in effect two, neglecting scale factors on energy differences), was obtained. A particularly surprising, and unavoidable, prediction is that of strong β → γ transitions, a feature characteristically absent in the geometrical model but manifest empirically. Some discrepancies were also noted, principally for the K=4 excitation and the detailed magnitudes of some specific B(E2) values. Considerable attention is paid to analyzing the structure of the IBA states and their relation to geometric models. The band-mixing formalism was studied to interpret both the aforementioned discrepancies and the origin of the β → γ transitions. The IBA states, extremely complex in the usual SU(5) basis, are transformed to the SU(3) basis, as is the interaction Hamiltonian. The IBA wave functions appear with much simplified structure in this way, as does the structure of the associated B(E2) values. The symmetry breaking of SU(3) for actual deformed nuclei is seen to be predominantly ΔK=0 mixing. A modified, and more consistent, formalism for the IBA-1 is introduced which is simpler, has fewer free parameters (in effect one, neglecting scale factors on energy differences), is in at least as good agreement with experiment as the earlier formalism, contains a special case of the O(6) limit which corresponds to that known empirically, and appears to have a close relationship to the IBA-2. The new formalism facilitates the construction of contour plots of various observables (e.g., energy or B(E2) ratios) as functions of N and χ_Q, which allow parameter-free discussion of qualitative trajectories or systematics.

  19. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    Science.gov (United States)

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on the analysis of FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on it. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of SVM depended on the norm…
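    The comparison described above can be sketched with standard scikit-learn components (assumed details, not the study's code). Note that MinMaxScaler and StandardScaler normalize features (columns), whereas Normalizer normalizes samples (rows), mirroring the two strategies examined.

```python
# Comparing three normalization methods before fitting an RBF-kernel SVM.

from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_normalizations(X, y):
    """X: samples x genes expression matrix; y: vital status labels."""
    methods = {
        "scaling (features)": MinMaxScaler().fit_transform(X),
        "z-score (features)": StandardScaler().fit_transform(X),
        "vector norm (samples)": Normalizer(norm="l2").fit_transform(X),
    }
    for name, Xn in methods.items():
        # 5-fold cross-validated accuracy of an RBF-kernel SVM.
        acc = cross_val_score(SVC(kernel="rbf"), Xn, y, cv=5).mean()
        print(f"{name}: {acc:.2f}")
```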