WorldWideScience

Sample records for curvature computation algorithm

  1. Computational methods for investigation of surface curvature effects on airfoil boundary layer behavior

    Directory of Open Access Journals (Sweden)

    Xiang Shen

    2017-03-01

    Full Text Available This article presents computational algorithms for the design, analysis, and optimization of airfoil aerodynamic performance. The prescribed surface curvature distribution blade design (CIRCLE) method is applied to a symmetrical airfoil NACA0012 and a non-symmetrical airfoil E387 to remove their surface curvature and slope-of-curvature discontinuities. Computational fluid dynamics analysis is used to investigate the effects of curvature distribution on aerodynamic performance of the original and modified airfoils. An inviscid–viscous interaction scheme is introduced to predict the positions of laminar separation bubbles. The results are compared with experimental data obtained from tests on the original airfoil geometry. The computed aerodynamic advantages of the modified airfoils are analyzed in different operating conditions. The leading edge singularity of NACA0012 is removed and it is shown that the surface curvature discontinuity affects aerodynamic performance near the stalling angle of attack. The discontinuous slope-of-curvature distribution of E387 results in a larger laminar separation bubble at lower angles of attack and lower Reynolds numbers. It also affects the inherent performance of the airfoil at higher Reynolds numbers. It is shown that at relatively high angles of attack, a continuous slope-of-curvature distribution reduces the skin friction by suppressing both laminar and turbulent separation, and by delaying laminar-turbulent transition. It is concluded that the surface curvature distribution has significant effects on the boundary layer behavior and consequently an improved curvature distribution will lead to higher aerodynamic efficiency.
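
    A curvature check of this kind is easy to reproduce. The sketch below is illustrative only: it uses the standard NACA four-digit thickness law and plain finite differences, not the CIRCLE redesign, and evaluates the curvature and slope-of-curvature whose discontinuities the paper removes.

        import numpy as np

        def curvature(x, y):
            """Signed curvature of a sampled planar curve (parametrization-free)."""
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

        # Upper surface of NACA0012 from the standard four-digit thickness law
        t = 0.12
        xc = np.linspace(1e-4, 1.0, 2000)
        yt = 5 * t * (0.2969 * np.sqrt(xc) - 0.1260 * xc - 0.3516 * xc**2
                      + 0.2843 * xc**3 - 0.1015 * xc**4)

        k = curvature(xc, yt)
        dk = np.gradient(k) / np.gradient(xc)  # slope of curvature vs. chord
        # A jump in dk flags the slope-of-curvature discontinuity discussed above.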

  2. A new accurate curvature matching and optimal tool based five-axis machining algorithm

    International Nuclear Information System (INIS)

    Lin, Than; Lee, Jae Woo; Bohez, Erik L. J.

    2009-01-01

    Free-form surfaces are widely used in CAD systems to describe the part surface. Today, the most advanced machining of free-form surfaces is done by five-axis machining using a flat end mill cutter. However, five-axis machining requires complex algorithms for gouging avoidance and collision detection, and powerful computer-aided manufacturing (CAM) systems to support the various operations. An accurate and efficient method is proposed for five-axis CNC machining of free-form surfaces. The proposed algorithm selects the best tool and plans the tool path autonomously using curvature matching and integrated inverse kinematics of the machine tool. The new algorithm uses the real cutter contact tool path generated by the inverse kinematics, not the linearized piecewise real cutter location tool path.

  3. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    Science.gov (United States)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using statistics of the neighboring points within a radius r. Then, the algorithm estimates the curvature of the point cloud data by conicoid (paraboloid) fitting and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are taken as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds with high precision while preserving features, and that it is robust to different noise models.
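
    The centre update of such a weighted clustering step might look as follows. This is a minimal sketch under the assumption that per-point curvature weights w have already been obtained from the conicoid fitting step; the function name is ours, not the paper's.

        import numpy as np

        def curvature_weighted_fcm(pts, w, n_clusters=8, m=2.0, iters=50, eps=1e-9):
            """Fuzzy c-means whose cluster-centre update is biased by per-point
            curvature weights w (larger near sharp features). The de-noised
            cloud is returned as the set of weighted cluster centres."""
            rng = np.random.default_rng(0)
            c = pts[rng.choice(len(pts), n_clusters, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(pts[:, None, :] - c[None, :, :], axis=2) + eps
                u = d ** (-2.0 / (m - 1.0))
                u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
                um = (u ** m) * w[:, None]                  # feature-preserving weight
                c = um.T @ pts / um.sum(axis=0)[:, None]    # weighted centres
            return c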

  4. Robust estimation of adaptive tensors of curvature by tensor voting.

    Science.gov (United States)

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.

  5. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    Science.gov (United States)

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc-ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates mean curvature and Gabor texture energy features into a new composite weight function used to compute the edge weights. Unlike deformable-model-based OD segmentation techniques, the proposed algorithm is unaffected by curve initialisation and the local-energy-minima problem. The effectiveness of the proposed method is verified on DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using performance measures such as mean absolute distance, overlapping ratio, Dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show the robustness and superiority of the proposed algorithm in handling the complex challenges of OD segmentation.
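
    As an illustration of the composite-weight idea, an edge weight combining the two features could be written as below; the beta coefficients and the exact functional form are assumptions, not the published ones.

        import numpy as np

        def composite_weight(curv_i, curv_j, tex_i, tex_j, beta1=30.0, beta2=10.0):
            """Random-walker edge weight from mean-curvature and Gabor
            texture-energy differences: similar features give a weight near 1
            (easy to cross), dissimilar features a weight near 0 (a boundary)."""
            return np.exp(-beta1 * abs(curv_i - curv_j) - beta2 * abs(tex_i - tex_j))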

  6. Space-Variant Post-Filtering for Wavefront Curvature Correction in Polar-Formatted Spotlight-Mode SAR Imagery

    Energy Technology Data Exchange (ETDEWEB)

    DOREN,NEALL E.

    1999-10-01

    Wavefront curvature defocus effects occur in spotlight-mode SAR imagery when reconstructed via the well-known polar-formatting algorithm (PFA) under certain imaging scenarios. These include imaging at close range, using a very low radar center frequency, utilizing high resolution, and/or imaging very large scenes. Wavefront curvature effects arise from the unrealistic assumption of strictly planar wavefronts illuminating the imaged scene. This dissertation presents a method for the correction of wavefront curvature defocus effects under these scenarios, concentrating on the generalized squint-mode imaging scenario and its computational aspects. This correction is accomplished through an efficient one-dimensional, image-domain filter applied as a post-processing step to PFA. This post-filter, referred to as SVPF, is precalculated from a theoretical derivation of the wavefront curvature effect and varies as a function of scene location. Prior to SVPF, severe restrictions were placed on the imaged scene size in order to avoid defocus effects under these scenarios when using PFA. The SVPF algorithm eliminates the need for scene size restrictions when wavefront curvature effects are present, correcting for wavefront curvature in broadside as well as squinted collection modes while imposing little additional computational penalty for squinted images. This dissertation covers the theoretical development, implementation and analysis of the generalized, squint-mode SVPF algorithm (of which broadside mode is a special case) and provides examples of its capabilities and limitations as well as offering guidelines for maximizing its computational efficiency. Tradeoffs between the PFA/SVPF combination and other spotlight-mode SAR image formation techniques are discussed with regard to computational burden, image quality, and imaging geometry constraints. It is demonstrated that other methods fail to exhibit a clear computational advantage over polar formatting in conjunction with SVPF.

  7. Integrating 3D seismic curvature and curvature gradient attributes for fracture characterization: Methodologies and interpretational implications

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Dengliang

    2013-03-01

    In 3D seismic interpretation, curvature is a popular attribute that depicts the geometry of seismic reflectors and has been widely used to detect faults in the subsurface; however, it provides only part of the solution to subsurface structure analysis. This study extends the curvature algorithm to a new curvature gradient algorithm, and integrates both algorithms for fracture detection using a 3D seismic test data set over Teapot Dome (Wyoming). In the fractured reservoirs at Teapot Dome, which are known to have formed by tectonic folding and faulting, curvature helps define the crestal portion of the reservoirs that is associated with strong seismic amplitude and high oil productivity. In contrast, curvature gradient helps better define the regional northwest-trending and the cross-regional northeast-trending lineaments that are associated with weak seismic amplitude and low oil productivity. In concert with previous reports from image logs, cores, and outcrops, the current study based on an integrated seismic curvature and curvature gradient analysis suggests that curvature might help define areas of enhanced potential to form tensile fractures, whereas curvature gradient might help define zones of enhanced potential to develop shear fractures. In certain fractured reservoirs such as at Teapot Dome, where faulting and fault-related folding contribute dominantly to the formation and evolution of fractures, curvature and curvature gradient attributes can potentially be applied to differentiate fracture mode, to predict fracture intensity and orientation, to detect fracture volume and connectivity, and to model fracture networks.

  8. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the interfacial tension force estimates, often resulting in inaccurate results for interfacial-tension-dominated flows. Many techniques have been presented in recent years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® by extending its standard VOF implementation, the interFoam solver.
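
    A geometric curvature evaluation at one cloud point, in the spirit described above, can be sketched as a local PCA frame plus a paraboloid fit. This is a generic stand-in, not the authors' OpenFOAM implementation; nbrs is assumed to hold the point's neighbours on the interface cloud.

        import numpy as np

        def point_curvature(p, nbrs):
            """Interface curvature kappa = kappa1 + kappa2 (the divergence of
            the unit normal used in the surface-tension force) at point p,
            from a least-squares quadric fitted to the neighbours in a local
            frame whose third axis is the PCA normal."""
            q = nbrs - p
            _, _, vt = np.linalg.svd(q, full_matrices=False)
            t1, t2, n = vt[0], vt[1], vt[2]         # tangents and normal
            x, y, z = q @ t1, q @ t2, q @ n
            A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
            a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
            den = (1.0 + d*d + e*e) ** 1.5          # full divergence formula
            return ((1 + e*e) * 2*a - 2*d*e*b + (1 + d*d) * 2*c) / den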

  9. Surface meshing with curvature convergence

    KAUST Repository

    Li, Huibin; Zeng, Wei; Morvan, Jean-Marie; Chen, Liming; Gu, Xianfeng David

    2014-01-01

    Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generation. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.

  10. The motion of a vortex on a closed surface of constant negative curvature.

    Science.gov (United States)

    Ragazzo, C Grotta

    2017-10-01

    The purpose of this work is to present an algorithm to determine the motion of a single hydrodynamic vortex on a closed surface of constant curvature and of genus greater than one. The algorithm is based on a relation between the Laplace-Beltrami Green function and the heat kernel. The algorithm is used to compute the motion of a vortex on the Bolza surface. This is the first determination of the orbits of a vortex on a closed surface of genus greater than one. The numerical results show that all the 46 vortex equilibria can be explicitly computed using the symmetries of the Bolza surface. Some of these equilibria allow for the construction of the first two examples of infinite vortex crystals on the hyperbolic disc. The following theorem is proved: 'a Weierstrass point of a hyperelliptic surface of constant curvature is always a vortex equilibrium'.

  11. Measuring the composition-curvature coupling in binary lipid membranes by computer simulations

    International Nuclear Information System (INIS)

    Barragán Vidal, I. A.; Müller, M.; Rosetti, C. M.; Pastorino, C.

    2014-01-01

    The coupling between local composition fluctuations in binary lipid membranes and curvature affects the lateral membrane structure. We propose an efficient method to compute the composition-curvature coupling in molecular simulations and apply it to two coarse-grained membrane models: a minimal, implicit-solvent model and the MARTINI model. Both the weak-curvature behavior that is typical for thermal fluctuations of planar bilayer membranes and the strong-curvature regime corresponding to narrow cylindrical membrane tubes are studied by molecular dynamics simulation. The simulation results are analyzed using a phenomenological model of the thermodynamics of curved, mixed bilayer membranes that accounts for the change of the monolayer area upon bending. Additionally, the role of thermodynamic characteristics, such as the incompatibility between the two lipid species and the asymmetry of composition, is investigated.

  12. Incorporating contact angles in the surface tension force with the ACES interface curvature scheme

    Science.gov (United States)

    Owkes, Mark

    2017-11-01

    In simulations of gas-liquid flows interacting with solid boundaries, the contact line dynamics affect the interface motion and the flow field through the surface tension force. The surface tension force is directly proportional to the interface curvature, so the problem of accurately imposing a contact angle must be incorporated into the interface curvature calculation. Many commonly used algorithms for computing interface curvatures (e.g., the height function method) require extrapolating the interface, with a defined contact angle, into the solid to allow the calculation of a curvature near a wall. Extrapolating can be an ill-posed problem, especially in three dimensions or when multiple contact lines are near each other. We have developed an accurate methodology to compute interface curvatures that allows contact angles to be easily incorporated while avoiding extrapolation and the associated challenges. The method, known as Adjustable Curvature Evaluation Scale (ACES), leverages a least squares fit of a polynomial to points computed on the volume-of-fluid (VOF) representation of the gas-liquid interface. The method is tested on canonical test cases and then applied to simulate the injection and motion of water droplets in a channel (relevant to PEM fuel cells).
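
    A minimal 2D analogue of imposing the contact angle directly in the curvature fit, rather than by extrapolation, is an equality-constrained least-squares fit. The geometry (a vertical wall at x = x_wall, interface sampled as points (x, y)) and all names here are illustrative assumptions, not the published ACES formulation.

        import numpy as np

        def curvature_with_contact_angle(pts, x_wall, theta):
            """Fit y = a x^2 + b x + c to interface sample points (n x 2),
            enforcing the contact angle theta (radians) against the vertical
            wall x = x_wall as a slope constraint y'(x_wall) = cot(theta).
            Returns the curvature of the fit at the wall."""
            x, y = pts[:, 0], pts[:, 1]
            A = np.column_stack([x*x, x, np.ones_like(x)])
            C = np.array([[2.0*x_wall, 1.0, 0.0]])        # slope at the wall
            d = np.array([1.0 / np.tan(theta)])
            # KKT system of the equality-constrained least-squares problem
            K = np.block([[2*A.T @ A, C.T], [C, np.zeros((1, 1))]])
            rhs = np.concatenate([2*A.T @ y, d])
            a, b, _, _lam = np.linalg.solve(K, rhs)
            yp = 2*a*x_wall + b
            return 2*a / (1 + yp*yp) ** 1.5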

  13. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. The book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  14. Mesoscale computational studies of membrane bilayer remodeling by curvature-inducing proteins

    Science.gov (United States)

    Ramakrishnan, N.; Sunil Kumar, P. B.; Radhakrishnan, Ravi

    2014-01-01

    Biological membranes constitute boundaries of cells and cell organelles. These membranes are soft fluid interfaces whose thermodynamic states are dictated by bending moduli, induced curvature fields, and thermal fluctuations. Recently, there has been a flood of experimental evidence highlighting active roles for these structures in many cellular processes ranging from trafficking of cargo to cell motility. It is believed that the local membrane curvature, which is continuously altered due to its interactions with myriad proteins and other macromolecules attached to its surface, holds the key to the emergent functionality in these cellular processes. Mechanisms at the atomic scale are dictated by protein-lipid interaction strength, lipid composition, lipid distribution in the vicinity of the protein, and the shape and amino acid composition of the protein. The specificity of molecular interactions together with the cooperativity of multiple proteins induce and stabilize complex membrane shapes at the mesoscale. These shapes span a wide spectrum ranging from the spherical plasma membrane to the complex cisternae of the Golgi apparatus. Mapping the relation between the protein-induced deformations at the molecular scale and the resulting mesoscale morphologies is key to bridging cellular experiments across the various length scales. In this review, we focus on the theoretical and computational methods used to understand the phenomenology underlying protein-driven membrane remodeling. Interactions at the molecular scale can be computationally probed by all atom and coarse grained molecular dynamics (MD, CGMD), as well as dissipative particle dynamics (DPD) simulations, which we only describe in passing. We choose to focus on several continuum approaches extending the Canham-Helfrich elastic energy model for membranes to include the effect of curvature-inducing proteins and explore the conformational phase space of such systems.

  15. Flow rate impacts on capillary pressure and interface curvature of connected and disconnected fluid phases during multiphase flow in sandstone

    Science.gov (United States)

    Herring, Anna L.; Middleton, Jill; Walsh, Rick; Kingston, Andrew; Sheppard, Adrian

    2017-09-01

    We investigate capillary pressure-saturation (PC-S) relationships for drainage-imbibition experiments conducted with air (nonwetting phase) and brine (wetting phase) in Bentheimer sandstone cores. Three different flow rate conditions, ranging over three orders of magnitude, are investigated. X-ray micro-computed tomographic imaging is used to characterize the distribution and amount of fluids and their interfacial characteristics. Capillary pressure is measured via (1) bulk-phase pressure transducer measurements, and (2) image-based curvature measurements, calculated using a novel 3D curvature algorithm. We distinguish between connected (percolating) and disconnected air clusters: curvatures measured on the connected phase interfaces are used to validate the curvature algorithm and provide an indication of the equilibrium condition of the data; curvature and volume distributions of disconnected clusters provide insight to the snap-off processes occurring during drainage and imbibition under different flow rate conditions.

  1. Quantitative three-dimensional analysis of root canal curvature in maxillary first molars using micro-computed tomography.

    Science.gov (United States)

    Lee, Jong-Ki; Ha, Byung-Hyun; Choi, Jeong-Ho; Heo, Seok-Mo; Perinpanayagam, Hiran

    2006-10-01

    In endodontic therapy, access and instrumentation are strongly affected by root canal curvature. However, the few studies that have actually measured curvature are mostly from two-dimensional radiographs. The purpose of this study was to measure the three-dimensional (3D) canal curvature in maxillary first molars using micro-computed tomography (microCT) and mathematical modeling. Forty-six extracted maxillary first molars were scanned by microCT (502 image slices/tooth, 1024 × 1024 pixels, voxel size of 19.5 × 19.5 × 39.0 µm) and their canals reconstructed by 3D modeling software. The intersections of the major and minor axes in the canal space of each image slice were connected to create an imaginary central axis for each canal. The radius of curvature of the tangential circle was measured and inverted as a measure of curvature using custom-made mathematical modeling software. Root canal curvature was greatest in the apical third and least in the middle third for all canals. The greatest curvatures were in the mesiobuccal (MB) canal (0.76 ± 0.48 mm⁻¹) with abrupt curves, and the least curvatures were in the palatal (P) canal (0.38 ± 0.34 mm⁻¹) with a gradual curve. This study has measured the 3D curvature of root canals in maxillary first molars and reinforced the value of microCT with mathematical modeling.
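
    The "radius of the tangential circle, inverted" measure is straightforward to reproduce on a sampled canal axis. A sketch assuming an (n x 3) array of centerline points, using the circumradius of consecutive point triples:

        import numpy as np

        def canal_curvature(pts):
            """Curvature (1/mm) along a 3D canal axis sampled as an (n x 3)
            array: at each interior point, invert the radius of the circle
            through the point and its two neighbours (R = abc / 4K, where
            a, b, c are the side lengths and K the triangle area)."""
            k = np.zeros(len(pts))
            for i in range(1, len(pts) - 1):
                p0, p1, p2 = pts[i-1], pts[i], pts[i+1]
                a = np.linalg.norm(p1 - p0)
                b = np.linalg.norm(p2 - p1)
                c = np.linalg.norm(p2 - p0)
                area = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))
                k[i] = 4.0 * area / (a * b * c) if area > 0 else 0.0
            return k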

  2. Implementing quantum Ricci curvature

    Science.gov (United States)

    Klitgaard, N.; Loll, R.

    2018-05-01

    Quantum Ricci curvature has been introduced recently as a new, geometric observable characterizing the curvature properties of metric spaces, without the need for a smooth structure. Besides coordinate invariance, its key features are scalability, computability, and robustness. We demonstrate that these properties continue to hold in the context of nonperturbative quantum gravity, by evaluating the quantum Ricci curvature numerically in two-dimensional Euclidean quantum gravity, defined in terms of dynamical triangulations. Despite the well-known, highly nonclassical properties of the underlying quantum geometry, its Ricci curvature can be matched well to that of a five-dimensional round sphere.

  3. Curvature Entropy for Curved Profile Generation

    Directory of Open Access Journals (Sweden)

    Koichiro Sato

    2012-03-01

    Full Text Available In a curved surface design, the overall shape features that emerge from combinations of shape elements are important. However, controlling the features of the overall shape in curved profiles is difficult using conventional microscopic shape information such as dimension. Herein two types of macroscopic shape information, curvature entropy and quadrature curvature entropy, quantitatively represent the features of the overall shape. The curvature entropy is calculated by the curvature distribution, and represents the complexity of a shape (one of the overall shape features. The quadrature curvature entropy is an improvement of the curvature entropy by introducing a Markov process to evaluate the continuity of a curvature and to approximate human cognition of the shape. Additionally, a shape generation method using a genetic algorithm as a calculator and the entropy as a shape generation index is presented. Finally, the applicability of the proposed method is demonstrated using the side view of an automobile as a design example.
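
    A minimal sketch of the two entropies, assuming a sampled curvature sequence along the profile. The Markov-based quadrature variant is rendered here as the entropy of successive-bin transitions, which is one plausible reading of the construction, not necessarily the paper's exact definition.

        import numpy as np

        def curvature_entropy(kappa, bins=32):
            """Shannon entropy of the curvature distribution along a profile:
            low for simple arcs, higher for complex curvature variation."""
            counts, _ = np.histogram(kappa, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log2(p))

        def quadrature_curvature_entropy(kappa, bins=32):
            """Entropy of the joint distribution of successive curvature bins,
            so that smooth (continuous) curvature sequences score lower."""
            edges = np.histogram_bin_edges(kappa, bins=bins)
            idx = np.digitize(kappa, edges)
            T = np.zeros((bins + 2, bins + 2))
            for i, j in zip(idx[:-1], idx[1:]):
                T[i, j] += 1.0
            p = T[T > 0] / T.sum()
            return -np.sum(p * np.log2(p))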

  4. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    Science.gov (United States)

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 and 11.6 μV (mean), 7.8 and 8.9 μV (median), and 9.8 and 9.3 μV (standard deviation) per heartbeat, respectively.
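
    The overall structure of such a baseline-removal step can be sketched as follows; np.interp stands in for the Letter's segmented piecewise linear equations, whose point is to remain cheap in fixed-point hardware.

        import numpy as np

        def remove_baseline(signal, fs, anchor_t, anchor_v):
            """Estimate baseline wander by piecewise-linear interpolation
            through per-beat isoelectric anchor points (times anchor_t,
            amplitudes anchor_v) and subtract it from the ECG signal."""
            t = np.arange(len(signal)) / fs
            baseline = np.interp(t, anchor_t, anchor_v)
            return signal - baseline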

  5. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects by taking this perspective. This paper is the first step towards achieving this objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm.

  6. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  7. On the improvement of two-dimensional curvature computation and its application to turbulent premixed flame correlations

    International Nuclear Information System (INIS)

    Chrystie, R S M; Burns, I S; Hult, J; Kaminski, C F

    2008-01-01

    Measurement of the curvature of the flamefront of premixed turbulent flames is important for the validation of numerical models for combustion. In this work, curvature is measured from contours that outline the flamefront, which are generated from laser-induced fluorescence images. The contours are inherently digitized, resulting in pixelation effects that make it difficult to compute the curvature of the flamefront accurately. A common approach is to fit functions locally to short sections along the flame contour, and this approach is also followed in this work; the method helps smooth the pixelation before curvature is measured. However, the length and degree of the polynomial, and hence the amount of smoothing, must be correctly set in order to maximize the precision and accuracy of the curvature measurements. Other researchers have applied polynomials of different orders and over different segment lengths to circles of known curvature as a test to determine the appropriate choice of polynomial; it is shown here that this method results in a sub-optimal choice of polynomial function. Here, we determine more suitable polynomial functions through use of a circle whose radius is sinusoidally modulated. We show that this leads to a more consistent and reliable choice for the local polynomial functions fitted to experimental data. A polynomial function thus determined is then applied to flame contour data to measure the curvature of experimentally acquired flame contours. The results show that there is an enhancement in local flame speed at sections of the flamefront with non-zero curvature, and this agrees with numerical models.
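
    The sinusoidally modulated circle is convenient precisely because its curvature is known in closed form. A sketch of such a test-curve generator, using the standard curvature formula for a polar curve r(theta); the amplitude and wavenumber are illustrative.

        import numpy as np

        def modulated_circle(R=1.0, A=0.1, k=8, n=4000):
            """Contour r(theta) = R + A*sin(k*theta) with analytic curvature
            kappa = (r^2 + 2 r'^2 - r r'') / (r^2 + r'^2)^(3/2), against
            which a local polynomial-fit estimator can be benchmarked."""
            th = np.linspace(0.0, 2*np.pi, n, endpoint=False)
            r = R + A*np.sin(k*th)
            r1 = A*k*np.cos(k*th)
            r2 = -A*k*k*np.sin(k*th)
            kappa = (r*r + 2*r1*r1 - r*r2) / (r*r + r1*r1) ** 1.5
            return r*np.cos(th), r*np.sin(th), kappa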

  8. Curvature correction of retinal OCTs using graph-based geometry detection

    Science.gov (United States)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model, and the second part can be described in terms of graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve using a full automatic method, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. The error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were detected for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. The important aspect of this algorithm is its ability in detection of curvature in strongly pathological images that surpasses previously introduced methods; the method is also fast, compared to the relatively low speed of similar methods.

  9. Improved curvature-based inpainting applied to fine art: recovering van Gogh's partially hidden brush strokes

    Science.gov (United States)

    Kuang, Yubin; Stork, David G.; Kahl, Fredrik

    2011-03-01

    Underdrawings and pentimenti, typically revealed through x-ray imaging and infrared reflectography, comprise important evidence about the intermediate states of an artwork and thus the working methods of its creator. To this end, Shahram, Stork and Donoho introduced the De-pict algorithm, which recovers layers of brush strokes in paintings with open brush work where several layers are partially visible, such as in van Gogh's Self portrait with a grey felt hat. While that preliminary work served as a proof of concept that computer image analytic methods could recover some occluded brush strokes, the work needed further refinement before it could be a tool for art scholars. Our current work takes several steps to improve that algorithm. Specifically, we refine the inpainting step through the inclusion of curvature-based constraints, in which a mathematical curvature penalty biases the reconstruction toward matching the artist's smooth hand motion. We refine and test our methods using "ground truth" image data: passages of four layers of brush strokes in which the intermediate layers were recorded photographically. At each successive top layer (currently identified by the user), we used k-means clustering combined with graph cuts to obtain chromatically and spatially coherent segmentation of brush strokes. We then reconstructed strokes at the deeper layer with our new curvature-based inpainting algorithm based on chromatic level lines. Our methods are clearly superior to previous versions of the De-pict algorithm on van Gogh's works, giving smoother, more natural strokes that more closely match the shapes of unoccluded strokes. Our improved method might be applied to the classic drip paintings of Jackson Pollock, where the drip work is more open and the physics of splashing paint ensures that the curvature is more uniform than in the brush strokes of van Gogh.

  10. Introducing quantum Ricci curvature

    Science.gov (United States)

    Klitgaard, N.; Loll, R.

    2018-02-01

    Motivated by the search for geometric observables in nonperturbative quantum gravity, we define a notion of coarse-grained Ricci curvature. It is based on a particular way of extracting the local Ricci curvature of a smooth Riemannian manifold by comparing the distance between pairs of spheres with that of their centers. The quantum Ricci curvature is designed for use on non-smooth and discrete metric spaces, and to satisfy the key criteria of scalability and computability. We test the prescription on a variety of regular and random piecewise flat spaces, mostly in two dimensions. This enables us to quantify its behavior for short lattice distances and compare its large-scale behavior with that of constantly curved model spaces. On the triangulated spaces considered, the quantum Ricci curvature has good averaging properties and reproduces classical characteristics on scales large compared to the discretization scale.
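
    The sphere-distance prescription can be probed on any discrete metric space. A sketch on a flat lattice graph (networkx is our implementation choice, not the authors'): the normalised average sphere distance is the quantity whose deviation from its flat-space value encodes the coarse-grained curvature, with positive curvature lowering it.

        import networkx as nx

        def average_sphere_distance(G, p, q, delta):
            """Average graph distance between the spheres S_delta(p) and
            S_delta(q); dividing by delta = d(p, q) gives the normalised
            ratio used to define the quantum Ricci curvature."""
            dp = nx.single_source_shortest_path_length(G, p)
            dq = nx.single_source_shortest_path_length(G, q)
            Sp = [v for v, dist in dp.items() if dist == delta]
            Sq = [v for v, dist in dq.items() if dist == delta]
            total = sum(nx.shortest_path_length(G, u, v)
                        for u in Sp for v in Sq)
            return total / (len(Sp) * len(Sq))

        G = nx.grid_2d_graph(40, 40)                   # flat reference lattice
        delta = 3                                      # also d(p, q) below
        ratio = average_sphere_distance(G, (20, 20), (20, 23), delta) / delta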

  11. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained by the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains in which geometric algorithms play a fundamental role: computer graphics, geographic information systems (GIS), robotics, and others. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  12. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  13. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures ...

  14. Continuous-Curvature Path Generation Using Fermat's Spiral

    Directory of Open Access Journals (Sweden)

    Anastasios M. Lekkas

    2013-10-01

    Full Text Available This paper proposes a novel methodology, based on Fermat's spiral (FS), for constructing curvature-continuous parametric paths in a plane. FS has zero curvature at its origin, a property that allows it to be connected to a straight line smoothly, that is, without the curvature discontinuity which occurs at the transition point between a line and a circular arc when constructing Dubins paths. Furthermore, contrary to the computationally expensive clothoids, FS is described by very simple parametric equations that are trivial to compute. On the downside, computing the length of an FS arc involves a Gaussian hypergeometric function. However, this function is absolutely convergent, and it is also shown that it poses no restrictions on the domain within which the length can be calculated. In addition, we present an alternative parametrization of FS which eliminates the parametric speed singularity at the origin, hence making the spiral suitable for path-tracking applications. A detailed description of how to construct curvature-continuous paths with FS is given.
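
    Both properties are easy to verify numerically. The sketch below samples FS with theta = u^2, the kind of regular reparametrization the paper advocates for path tracking, and checks that the finite-difference curvature vanishes toward the origin (the spiral constant and ranges are illustrative).

        import numpy as np

        def fermat_spiral(c=1.0, n=2000, u_max=3.5):
            """Fermat's spiral r = c*sqrt(theta), sampled with theta = u^2 so
            the parametric speed stays regular through the origin."""
            u = np.linspace(0.0, u_max, n)
            theta = u * u
            return c * u * np.cos(theta), c * u * np.sin(theta)

        def polyline_curvature(x, y):
            """Finite-difference curvature of a sampled planar curve."""
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            return (dx*ddy - dy*ddx) / (dx**2 + dy**2) ** 1.5

        x, y = fermat_spiral()
        print(polyline_curvature(x, y)[:5])  # values near 0 at the origin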

  15. Quantum algorithms for computational nuclear physics

    Directory of Open Access Journals (Sweden)

    Višňák Jakub

    2015-01-01

    Full Text Available While quantum algorithms have been studied as an efficient tool for stationary-state energy determination in molecular quantum systems, no similar study has yet been carried out for analogous problems in computational nuclear physics (the computation of energy levels of nuclei from empirical nucleon-nucleon or quark-quark potentials). Although the difference between the above-mentioned studies might seem negligible, it will be examined. First steps towards a particular simulation (on a classical computer) of the Iterative Phase Estimation Algorithm for computing the energy levels of the deuterium and tritium nuclei will be carried out, with the aim of proving the algorithm's feasibility (and its extensibility to heavier nuclei) for a possible practical realization on a real quantum computer.

  16. Computer Vision Utilization for Detection of Green House Tomato under Natural Illumination

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2013-02-01

    Full Text Available The agricultural sector has seen the application of automated systems for two decades; such systems are used, among other tasks, to harvest fruit. Computer vision is one of the technologies most widely used in the food industry and agriculture. In this paper, an automated system based on computer vision for harvesting greenhouse tomatoes is presented. A CCD camera takes images of the workspace, and tomatoes with over 50 percent ripeness are detected through an image processing algorithm. In this research, three color spaces (RGB, HSI and YCbCr) and three algorithms (threshold recognition, image curvature and red/green ratio) were used to distinguish ripe tomatoes from the background under natural illumination. The average errors of the threshold recognition, red/green ratio and image curvature algorithms were 11.82%, 10.03% and 7.95% in the HSI, RGB and YCbCr color spaces, respectively. Therefore, the YCbCr color space and the image curvature algorithm were identified as the most suitable for recognizing fruit under natural illumination conditions.
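
    The simplest of the three detectors, the red/green ratio, reduces to a per-pixel test. The thresholds below are assumptions for illustration, not the published values.

        import numpy as np

        def ripe_mask(rgb, ratio_thresh=1.2, min_red=60):
            """Boolean mask of pixels whose red/green ratio marks ripe tomato
            surface in an (H x W x 3) uint8 RGB image."""
            r = rgb[..., 0].astype(float)
            g = rgb[..., 1].astype(float) + 1e-6   # avoid division by zero
            return (r / g > ratio_thresh) & (r > min_red)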

  17. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of creating on this basis a device unique in computational power and operating principle, the quantum computer, are discussed. The main building blocks of quantum logic and schemes for implementing quantum computations are presented, along with some of the effective quantum algorithms known today that are intended to realize the advantages of quantum computation over classical computation. A special place among them is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described.

  18. Cortical surface registration using spherical thin-plate spline with sulcal lines and mean curvature as features.

    Science.gov (United States)

    Park, Hyunjin; Park, Jun-Sung; Seong, Joon-Kyung; Na, Duk L; Lee, Jong-Min

    2012-04-30

    Analysis of cortical patterns requires accurate cortical surface registration. Many researchers map the cortical surface onto a unit sphere and perform registration of two images defined on the unit sphere. Here we have developed a novel registration framework for the cortical surface based on spherical thin-plate splines. Small-scale composition of spherical thin-plate splines was used as the geometric interpolant to avoid folding in the geometric transform. Using an automatic algorithm based on anisotropic skeletons, we extracted seven sulcal lines, which we then incorporated as landmark information. Mean curvature was chosen as an additional feature for matching between spherical maps. We employed a two-term cost function to encourage matching of both sulcal lines and the mean curvature between the spherical maps. Application of our registration framework to fifty pairwise registrations of T1-weighted MRI scans resulted in improved registration accuracy, which was computed from sulcal lines. Our registration approach was tested as an additional procedure to improve an existing surface registration algorithm. Our registration framework maintained an accurate registration over the sulcal lines while significantly increasing the cross-correlation of mean curvature between the spherical maps being registered. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Local curvature analysis for classifying breast tumors: Preliminary analysis in dedicated breast CT

    International Nuclear Information System (INIS)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.; Lindfors, Karen K.

    2015-01-01

    Purpose: The purpose of this study is to measure the effectiveness of local curvature measures as novel image features for classifying breast tumors. Methods: A total of 119 breast lesions from 104 noncontrast dedicated breast computed tomography images of women were used in this study. Volumetric segmentation was done using a seed-based segmentation algorithm and then a triangulated surface was extracted from the resulting segmentation. Total, mean, and Gaussian curvatures were then computed. Normalized curvatures were used as classification features. In addition, traditional image features were also extracted and a forward feature selection scheme was used to select the optimal feature set. Logistic regression was used as a classifier and leave-one-out cross-validation was utilized to evaluate the classification performance of the features. The area under the receiver operating characteristic curve (AUC) was used as a figure of merit. Results: Among curvature measures, the normalized total curvature (C_T) showed the best classification performance (AUC of 0.74), while the others showed no classification power individually. Five traditional image features (two shape, two margin, and one texture descriptors) were selected via the feature selection scheme and the resulting classifier achieved an AUC of 0.83. Among those five features, the radial gradient index (RGI), which is a margin descriptor, showed the best classification performance (AUC of 0.73). A classifier combining RGI and C_T yielded an AUC of 0.81, which showed similar performance (i.e., no statistically significant difference) to the classifier with the above five traditional image features. Additional comparisons in AUC values between classifiers using different combinations of traditional image features and C_T were conducted. The results showed that C_T was able to replace the other four image features for the classification task. Conclusions: The normalized curvature measure
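
    The evaluation pipeline described (logistic regression, leave-one-out cross-validation, AUC) can be reproduced schematically as follows; the data below are random placeholders, not the study's lesions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import LeaveOneOut

        def loo_auc(X, y):
            """Collect one held-out probability per case, then score the AUC."""
            probs = np.empty(len(y))
            for train, test in LeaveOneOut().split(X):
                clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
                probs[test] = clf.predict_proba(X[test])[:, 1]
            return roc_auc_score(y, probs)

        # Placeholder data: 20 lesions, 2 features (e.g. RGI and normalized C_T).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(20, 2))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=20) > 0).astype(int)
        print(loo_auc(X, y))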

  1. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Cheng, Siu-Wing

    2014-09-01

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε)).

  2. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-05-06

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε)).

  3. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Cheng, Siu-Wing; Mencel, Liam A.; Vigneron, Antoine E.

    2014-01-01

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε)).

  4. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
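
    The two force terms translate into a simple explicit update of the level-set function. The 2D sketch below is generic, with an illustrative data-fidelity force and weighting, and is not the authors' LSR implementation.

        import numpy as np

        def curvature_2d(phi, eps=1e-9):
            """Curvature of the level sets of phi: div(grad phi / |grad phi|)."""
            gy, gx = np.gradient(phi)             # np.gradient returns (d/dy, d/dx)
            norm = np.sqrt(gx ** 2 + gy ** 2) + eps
            _, dx_nx = np.gradient(gx / norm)
            dy_ny, _ = np.gradient(gy / norm)
            return dx_nx + dy_ny

        def level_set_step(phi, data_force, dt=0.1, smooth_weight=0.5):
            """One update: projection-matching force plus curvature smoothing."""
            return phi + dt * (data_force + smooth_weight * curvature_2d(phi))

        # Toy usage: smooth a circular interface on a 32x32 grid.
        yy, xx = np.mgrid[0:32, 0:32]
        phi = np.sqrt((xx - 16.0) ** 2 + (yy - 16.0) ** 2) - 8.0
        phi = level_set_step(phi, data_force=np.zeros_like(phi))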

  5. Advanced entry guidance algorithm with landing footprint computation

    Science.gov (United States)

    Leavitt, James Aaron

    The design and performance evaluation of an entry guidance algorithm for future space transportation vehicles is presented. The algorithm performs two functions: on-board trajectory planning and trajectory tracking. The planned longitudinal path is followed by tracking drag acceleration, as is done by the Space Shuttle entry guidance. Unlike the Shuttle entry guidance, lateral path curvature is also planned and followed. A new trajectory planning function for the guidance algorithm is developed that is suitable for suborbital entry and that significantly enhances the overall performance of the algorithm for both orbital and suborbital entry. In comparison with the previous trajectory planner, the new planner produces trajectories that are easier to track, especially near the upper and lower drag boundaries and for suborbital entry. The new planner accomplishes this by matching the vehicle's initial flight path angle and bank angle, and by enforcing the full three-degree-of-freedom equations of motion with control derivative limits. Insights gained from trajectory optimization results contribute to the design of the new planner, giving it near-optimal downrange and crossrange capabilities. Planned trajectories and guidance simulation results are presented that demonstrate the improved performance. Based on the new planner, a method is developed for approximating the landing footprint for entry vehicles in near real-time, as would be needed for an on-board flight management system. The boundary of the footprint is constructed from the endpoints of extreme downrange and crossrange trajectories generated by the new trajectory planner. The footprint algorithm inherently possesses many of the qualities of the new planner, including quick execution, the ability to accurately approximate the vehicle's glide capabilities, and applicability to a wide range of entry conditions. Footprints can be generated for orbital and suborbital entry conditions using a pre

  6. Statistical mechanics of paths with curvature dependent action

    International Nuclear Information System (INIS)

    Ambjoern, J.; Durhuus, B.; Jonsson, T.

    1987-01-01

    We analyze the scaling limit of discretized random paths with curvature dependent action. For finite values of the curvature coupling constant the theory belongs to the universality class of simple random walk. It is possible to define a non-trivial scaling limit if the curvature coupling tends to infinity. We compute exactly the two point function in this limit and discuss the relevance of our results for random surfaces and string theories. (orig.)

  7. A micro-hydrology computation ordering algorithm

    Science.gov (United States)

    Croley, Thomas E.

    1980-11-01

    Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error-prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
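
    The automatic ordering amounts to a topological sort of the node dependency graph: a sub-area can be processed only after every upstream sub-area draining into it. A minimal sketch using Kahn's algorithm follows; the three-node watershed is invented for illustration.

        from collections import deque

        def computation_order(upstream):
            """Order nodes so that every upstream node is computed first.

            upstream: dict mapping each node to the nodes draining into it.
            """
            indegree = {n: len(ups) for n, ups in upstream.items()}
            downstream = {n: [] for n in upstream}
            for node, ups in upstream.items():
                for u in ups:
                    downstream[u].append(node)
            ready = deque(n for n, d in indegree.items() if d == 0)
            order = []
            while ready:
                node = ready.popleft()
                order.append(node)
                for m in downstream[node]:
                    indegree[m] -= 1
                    if indegree[m] == 0:
                        ready.append(m)
            if len(order) != len(upstream):
                raise ValueError("dependency cycle: no valid computation order")
            return order

        # Hypothetical watershed: hillslopes A and B drain into channel C.
        print(computation_order({"A": [], "B": [], "C": ["A", "B"]}))  # ['A', 'B', 'C']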

  8. A micro-hydrology computation ordering algorithm

    International Nuclear Information System (INIS)

    Croley, T.E. II

    1980-01-01

    Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error-prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented node definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies. (orig.)

  9. Computational algorithm for molybdenite concentrate annealing

    International Nuclear Information System (INIS)

    Alkatseva, V.M.

    1995-01-01

    A computational algorithm is presented for the annealing of molybdenite concentrate with granulated return dust, and for that of granulated molybdenite concentrate. The algorithm differs from known analogues for annealing sulphide raw materials by including the calculation of the return dust mass in stationary annealing; the latter quantity differs from the return dust mass value obtained in the first iteration step. The masses of the solid products are determined by the distribution of the concentrate annealing products, including return dust and bentonite. The algorithm can also be applied to computations for the annealing of other sulphide materials. 3 refs

  10. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  11. Bioinspired computation in combinatorial optimization: algorithms and their computational complexity

    DEFF Research Database (Denmark)

    Neumann, Frank; Witt, Carsten

    2012-01-01

    Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems, and it is very important that we understand the computational complexity of these algorithms. This tutorial examines the computational complexity of bioinspired computation for a range of combinatorial optimization problems. Classical single-objective optimization is examined first. The authors then investigate the computational complexity of bioinspired computation applied to multiobjective variants of the considered combinatorial optimization problems, and in particular they show how multiobjective optimization can help to speed up bioinspired computation for single-objective optimization problems. The tutorial is based on a book written by the authors with the same title. Further information about the book can be found at www.bioinspiredcomputation.com.

  12. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  13. Analysis of growth patterns during gravitropic curvature in roots of Zea mays by use of a computer-based video digitizer

    Science.gov (United States)

    Nelson, A. J.; Evans, M. L.

    1986-01-01

    A computer-based video digitizer system is described which allows automated tracking of markers placed on a plant surface. The system uses customized software to calculate relative growth rates at selected positions along the plant surface and to determine rates of gravitropic curvature based on the changing pattern of distribution of the surface markers. The system was used to study the time course of gravitropic curvature and changes in relative growth rate along the upper and lower surface of horizontally-oriented roots of maize (Zea mays L.). The growing region of the root was found to extend from about 1 mm behind the tip to approximately 6 mm behind the tip. In vertically-oriented roots the relative growth rate was maximal at about 2.5 mm behind the tip and declined smoothly on either side of the maximum. Curvature was initiated approximately 30 min after horizontal orientation with maximal (50 degrees) curvature being attained in 3 h. Analysis of surface extension patterns during the response indicated that curvature results from a reduction in growth rate along both the upper and lower surfaces with stronger reduction along the lower surface.
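
    From digitized marker positions, the curvature response can be quantified as the change in tangent direction along the organ, as in this small sketch; the marker coordinates are invented.

        import numpy as np

        def bending_angle_deg(points):
            """Angle between first and last tangents of a digitized midline.

            points: N x 2 array of marker (x, y) positions ordered along the root.
            """
            tangents = np.diff(points, axis=0)
            t0 = tangents[0] / np.linalg.norm(tangents[0])
            t1 = tangents[-1] / np.linalg.norm(tangents[-1])
            return np.degrees(np.arccos(np.clip(np.dot(t0, t1), -1.0, 1.0)))

        # Invented markers for a root tip that has curved downward ~50 degrees.
        pts = np.array([[0, 0], [1, 0], [2, -0.3], [2.8, -0.9], [3.4, -1.6]])
        print(bending_angle_deg(pts))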

  14. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.; Yan, Lie

    2014-01-01

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  15. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.

    2014-08-29

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  16. A simple algorithm for computing the smallest enclosing circle

    DEFF Research Database (Denmark)

    Skyum, Sven

    1991-01-01

    Presented is a simple O(n log n) algorithm for computing the smallest enclosing circle of a convex polygon. It can be easily extended to algorithms that compute the farthest- and the closest-point Voronoi diagram of a convex polygon within the same time bound.
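
    For comparison, the related randomized incremental (Welzl-style) algorithm solves the same smallest-enclosing-circle problem for arbitrary point sets in expected linear time. The sketch below implements that incremental method, not Skyum's convex-polygon algorithm.

        import random

        def circle_two(a, b):
            cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
            return (cx, cy, ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / 2)

        def circle_three(a, b, c):
            # Circumcircle via perpendicular-bisector intersection.
            ax, ay, bx, by, cx, cy = *a, *b, *c
            d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
            ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
                  + (cx**2 + cy**2) * (ay - by)) / d
            uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
                  + (cx**2 + cy**2) * (bx - ax)) / d
            return (ux, uy, ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

        def inside(c, p, eps=1e-9):
            return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= (c[2] + eps) ** 2

        def smallest_enclosing_circle(points):
            """Randomized incremental smallest enclosing circle (expected O(n))."""
            pts = list(points)
            random.seed(0)
            random.shuffle(pts)
            c = (pts[0][0], pts[0][1], 0.0)
            for i, p in enumerate(pts):
                if inside(c, p):
                    continue
                c = (p[0], p[1], 0.0)
                for j, q in enumerate(pts[:i]):
                    if inside(c, q):
                        continue
                    c = circle_two(p, q)
                    for k in pts[:j]:
                        if not inside(c, k):
                            c = circle_three(p, q, k)
            return c

        print(smallest_enclosing_circle([(0, 0), (2, 0), (1, 1), (1, -1)]))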

  17. Quantum Genetic Algorithms for Computer Scientists

    OpenAIRE

    Lahoz Beltrá, Rafael

    2016-01-01

    Genetic algorithms (GAs) are a class of evolutionary algorithms inspired by Darwinian natural selection. They are popular heuristic optimisation methods based on simulated genetic mechanisms, i.e., mutation, crossover, etc. and population dynamical processes such as reproduction, selection, etc. Over the last decade, the possibility to emulate a quantum computer (a computer using quantum-mechanical phenomena to perform operations on data) has led to a new class of GAs known as “Quantum Genetic Algorithms” (QGAs).

  18. Research article – Optimisation of paediatric computed radiography for full spine curvature measurements using a phantom: a pilot study

    NARCIS (Netherlands)

    de Haan, Seraphine; Reis, Cláudia; Ndlovu, Junior; Serrenho, Catarina; Akhtar, Ifrah; Garcia, José Antonio; Linde, Daniël; Thorskog, Martine; Franco, Loris; Hogg, Peter

    2015-01-01

    Aim: Optimise a set of exposure factors, with the lowest effective dose, to delineate spinal curvature with the modified Cobb method in a full spine using computed radiography (CR) for a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired by varying a set of parameters:

  19. The Relationship Between Surface Curvature and Abdominal Aortic Aneurysm Wall Stress.

    Science.gov (United States)

    de Galarreta, Sergio Ruiz; Cazón, Aitor; Antón, Raúl; Finol, Ender A

    2017-08-01

    The maximum diameter (MD) criterion is the most important factor when predicting risk of rupture of abdominal aortic aneurysms (AAAs). An elevated wall stress has also been linked to a high risk of aneurysm rupture, yet computing AAA wall stress remains an uncommon clinical practice. The purpose of this study is to assess whether other characteristics of the AAA geometry are statistically correlated with wall stress. Using in-house segmentation and meshing algorithms, 30 patient-specific AAA models were generated for finite element analysis (FEA). These models were subsequently used to estimate wall stress and maximum diameter and to evaluate the spatial distributions of wall thickness, cross-sectional diameter, mean curvature, and Gaussian curvature. Data analysis consisted of statistical correlations of the aforementioned geometry metrics with wall stress for the 30 AAA inner and outer wall surfaces. In addition, a linear regression analysis was performed with all the AAA wall surfaces to quantify the relationship of the geometric indices with wall stress. These analyses indicated that while all the geometry metrics have statistically significant correlations with wall stress, the local mean curvature (LMC) exhibits the highest average Pearson's correlation coefficient for both inner and outer wall surfaces. The linear regression analysis revealed coefficients of determination for the outer and inner wall surfaces of 0.712 and 0.516, respectively, with LMC having the largest effect on the linear regression equation with wall stress. This work underscores the importance of evaluating AAA mean wall curvature as a potential surrogate for wall stress.

  20. Algorithms and file structures for computational geometry

    International Nuclear Information System (INIS)

    Hinrichs, K.; Nievergelt, J.

    1983-01-01

    Algorithms for solving geometric problems and file structures for storing large amounts of geometric data are of increasing importance in computer graphics and computer-aided design. As examples of recent progress in computational geometry, we explain plane-sweep algorithms, which solve various topological and geometric problems efficiently; and we present the grid file, an adaptable, symmetric multi-key file structure that provides efficient access to multi-dimensional data along any space dimension. (orig.)

  1. Reachability by paths of bounded curvature in a convex polygon

    KAUST Repository

    Ahn, Heekap; Cheong, Otfried; Matoušek, Jiří; Vigneron, Antoine E.

    2012-01-01

    Let B be a point robot moving in the plane, whose path is constrained to forward motions with curvature at most 1, and let P be a convex polygon with n vertices. Given a starting configuration (a location and a direction of travel) for B inside P, we characterize the region of all points of P that can be reached by B, and show that it has complexity O(n). We give an O(n2) time algorithm to compute this region. We show that a point is reachable only if it can be reached by a path of type CCSCS, where C denotes a unit circle arc and S denotes a line segment. © 2011 Elsevier B.V.

  2. Homogeneous Buchberger algorithms and Sullivant's computational commutative algebra challenge

    DEFF Research Database (Denmark)

    Lauritzen, Niels

    2005-01-01

    We give a variant of the homogeneous Buchberger algorithm for positively graded lattice ideals. Using this algorithm we solve the Sullivant computational commutative algebra challenge.

  3. Prospective Algorithms for Quantum Evolutionary Computation

    OpenAIRE

    Sofge, Donald A.

    2008-01-01

    This effort examines the intersection of the emerging field of quantum computing and the more established field of evolutionary computation. The goal is to understand what benefits quantum computing might offer to computational intelligence and how computational intelligence paradigms might be implemented as quantum programs to be run on a future quantum computer. We critically examine proposed algorithms and methods for implementing computational intelligence paradigms, primarily focused on ...

  4. Scalar curvature in conformal geometry of Connes-Landi noncommutative manifolds

    Science.gov (United States)

    Liu, Yang

    2017-11-01

    We first propose a conformal geometry for Connes-Landi noncommutative manifolds and study the associated scalar curvature. The new scalar curvature contains its Riemannian counterpart as the commutative limit. Similar to the results on noncommutative two tori, the quantum part of the curvature consists of actions of the modular derivation through two local curvature functions. Explicit expressions for those functions are obtained for all even dimensions (greater than two). In dimension four, the one variable function shows striking similarity to the analytic functions of the characteristic classes appeared in the Atiyah-Singer local index formula, namely, it is roughly a product of the j-function (which defines the Â-class of a manifold) and an exponential function (which defines the Chern character of a bundle). By performing two different computations for the variation of the Einstein-Hilbert action, we obtain deep internal relations between two local curvature functions. Straightforward verification for those relations gives a strong conceptual confirmation for the whole computational machinery we have developed so far, especially the Mathematica code hidden behind the paper.

  5. Substrate Curvature Regulates Cell Migration -A Computational Study

    Science.gov (United States)

    He, Xiuxiu; Jiang, Yi

    Cell migration in the host microenvironment is essential to cancer etiology, progression and metastasis. Cellular processes of adhesion, cytoskeletal polymerization, contraction, and matrix remodeling act in concert to regulate cell migration, while the local extracellular matrix architecture modulates these processes. In this work we study how a stromal microenvironment with native and cell-derived curvature at the micrometer scale regulates cell motility patterns. We developed a 3D model of single cell migration on a curved substrate. Mathematical analysis of cell morphological adaptation to the cell-substrate interface shows that cells migrating on convex surfaces deform more than on concave surfaces. Both analytical and simulation results show that curved surfaces regulate the cell motile force for the cell's protruding front through force balance with focal adhesion and cell contraction. We also found that cell migration on concave substrates is more persistent. These results offer a novel biomechanical explanation of how substrate curvature regulates cell migration. NIH 1U01CA143069.

  6. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))

  7. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  8. An introduction to quantum computing algorithms

    CERN Document Server

    Pittenger, Arthur O

    2000-01-01

    In 1994 Peter Shor [65] published a factoring algorithm for a quantum computer that finds the prime factors of a composite integer N more efficiently than is possible with the known algorithms for a classical computer. Since the difficulty of the factoring problem is crucial for the security of a public key encryption system, interest (and funding) in quantum computing and quantum computation suddenly blossomed. Quantum computing had arrived. The study of the role of quantum mechanics in the theory of computation seems to have begun in the early 1980s with the publications of Paul Benioff [6], [7], who considered a quantum mechanical model of computers and the computation process. A related question was discussed shortly thereafter by Richard Feynman [35], who began from a different perspective by asking what kind of computer should be used to simulate physics. His analysis led him to the belief that with a suitable class of "quantum machines" one could imitate any quantum system.

  9. Quantum Genetic Algorithms for Computer Scientists

    Directory of Open Access Journals (Sweden)

    Rafael Lahoz-Beltra

    2016-10-01

    Full Text Available Genetic algorithms (GAs) are a class of evolutionary algorithms inspired by Darwinian natural selection. They are popular heuristic optimisation methods based on simulated genetic mechanisms, i.e., mutation, crossover, etc. and population dynamical processes such as reproduction, selection, etc. Over the last decade, the possibility to emulate a quantum computer (a computer using quantum-mechanical phenomena to perform operations on data) has led to a new class of GAs known as “Quantum Genetic Algorithms” (QGAs). In this review, we present a discussion, future potential, pros and cons of this new class of GAs. The review will be oriented towards computer scientists interested in QGAs “avoiding” the possible difficulties of quantum-mechanical phenomena.

  10. Once upon an algorithm how stories explain computing

    CERN Document Server

    Erwig, Martin

    2017-01-01

    How Hansel and Gretel, Sherlock Holmes, the movie Groundhog Day, Harry Potter, and other familiar stories illustrate the concepts of computing. Picture a computer scientist, staring at a screen and clicking away frantically on a keyboard, hacking into a system, or perhaps developing an app. Now delete that picture. In Once Upon an Algorithm, Martin Erwig explains computation as something that takes place beyond electronic computers, and computer science as the study of systematic problem solving. Erwig points out that many daily activities involve problem solving. Getting up in the morning, for example: You get up, take a shower, get dressed, eat breakfast. This simple daily routine solves a recurring problem through a series of well-defined steps. In computer science, such a routine is called an algorithm. Erwig illustrates a series of concepts in computing with examples from daily life and familiar stories. Hansel and Gretel, for example, execute an algorithm to get home from the forest. The movie Groundho...

  11. Directable weathering of concave rock using curvature estimation.

    Science.gov (United States)

    Jones, Michael D; Farley, McKay; Butler, Joseph; Beardall, Matthew

    2010-01-01

    We address the problem of directable weathering of exposed concave rock for use in computer-generated animation or games. Previous weathering models that admit concave surfaces are computationally inefficient and difficult to control. In nature, the spheroidal and cavernous weathering rates depend on the surface curvature. Spheroidal weathering is fastest in areas with large positive mean curvature and cavernous weathering is fastest in areas with large negative mean curvature. We simulate both processes using an approximation of mean curvature on a voxel grid. Both weathering rates are also influenced by rock durability. The user controls rock durability by editing a durability graph before and during weathering simulation. Simulations of rockfall and colluvium deposition further improve realism. The profile of the final weathered rock matches the shape of the durability graph up to the effects of weathering and colluvium deposition. We demonstrate the top-down directability and visual plausibility of the resulting model through a series of screenshots and rendered images. The results include the weathering of a cube into a sphere and of a sheltered inside corner into a cavern as predicted by the underlying geomorphological models.
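
    A common way to obtain a semilocal mean-curvature estimate on a voxel grid is to average the occupancy over a small neighborhood: near a flat face the occupied fraction is about one half, below that on convex bumps and above it in concave hollows. The sketch below (scipy-based, with illustrative parameters and sign convention) shows the idea; it is not necessarily the paper's exact estimator.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def mean_curvature_estimate(occupancy, size=5):
            """Semilocal curvature proxy: 0.5 minus the local occupied fraction.

            occupancy: 3-D array, 1.0 inside the rock and 0.0 outside.
            Positive values flag convex surface voxels (spheroidal weathering),
            negative values flag concave ones (cavernous weathering).
            """
            frac = uniform_filter(occupancy.astype(float), size=size)
            return 0.5 - frac

        # Toy usage: a cubic block of rock inside a 16^3 grid.
        rock = np.zeros((16, 16, 16))
        rock[4:12, 4:12, 4:12] = 1.0
        curv = mean_curvature_estimate(rock)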

  12. Parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log₂ N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log₂ N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log₂ N) algorithm is presented in both equation and graphic forms, which clearly show the parallelism inherent in the algorithm.

  13. Discrete Curvatures and Discrete Minimal Surfaces

    KAUST Repository

    Sun, Xiang

    2012-01-01

    This thesis presents an overview of some approaches to compute Gaussian and mean curvature on discrete surfaces and discusses discrete minimal surfaces. The variety of applications of differential geometry in visualization and shape design leads

  14. Curvature of Indoor Sensor Network: Clustering Coefficient

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available We investigate the geometric properties of the communication graph in realistic low-power wireless networks. In particular, we explore the concept of the curvature of a wireless network via the clustering coefficient. Clustering coefficient analysis is a computationally simplified, semilocal approach, which nevertheless captures such a large-scale feature as congestion in the underlying network. The clustering coefficient concept is applied to three cases of indoor sensor networks, under varying thresholds on the link packet reception rate (PRR). A transition from positive curvature (a “meshed” network) to negative curvature (a “core concentric” network) is observed as the threshold is increased. Even though this paper deals with network curvature per se, we nevertheless expand on the underlying congestion motivation, propose several new concepts (network inertia and centroid), and finally argue that greedy routing on a virtual positively curved network achieves load balancing on the physical network.
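
    Computing the clustering coefficient of a PRR-thresholded communication graph is direct with networkx; the link list below is hypothetical.

        import networkx as nx

        # Hypothetical links: (node_a, node_b, packet reception rate).
        links = [("s1", "s2", 0.95), ("s2", "s3", 0.90), ("s1", "s3", 0.60),
                 ("s3", "s4", 0.85), ("s2", "s4", 0.40)]

        def average_clustering_at(links, prr_threshold):
            """Average clustering coefficient of the links above the threshold."""
            g = nx.Graph((a, b) for a, b, prr in links if prr >= prr_threshold)
            return nx.average_clustering(g)

        # Raising the PRR threshold prunes weak links and lowers the clustering
        # coefficient, mirroring the meshed-to-core-concentric transition above.
        print(average_clustering_at(links, 0.5), average_clustering_at(links, 0.8))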

  15. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)² as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.

  16. Computed laminography and reconstruction algorithm

    International Nuclear Information System (INIS)

    Que Jiemin; Cao Daquan; Zhao Wei; Tang Xiao

    2012-01-01

    Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with variant weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system. (authors)
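
    The ART iteration referred to above is the Kaczmarz scheme: each ray supplies one linear equation, and the image is corrected ray by ray with a relaxation (weighting) factor. A small dense-matrix sketch, with a made-up system, follows.

        import numpy as np

        def art(A, b, n_sweeps=50, relax=0.5):
            """Algebraic reconstruction technique (Kaczmarz) for A x = b.

            A: (rays x pixels) system matrix; relax: relaxation/weighting factor.
            """
            x = np.zeros(A.shape[1])
            row_norms = (A * A).sum(axis=1)
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        # Tiny two-pixel "phantom" observed by three rays.
        A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        x_true = np.array([2.0, 3.0])
        print(art(A, A @ x_true))   # -> approximately [2, 3]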

  17. A Novel Parallel Algorithm for Edit Distance Computation

    Directory of Open Access Journals (Sweden)

    Muhammad Murtaza Yousaf

    2018-01-01

    Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations that are required to transform one string into the other. The weighted transformation operations are insert, remove, and substitute. A dynamic programming solution to find the edit distance exists, but it becomes computationally intensive when the lengths of the strings become very large. This work presents a novel parallel algorithm to solve the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m,n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m,n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations. The algorithm is further able to exploit spatial locality in its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems having shared memory. An implementation of the algorithm in OpenMP shows linear speedup and better execution time compared to the state-of-the-art parallel approach. The efficiency of the algorithm also proves better in comparison to its competitor.
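
    For reference, the underlying dynamic program is shown in plain serial form below; the parallel algorithm restructures exactly these dependencies (each cell needs its left, upper and upper-left neighbours) so that a whole row can be produced per iteration.

        def edit_distance(s, t):
            """Classic DP: distance between s[:i] and t[:j] for all i, j."""
            m, n = len(s), len(t)
            prev = list(range(n + 1))           # row i-1 of the DP table
            for i in range(1, m + 1):
                curr = [i] + [0] * n            # curr[0]: delete first i chars
                for j in range(1, n + 1):
                    cost = 0 if s[i - 1] == t[j - 1] else 1
                    curr[j] = min(prev[j] + 1,         # remove
                                  curr[j - 1] + 1,     # insert
                                  prev[j - 1] + cost)  # substitute
                prev = curr
            return prev[n]

        print(edit_distance("kitten", "sitting"))   # -> 3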

  18. Applying Kitaev's algorithm in an ion trap quantum computer

    International Nuclear Information System (INIS)

    Travaglione, B.; Milburn, G.J.

    2000-01-01

    Full text: Kitaev's algorithm is a method of estimating eigenvalues associated with an operator. Shor's factoring algorithm, which enables a quantum computer to crack RSA encryption codes, is a specific example of Kitaev's algorithm. It has been proposed that the algorithm can also be used to generate eigenstates. We extend this proposal for small quantum systems, identifying the conditions under which the algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate a simple example, in which the algorithm effectively generates eigenstates

  19. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
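
    Loop perforation, one of the heuristics named above, simply skips some iterations of a loop and accepts the approximate result. The sketch below applies the idea to power-iteration PageRank; the graph and the perforation schedule are invented for the example.

        import numpy as np

        def pagerank(adj, damping=0.85, max_iters=100, skip_every=0):
            """Power-iteration PageRank; skip_every > 0 perforates the loop.

            adj: dense 0/1 adjacency matrix, adj[i, j] = 1 for an edge i -> j.
            """
            n = adj.shape[0]
            out_deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
            M = (adj / out_deg).T               # column-stochastic transition
            rank = np.full(n, 1.0 / n)
            for it in range(max_iters):
                if skip_every and it % skip_every == 0:
                    continue                    # perforated: skip this update
                rank = (1 - damping) / n + damping * (M @ rank)
            return rank

        adj = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
        print(pagerank(adj))                # exact power iteration
        print(pagerank(adj, skip_every=3))  # cheaper, nearly the same ranks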

  20. Comparison of evolutionary computation algorithms for solving bi ...

    Indian Academy of Sciences (India)

    failure probability. Multiobjective Evolutionary Computation algorithms (MOEAs) are well-suited for multiobjective task scheduling on heterogeneous environments. The two multiobjective evolutionary algorithms, the Multiobjective Genetic Algorithm (MOGA) and Multiobjective Evolutionary Programming (MOEP), with.

  1. RES: Regularized Stochastic BFGS Algorithm

    Science.gov (United States)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
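
    A heavily simplified sketch in the spirit of RES is given below: stochastic gradients drive both the step and the curvature (BFGS) update, and a small regularization term keeps the curvature estimate well conditioned. All constants are illustrative, and the update shown is a textbook BFGS inverse update, not the paper's exact recursion.

        import numpy as np

        def stochastic_bfgs(grad_sample, x0, n_iters=500, step=0.05, delta=0.1):
            """Toy regularized stochastic BFGS on a noisy convex objective."""
            rng = np.random.default_rng(0)
            x = np.asarray(x0, dtype=float)
            H = np.eye(len(x))                # inverse-Hessian approximation
            I = np.eye(len(x))
            g = grad_sample(x, rng)
            for _ in range(n_iters):
                x_new = x - step * H @ g
                g_new = grad_sample(x_new, rng)
                s = x_new - x
                y = (g_new - g) + delta * s   # regularized gradient variation
                sy = s @ y
                if sy > 1e-12:                # standard BFGS inverse update
                    rho = 1.0 / sy
                    V = I - rho * np.outer(s, y)
                    H = V @ H @ V.T + rho * np.outer(s, s)
                x, g = x_new, g_new
            return x

        # Noisy quadratic with minimum at [1, -2].
        grad = lambda x, rng: 2 * (x - np.array([1.0, -2.0])) + rng.normal(scale=0.1, size=2)
        print(stochastic_bfgs(grad, np.zeros(2)))   # -> approximately [1, -2]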

  2. Quantum computation and Shor's factoring algorithm

    International Nuclear Information System (INIS)

    Ekert, A.; Jozsa, R.

    1996-01-01

    Current technology is beginning to allow us to manipulate rather than just observe individual quantum phenomena. This opens up the possibility of exploiting quantum effects to perform computations beyond the scope of any classical computer. Recently Peter Shor discovered an efficient algorithm for factoring whole numbers, which uses characteristically quantum effects. The algorithm illustrates the potential power of quantum computation, as there is no known efficient classical method for solving this problem. The authors give an exposition of Shor's algorithm together with an introduction to quantum computation and complexity theory. They discuss experiments that may contribute to its practical implementation. copyright 1996 The American Physical Society

  3. A neural algorithm for a fundamental computing problem.

    Science.gov (United States)

    Dasgupta, Sanjoy; Stevens, Charles F; Navlakha, Saket

    2017-11-10

    Similarity search-for example, identifying similar images in a database or similar documents on the web-is a fundamental computing problem faced by large-scale information retrieval systems. We discovered that the fruit fly olfactory circuit solves this problem with a variant of a computer science algorithm (called locality-sensitive hashing). The fly circuit assigns similar neural activity patterns to similar odors, so that behaviors learned from one odor can be applied when a similar odor is experienced. The fly algorithm, however, uses three computational strategies that depart from traditional approaches. These strategies can be translated to improve the performance of computational similarity searches. This perspective helps illuminate the logic supporting an important sensory function and provides a conceptually new algorithm for solving a fundamental computational problem. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
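
    The fly-inspired hash can be caricatured in a few lines: a sparse random projection expands the input, and a winner-take-all step keeps only the top-k most active units as the tag. Dimensions, connectivity and k below are invented for illustration.

        import numpy as np

        def fly_hash(x, proj, top_k):
            """Sparse random projection followed by winner-take-all."""
            activity = proj @ (x - x.mean())          # project mean-centred input
            tag = np.zeros_like(activity)
            tag[np.argsort(activity)[-top_k:]] = 1.0  # keep the top-k winners
            return tag

        rng = np.random.default_rng(1)
        d, expand, k = 50, 1000, 32
        proj = (rng.random((expand, d)) < 0.1).astype(float)  # ~10% connectivity

        a = rng.normal(size=d)
        b = a + 0.05 * rng.normal(size=d)    # near-duplicate of a
        c = rng.normal(size=d)               # unrelated input
        overlap = lambda u, v: int(fly_hash(u, proj, k) @ fly_hash(v, proj, k))
        print(overlap(a, b), overlap(a, c))  # the similar pair overlaps more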

  4. Parallel algorithms and architecture for computation of manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), the O(n²), and the O(n³) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class of NC and that the time and processor bounds are O(log² n) and O(n⁴), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n³) serial algorithms. Parallel computation of the O(n³) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.

  5. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  6. An Alternative Algorithm for Computing Watersheds on Shared Memory Parallel Computers

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.

    1995-01-01

    In this paper a parallel implementation of a watershed algorithm is proposed. The algorithm can easily be implemented on shared memory parallel computers. The watershed transform is generally considered to be inherently sequential since the discrete watershed of an image is defined using recursion.

  7. A Visualization Review of Cloud Computing Algorithms in the Last Decade

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2016-10-01

    Full Text Available Cloud computing has competitive advantages—such as on-demand self-service, rapid computing, cost reduction, and almost unlimited storage—that have attracted extensive attention from both academia and industry in recent years. Some review works have been reported to summarize extant studies related to cloud computing, but few analyze these studies based on the citations. Co-citation analysis can provide scholars a strong support to identify the intellectual bases and leading edges of a specific field. In addition, advanced algorithms, which can directly affect the availability, efficiency, and security of cloud computing, are the key to conducting computing across various clouds. Motivated by these observations, we conduct a specific visualization review of the studies related to cloud computing algorithms using one mainstream co-citation analysis tool—CiteSpace. The visualization results detect the most influential studies, journals, countries, institutions, and authors on cloud computing algorithms and reveal the intellectual bases and focuses of cloud computing algorithms in the literature, providing guidance for interested researchers to make further studies on cloud computing algorithms.

  8. Multiobjective Variable Neighborhood Search algorithm for scheduling independent jobs on computational grid

    Directory of Open Access Journals (Sweden)

    S. Selvi

    2015-07-01

    Full Text Available Grid computing solves high-performance and high-throughput computing problems through sharing resources ranging from personal computers to supercomputers distributed around the world. As grid environments facilitate distributed computation, the scheduling of grid jobs has become an important issue. In this paper, an investigation into implementing the Multiobjective Variable Neighborhood Search (MVNS) algorithm for scheduling independent jobs on a computational grid is carried out. The performance of the proposed algorithm has been evaluated against the Min–Min algorithm, Simulated Annealing (SA) and the Greedy Randomized Adaptive Search Procedure (GRASP). Simulation results show that the MVNS algorithm generally performs better than the other metaheuristic methods.

  9. A Novel Clustering Algorithm Inspired by Membrane Computing

    Directory of Open Access Journals (Sweden)

    Hong Peng

    2015-01-01

    Full Text Available P systems are a class of distributed parallel computing models; this paper presents a novel clustering algorithm, called the membrane clustering algorithm, which is inspired by the mechanism of a tissue-like P system with a loop structure of cells. The objects of the cells represent candidate cluster centers and are evolved by the evolution rules. Based on the loop membrane structure, the communication rules realize a local neighborhood topology, which helps the coevolution of the objects and improves the diversity of objects in the system. The tissue-like P system can effectively search for the optimal partitioning with the help of its parallel computing advantage. The proposed clustering algorithm is evaluated on four artificial data sets and six real-life data sets. Experimental results show that the proposed clustering algorithm is superior or competitive to the k-means algorithm and several evolutionary clustering algorithms recently reported in the literature.

  10. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  11. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  12. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  13. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  14. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-01-01

    The algorithm reduces the computation of the straight skeleton of a polygon to a motorcycle graph computation in O(n (log n) log r) time, where n is the number of vertices and r the number of reflex vertices. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Combined with known motorcycle graph algorithms, this result yields faster straight skeleton computations.

  15. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
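
    The diagonal preconditioner described above is easy to make concrete. Below is a minimal Jacobi-preconditioned conjugate gradient solver for M(q) qdd = tau, with the inertia matrix M taken as a dense symmetric positive definite array; this is an illustrative sketch, not the paper's recursive formulation of the matrix-vector products.

        import numpy as np

        def pcg_forward_dynamics(M, tau, tol=1e-10, max_iter=100):
            """Solve M qdd = tau by conjugate gradients, preconditioned with
            the diagonal of the inertia matrix M."""
            d = np.diag(M)                      # diagonal preconditioner
            qdd = np.zeros_like(tau)
            r = tau - M @ qdd                   # residual
            z = r / d                           # preconditioned residual
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Mp = M @ p
                alpha = rz / (p @ Mp)
                qdd += alpha * p
                r -= alpha * Mp
                if np.linalg.norm(r) < tol:
                    break
                z = r / d
                rz, rz_old = r @ z, rz
                p = z + (rz / rz_old) * p
            return qdd

        M = np.array([[4.0, 1.0], [1.0, 3.0]])   # toy SPD "inertia matrix"
        tau = np.array([1.0, 2.0])
        print(pcg_forward_dynamics(M, tau))       # matches np.linalg.solve(M, tau)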

  16. 3D Facial Similarity Measure Based on Geodesic Network and Curvatures

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Full Text Available Automated 3D facial similarity measure is a challenging and valuable research topic in anthropology and computer graphics. It is widely used in various fields, such as criminal investigation, kinship confirmation, and face recognition. This paper proposes a 3D facial similarity measure method based on a combination of geodesic and curvature features. Firstly, a geodesic network is generated for each face with geodesics and iso-geodesics determined, and these network points are adopted as the correspondence across face models. Then, four metrics associated with curvatures, that is, the mean curvature, Gaussian curvature, shape index, and curvedness, are computed for each network point by using a weighted average of its neighborhood points. Finally, correlation coefficients according to these metrics are computed, respectively, as the similarity measures between two 3D face models. Experiments are conducted on 3D facial models of different persons and on different 3D facial models of the same person, and the results are compared with a subjective face similarity study. The results show that the geodesic network plays an important role in 3D facial similarity measure. The similarity measure defined by the shape index is basically consistent with human subjective evaluation, and it can measure 3D face similarity more objectively than the other indices.
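
    Two of the four metrics above, the shape index and the curvedness, are standard functions of the principal curvatures; the sketch below uses Koenderink's usual definitions (an assumption, since the paper may normalize differently) and the correlation-based similarity they feed into.

        import numpy as np

        def shape_index(k1, k2):
            # Koenderink shape index in (-1, 1); assumes k1 > k2 pointwise.
            return (2.0 / np.pi) * np.arctan((k1 + k2) / (k2 - k1))

        def curvedness(k1, k2):
            return np.sqrt((k1**2 + k2**2) / 2.0)

        def similarity(metric_a, metric_b):
            """Correlation coefficient between a curvature metric sampled at
            corresponding geodesic-network points of two faces."""
            return np.corrcoef(metric_a, metric_b)[0, 1]

        # Toy example: two faces sampled at five corresponding network points.
        k1_a = np.array([0.9, 0.5, 0.1, -0.2, 0.4]); k2_a = np.array([0.3, 0.1, -0.1, -0.5, 0.0])
        k1_b = np.array([0.8, 0.6, 0.2, -0.1, 0.5]); k2_b = np.array([0.2, 0.2, -0.2, -0.4, 0.1])
        print(similarity(shape_index(k1_a, k2_a), shape_index(k1_b, k2_b)))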

  17. Computational algorithms for simulations in atmospheric optics.

    Science.gov (United States)

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.

  18. Fast algorithm for computing complex number-theoretic transforms

    Science.gov (United States)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix FFT algorithm for computing transforms over the Galois field GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.

  19. A fast algorithm for sparse matrix computations related to inversion

    International Nuclear Information System (INIS)

    Li, S.; Wu, W.; Darve, E.

    2013-01-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green’s functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices, up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate.

  20. A fast algorithm for sparse matrix computations related to inversion

    Energy Technology Data Exchange (ETDEWEB)

    Li, S., E-mail: lisong@stanford.edu [Institute for Computational and Mathematical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Stanford, CA 94305 (United States); Wu, W. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Packard Building, Room 268, Stanford, CA 94305 (United States); Darve, E. [Institute for Computational and Mathematical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Stanford, CA 94305 (United States); Department of Mechanical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Room 209, Stanford, CA 94305 (United States)

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green’s functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices, up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate.

  1. An Algorithm Computing the Local $b$ Function by an Approximate Division Algorithm in $\\hat{\\mathcal{D}}$

    OpenAIRE

    Nakayama, Hiromasa

    2006-01-01

    We give an algorithm to compute the local $b$ function. In this algorithm, we use the Mora division algorithm in the ring of differential operators and an approximate division algorithm in the ring of differential operators with power series coefficients.

  2. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  3. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-illustrated examples.

  4. Computational Discovery of Materials Using the Firefly Algorithm

    Science.gov (United States)

    Avendaño-Franco, Guillermo; Romero, Aldo

    Our current ability to model physical phenomena accurately, the increase in computational power, and better algorithms are the driving forces behind the computational discovery and design of novel materials, allowing for virtual characterization before their realization in the laboratory. We present the implementation of a novel firefly algorithm, a population-based algorithm for global optimization, for searching the structure/composition space. This computation-intensive approach naturally takes advantage of concurrency and targeted exploration while still keeping enough diversity. We apply the new method to both periodic and non-periodic structures, and we present the implementation challenges and solutions to improve efficiency. The implementation makes use of computational materials databases and network analysis to optimize the search and get insights about the geometric structure of local minima on the energy landscape. The method has been implemented in our software PyChemia, an open-source package for materials discovery. We acknowledge the support of DMREF-NSF 1434897 and the Donors of the American Chemical Society Petroleum Research Fund for partial support of this research under Contract 54075-ND10.
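
    For readers unfamiliar with the population-based search mentioned above, here is a minimal firefly algorithm on a generic objective; the parameter values and the quadratic test function are illustrative only, and the PyChemia implementation adds structure databases and materials-specific moves not shown here.

        import numpy as np

        def firefly_minimize(f, dim, n_fireflies=20, n_iter=100,
                             alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
            """Minimal firefly algorithm: dimmer fireflies move toward brighter
            (lower-objective) ones, with attractiveness decaying with distance."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, size=(n_fireflies, dim))
            for _ in range(n_iter):
                fitness = np.array([f(xi) for xi in x])
                for i in range(n_fireflies):
                    for j in range(n_fireflies):
                        if fitness[j] < fitness[i]:          # j is brighter
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                alpha *= 0.97                                # cool the random walk
            fitness = np.array([f(xi) for xi in x])
            return x[np.argmin(fitness)]

        print(firefly_minimize(lambda v: np.sum(v**2), dim=3))   # near the origin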

  5. Higher Curvature Gravity from Entanglement in Conformal Field Theories

    Science.gov (United States)

    Haehl, Felix M.; Hijano, Eliot; Parrikar, Onkar; Rabideau, Charles

    2018-05-01

    By generalizing different recent works to the context of higher curvature gravity, we provide a unifying framework for three related results: (i) If an asymptotically anti-de Sitter (AdS) spacetime computes the entanglement entropies of ball-shaped regions in a conformal field theory using a generalized Ryu-Takayanagi formula up to second order in state deformations around the vacuum, then the spacetime satisfies the correct gravitational equations of motion up to second order around the AdS background. (ii) The holographic dual of entanglement entropy in higher curvature theories of gravity is given by the Wald entropy plus a particular correction term involving extrinsic curvatures. (iii) Conformal field theory relative entropy is dual to gravitational canonical energy (also in higher curvature theories of gravity). Especially for the second point, our novel derivation of this previously known statement does not involve the Euclidean replica trick.

  6. FPGA Implementation of Computer Vision Algorithm

    OpenAIRE

    Zhou, Zhonghua

    2014-01-01

    Computer vision algorithms, which play a significant role in vision processing, are widely applied in many areas such as geological survey, traffic management, and medical care. Most situations require the processing to be real-time, in other words, as fast as possible. Field Programmable Gate Arrays (FPGAs) have the advantage of a parallel fabric in programming, compared to the serial execution of CPUs, which makes the FPGA a perfect platform for implementing vision algorithms.

  7. Using a Quadtree Algorithm To Assess Line of Sight

    Science.gov (United States)

    Gonzalez, Joseph; Chamberlain, Robert; Tailor, Eric; Gutt, Gary

    2006-01-01

    A matched pair of computer algorithms determines whether line of sight (LOS) is obstructed by terrain. These algorithms were originally designed for use in conjunction with combat-simulation software in military training exercises, but could also be used for such commercial purposes as evaluating lines of sight for antennas or determining what can be seen from a "room with a view." The quadtree preparation algorithm operates on an array of digital elevation data and only needs to be run once for a terrain region, which can be quite large. Relatively little computation time is needed, as each elevation value is considered only one and one-third times. The LOS assessment algorithm uses that quadtree to answer LOS queries. To determine whether LOS is obstructed, a piecewise-planar (or higher-order) terrain skin is computationally draped over the digital elevation data. Adjustments are made to compensate for curvature of the Earth and for refraction of the LOS by the atmosphere. Average computing time appears to be proportional to the number of queries times the logarithm of the number of elevation data points. Accuracy is as high as is possible for the available elevation data, and symmetric results are assured. In the simulation, the LOS query program runs as a separate process, thereby making more random-access memory available for other computations.
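
    The curvature and refraction adjustments mentioned above are commonly made with an effective-earth-radius model. The brute-force profile test below illustrates the geometry (the 4/3 refraction factor is the usual default, and the linear scan stands in for the article's far faster quadtree method):

        import numpy as np

        R_EARTH = 6_371_000.0      # mean earth radius, meters
        K_REFRACTION = 4.0 / 3.0   # standard effective-earth-radius factor

        def line_of_sight(profile, spacing, h_obs=2.0, h_tgt=2.0):
            """profile: terrain elevations (m) sampled every `spacing` meters
            from observer to target. Returns True if the LOS is unobstructed."""
            n = len(profile)
            d_total = (n - 1) * spacing
            z0, z1 = profile[0] + h_obs, profile[-1] + h_tgt
            for i in range(1, n - 1):
                d = i * spacing
                sight = z0 + (z1 - z0) * d / d_total   # sight-line height at d
                # Earth curvature (reduced by refraction) effectively raises
                # the intervening terrain by d*(D-d)/(2kR).
                bulge = d * (d_total - d) / (2.0 * K_REFRACTION * R_EARTH)
                if profile[i] + bulge > sight:
                    return False
            return True

        print(line_of_sight(np.array([10.0, 12.0, 15.0, 11.0, 9.0]), spacing=500.0))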

  8. a Voxel-Based Filtering Algorithm for Mobile LIDAR Data

    Science.gov (United States)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in the xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed through analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
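
    The 2-D blocking and 3-D voxelization of the first step amount to hashing points by integer cell indices; a minimal sketch (block and voxel sizes are illustrative, and the upward-growing and curvature-refinement steps are omitted):

        from collections import defaultdict
        import numpy as np

        def voxelize(points, block_size=20.0, voxel_size=0.5):
            """Group LiDAR points (N x 3 array) into 2-D blocks in the
            xy-plane, then into 3-D voxels inside each block."""
            blocks = defaultdict(lambda: defaultdict(list))
            for idx, (x, y, z) in enumerate(points):
                block_key = (int(x // block_size), int(y // block_size))
                voxel_key = (int(x // voxel_size), int(y // voxel_size),
                             int(z // voxel_size))
                blocks[block_key][voxel_key].append(idx)
            return blocks

        pts = np.random.default_rng(0).uniform(0, 50, size=(1000, 3))
        blocks = voxelize(pts)
        print(len(blocks), "blocks,",
              sum(len(v) for v in blocks.values()), "occupied voxels")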

  9. Fast Algorithm for Computing the Discrete Hartley Transform of Type-II

    Directory of Open Access Journals (Sweden)

    Mounir Taha Hamood

    2016-06-01

    Full Text Available The generalized discrete Hartley transforms (GDHTs) have proved to be an efficient alternative to the generalized discrete Fourier transforms (GDFTs) for real-valued data applications. In this paper, the development of direct computation of a radix-2 decimation-in-time (DIT) algorithm for the fast calculation of the GDHT of type-II (DHT-II) is presented. The mathematical analysis and the implementation of the developed algorithm are derived, showing that this algorithm possesses a regular structure and can be implemented in-place for efficient memory utilization. The performance of the proposed algorithm is analyzed and the computational complexity is calculated for different transform lengths. A comparison between this algorithm and existing DHT-II algorithms shows that it can be considered as a good compromise between the structural and computational complexities.
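
    For reference, here is a direct O(N^2) evaluation of the transform that a radix-2 DIT algorithm such as the one above computes in O(N log N); the kernel assumes the common type-II definition X_k = sum_n x_n cas(pi k (2n+1)/N) with cas(t) = cos t + sin t, which may differ from the paper's by normalization.

        import numpy as np

        def cas(theta):
            return np.cos(theta) + np.sin(theta)

        def dht_ii_direct(x):
            """Direct O(N^2) type-II discrete Hartley transform (reference
            implementation; a fast algorithm must reproduce this output)."""
            x = np.asarray(x, dtype=float)
            N = len(x)
            n = np.arange(N)
            k = n.reshape(-1, 1)
            return cas(np.pi * k * (2 * n + 1) / N) @ x

        print(dht_ii_direct([1.0, 2.0, 3.0, 4.0]))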

  10. Associative Algorithms for Computational Creativity

    Science.gov (United States)

    Varshney, Lav R.; Wang, Jun; Varshney, Kush R.

    2016-01-01

    Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…

  11. Modern approaches to discrete curvature

    CERN Document Server

    Romon, Pascal

    2017-01-01

     This book provides a valuable glimpse into discrete curvature, a rich new field of research which blends discrete mathematics, differential geometry, probability and computer graphics. It includes a vast collection of ideas and tools which will offer something new to all interested readers. Discrete geometry has arisen as much as a theoretical development as in response to unforeseen challenges coming from applications. Discrete and continuous geometries have turned out to be intimately connected. Discrete curvature is the key concept connecting them through many bridges in numerous fields: metric spaces, Riemannian and Euclidean geometries, geometric measure theory, topology, partial differential equations, calculus of variations, gradient flows, asymptotic analysis, probability, harmonic analysis, graph theory, etc. In spite of its crucial importance both in theoretical mathematics and in applications, up to now, almost no books have provided a coherent outlook on this emerging field.

  12. CURVATURE-DRIVEN MOLECULAR FLOW ON MEMBRANE SURFACE.

    Science.gov (United States)

    Mikucki, Michael; Zhou, Y C

    2017-01-01

    This work presents a mathematical model for the localization of multiple species of diffusion molecules on membrane surfaces. Morphological change of bilayer membrane in vivo is generally modulated by proteins. Most of these modulations are associated with the localization of related proteins in the crowded lipid environments. We start with the energetic description of the distributions of molecules on curved membrane surface, and define the spontaneous curvature of bilayer membrane as a function of the molecule concentrations on membrane surfaces. A drift-diffusion equation governs the gradient flow of the surface molecule concentrations. We recast the energetic formulation and the related governing equations by using an Eulerian phase field description to define membrane morphology. Computational simulations with the proposed mathematical model and related numerical techniques predict (i) the molecular localization on static membrane surfaces at locations with preferred mean curvatures, and (ii) the generation of preferred mean curvature which in turn drives the molecular localization.

  13. Fault-tolerant search algorithms reliable computation with unreliable information

    CERN Document Server

    Cicalese, Ferdinando

    2013-01-01

    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures.

  14. Realization of seven-qubit Deutsch-Jozsa algorithm on NMR quantum computer

    International Nuclear Information System (INIS)

    Wei Daxiu; Yang Xiaodong; Luo Jun; Sun Xianping; Zeng Xizhi; Liu Maili; Ding Shangwu

    2002-01-01

    In recent years, remarkable progress has been made in the experimental realization of quantum information, especially based on nuclear magnetic resonance (NMR) theory. Among quantum algorithms, the Deutsch-Jozsa algorithm has been widely studied. It can be realized on an NMR quantum computer and can also be simplified by using Cirac's scheme. The principle of the Deutsch-Jozsa quantum algorithm is first analyzed, and the authors then implement the seven-qubit Deutsch-Jozsa algorithm on an NMR quantum computer.
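
    The logic of the Deutsch-Jozsa test (a single oracle query decides constant versus balanced) can be checked with a small state-vector simulation; the phase-oracle formulation below is a standard textbook simplification, not the NMR pulse sequence used in the paper.

        import numpy as np

        def deutsch_jozsa(f, n):
            """Decide whether f: {0,1}^n -> {0,1} is constant or balanced from
            one oracle application, simulated on a 2^n state vector."""
            N = 2 ** n
            state = np.full(N, 1.0 / np.sqrt(N))          # H^n applied to |0...0>
            state *= np.array([(-1.0) ** f(x) for x in range(N)])  # phase oracle
            amp0 = state.sum() / np.sqrt(N)               # amplitude of |0...0> after H^n
            return "constant" if abs(amp0) > 0.5 else "balanced"

        print(deutsch_jozsa(lambda x: 0, n=3))                       # constant
        print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, n=3))   # balanced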

  15. Static Load Balancing Algorithms in Cloud Computing: Challenges & Solutions

    Directory of Open Access Journals (Sweden)

    Nadeem Shah

    2015-08-01

    Full Text Available Cloud computing provides on-demand hosted computing resources and services over the Internet on a pay-per-use basis. It is currently becoming the favored method of communication and computation over scalable networks due to numerous attractive attributes such as high availability, scalability, fault tolerance, simplicity of management, and low cost of ownership. Due to the huge demand for cloud computing, efficient load balancing becomes critical to ensure that computational tasks are evenly distributed across servers to prevent bottlenecks. The aim of this review paper is to understand the current challenges in cloud computing, primarily in cloud load balancing using static algorithms, and to find gaps to bridge for more efficient static cloud load balancing in the future. We believe the ideas suggested as new solutions will allow researchers to redesign better algorithms for better functionalities and improved user experiences in simple cloud systems. This could assist small businesses that cannot afford infrastructure that supports complex and dynamic load balancing algorithms.
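
    As a baseline for the static schemes surveyed, here is a weighted round-robin dispatcher, the simplest static load balancer: the assignment pattern is fixed before runtime from the servers' capacity weights. Server names and weights are made up for illustration.

        from itertools import cycle

        def weighted_round_robin(servers):
            """Static load balancing: each server appears in the rotation in
            proportion to its fixed capacity weight."""
            rotation = [name for name, weight in servers for _ in range(weight)]
            return cycle(rotation)

        dispatch = weighted_round_robin([("web-1", 3), ("web-2", 1)])
        for request_id in range(8):
            print(request_id, "->", next(dispatch))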

  16. An algorithm of computing inhomogeneous differential equations for definite integrals

    OpenAIRE

    Nakayama, Hiromasa; Nishiyama, Kenta

    2010-01-01

    We give an algorithm to compute inhomogeneous differential equations for definite integrals with parameters. The algorithm is based on the integration algorithm for $D$-modules by Oaku. The main tool in the algorithm is the Gröbner basis method in the ring of differential operators.

  17. A class of parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, but at significantly higher efficiency.

  18. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    Science.gov (United States)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. Firstly, a cloud computing task scheduling model is established and a fitness function is defined according to the model. The improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. A performance test was carried out on the CloudSim simulation platform; the experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, achieving good optimal scheduling of cloud computing tasks.
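
    The classical differential evolution loop that the improvement builds on is compact enough to sketch. Below is the textbook rand/1/bin variant on a generic objective; the paper's dynamic selection and mutation strategies, and the mapping from vectors to task-to-VM assignments, are left out.

        import numpy as np

        def differential_evolution(f, dim, pop_size=20, n_gen=100,
                                   F=0.8, CR=0.9, seed=0):
            """Textbook DE (rand/1/bin): mutate with a scaled difference
            vector, crossover, and keep the better of parent and trial."""
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-5, 5, size=(pop_size, dim))
            cost = np.array([f(p) for p in pop])
            for _ in range(n_gen):
                for i in range(pop_size):
                    others = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(others, size=3, replace=False)]
                    mutant = a + F * (b - c)
                    mask = rng.random(dim) < CR
                    mask[rng.integers(dim)] = True   # take at least one gene
                    trial = np.where(mask, mutant, pop[i])
                    trial_cost = f(trial)
                    if trial_cost < cost[i]:
                        pop[i], cost[i] = trial, trial_cost
            return pop[np.argmin(cost)]

        print(differential_evolution(lambda v: np.sum(v**2), dim=4))  # near origin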

  19. Fast algorithms for computing phylogenetic divergence time.

    Science.gov (United States)

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primates taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.

  20. GLOA: A New Job Scheduling Algorithm for Grid Computing

    Directory of Open Access Journals (Sweden)

    Zahra Pooranian

    2013-03-01

    Full Text Available The purpose of grid computing is to produce a virtual supercomputer by using free resources available through widespread networks such as the Internet. This resource distribution, changes in resource availability, and an unreliable communication infrastructure pose a major challenge for efficient resource allocation. Because of the geographical spread of resources and their distributed management, grid scheduling is considered to be an NP-complete problem. It has been shown that evolutionary algorithms offer good performance for grid scheduling. This article uses a new evolutionary (distributed) algorithm inspired by the effect of leaders in social groups, the group leaders' optimization algorithm (GLOA), to solve the problem of scheduling independent tasks in a grid computing system. Simulation results comparing GLOA with several other evolutionary algorithms show that GLOA produces shorter makespans.

  1. A recursive algorithm for computing the inverse of the Vandermonde matrix

    Directory of Open Access Journals (Sweden)

    Youness Aliyari Ghassabeh

    2016-12-01

    Full Text Available The inverse of a Vandermonde matrix has been used for signal processing, polynomial interpolation, curve fitting, wireless communication, and system identification. In this paper, we propose a novel fast recursive algorithm to compute the inverse of a Vandermonde matrix. The algorithm computes the inverse of a higher order Vandermonde matrix using the available lower order inverse matrix with a computational cost of $O(n^2)$. The proposed algorithm is given in a matrix form, which makes it appropriate for hardware implementation. The running time of the proposed algorithm to find the inverse of a Vandermonde matrix using a lower order Vandermonde matrix is compared with the running time of the matrix inversion function implemented in MATLAB.

  2. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request particularly requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require an efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools for evaluating different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.

  3. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    Full Text Available The Space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.

  4. An extended Intelligent Water Drops algorithm for workflow scheduling in cloud computing environment

    Directory of Open Access Journals (Sweden)

    Shaymaa Elsherbiny

    2018-03-01

    Full Text Available Cloud computing is emerging as a high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and a flexible computational architecture. Many resource management methods may enhance the efficiency of the whole cloud computing system. The key part of cloud computing resource management is resource scheduling. Optimized scheduling of tasks on the cloud virtual machines is an NP-hard problem, and many algorithms have been presented to solve it. The variations among these schedulers are due to the fact that the scheduling strategies of the schedulers are adapted to the changing environment and the types of tasks. The focus of this paper is on workflow scheduling in cloud computing, which is gaining a lot of attention recently because workflows have emerged as a paradigm to represent complex computing problems. We propose a novel algorithm extending the nature-inspired Intelligent Water Drops (IWD) algorithm that optimizes the scheduling of workflows on the cloud. The proposed algorithm is implemented and embedded within the workflow simulation toolkit and tested in different simulated cloud environments with different cost models. Our algorithm showed noticeable enhancements over the classical workflow scheduling algorithms. We compared the proposed IWD-based algorithm with other well-known scheduling algorithms, including MIN-MIN, MAX-MIN, Round Robin, FCFS, MCT, PSO, and C-PSO; the proposed algorithm presented noticeable enhancements in performance and cost in most situations.

  5. High-speed computation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm, namely, the long computational time due to slow convergence and the large memory required for the storage of the image, projection data and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for the high-speed computation of the EM algorithm, using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, a CD 4360 mainframe, and the EH system. The results show that the computational speed performance of an EH using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable to a larger number of PEs.
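
    The EM iteration that gets parallelized is the multiplicative MLEM update; below is a dense-matrix sketch with a toy system matrix A and projection data y, illustrative stand-ins for the real scanner geometry.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """EM/MLEM reconstruction: lam <- lam * A^T(y / (A lam)) / (A^T 1).
            A: (n_proj x n_pix) system matrix, y: measured projection counts."""
            sens = A.sum(axis=0)                     # sensitivity image A^T 1
            lam = np.ones(A.shape[1])                # start from a uniform image
            for _ in range(n_iter):
                proj = A @ lam                       # forward projection
                ratio = y / np.maximum(proj, 1e-12)  # compare with measurements
                lam *= (A.T @ ratio) / sens          # multiplicative update
            return lam

        rng = np.random.default_rng(0)
        A = rng.random((30, 10))
        truth = rng.random(10)
        print(np.round(mlem(A, A @ truth, n_iter=200), 2))  # approaches `truth`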

  6. Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Richard Lamb

    2015-09-01

    Full Text Available Within the mind, there are a myriad of ideas that make sense within the bounds of everyday experience but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine-learning-based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a "virtual" student to solve a Piagetian task. Using the Student Task and Cognition Model (STAC-M), a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M and the STAC-M with inclusion of the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks after cognitive retraining with the Multiobjective Evolutionary Algorithm. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.

  7. Fundamentals of natural computing basic concepts, algorithms, and applications

    CERN Document Server

    de Castro, Leandro Nunes

    2006-01-01

    Contents: Introduction (A Small Sample of Ideas; The Philosophy of Natural Computing; The Three Branches: A Brief Overview; When to Use Natural Computing Approaches; Conceptualization; General Concepts). Part I, Computing Inspired by Nature: Evolutionary Computing (Problem Solving as a Search Task; Hill Climbing and Simulated Annealing; Evolutionary Biology; Evolutionary Computing; The Other Main Evolutionary Algorithms; From Evolutionary Biology to Computing; Scope of Evolutionary Computing); Neurocomputing (The Nervous System; Artificial…

  8. Patch Similarity Modulus and Difference Curvature Based Fourth-Order Partial Differential Equation for Image Denoising

    Directory of Open Access Journals (Sweden)

    Yunjiao Bai

    2015-01-01

    Full Text Available The traditional fourth-order nonlinear diffusion denoising model suffers from isolated speckles and the loss of fine details in the processed image. For this reason, a new fourth-order partial differential equation based on the patch similarity modulus and the difference curvature is proposed for image denoising. First, based on the intensity similarity of neighboring pixels, this paper presents a new edge indicator called the patch similarity modulus, which is strongly robust to noise. Furthermore, the difference curvature, which can effectively distinguish between edges and noise, is incorporated into the denoising algorithm to determine the diffusion process by adaptively adjusting the size of the diffusion coefficient. The experimental results show that the proposed algorithm can not only preserve edges and texture details but also avoid isolated speckles and the staircase effect while filtering out noise. The proposed algorithm also performs better on images with abundant details. Additionally, the subjective visual quality and objective evaluation index of the denoised image obtained by the proposed algorithm are higher than those of the related methods.
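
    The difference curvature named above can be written down directly: it compares the second derivative along the gradient direction with the one across it, so it is large on edges but small on both flat regions and isolated noise. The finite-difference sketch below follows the standard definition; the paper's exact normalization may differ.

        import numpy as np

        def difference_curvature(u, eps=1e-8):
            """D = | |u_nn| - |u_tt| |, with u_nn the second derivative along
            the gradient and u_tt the one across it."""
            uy, ux = np.gradient(u)                 # axis 0 = rows (y), 1 = x
            uxy, uxx = np.gradient(ux)
            uyy, _ = np.gradient(uy)
            g2 = ux**2 + uy**2 + eps
            u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
            u_tt = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2
            return np.abs(np.abs(u_nn) - np.abs(u_tt))

        img = np.zeros((32, 32)); img[:, 16:] = 1.0   # a vertical step edge
        print(difference_curvature(img).max())        # responds at the edge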

  9. Quantum entanglement and quantum computational algorithms

    Indian Academy of Sciences (India)

    The existence of entangled quantum states gives extra power to quantum computers over their classical counterparts. Quantum entanglement shows up qualitatively at the level of two qubits. We demonstrate that the one- and two-bit Deutsch-Jozsa algorithms do not require entanglement and can be mapped ...

  10. Guide to Computational Geometry Processing

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Gravesen, Jens; Anton, François

    Geometric data must often be processed before it is useful. This Guide to Computational Geometry Processing reviews the algorithms for processing geometric data, with a practical focus on important techniques not covered by traditional courses on computer vision and computer graphics. This is balanced with an introduction to the theoretical and mathematical underpinnings of each technique, enabling the reader not only to implement a given method, but also to understand the ideas behind it, its limitations and its advantages. Topics and features: presents an overview of the underlying mathematical theory, covering vector spaces, metric spaces, affine spaces, differential geometry, and finite difference methods for derivatives and differential equations; reviews geometry representations, including polygonal meshes, splines, and subdivision surfaces; examines techniques for computing curvature from polygonal meshes; describes…
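
    One of the mesh-curvature techniques the book covers can be sketched directly: the angle-deficit estimate of Gaussian curvature at a mesh vertex, K = (2*pi - sum of incident angles) / A. The sketch below uses one third of each incident triangle's area for A, a common simplification of the mixed Voronoi area.

        import numpy as np

        def angle_deficit_curvature(vertices, faces):
            """Gaussian curvature per vertex of a triangle mesh via the
            angle deficit: 2*pi minus the sum of incident corner angles."""
            V = np.asarray(vertices, float)
            deficit = np.full(len(V), 2 * np.pi)
            area = np.zeros(len(V))
            for i, j, k in faces:
                for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
                    u, w = V[b] - V[a], V[c] - V[a]
                    cos_a = u @ w / (np.linalg.norm(u) * np.linalg.norm(w))
                    deficit[a] -= np.arccos(np.clip(cos_a, -1.0, 1.0))
                tri = 0.5 * np.linalg.norm(np.cross(V[j] - V[i], V[k] - V[i]))
                area[[i, j, k]] += tri / 3.0        # barycentric area share
            return deficit / np.maximum(area, 1e-12)

        # Octahedron: every vertex gets the same positive curvature, and the
        # deficits sum to 4*pi as Gauss-Bonnet requires for a sphere.
        verts = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        faces = [(0,2,4),(2,1,4),(1,3,4),(3,0,4),(2,0,5),(1,2,5),(3,1,5),(0,3,5)]
        print(angle_deficit_curvature(verts, faces))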

  11. Cosmological signatures of anisotropic spatial curvature

    International Nuclear Information System (INIS)

    Pereira, Thiago S.; Marugán, Guillermo A. Mena; Carneiro, Saulo

    2015-01-01

    If one is willing to give up the cherished hypothesis of spatial isotropy, many interesting cosmological models can be developed beyond the simple anisotropically expanding scenarios. One interesting possibility is presented by shear-free models in which the anisotropy emerges at the level of the curvature of the homogeneous spatial sections, whereas the expansion is dictated by a single scale factor. We show that such models represent viable alternatives to describe the large-scale structure of the inflationary universe, leading to a kinematically equivalent Sachs-Wolfe effect. Through the definition of a complete set of spatial eigenfunctions we compute the two-point correlation function of scalar perturbations in these models. In addition, we show how such scenarios would modify the spectrum of the CMB assuming that the observations take place in a small patch of a universe with anisotropic curvature

  12. Cosmological signatures of anisotropic spatial curvature

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Thiago S. [Departamento de Física, Universidade Estadual de Londrina, 86057-970, Londrina – PR (Brazil); Marugán, Guillermo A. Mena [Instituto de Estructura de la Materia, IEM-CSIC, Serrano 121, 28006, Madrid (Spain); Carneiro, Saulo, E-mail: tspereira@uel.br, E-mail: mena@iem.cfmac.csic.es, E-mail: saulo.carneiro@pq.cnpq.br [Instituto de Física, Universidade Federal da Bahia, 40210-340, Salvador – BA (Brazil)

    2015-07-01

    If one is willing to give up the cherished hypothesis of spatial isotropy, many interesting cosmological models can be developed beyond the simple anisotropically expanding scenarios. One interesting possibility is presented by shear-free models in which the anisotropy emerges at the level of the curvature of the homogeneous spatial sections, whereas the expansion is dictated by a single scale factor. We show that such models represent viable alternatives to describe the large-scale structure of the inflationary universe, leading to a kinematically equivalent Sachs-Wolfe effect. Through the definition of a complete set of spatial eigenfunctions we compute the two-point correlation function of scalar perturbations in these models. In addition, we show how such scenarios would modify the spectrum of the CMB assuming that the observations take place in a small patch of a universe with anisotropic curvature.

  13. Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2018-01-01

    Full Text Available Cloud computing environments provide several on-demand services and resource sharing for clients. Business processes are managed using workflow technology over the cloud, which represents one of the challenges in using resources in an efficient manner due to the dependencies between tasks. In this paper, a Hybrid GA-PSO algorithm is proposed to allocate tasks to the resources efficiently. The Hybrid GA-PSO algorithm aims to reduce the makespan and the cost and to balance the load of the dependent tasks over the heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks in comparison with GA, PSO, HSGA, WSGA, and MTCT algorithms. Furthermore, it reduces the execution cost. In addition, it improves the load balancing of the workflow application over the available resources. Finally, the obtained results also prove that the proposed algorithm converges to optimal solutions faster and with higher quality compared to the other algorithms.

  14. The curvature coordinate system

    DEFF Research Database (Denmark)

    Almegaard, Henrik

    2007-01-01

    The paper describes a concept for a curvature coordinate system on regular curved surfaces from which faceted surfaces with plane quadrangular facets can be designed. The lines of curvature are used as parametric lines for the curvature coordinate system on the surface. A new conjugate set of lines…

  15. Combinatorial algorithms enabling computational science: tales from the front

    International Nuclear Information System (INIS)

    Bhowmick, Sanjukta; Boman, Erik G; Devine, Karen; Gebremedhin, Assefaw; Hendrickson, Bruce; Hovland, Paul; Munson, Todd; Pothen, Alex

    2006-01-01

    Combinatorial algorithms have long played a crucial enabling role in scientific and engineering computations. The importance of discrete algorithms continues to grow with the demands of new applications and advanced architectures. This paper surveys some recent developments in this rapidly changing and highly interdisciplinary field.

  16. Combinatorial algorithms enabling computational science: tales from the front

    Energy Technology Data Exchange (ETDEWEB)

    Bhowmick, Sanjukta [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Devine, Karen [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw [Computer Science Department, Old Dominion University (United States); Hendrickson, Bruce [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Hovland, Paul [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Munson, Todd [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science Department, Old Dominion University (United States)

    2006-09-15

    Combinatorial algorithms have long played a crucial enabling role in scientific and engineering computations. The importance of discrete algorithms continues to grow with the demands of new applications and advanced architectures. This paper surveys some recent developments in this rapidly changing and highly interdisciplinary field.

  17. Improved FHT Algorithms for Fast Computation of the Discrete Hartley Transform

    Directory of Open Access Journals (Sweden)

    M. T. Hamood

    2013-05-01

    Full Text Available In this paper, by using the symmetrical properties of the discrete Hartley transform (DHT, an improved radix-2 fast Hartley transform (FHT algorithm with arithmetic complexity comparable to that of the real-valued fast Fourier transform (RFFT is developed. It has a simple and regular butterfly structure and possesses the in-place computation property. Furthermore, using the same principles, the development can be extended to more efficient radix-based FHT algorithms. An example for the improved radix-4 FHT algorithm is given to show the validity of the presented method. The arithmetic complexities of the new algorithms are computed and then compared with those of the existing FHT algorithms. The results of these comparisons show that the developed algorithms reduce the number of multiplications and additions considerably.

  18. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: how do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis."

  19. Fast parallel algorithms that compute transitive closure of a fuzzy relation

    Science.gov (United States)

    Kreinovich, Vladik YA.

    1993-01-01

    The notion of the transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires O(n^4) computation time, where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a not equal to b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in less than O(n^2) steps. So, Dunn's algorithm is in this sense optimal. For small n, this is acceptable. However, for big n (e.g., for big databases), it is still a lot, so it would be desirable to decrease the computation time (this problem was formulated by J. Bezdek). Since this decrease cannot be achieved on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, the transitive closure can be computed in time O((log_2 n)^2).
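
    The object computed here is the max-min transitive closure of a fuzzy relation S, the smallest transitive relation containing S. A sequential sketch by repeated squaring follows; each composition is the O(n^3) step that the parallel algorithm distributes across processors.

        import numpy as np

        def maxmin_compose(A, B):
            # (A o B)[i, j] = max_k min(A[i, k], B[k, j])
            return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

        def transitive_closure(S):
            """Max-min transitive closure by repeated squaring; reaches a
            fixed point in O(log n) compositions."""
            T = np.asarray(S, dtype=float)
            while True:
                T_next = np.maximum(T, maxmin_compose(T, T))
                if np.array_equal(T_next, T):
                    return T
                T = T_next

        S = np.array([[1.0, 0.8, 0.0],
                      [0.0, 1.0, 0.6],
                      [0.0, 0.0, 1.0]])
        print(transitive_closure(S))   # entry (0, 2) becomes min(0.8, 0.6) = 0.6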

  20. Desiderata for computable representations of electronic health records-driven phenotype algorithms.

    Science.gov (United States)

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Denny, Joshua C; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-11-01

    Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages.

  1. Quantum computation with classical light: The Deutsch Algorithm

    International Nuclear Information System (INIS)

    Perez-Garcia, Benjamin; Francis, Jason; McLaren, Melanie; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas

    2015-01-01

    We present an implementation of the Deutsch Algorithm using linear optical elements and laser light. We encoded two quantum bits in the form of superpositions of electromagnetic fields in two degrees of freedom of the beam: its polarisation and orbital angular momentum. Our approach, based on a Sagnac interferometer, offers outstanding stability and demonstrates that optical quantum computation is possible using classical states of light. - Highlights: • We implement the Deutsch Algorithm using linear optical elements and classical light. • Our qubits are encoded in the polarisation and orbital angular momentum of the beam. • We show that it is possible to achieve quantum computation with two qubits in the classical domain of light.

  2. Quantum computation with classical light: The Deutsch Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Garcia, Benjamin [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Francis, Jason [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); McLaren, Melanie [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Hernandez-Aranda, Raul I. [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); Forbes, Andrew [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Konrad, Thomas, E-mail: konradt@ukzn.ac.za [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); National Institute of Theoretical Physics, Durban Node, Private Bag X54001, Durban 4000 (South Africa)

    2015-08-28

    We present an implementation of the Deutsch Algorithm using linear optical elements and laser light. We encoded two quantum bits in the form of superpositions of electromagnetic fields in two degrees of freedom of the beam: its polarisation and orbital angular momentum. Our approach, based on a Sagnac interferometer, offers outstanding stability and demonstrates that optical quantum computation is possible using classical states of light. - Highlights: • We implement the Deutsch Algorithm using linear optical elements and classical light. • Our qubits are encoded in the polarisation and orbital angular momentum of the beam. • We show that it is possible to achieve quantum computation with two qubits in the classical domain of light.

  3. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  4. On Gauss-Bonnet Curvatures

    Directory of Open Access Journals (Sweden)

    Mohammed Larbi Labbi

    2007-12-01

    The $(2k)$-th Gauss-Bonnet curvature is a generalization to higher dimensions of the $(2k)$-dimensional Gauss-Bonnet integrand; it coincides with the usual scalar curvature for $k = 1$. The Gauss-Bonnet curvatures are used in theoretical physics to describe gravity in higher dimensional space times, where they are known as the Lagrangians of Lovelock gravity, Gauss-Bonnet gravity and Lanczos gravity. In this paper we present various aspects of these curvature invariants and review their variational properties. In particular, we discuss natural generalizations of the Yamabe problem, Einstein metrics and minimal submanifolds.

  5. Discrete Curvatures and Discrete Minimal Surfaces

    KAUST Repository

    Sun, Xiang

    2012-06-01

    This thesis presents an overview of some approaches to compute Gaussian and mean curvature on discrete surfaces and discusses discrete minimal surfaces. The variety of applications of differential geometry in visualization and shape design leads to great interest in studying discrete surfaces. With the rich smooth surface theory in hand, one would hope that this elegant theory can still be applied to the discrete counterpart. Such a generalization, however, is not always successful. While discrete surfaces have the advantage of being finite dimensional, thus easier to treat, their geometric properties such as curvatures are not well defined in the classical sense. Furthermore, the powerful calculus tools can hardly be applied. The methods in this thesis, including the angular defect formula, the cotangent formula, parallel meshes, relative geometry, etc., are approaches based on offset meshes or generalized offset meshes. As an important application, we discuss discrete minimal surfaces and discrete Koenigs meshes.
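
    As a concrete instance of the first approach mentioned (the angular defect formula), Gaussian curvature at a mesh vertex can be estimated from the angles of its incident triangles. A small sketch, assuming the common barycentric area normalization A/3:

        import numpy as np

        def angle_at(p, q, r):
            """Interior angle at vertex p of triangle (p, q, r)."""
            u, v = q - p, r - p
            return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        def angular_defect_curvature(vertex, ring):
            """Discrete Gaussian curvature at `vertex` from its ordered one-ring."""
            total_angle = total_area = 0.0
            for a, b in zip(ring, ring[1:] + ring[:1]):
                total_angle += angle_at(vertex, a, b)
                total_area += 0.5 * np.linalg.norm(np.cross(a - vertex, b - vertex))
            return (2 * np.pi - total_angle) / (total_area / 3.0)

        # Apex of a shallow pyramid over a regular hexagon: positive curvature.
        apex = np.array([0.0, 0.0, 0.3])
        ring = [np.array([np.cos(t), np.sin(t), 0.0])
                for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
        print(angular_defect_curvature(apex, ring))   # > 0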

  6. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci book, and we propose their translation into a modern language for computers (C++). Among others, we describe the method of “cross” multiplication, we evaluate its computational complexity in algorithmic terms, and we show the output of a C++ code that describes the development of the method applied to the product of two integers. In a similar way we show the operations performed on fractions introduced by Fibonacci. Thanks to the possibility of reproducing Fibonacci’s different computational procedures on a computer, it was possible to identify some calculation errors present in the different versions of the original text.

  7. Lectures on mean curvature flows

    CERN Document Server

    Zhu, Xi-Ping

    2002-01-01

    "Mean curvature flow" is a term that is used to describe the evolution of a hypersurface whose normal velocity is given by the mean curvature. In the simplest case of a convex closed curve on the plane, the properties of the mean curvature flow are described by Gage-Hamilton's theorem. This theorem states that under the mean curvature flow, the curve collapses to a point, and if the flow is diluted so that the enclosed area equals \\pi, the curve tends to the unit circle. In this book, the author gives a comprehensive account of fundamental results on singularities and the asymptotic behavior of mean curvature flows in higher dimensions. Among other topics, he considers in detail Huisken's theorem (a generalization of Gage-Hamilton's theorem to higher dimension), evolution of non-convex curves and hypersurfaces, and the classification of singularities of the mean curvature flow. Because of the importance of the mean curvature flow and its numerous applications in differential geometry and partial differential ...

  8. Generalization of the swelling method to measure the intrinsic curvature of lipids

    Science.gov (United States)

    Barragán Vidal, I. A.; Müller, M.

    2017-12-01

    Via computer simulation of a coarse-grained model of two-component lipid bilayers, we compare two methods of measuring the intrinsic curvatures of the constituting monolayers. The first one is a generalization of the swelling method that, in addition to the assumption that the spontaneous curvature linearly depends on the composition of the lipid mixture, incorporates contributions from its elastic energy. The second method measures the effective curvature-composition coupling between the apposing leaflets of bilayer structures (planar bilayers or cylindrical tethers) to extract the spontaneous curvature. Our findings demonstrate that both methods yield consistent results. However, we highlight that the two-leaflet structure inherent to the latter method has the advantage of allowing measurements for mixed lipid systems up to their critical point of demixing as well as in the regime of high concentration (of either species).

  9. Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction.

    Science.gov (United States)

    Lefloch, Damien; Kluge, Markus; Sarbolandi, Hamed; Weyrich, Tim; Kolb, Andreas

    2017-12-01

    Interactive real-time scene acquisition from hand-held depth cameras has recently developed much momentum, enabling applications in ad-hoc object acquisition, augmented reality and other fields. A key challenge to online reconstruction remains error accumulation in the reconstructed camera trajectory, due to drift-inducing instabilities in the range scan alignments of the underlying iterative-closest-point (ICP) algorithm. Various strategies have been proposed to mitigate that drift, including SIFT-based pre-alignment, color-based weighting of ICP pairs, stronger weighting of edge features, and so on. In our work, we focus on surface curvature as a feature that is detectable on range scans alone and hence does not depend on accurate multi-sensor alignment. In contrast to previous work that took curvature into consideration, however, we treat curvature as an independent quantity that we consistently incorporate into every stage of the real-time reconstruction pipeline, including densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering. Using multiple benchmark sequences, and in direct comparison to other state-of-the-art online acquisition systems, we show that our approach significantly reduces drift, both when analyzing individual pipeline stages in isolation, as well as seen across the online reconstruction pipeline as a whole.

  10. Computational plasticity algorithm for particle dynamics simulations

    Science.gov (United States)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  11. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are constructed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves the accuracy of traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  12. Some Inequalities for the $L_p$-Curvature Image

    Directory of Open Access Journals (Sweden)

    Daijun Wei

    2009-01-01

    Lutwak introduced the notion of the $L_p$-curvature image and proved an inequality for the volumes of a convex body and its $L_p$-curvature image. In this paper, we first give a monotonic property of the $L_p$-curvature image. Further, we establish two inequalities for the $L_p$-curvature image and its polar, respectively. Finally, an inequality for the volumes of the $L_p$-projection body and the $L_p$-curvature image is obtained.

  13. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
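
    The curvature test itself is simple to sketch. The snippet below (a reconstruction of the general idea with hypothetical constants, not the authors' exact update rule) checks the step d against the Hessian H and grows a regularisation delta until the curvature along d is sufficiently positive:

        import numpy as np

        def convexify_if_needed(H, d, kappa=1e-8, delta0=1e-4, growth=10.0):
            """Find delta >= 0 with d^T (H + delta I) d >= kappa ||d||^2."""
            delta = 0.0
            while d @ (H @ d) + delta * (d @ d) < kappa * (d @ d):
                delta = delta0 if delta == 0.0 else growth * delta
                # an interior-point method would recompute the step with H + delta I
            return delta

        H = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite Hessian
        d = np.array([0.1, 1.0])                  # step with negative curvature
        print(convexify_if_needed(H, d))          # positive delta: convexification triggered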

  14. Sensitive zone parameters and curvature radius evaluation for polymer optical fiber curvature sensors

    Science.gov (United States)

    Leal-Junior, Arnaldo G.; Frizera, Anselmo; José Pontes, Maria

    2018-03-01

    Polymer optical fibers (POFs) are suitable for sensing applications such as curvature, strain, temperature, and liquid level, among others. However, for enhanced sensitivity, many polymer optical fiber curvature sensors based on intensity variation require a lateral section. Lateral section length, depth, and surface roughness have great influence on the sensor sensitivity, hysteresis, and linearity. Moreover, the sensor curvature radius increases the stress on the fiber, which leads to variation of the sensor behavior. This paper presents an analysis relating the curvature radius and lateral section length, depth, and surface roughness to the sensor sensitivity, hysteresis, and linearity for a POF curvature sensor. Results show a strong correlation between the behavior of these design parameters and the performance of sensor applications based on intensity variation. Furthermore, there is a trade-off between the sensitive zone length, depth, surface roughness, and curvature radius and the sensor's desired performance parameters, namely minimum hysteresis, maximum sensitivity, and maximum linearity. The optimization of these parameters is applied to obtain a sensor with a sensitivity of 20.9 mV/°, linearity of 0.9992, and hysteresis below 1%, which represents a better performance of the sensor when compared with the sensor without the optimization.

  15. Brane cosmology with curvature corrections

    International Nuclear Information System (INIS)

    Kofinas, Georgios; Maartens, Roy; Papantonopoulos, Eleftherios

    2003-01-01

    We study the cosmology of the Randall-Sundrum brane-world where the Einstein-Hilbert action is modified by curvature correction terms: a four-dimensional scalar curvature from induced gravity on the brane, and a five-dimensional Gauss-Bonnet curvature term. The combined effect of these curvature corrections to the action removes the infinite-density big bang singularity, although the curvature can still diverge for some parameter values. A radiation brane undergoes accelerated expansion near the minimal scale factor, for a range of parameters. This acceleration is driven by the geometric effects, without an inflaton field or negative pressures. At late times, conventional cosmology is recovered. (author)

  16. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

    In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with an adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
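
    The adaptive idea can be sketched as follows (a hypothetical rate schedule in the spirit of adaptive GAs; the paper's actual operator-selection rules may differ): individuals fitter than average receive gentler crossover and mutation, while weaker ones keep exploratory rates.

        def adaptive_rates(fitness, f_avg, f_max,
                           pc_hi=0.9, pc_lo=0.5, pm_hi=0.1, pm_lo=0.01):
            """Per-individual (crossover, mutation) probabilities;
            larger-is-better fitness convention, constants are illustrative."""
            if fitness < f_avg or f_max == f_avg:     # below average: explore hard
                return pc_hi, pm_hi
            scale = (f_max - fitness) / (f_max - f_avg)
            return pc_lo + (pc_hi - pc_lo) * scale, pm_lo + (pm_hi - pm_lo) * scale

        population_fitness = [3.0, 5.0, 8.0, 10.0]
        f_avg, f_max = sum(population_fitness) / 4, max(population_fitness)
        for f in population_fitness:
            print(f, adaptive_rates(f, f_avg, f_max))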

  17. The algorithmic level is the bridge between computation and brain.

    Science.gov (United States)

    Love, Bradley C

    2015-04-01

    Every scientist chooses a preferred level of analysis and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's (1982) three levels of analysis (implementation, algorithmic, and computational) and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top-down in that it attempts to build a bridge from the computational to algorithmic level. Limitations of this approach include insufficient theoretical constraint at the computation level to provide a foundation for integration, and that people are suboptimal for reasons other than capacity limitations. Instead, an inside-out approach is forwarded in which all three levels of analysis are integrated via the algorithmic level. This approach maximally leverages mutual data constraints at all levels. For example, algorithmic models can be used to interpret brain imaging data, and brain imaging data can be used to select among competing models. Examples of this approach to integration are provided. This merging of levels raises questions about the relevance of Marr's tripartite view. Copyright © 2015 Cognitive Science Society, Inc.

  18. Computational algorithm for lifetime exposure to antimicrobials in pigs using register data - the LEA algorithm.

    Science.gov (United States)

    Birkegård, Anna Camilla; Andersen, Vibe Dalhoff; Halasa, Tariq; Jensen, Vibeke Frøkjær; Toft, Nils; Vigre, Håkan

    2017-10-01

    Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach, data from national registers on antimicrobial purchases, movements of pigs and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet periods. Subsequently, the algorithm estimates the antimicrobial exposure as the number of Animal Defined Daily Doses for treatment of one kg pig in each of the rearing periods. Thus, the antimicrobial purchase data at farm level are translated into antimicrobial exposure estimates at batch level. A batch of pigs is defined here as pigs sent to slaughter on the same day from the same farm. In this study we present, validate, and optimise a computational algorithm that calculates the lifetime exposure to antimicrobials for slaughter pigs. The algorithm was evaluated by comparing the computed estimates to data on antimicrobial usage from farm records in 15 farm units. We found a good positive correlation between the two estimates. The algorithm was run for Danish slaughter pigs sent to slaughter in January to March 2015 from farms with more than 200 finishers to estimate the proportion of farms that it was applicable for. In the final process, the algorithm was successfully run for batches of pigs originating from 3026 farms with finisher units (77% of the initial population). This number can be increased if more accurate register data can be

  19. Algorithms for computational fluid dynamics on parallel processors

    International Nuclear Information System (INIS)

    Van de Velde, E.F.

    1986-01-01

    A study of parallel algorithms for the numerical solution of partial differential equations arising in computational fluid dynamics is presented. The actual implementation on parallel processors of shared and nonshared memory design is discussed. The performance of these algorithms is analyzed in terms of machine efficiency, communication time, bottlenecks and software development costs. For elliptic equations, a parallel preconditioned conjugate gradient method is described, which has been used to solve pressure equations discretized with high order finite elements on irregular grids. A parallel full multigrid method and a parallel fast Poisson solver are also presented. Hyperbolic conservation laws were discretized with parallel versions of finite difference methods like the Lax-Wendroff scheme and with the Random Choice method. Techniques are developed for comparing the behavior of an algorithm on different architectures as a function of problem size and local computational effort. Effective use of these advanced architecture machines requires the use of machine dependent programming. It is shown that the portability problems can be minimized by introducing high level operations on vectors and matrices structured into program libraries

  20. Influence of implant rod curvature on sagittal correction of scoliosis deformity.

    Science.gov (United States)

    Salmingo, Remel Alingalan; Tadano, Shigeru; Abe, Yuichiro; Ito, Manabu

    2014-08-01

    Deformation of in vivo-implanted rods could alter the scoliosis sagittal correction. To our knowledge, no previous authors have investigated the influence of implanted-rod deformation on the sagittal deformity correction during scoliosis surgery. To analyze the changes of the implant rod's angle of curvature during surgery and establish its influence on sagittal correction of scoliosis deformity, a retrospective analysis of the preoperative and postoperative implant rod geometry and angle of curvature was conducted. Twenty adolescent idiopathic scoliosis patients underwent surgery. Average age at the time of operation was 14 years. The preoperative and postoperative implant rod angles of curvature, expressed in degrees, were obtained for each patient. Two implant rods were attached to the concave and convex sides of the spinal deformity. The preoperative implant rod geometry was measured before surgical implantation. The postoperative implant rod geometry after surgery was measured by computed tomography. The implant rod angle of curvature in the sagittal plane was obtained from the implant rod geometry. The angle of curvature between the implant rod extreme ends was measured before implantation and after surgery. The sagittal curvature between the corresponding spinal levels of healthy adolescents, obtained in previous studies, was compared with the implant rod angle of curvature to evaluate the sagittal curve correction. The difference between the postoperative implant rod angle of curvature and the normal spine sagittal curvature of the corresponding instrumented level was used to evaluate over- or under-correction of the sagittal deformity. The implant rods at the concave side of the deformity of all patients were significantly deformed after surgery. The average degree of rod deformation Δθ at the concave and convex sides was 15.8° and 1.6°, respectively. The average preoperative and postoperative implant rod angles of curvature at the concave side were 33.6° and 17.8

  1. Noise filtering algorithm for the MFTF-B computer based control system

    International Nuclear Information System (INIS)

    Minor, E.G.

    1983-01-01

    An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions
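
    A minimal sketch of such a deadband filter (an illustration of the subtract-and-compare idea; the threshold and state layout are assumptions):

        last_reported = {}   # one stored value per analog channel

        def filter_input(channel, value, threshold):
            """Return value if it is a significant change, else None (noise)."""
            prev = last_reported.get(channel)
            if prev is None or abs(value - prev) > threshold:
                last_reported[channel] = value   # update the data base
                return value
            return None                          # rejected as noise

        readings = [10.0, 10.2, 9.9, 12.5, 12.4, 15.1]
        reported = [v for v in readings
                    if filter_input("ch0", v, threshold=1.0) is not None]
        print(reported)   # [10.0, 12.5, 15.1]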

  2. Cosmic curvature tested directly from observations

    Science.gov (United States)

    Denissenya, Mikhail; Linder, Eric V.; Shafieloo, Arman

    2018-03-01

    Cosmic spatial curvature is a fundamental geometric quantity of the Universe. We investigate a model-independent, geometric approach to measure spatial curvature directly from observations, without any derivatives of data. This employs strong lensing time delays and supernova distance measurements to measure the curvature itself, rather than just testing consistency with flatness. We define two curvature estimators, with differing error propagation characteristics, that can cross-check each other, and also show how they can be used to map the curvature in redshift slices, to test constancy of curvature as required by the Robertson-Walker metric. Simulating realizations of redshift distributions and distance measurements of lenses and sources, we estimate uncertainties on the curvature enabled by next generation measurements. The results indicate that the model-independent methods, using only geometry without assuming forms for the energy density constituents, can determine the curvature at the ~6×10^-3 level.
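
    One widely used estimator of this kind (whether it coincides with the paper's two estimators is an assumption) inverts the distance sum rule d_ls = d_s*sqrt(1 + Ok*d_l^2) - d_l*sqrt(1 + Ok*d_s^2) relating the dimensionless comoving distances to the lens, to the source, and between them, which gives a closed form for the curvature Ok:

        import numpy as np

        def omega_k(dl, ds, dls):
            """Curvature from dimensionless comoving distances d = (H0/c) * D."""
            num = (dl**4 + ds**4 + dls**4
                   - 2 * (dl * ds)**2 - 2 * (dl * dls)**2 - 2 * (ds * dls)**2)
            return num / (4.0 * (dl * ds * dls)**2)

        # Flat-universe check: there d_ls = d_s - d_l exactly.
        dl, ds = 0.3, 0.8
        print(omega_k(dl, ds, ds - dl))                       # ~0

        # Open-universe check, Ok = 0.1: d = sinh(sqrt(Ok)*chi)/sqrt(Ok).
        Ok, chi_l, chi_s = 0.1, 0.3, 0.8
        s = np.sqrt(Ok)
        d = lambda chi: np.sinh(s * chi) / s
        print(omega_k(d(chi_l), d(chi_s), d(chi_s - chi_l)))  # ~0.1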

  3. Iterative concurrent reconstruction algorithms for emission computed tomography

    International Nuclear Information System (INIS)

    Brown, J.K.; Hasegawa, B.H.; Lang, T.F.

    1994-01-01

    Direct reconstruction techniques, such as those based on filtered backprojection, are typically used for emission computed tomography (ECT), even though it has been argued that iterative reconstruction methods may produce better clinical images. The major disadvantage of iterative reconstruction algorithms, and a significant reason for their lack of clinical acceptance, is their computational burden. We outline a new class of "concurrent" iterative reconstruction techniques for ECT in which the reconstruction process is reorganized such that a significant fraction of the computational processing occurs concurrently with the acquisition of ECT projection data. These new algorithms use the 10-30 min required for acquisition of a typical SPECT scan to iteratively process the available projection data, significantly reducing the requirements for post-acquisition processing. These algorithms are tested on SPECT projection data from a Hoffman brain phantom acquired with 2 × 10^5 counts in 64 views, each having 64 projections. The SPECT images are reconstructed as 64 × 64 tomograms, starting with six angular views. Other angular views are added to the reconstruction process sequentially, in a manner that reflects their availability for a typical acquisition protocol. The results suggest that if T_s of concurrent processing is used, the reconstruction processing time required after completion of the data acquisition can be reduced by at least (1/3)T_s. (Author)

  4. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    Science.gov (United States)

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might no longer be valid. In order to overcome this weakness, we proposed a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method, separating naturally isolated clusters, but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  5. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    Science.gov (United States)

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…

  6. Conformal geometry computational algorithms and engineering applications

    CERN Document Server

    Jin, Miao; He, Ying; Wang, Yalin

    2018-01-01

    This book offers an essential overview of computational conformal geometry applied to fundamental problems in specific engineering fields. It introduces readers to conformal geometry theory and discusses implementation issues from an engineering perspective.  The respective chapters explore fundamental problems in specific fields of application, and detail how computational conformal geometric methods can be used to solve them in a theoretically elegant and computationally efficient way. The fields covered include computer graphics, computer vision, geometric modeling, medical imaging, and wireless sensor networks. Each chapter concludes with a summary of the material covered and suggestions for further reading, and numerous illustrations and computational algorithms complement the text.  The book draws on courses given by the authors at the University of Louisiana at Lafayette, the State University of New York at Stony Brook, and Tsinghua University, and will be of interest to senior undergraduates, gradua...

  7. An efficient algorithm to compute subsets of points in ℤ n

    OpenAIRE

    Pacheco Martínez, Ana María; Real Jurado, Pedro

    2012-01-01

    In this paper we show a more efficient algorithm than that in [8] to compute subsets of points non-congruent by isometries. This algorithm can be used to reconstruct the object from the digital image. Both algorithms are compared, highlighting the improvements obtained in terms of CPU time.

  8. Regularized strings with extrinsic curvature

    International Nuclear Information System (INIS)

    Ambjoern, J.; Durhuus, B.

    1987-07-01

    We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)

  9. Algorithms for the Computation of Debris Risk

    Science.gov (United States)

    Matney, Mark J.

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of satellites. A number of tools have been developed in NASA’s Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA’s Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper presents an introduction to these algorithms and the assumptions upon which they are based.

  10. Computational performance of a projection and rescaling algorithm

    OpenAIRE

    Pena, Javier; Soheili, Negar

    2018-01-01

    This paper documents a computational implementation of a projection and rescaling algorithm for finding most interior solutions to the pair of feasibility problems: find $x \in L \cap \mathbb{R}^n_+$ and find $\hat x \in L^\perp \cap \mathbb{R}^n_+$, where $L$ denotes a linear subspace in $\mathbb{R}^n$ and $L^\perp$ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a ...

  11. An algorithm of discovering signatures from DNA databases on a computer cluster.

    Science.gov (United States)

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique, and not similar to any other sequence, in a database; they can be used as the basis to identify different species. Even though several signature discovery algorithms have been proposed in the past, they require entire databases to be loaded into memory, restricting the amount of data they can process and leaving them unable to handle large databases. Also, those algorithms use sequential models and have slower discovery speeds, meaning that their efficiency can be improved. In this research, we introduce a divide-and-conquer strategy for signature discovery and propose a parallel signature discovery algorithm on a computer cluster. The algorithm applies the divide-and-conquer strategy to solve the problem that existing algorithms cannot process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases, such as the human whole-genome EST database, which the existing algorithms were previously unable to process. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large-database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.

  12. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    Science.gov (United States)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. The proposed CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. We propose the CAD method, whose principal idea is based on the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach that switches to a fast deinterlacing algorithm rather than using only the CAD algorithm. The proposed hybrid approach of switching between the conventional schemes (LA and MELA) and our CAD reduces the overall computational load. A reliable condition for switching the schemes was presented after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
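
    The two cheap interpolators named above are easy to sketch for a single missing line (a minimal illustration; MELA adds further refinements to the basic edge-based line averaging shown here):

        import numpy as np

        def line_average(above, below):
            return (above + below) / 2.0

        def edge_based_line_average(above, below, x):
            # try left-leaning, vertical and right-leaning edge directions
            candidates = []
            for d in (-1, 0, 1):
                if 0 <= x + d < len(above) and 0 <= x - d < len(below):
                    candidates.append((abs(above[x + d] - below[x - d]),
                                       (above[x + d] + below[x - d]) / 2.0))
            return min(candidates)[1]   # interpolate along the best-matching direction

        above = np.array([10.0, 10.0, 90.0, 90.0, 90.0])
        below = np.array([10.0, 10.0, 10.0, 10.0, 90.0])   # a diagonal edge
        print(line_average(above[2], below[2]))             # 50.0: blurs the edge
        print(edge_based_line_average(above, below, 2))     # 10.0: follows the edge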

  13. Computer Texture Mapping for Laser Texturing of Injection Mold

    Directory of Open Access Journals (Sweden)

    Yongquan Zhou

    2014-04-01

    Laser texturing is a relatively new multiprocess technique that has been used for machining 3D curved surfaces; it is more flexible and efficient for creating decorative texture on the 3D curved surfaces of injection molds, so as to improve the surface quality and achieve cosmetic surfaces on molded plastic parts. In this paper, a novel method of laser texturing 3D curved surfaces based on a 3-axis galvanometer scanning unit is presented, to keep the textured injection mold surface free of the severe distortion often caused by traditional texturing processes. The method is based on the computer texture mapping technology developed and presented here. The texture mapping algorithm includes surface triangulation, notations, distortion measurement, control, and numerical methods. A computer texture mapping interface has been built to implement the algorithm and to control the distortion rate when the original 2D texture is mapped onto the 3D model of the curved surface. A case study of laser texturing a high-curvature surface of an injection mold for a mouse top case shows that the novel method meets the quality standard of laser texturing of injection molds.

  14. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark-gluon plasma, theorized to have existed in the very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones imposes enormous computation demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made, and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  15. Realization of Deutsch-like algorithm using ensemble computing

    International Nuclear Information System (INIS)

    Wei Daxiu; Luo Jun; Sun Xianping; Zeng Xizhi

    2003-01-01

    The Deutsch-like algorithm [Phys. Rev. A. 63 (2001) 034101] distinguishes between even and odd query functions using fewer function calls than its possible classical counterpart in a two-qubit system. But a similar method cannot be applied to a multi-qubit system. We propose a new approach for solving the Deutsch-like problem using ensemble computing. The proposed algorithm needs an ancillary qubit and can be easily extended to a multi-qubit system with one query. Our ensemble algorithm, beginning with an easily prepared initial state, has three main steps. The classifications of the functions can be obtained directly from the spectra of the ancilla qubit. We also demonstrate the new algorithm in a four-qubit molecular system using nuclear magnetic resonance (NMR). One hydrogen and three carbons are selected as the four qubits, and one of the carbons is the ancilla qubit. We chose two unitary transformations, corresponding to two functions (one odd and one even), to validate the ensemble algorithm. The results show that the experiment was successful, validating our ensemble algorithm for solving the Deutsch-like problem.

  16. Sort-Mid tasks scheduling algorithm in grid computing.

    Science.gov (United States)

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling with respect to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims at maximizing utilization and minimizing the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value via the sorted list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan.
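
    A hedged reconstruction of that loop in Python (the steps are paraphrased from the abstract; the exact "mid" averaging and the tie-breaking are assumptions):

        def sort_mid(exec_times):
            """exec_times[t][m] = runtime of task t on machine m."""
            n_machines = len(exec_times[0])
            ready = [0.0] * n_machines             # machine availability times
            unassigned = set(range(len(exec_times)))
            schedule = {}
            while unassigned:
                def avg_completion(t):
                    comp = sorted(ready[m] + exec_times[t][m] for m in range(n_machines))
                    return sum(comp) / len(comp)   # average of the sorted completion times
                t = max(unassigned, key=avg_completion)
                m = min(range(n_machines), key=lambda m: ready[m] + exec_times[t][m])
                ready[m] += exec_times[t][m]
                schedule[t] = m
                unassigned.remove(t)
            return schedule, max(ready)            # allocation and makespan

        exec_times = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 6]]
        print(sort_mid(exec_times))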

  17. Curvature fluctuations as progenitors of large scale holes

    International Nuclear Information System (INIS)

    Vittorio, N.; Santangelo, P.; Occhionero, F.

    1984-01-01

    The authors extend previous work to study the formation and evolution of deep holes, under the assumption that they arise from curvature or energy perturbations in the Hubble flow. Their algorithm, which makes use of the spherically symmetric and pressureless Tolman-Bondi solution, can embed a perturbation in any cosmological background. After recalling previous results on the central depth of the hole and its radial dimension, they give here specific examples of density and peculiar velocity profiles, which may have a bearing on whether galaxy formation is a dissipative or dissipationless process. (orig.)

  18. A coordinate descent MM algorithm for fast computation of sparse logistic PCA

    KAUST Repository

    Lee, Seokho

    2013-06-01

    Sparse logistic principal component analysis was proposed in Lee et al. (2010) for exploratory analysis of binary data. Relying on the joint estimation of multiple principal components, the algorithm therein is computationally too demanding to be useful when the data dimension is high. We develop a computationally fast algorithm using a combination of coordinate descent and majorization-minimization (MM) auxiliary optimization. Our new algorithm decouples the joint estimation of multiple components into separate estimations and consists of closed-form elementwise updating formulas for each sparse principal component. The performance of the proposed algorithm is tested using simulation and high-dimensional real-world datasets. © 2013 Elsevier B.V. All rights reserved.

  19. Arbitrated Quantum Signature with Hamiltonian Algorithm Based on Blind Quantum Computation

    Science.gov (United States)

    Shi, Ronghua; Ding, Wanting; Shi, Jinjing

    2018-03-01

    A novel arbitrated quantum signature (AQS) scheme is proposed, motivated by the Hamiltonian algorithm (HA) and blind quantum computation (BQC). The signature generation and verification algorithms are designed based on HA, which enables the scheme to rely less on computational complexity. It is unnecessary to recover the original messages when verifying signatures, since blind quantum computation is applied, which improves the simplicity and operability of our scheme. It is proved that the scheme can be deployed securely, and the extended AQS has extensive applications in E-payment systems, E-government, E-business, etc.

  20. Algorithms for the Computation of Debris Risks

    Science.gov (United States)

    Matney, Mark

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper will present an introduction to these algorithms and the assumptions upon which they are based.

  1. A general algorithm for computing distance transforms in linear time

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS

    2000-01-01

    A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the
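
    The two-phase, two-scan structure can be illustrated with the exact city-block (L1) distance transform, for which both phases reduce to simple forward and backward scans (a simplified sketch; the paper's algorithm covers more general metrics in linear time):

        def distance_transform_L1(image):
            rows, cols = len(image), len(image[0])
            INF = rows + cols                       # upper bound on any L1 distance
            g = [[0 if image[y][x] else INF for x in range(cols)] for y in range(rows)]
            for x in range(cols):                   # phase 1: column-wise
                for y in range(1, rows):            # forward scan
                    g[y][x] = min(g[y][x], g[y - 1][x] + 1)
                for y in range(rows - 2, -1, -1):   # backward scan
                    g[y][x] = min(g[y][x], g[y + 1][x] + 1)
            for y in range(rows):                   # phase 2: row-wise
                for x in range(1, cols):            # forward scan
                    g[y][x] = min(g[y][x], g[y][x - 1] + 1)
                for x in range(cols - 2, -1, -1):   # backward scan
                    g[y][x] = min(g[y][x], g[y][x + 1] + 1)
            return g

        image = [[0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 1, 0]]   # 1 = feature pixel
        for row in distance_transform_L1(image):
            print(row)           # exact L1 distances to the feature pixel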

  2. Quantifying the quality of hand movement in stroke patients through three-dimensional curvature

    Directory of Open Access Journals (Sweden)

    Osu Rieko

    2011-10-01

    Background: To more accurately evaluate rehabilitation outcomes in stroke patients, movement irregularities should be quantified. Previous work in stroke patients has revealed a reduction in trajectory smoothness and segmentation of continuous movements. Clinically, the Stroke Impairment Assessment Set (SIAS) evaluates the clumsiness of arm movements using an ordinal scale based on the examiner's observations. In this study, we focused on the three-dimensional curvature of the hand trajectory to quantify movement, and aimed to establish a novel measurement that is independent of movement duration. We compared the proposed measurement with the SIAS score and the jerk measure representing temporal smoothness. Methods: Sixteen stroke patients with SIAS upper limb proximal motor function (Knee-Mouth test) scores ranging from 2 (incomplete performance) to 4 (mild clumsiness) were recruited. Nine healthy participants with a SIAS score of 5 (normal) also participated. Participants were asked to grasp a plastic glass and repetitively move it from the lap to the mouth and back at a comfortable speed for 30 s, during which the hand movement was measured using OPTOTRAK. The position data were numerically differentiated and the three-dimensional curvature was computed. To compare against a previously proposed measure, the mean squared jerk normalized by its minimum value was computed. Age-matched healthy participants were instructed to move the glass at three different movement speeds. Results: There was an inverse relationship between the curvature of the movement trajectory and the patient's SIAS score. The median of the -log of curvature (MedianLC) correlated well with the SIAS score, the upper extremity subsection of the Fugl-Meyer Assessment, and the jerk measure in the paretic arm. When the healthy participants moved slowly, the increase in the jerk measure was comparable to the paretic movements with a SIAS score of 2 to 4, while the MedianLC was distinguishable
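
    The curvature measure itself is straightforward to compute from sampled positions, kappa = |v x a| / |v|^3, with MedianLC the median of -log(kappa) over the movement. A sketch with synthetic trajectories (not the patient data):

        import numpy as np

        def median_log_curvature(positions, dt):
            v = np.gradient(positions, dt, axis=0)   # velocity
            a = np.gradient(v, dt, axis=0)           # acceleration
            kappa = (np.linalg.norm(np.cross(v, a), axis=1)
                     / np.linalg.norm(v, axis=1) ** 3)
            return np.median(-np.log(kappa))

        # Smooth helix vs. the same helix with added jitter.
        t = np.linspace(0, 4 * np.pi, 400)
        helix = np.c_[np.cos(t), np.sin(t), 0.1 * t]
        rng = np.random.default_rng(0)
        jittery = helix + 0.01 * rng.standard_normal(helix.shape)
        print(median_log_curvature(helix, t[1] - t[0]))     # higher: smoother movement
        print(median_log_curvature(jittery, t[1] - t[0]))   # lower: irregular movement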

  3. A Line Search Multilevel Truncated Newton Algorithm for Computing the Optical Flow

    Directory of Open Access Journals (Sweden)

    Lluís Garrido

    2015-06-01

    We describe the implementation details and give the experimental results of three optimization algorithms for dense optical flow computation. In particular, using a line search strategy, we evaluate the performance of the unilevel truncated Newton method (LSTN), a multiresolution truncated Newton (MR/LSTN) and a full multigrid truncated Newton (FMG/LSTN). We use three image sequences and four models of optical flow for performance evaluation. The FMG/LSTN algorithm is shown to lead to better optical flow estimation with less computational work than both the LSTN and MR/LSTN algorithms.

  4. Generic Properties of Curvature Sensing through Vision and Touch

    Directory of Open Access Journals (Sweden)

    Birgitta Dresp-Langley

    2013-01-01

    Generic properties of curvature representations formed on the basis of vision and touch were examined as a function of the mathematical properties of curved objects. Virtual representations of the curves were shown on a computer screen for visual scaling by sighted observers (experiment 1). Their physical counterparts were placed in the two hands of blindfolded and congenitally blind observers for tactile scaling. The psychophysical data show that curvature representations in congenitally blind individuals, who never had any visual experience, and in sighted observers, who rely on vision most of the time, are statistically linked to the same mathematical properties of the curves. The perceived magnitude of object curvature, sensed through either vision or touch, is related by a mathematical power law, with similar exponents for the two sensory modalities, to the aspect ratio of the curves, a scale-invariant geometric property. This finding supports biologically motivated models of sensory integration suggesting a universal power law for the adaptive brain control and balance of motor responses to environmental stimuli from any sensory modality.

  5. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    Science.gov (United States)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
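
    A toy numeric illustration of the RSA side of the story (textbook-sized primes, entirely insecure; the point is that whoever factors N, for instance via Shor's algorithm, can recompute the private key):

        # Requires Python 3.8+ for pow(e, -1, phi).
        p, q = 61, 53                  # the secret primes a factoring attack recovers
        N, phi = p * q, (p - 1) * (q - 1)
        e = 17                         # public exponent, coprime to phi
        d = pow(e, -1, phi)            # private exponent from the factorisation

        message = 1234
        cipher = pow(message, e, N)    # public-key encryption: m^e mod N
        print(pow(cipher, d, N))       # 1234: decryption with the derived key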

  6. An improved non-uniformity correction algorithm and its GPU parallel implementation

    Science.gov (United States)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which always leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed. Here we put forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained, respectively, by minimizing the local Gaussian curvature and the mean curvature of the image surface. Then, a guided filter is utilized to combine these two parts to get the estimate of the spatial low-frequency component. Finally, this SLP component is brought into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm can reduce the non-uniformity without losing detail. After that, a GPU based parallel implementation that runs 150 times faster than the CPU version is presented, which shows that the proposed algorithm has great potential for real-time application.

  7. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is advocated as a promising solution; most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high

  8. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    OpenAIRE

    Hosny, Khalid M.; Hafez, Mohamed A.

    2012-01-01

    An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments where the later are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with 87% reduction in the computational complexity. A fast 1D cascade algorithm was also employed to add m...
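
    The exact 3D geometric moments the record refers to have a compact direct form, m_pqr = Σ_x Σ_y Σ_z x^p y^q z^r f(x, y, z). The sketch below computes them by separable contraction over a voxel volume; the paper's symmetry-based and cascade accelerations are not reproduced here.

```python
# Minimal sketch of exact 3D geometric moments over a voxel grid.
import numpy as np

def geometric_moments_3d(vol, max_order=2):
    nx, ny, nz = vol.shape
    x, y, z = (np.arange(n, dtype=float) for n in (nx, ny, nz))
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            for r in range(max_order + 1 - p - q):
                # Separable: contract each axis with its monomial weights.
                moments[(p, q, r)] = np.einsum('i,j,k,ijk->', x**p, y**q, z**r, vol)
    return moments

vol = np.random.rand(16, 16, 16)
m = geometric_moments_3d(vol)
print(m[(0, 0, 0)], vol.sum())     # the zeroth moment equals the total mass
```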

  9. A new fast algorithm for computing complex number-theoretic transforms

    Science.gov (United States)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix fast Fourier transform (FFT) algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
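
    For context, the sketch below implements arithmetic in GF(q^2), represented as pairs (re, im) mod q, for a small Mersenne prime, together with a naive O(n^2) transform, just to demonstrate the circular-convolution property such transforms provide. The paper's contribution, a high-radix FFT computing the same transform with far fewer multiplications, is not reproduced.

```python
# Hedged sketch: naive complex number-theoretic transform over GF(q^2) for the
# small Mersenne prime q = 2^5 - 1 = 31. Since q = 3 (mod 4), x^2 + 1 is
# irreducible and GF(q^2) behaves like "complex numbers mod q".
import random

Q = 31

def cadd(a, b):
    return ((a[0] + b[0]) % Q, (a[1] + b[1]) % Q)

def cmul(a, b):
    return ((a[0]*b[0] - a[1]*b[1]) % Q, (a[0]*b[1] + a[1]*b[0]) % Q)

def cpow(a, e):
    r = (1, 0)
    while e:
        if e & 1:
            r = cmul(r, a)
        a = cmul(a, a)
        e >>= 1
    return r

def find_root(n):
    # Random search for an element of order exactly n (n a power of two).
    assert (Q*Q - 1) % n == 0
    while True:
        g = (random.randrange(Q), random.randrange(Q))
        if g == (0, 0):
            continue
        w = cpow(g, (Q*Q - 1)//n)
        if cpow(w, n//2) != (1, 0):
            return w

def ntt(x, w):
    n = len(x)
    out = []
    for k in range(n):
        acc = (0, 0)
        for j in range(n):
            acc = cadd(acc, cmul(x[j], cpow(w, j*k % n)))
        out.append(acc)
    return out

n = 8                                   # n must divide q^2 - 1 = 960
w = find_root(n)
a = [(j, 0) for j in range(n)]
b = [(1, 0)] * n
AB = [cmul(u, v) for u, v in zip(ntt(a, w), ntt(b, w))]
ninv = pow(n, Q - 2, Q)                 # n^{-1} mod q
conv = [((v[0]*ninv) % Q, (v[1]*ninv) % Q) for v in ntt(AB, cpow(w, n - 1))]
print(conv)                             # circular convolution of a and b mod 31
```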

  10. Cloud Computing Task Scheduling Based on Cultural Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Li Jian-Wen

    2016-01-01

    Full Text Available A task scheduling strategy based on a cultural genetic algorithm (CGA) is proposed in order to improve the efficiency of task scheduling on a cloud computing platform; it targets minimizing the total time and cost of task scheduling. The improved genetic algorithm is used to construct the main population space and the knowledge space under a cultural framework; the two evolve independently in parallel, forming a mechanism of mutual promotion for dispatching cloud tasks. At the same time, to counter the tendency of genetic algorithms to fall into local optima, a non-uniform mutation operator is introduced to improve the search performance of the algorithm. The experimental results show that CGA reduces the total time and lowers the cost of scheduling, making it an effective algorithm for cloud task scheduling.
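
    The non-uniform mutation operator referred to above is commonly implemented in the Michalewicz style sketched below (an assumption; the record does not spell out its exact form): early generations perturb a gene over its whole range, and the perturbation shrinks as the generation counter t approaches the budget T, shifting from exploration to fine-tuning.

```python
# Michalewicz-style non-uniform mutation: the step size decays with t/T.
import random

def non_uniform_mutate(x, lo, hi, t, T, b=5.0):
    def delta(y):
        r = random.random()
        return y * (1.0 - r ** ((1.0 - t / T) ** b))
    if random.random() < 0.5:
        return x + delta(hi - x)        # push toward the upper bound
    return x - delta(x - lo)            # push toward the lower bound

random.seed(1)
print(non_uniform_mutate(0.5, 0.0, 1.0, t=1, T=100))    # large early step
print(non_uniform_mutate(0.5, 0.0, 1.0, t=99, T=100))   # tiny late step
```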

  11. Signal and image processing algorithm performance in a virtual and elastic computing environment

    Science.gov (United States)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting volume of data, and its associated high-performance computing needs, strains existing computing infrastructures. Purchasing computing power as a commodity from a cloud service offers low-cost, pay-as-you-go pricing, scalability, and elasticity that may provide solutions for developing and optimizing algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and to run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with the existing infrastructure. A discussion of using cloud computing with government data covers the best security practices available within cloud services such as AWS.

  12. An Efficient UD-Based Algorithm for the Computation of Maximum Likelihood Sensitivity of Continuous-Discrete Systems

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik

    2016-01-01

    This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms. ... This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to maximum likelihood estimation based on finite-difference gradient computation, we get a significant speedup ...
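
    The UD decomposition mentioned above factors a symmetric positive-definite matrix as P = U D U^T, with U unit upper triangular and D diagonal, which allows numerically robust covariance updates. Below is a textbook-style factorization sketch, not the paper's full sensitivity algorithm.

```python
# Bierman-style UD factorization: P = U D U^T.
import numpy as np

def ud_factorize(P):
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    for j in range(n - 1, 0, -1):
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        # Schur-complement downdate of the leading block.
        P[:j, :j] -= np.outer(U[:j, j], U[:j, j]) * D[j]
    D[0] = P[0, 0]
    return U, D

A = np.random.rand(4, 4)
P = A @ A.T + 4 * np.eye(4)                  # symmetric positive definite
U, D = ud_factorize(P)
print(np.allclose(U @ np.diag(D) @ U.T, P))  # True
```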

  13. Curvature Entropy for Curved Profile Generation

    OpenAIRE

    Ujiie, Yoshiki; Kato, Takeo; Sato, Koichiro; Matsuoka, Yoshiyuki

    2012-01-01

    In a curved surface design, the overall shape features that emerge from combinations of shape elements are important. However, controlling the features of the overall shape in curved profiles is difficult using conventional microscopic shape information such as dimension. Herein two types of macroscopic shape information, curvature entropy and quadrature curvature entropy, quantitatively represent the features of the overall shape. The curvature entropy is calculated by the curvature distribu...
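
    A minimal reading of curvature entropy is the Shannon entropy of a discretized curvature distribution sampled along a profile. The sketch below uses a simple histogram quantization, which is an assumption; the paper's exact quantization scheme is not reproduced.

```python
# Curvature entropy as the Shannon entropy of binned curvature samples.
import numpy as np

def curvature_entropy(kappa, bins=16):
    hist, _ = np.histogram(kappa, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
kappa_circle = np.ones_like(t)               # constant curvature: entropy 0
kappa_wavy = 1.0 + 0.8 * np.sin(5 * t)       # varying curvature: higher entropy
print(curvature_entropy(kappa_circle), curvature_entropy(kappa_wavy))
```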

  14. Sort-Mid tasks scheduling algorithm in grid computing

    Directory of Open Access Journals (Sweden)

    Naglaa M. Reda

    2015-11-01

    Full Text Available Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms that approach optimality, and these have shown good performance with respect to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain the average value over the sorted list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
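
    One plausible reading of the allocation loop described above is sketched below on a synthetic expected-time-to-compute (ETC) matrix: average each unassigned task's completion times over the machines, pick the task with the maximum average, and assign it to the machine with the minimum completion time.

```python
# Hedged sketch of the Sort-Mid allocation loop on a synthetic ETC matrix.
import numpy as np

def sort_mid(etc):
    """etc[i, j] = execution time of task i on machine j."""
    n_tasks, n_machines = etc.shape
    ready = np.zeros(n_machines)                      # machine ready times
    unassigned, schedule = set(range(n_tasks)), {}
    while unassigned:
        tasks = sorted(unassigned)
        completion = etc[tasks] + ready               # per task/machine
        pick = tasks[int(np.argmax(completion.mean(axis=1)))]
        machine = int(np.argmin(etc[pick] + ready))   # min completion time
        ready[machine] += etc[pick, machine]
        schedule[pick] = machine
        unassigned.remove(pick)
    return schedule, ready.max()                      # assignment, makespan

etc = np.random.uniform(1, 10, size=(8, 3))
schedule, makespan = sort_mid(etc)
print(schedule, "makespan:", makespan)
```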

  15. Curvature-Induced Instabilities of Shells

    Science.gov (United States)

    Pezzulla, Matteo; Stoop, Norbert; Steranka, Mark P.; Bade, Abdikhalaq J.; Holmes, Douglas P.

    2018-01-01

    Induced by proteins within the cell membrane or by differential growth, heating, or swelling, spontaneous curvatures can drastically affect the morphology of thin bodies and induce mechanical instabilities. Yet, the interaction of spontaneous curvature and geometric frustration in curved shells remains poorly understood. Via a combination of precision experiments on elastomeric spherical shells, simulations, and theory, we show how a spontaneous curvature induces a rotational symmetry-breaking buckling as well as a snapping instability reminiscent of the Venus fly trap closure mechanism. The instabilities, and their dependence on geometry, are rationalized by reducing the spontaneous curvature to an effective mechanical load. This formulation reveals a combined pressurelike term in the bulk and a torquelike term in the boundary, allowing scaling predictions for the instabilities that are in excellent agreement with experiments and simulations. Moreover, the effective pressure analogy suggests a curvature-induced subcritical buckling in closed shells. We determine the critical buckling curvature via a linear stability analysis that accounts for the combination of residual membrane and bending stresses. The prominent role of geometry in our findings suggests the applicability of the results over a wide range of scales.

  16. Development of computational algorithms for quantification of pulmonary structures

    International Nuclear Information System (INIS)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A.; Pina, Diana R.

    2012-01-01

    High-resolution computed tomography (HRCT) has become the diagnostic imaging exam most commonly used for the evaluation of the sequelae of paracoccidioidomycosis. Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, by the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)
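
    The masking-and-thresholding idea described above can be illustrated with a density mask on Hounsfield units: threshold an ROI and report the abnormal area as a fraction of the lung area. The thresholds below are illustrative assumptions, not the authors' calibrated values.

```python
# Illustrative density-mask quantification of a CT region of interest.
import numpy as np

def quantify_roi(hu_roi):
    lung = (hu_roi > -1024) & (hu_roi < -200)     # crude lung-tissue mask
    emphysema = hu_roi < -950                     # hyperlucent voxels
    fibrosis = (hu_roi > -500) & (hu_roi < -200)  # abnormally dense voxels
    lung_area = max(lung.sum(), 1)
    return emphysema.sum() / lung_area, fibrosis.sum() / lung_area

roi = np.random.uniform(-1000, -100, size=(128, 128))   # synthetic HU values
emph_frac, fib_frac = quantify_roi(roi)
print(f"emphysema: {emph_frac:.2%}, fibrosis: {fib_frac:.2%}")
```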

  17. The Quick Measure of a Nurbs Surface Curvature for Accurate Triangular Meshing

    Directory of Open Access Journals (Sweden)

    Kniat Aleksander

    2014-04-01

    Full Text Available NURBS surfaces are the most widely used surfaces for three-dimensional models in CAD/CAE programs. When a model for FEM calculation is prepared with a CAD program, it must eventually be meshed. There are many algorithms for meshing planar regions. Some of them may be used for meshing surfaces, but it is necessary to take the curvature of the surface into consideration to avoid a poor-quality mesh: the mesh must be denser in the curved regions of the surface. In this paper, instead of analysing the surface curvature directly, a method is presented to assess how close a mesh triangle is to the surface to which its vertices belong. The distance between a mesh triangle and a parallel tangent plane through a point on the surface is the measure of the triangle's quality. Finding the surface point whose projection is located inside the mesh triangle and which is the tangency point of the plane parallel to this triangle is an optimization problem. A mathematical description of the problem and the algorithm to find its solution are also presented in the paper.
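
    On a toy graph surface z = f(u, v) standing in for a NURBS patch, the quality measure reduces to finding the parameter point whose surface normal is parallel to the triangle's normal and measuring the offset between the two parallel planes. The sketch below does this with a generic optimizer; NURBS evaluation and the inside-triangle check from the paper are omitted.

```python
# Hedged sketch of the triangle quality measure on a toy surface.
import numpy as np
from scipy.optimize import minimize

def f(u, v):
    return 0.3 * np.sin(u) * np.cos(v)       # stand-in for a NURBS patch

def surface(uv):
    u, v = uv
    return np.array([u, v, f(u, v)])

def grad_f(u, v, h=1e-6):                    # numerical partials of f
    return ((f(u + h, v) - f(u - h, v)) / (2 * h),
            (f(u, v + h) - f(u, v - h)) / (2 * h))

# A mesh triangle with vertices on the surface, and its unit normal.
tri = np.array([surface((0.0, 0.0)), surface((0.4, 0.1)), surface((0.1, 0.5))])
n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
n /= np.linalg.norm(n)

def residual(uv):
    # The graph normal (-f_u, -f_v, 1) is parallel to n when this vanishes.
    fu, fv = grad_f(*uv)
    return (n[0] + n[2] * fu) ** 2 + (n[1] + n[2] * fv) ** 2

res = minimize(residual, x0=np.array([0.2, 0.2]))
sag = abs(n @ (surface(res.x) - tri[0]))     # triangle-to-tangent-plane gap
print("triangle quality measure:", sag)
```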

  18. Introduction: a brief overview of iterative algorithms in X-ray computed tomography.

    Science.gov (United States)

    Soleimani, M; Pengpen, T

    2015-06-13

    This paper presents a brief overview of some basic iterative algorithms, and more sophisticated methods are presented in the research papers in this issue. A range of algebraic iterative algorithms are covered here including ART, SART and OS-SART. A major limitation of the traditional iterative methods is their computational time. The Krylov subspace based methods such as the conjugate gradients (CG) algorithm and its variants can be used to solve linear systems of equations arising from large-scale CT with possible implementation using modern high-performance computing tools. The overall aim of this theme issue is to stimulate international efforts to develop the next generation of X-ray computed tomography (CT) image reconstruction software. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
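
    As background for the algebraic family mentioned above, ART reduces to the Kaczmarz sweep sketched below: project the current estimate onto the hyperplane of each row of A x = b in turn. Here A and b are synthetic stand-ins for a CT system matrix and sinogram.

```python
# Minimal ART (Kaczmarz) sweep for a consistent linear system.
import numpy as np

def art(A, b, sweeps=50, relax=1.0):
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
x_true = rng.random(32)
A = rng.random((64, 32))
b = A @ x_true
print(np.linalg.norm(art(A, b) - x_true))    # shrinks with more sweeps
```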

  19. A comparative study of attenuation correction algorithms in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Murase, Kenya; Itoh, Hisao; Mogami, Hiroshi; Ishine, Masashiro; Kawamura, Masashi; Iio, Atsushi; Hamamoto, Ken

    1987-01-01

    A computer-based simulation method was developed to assess the relative effectiveness and availability of various attenuation compensation algorithms in single photon emission computed tomography (SPECT). The effects of the nonuniformity of the attenuation coefficient distribution in the body, of errors in determining the body contour, and of statistical noise on reconstruction accuracy, as well as the computation time of the algorithms, were studied. The algorithms were classified into three groups: precorrection, postcorrection and iterative correction methods. Furthermore, a hybrid method was devised by combining several methods. This study will be useful for understanding the characteristics, limitations and strengths of the algorithms and for finding a practical correction method for photon attenuation in SPECT. (orig.)

  20. Some Inequalities for the Lp-Curvature Image

    Directory of Open Access Journals (Sweden)

    Xiang Yu

    2009-01-01

    Full Text Available Lutwak introduced the notion of the Lp-curvature image and proved an inequality for the volumes of a convex body and its Lp-curvature image. In this paper, we first give a monotonicity property of the Lp-curvature image. Further, we establish two inequalities, for the Lp-curvature image and its polar, respectively. Finally, an inequality for the volumes of the Lp-projection body and the Lp-curvature image is obtained.

  1. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    Science.gov (United States)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  2. Curvature bound from gravitational catalysis

    Science.gov (United States)

    Gies, Holger; Martini, Riccardo

    2018-04-01

    We determine bounds on the curvature of local patches of spacetime from the requirement of intact long-range chiral symmetry. The bounds arise from a scale-dependent analysis of gravitational catalysis and its influence on the effective potential for the chiral order parameter, as induced by fermionic fluctuations on a curved spacetime with local hyperbolic properties. The bound is expressed in terms of the local curvature scalar measured in units of a gauge-invariant coarse-graining scale. We argue that any effective field theory of quantum gravity obeying this curvature bound is safe from chiral symmetry breaking through gravitational catalysis and thus compatible with the simultaneous existence of chiral fermions in the low-energy spectrum. With increasing number of dimensions, the curvature bound in terms of the hyperbolic scale parameter becomes stronger. Applying the curvature bound to the asymptotic safety scenario for quantum gravity in four spacetime dimensions translates into bounds on the matter content of particle physics models.

  3. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    Science.gov (United States)

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  4. Gravitational curvature an introduction to Einstein's theory

    CERN Document Server

    Frankel, Theodore Thomas

    1979-01-01

    This classic text and reference monograph applies modern differential geometry to general relativity. A brief mathematical introduction to gravitational curvature, it emphasizes the subject's geometric essence, replacing the often-tedious analytical computations with geometric arguments. Clearly presented and physically motivated derivations express the deflection of light, Schwarzschild's exterior and interior solutions, and the Oppenheimer-Volkoff equations. A perfect choice for advanced students of mathematics, this volume will also appeal to mathematicians interested in physics. It stresses ...

  5. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    Science.gov (United States)

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. Level of Evidence: 4. Laryngoscope, 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  6. Intelligent cloud computing security using genetic algorithm as a computational tools

    Science.gov (United States)

    Razuky AL-Shaikhly, Mazin H.

    2018-05-01

    An essential change has occurred in the field of Information Technology with the advent of cloud computing: clouds provide virtual assets by means of the web, yet pose great difficulties in the fields of information security and privacy assurance. Currently, the main problem with cloud computing is how to improve privacy and security for the cloud, since security is critical to the cloud. This paper attempts to address cloud security by using an intelligent system with a genetic algorithm as a wall to keep cloud data secure; all services provided by the cloud must detect who receives them and register this to create a list of users (trusted or untrusted) depending on behavior. The execution of the present proposal has shown good outcomes.

  7. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  8. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
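
    For concreteness, the sketch below implements one of the six heuristics compared in the two records above, Min-min: repeatedly find, over all unscheduled tasks, the (task, machine) pair with the smallest completion time and schedule it. The ETC matrix is synthetic.

```python
# Minimal Min-min scheduler on a synthetic ETC matrix.
import numpy as np

def min_min(etc):
    n_tasks, n_machines = etc.shape
    ready = np.zeros(n_machines)
    unscheduled = set(range(n_tasks))
    order = []
    while unscheduled:
        best = None
        for t in unscheduled:
            j = int(np.argmin(etc[t] + ready))       # best machine for t
            ct = etc[t, j] + ready[j]
            if best is None or ct < best[0]:
                best = (ct, t, j)
        ct, t, j = best
        ready[j] = ct
        order.append((t, j))
        unscheduled.remove(t)
    return order, ready.max()                        # schedule, makespan

etc = np.random.uniform(1, 20, size=(10, 4))
order, makespan = min_min(etc)
print("makespan:", makespan)
```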

  9. Factors affecting root curvature of mandibular first molar

    International Nuclear Information System (INIS)

    Choi, Hang Moon; Yi, Won Jin; Heo, Min Suk; Kim, Jung Hwa; Choi, Soon Chul; Park, Tae Won

    2006-01-01

    To find the causes of root curvature by use of panoramic and lateral cephalometric radiographs. Twenty-six 1st graders whose mandibular 1st molars had just emerged into the mouth were selected. Panoramic and lateral cephalometric radiographs were taken at grades 1 and 6, longitudinally. On the cephalometric radiograph, the mandibular plane angle, ramus-occlusal plane angle, gonial angle, and gonion-gnathion distance (Go-Gn distance) were measured. On the panoramic radiograph, the elongated root length and root angle were measured by means of digital subtraction radiography. The occlusal plane-tooth axis angle was measured, too. Pearson correlations were used to evaluate the relationships between root curvature and elongated length and the longitudinal variations of all variables. A multiple regression equation using the related variables was computed. The Pearson correlation coefficients between the curved angle and the longitudinal variations of the occlusal plane-tooth axis angle and the ramus-occlusal plane angle were 0.350 and 0.401, respectively (p ...). The multiple regression equation was Y = ... X1 + 0.745 X2 (Y: root angle, X1: variation of occlusal plane-tooth axis angle, X2: variation of ramus-occlusal plane angle). It was suspected that the causes of root curvature were a change of the tooth axis caused by contact with the 2nd deciduous tooth and the amount of mesial and superior movement related to the change of the occlusal plane.

  10. Automated System for Teaching Computational Complexity of Algorithms Course

    Directory of Open Access Journals (Sweden)

    Vadim S. Roublev

    2017-01-01

    Full Text Available This article describes problems of designing automated teaching system for “Computational complexity of algorithms” course. This system should provide students with means to familiarize themselves with complex mathematical apparatus and improve their mathematical thinking in the respective area. The article introduces the technique of algorithms symbol scroll table that allows estimating lower and upper bounds of computational complexity. Further, we introduce a set of theorems that facilitate the analysis in cases when the integer rounding of algorithm parameters is involved and when analyzing the complexity of a sum. At the end, the article introduces a normal system of symbol transformations that allows one both to perform any symbol transformations and simplifies the automated validation of such transformations. The article is published in the authors’ wording.

  11. Scientific computing and algorithms in industrial simulations projects and products of Fraunhofer SCAI

    CERN Document Server

    Schüller, Anton; Schweitzer, Marc

    2017-01-01

    The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for our partners. SCAI develops numerical techniques, parallel algorithms and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art metho...

  12. Unified algorithm for partial differential equations and examples of numerical computation

    International Nuclear Information System (INIS)

    Watanabe, Tsuguhiro

    1999-01-01

    A new unified algorithm is proposed to solve partial differential equations that describe nonlinear boundary value problems, eigenvalue problems and time-developing boundary value problems. The algorithm is composed of an implicit difference scheme and a multiple shooting scheme and is named HIDM (Higher order Implicit Difference Method). A new prototype computer program for 2-dimensional partial differential equations has been constructed and tested successfully on several problems. Extension of the computer program to 3 or more dimensions will be easy due to the direct-product-type difference scheme. (author)
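
    As a one-dimensional illustration of the implicit difference ingredient in HIDM, the sketch below solves the boundary value problem u'' = f(x), u(0) = u(1) = 0 with central differences assembled into a single linear system. HIDM itself is higher order and also covers nonlinear, eigenvalue and time-developing problems.

```python
# Implicit (central) difference solve of u'' = sin(pi x), u(0) = u(1) = 0.
import numpy as np

n = 50
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
rhs = np.sin(np.pi * x)

# Tridiagonal operator for the interior unknowns u_1 .. u_{n-1}.
A = (np.diag(-2 * np.ones(n - 1)) +
     np.diag(np.ones(n - 2), 1) +
     np.diag(np.ones(n - 2), -1)) / h**2
u = np.zeros(n + 1)
u[1:n] = np.linalg.solve(A, rhs[1:n])

exact = -np.sin(np.pi * x) / np.pi**2
print("max error:", np.abs(u - exact).max())   # O(h^2)
```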

  13. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm.

    Directory of Open Access Journals (Sweden)

    Higinio Mora

    Full Text Available The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the point that it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low-computational-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics, due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results show that an average speedup of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results.
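
    The metric swap at the heart of the proposal is easy to state in code: replace the squares and square root of the Euclidean distance with the sums or maxima below. Only the brute-force matching step is sketched, not the full ICP loop.

```python
# Matching step of ICP under three point-to-point metrics.
import numpy as np

def match(src, dst, metric="euclidean"):
    diff = np.abs(src[:, None, :] - dst[None, :, :])   # pairwise |dx|,|dy|,|dz|
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=2))
    elif metric == "manhattan":
        d = diff.sum(axis=2)                           # no squares, no sqrt
    else:                                              # chebyshev
        d = diff.max(axis=2)                           # comparisons only
    return d.argmin(axis=1)                            # closest dst per src

rng = np.random.default_rng(0)
src = rng.random((100, 3))
dst = src + rng.normal(0, 0.01, src.shape)             # perturbed copy
for m in ("euclidean", "manhattan", "chebyshev"):
    print(m, (match(src, dst, m) == np.arange(100)).mean())
```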

  14. Evaluation of six TPS algorithms in computing entrance and exit doses

    Science.gov (United States)

    Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun P.; Elliott, Alex

    2014-01-01

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PACS numbers: 87.55.-x, 87.55.D-, 87.55.N-, 87.53.Bn PMID:24892349

  15. A remark about the mean curvature

    International Nuclear Information System (INIS)

    Zhang Weitao.

    1992-11-01

    In this paper, we give an integral identity for the mean curvature in the Sobolev space H^1_0(Ω) ∩ H^2(Ω). Supposing the mean curvature on Γ = ∂Ω is positive, we prove some inequalities for the positive mean curvature and propose some open problems. (author). 4 refs

  16. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the ...

  17. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    Science.gov (United States)

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    The CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies have questioned the validity of physician reporting on form CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and is, therefore, more reflective of the underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical records systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to their presence or absence according to the algorithms. Computer algorithms reported more comorbidities than the physician-completed forms. This remained true when decreasing the data span to one year and using only a single health center source. The algorithms' determinations were well accepted by a physician panel. Importantly, use of the algorithms significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer a more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.

  18. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images.

    Science.gov (United States)

    Chai, Hanchao; Guo, Yi; Wang, Yuanyuan; Zhou, Guohui

    2017-12-04

    An adrenal tumor disturbs the secreting function of adrenocortical cells, leading to many diseases, and different kinds of adrenal tumors require different therapeutic schedules. In practical diagnosis, judging the tumor type by reading hundreds of CT images relies heavily on the doctor's experience. This paper proposes an automatic computer-aided analysis method for adrenal tumor detection and classification. It consists of automatic segmentation algorithms, feature extraction and classification algorithms. These algorithms were integrated into a system and operated through a graphical interface built with the MATLAB graphical user interface (GUI) tools. The accuracy of the automatic computer-aided segmentation and classification reached 90% on 436 CT images. The experiments proved the stability and reliability of this automatic computer-aided analytic system.

  19. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks: those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi...

  20. ESHOPPS: A COMPUTATIONAL TOOL TO AID THE TEACHING OF SHORTEST PATH ALGORITHMS

    Directory of Open Access Journals (Sweden)

    S. J. de A. LIMA

    2015-07-01

    Full Text Available The development of a computational tool called EShoPPS – Environment for Shortest Path Problem Solving, which is used to assist students in understanding the workings of the Dijkstra, greedy search and A* (star) algorithms, is presented in this paper. Such algorithms are commonly taught in graduate and undergraduate courses of Engineering and Informatics and are used for solving many optimization problems that can be characterized as shortest path problems. EShoPPS is an interactive tool that allows students to create a graph representing the problem and also helps in developing their knowledge of each specific algorithm. Experiments performed with 155 students of undergraduate and graduate courses such as Industrial Engineering, Computer Science and Information Systems have shown that by using the EShoPPS tool students were able to improve their interpretation of the investigated algorithms.
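
    For reference, the first of the three algorithms the tool visualizes can be written in a few lines; the sketch below is a standard heap-based Dijkstra on a small directed graph, not EShoPPS itself.

```python
# Standard Dijkstra shortest-path with a binary heap.
import heapq

def dijkstra(graph, start):
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": [("B", 4), ("C", 2)],
         "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)],
         "D": []}
print(dijkstra(graph, "A"))                 # {'A': 0, 'C': 2, 'B': 3, 'D': 8}
```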

  1. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    Science.gov (United States)

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time-, I/O-, and memory-efficient compared to default MapReduce. The proposed approach improves the execution time by 200%, with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  2. Smolyak's algorithm: A powerful black box for the acceleration of scientific computations

    KAUST Repository

    Tempone, Raul; Wolfers, Soeren

    2017-01-01

    We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak's work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak's algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.

  3. Smolyak's algorithm: A powerful black box for the acceleration of scientific computations

    KAUST Repository

    Tempone, Raul

    2017-03-26

    We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak's work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak's algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.
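
    Among the keywords listed in the two records above, the combination technique has a particularly compact form in two dimensions: the sparse-grid quadrature is a signed sum of small tensor-product rules, Q_q f = Σ_{|l|=q} Q_{l1} ⊗ Q_{l2} f - Σ_{|l|=q-1} Q_{l1} ⊗ Q_{l2} f. The toy below builds it from 1D trapezoidal rules on [0, 1]; it illustrates the structure only and is not a production sparse-grid code.

```python
# Toy 2D Smolyak combination technique built from trapezoidal rules.
import numpy as np

def trap_rule(level):
    n = 2 ** level + 1
    x = np.linspace(0, 1, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] = w[-1] = 0.5 / (n - 1)
    return x, w

def tensor_quad(f, l1, l2):
    x1, w1 = trap_rule(l1)
    x2, w2 = trap_rule(l2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    return np.einsum('i,j,ij->', w1, w2, f(X1, X2))

def smolyak_2d(f, q):
    total = 0.0
    for l1 in range(1, q):                     # |l| = q terms (+1 each)
        if q - l1 >= 1:
            total += tensor_quad(f, l1, q - l1)
    for l1 in range(1, q - 1):                 # |l| = q - 1 terms (-1 each)
        if q - 1 - l1 >= 1:
            total -= tensor_quad(f, l1, q - 1 - l1)
    return total

f = lambda x, y: np.exp(x + y)                 # exact integral: (e - 1)^2
for q in range(3, 8):
    print(q, smolyak_2d(f, q) - (np.e - 1) ** 2)
```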

  4. An Efficient Algorithm for Computing Attractors of Synchronous And Asynchronous Boolean Networks

    Science.gov (United States)

    Zheng, Desheng; Yang, Guowu; Li, Xiaoyu; Wang, Zhicai; Liu, Feng; He, Lei

    2013-01-01

    Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle down to dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights for biologists into the molecular mechanisms underlying many coordinated cellular processes such as cellular division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. Common methods of computing attractors start from a randomly selected initial state and end with an exhaustive search of the state space of a network. However, the time complexity of these methods grows exponentially with respect to the number and length of attractors. Here, we build two algorithms for computing attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, combining iterative methods and reduced ordered binary decision diagrams (ROBDD), we propose an improved algorithm to compute attractors. In the second algorithm, the attractors of synchronous Boolean networks are utilized with asynchronous Boolean translation functions to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly faster in computing attractors for empirical experimental systems. Availability: The software package is available at https://sites.google.com/site/desheng619/download. PMID:23585840
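
    The exhaustive baseline that the record contrasts with is easy to state: iterate the synchronous update from every initial state until the trajectory revisits a state, and collect the resulting cycles. The 3-gene network below is a made-up example; this brute force is exactly what blows up exponentially and what the ROBDD-based algorithm avoids.

```python
# Brute-force attractor detection in a toy synchronous Boolean network.
from itertools import product

update = [lambda s: s[1] and not s[2],       # gene 0
          lambda s: s[0],                    # gene 1
          lambda s: s[0] or s[1]]            # gene 2

def attractors(update, n):
    found = set()
    for state in product([False, True], repeat=n):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = tuple(f(state) for f in update)
        start = seen[state]                  # where the cycle begins
        cycle = [s for s, i in sorted(seen.items(), key=lambda kv: kv[1])
                 if i >= start]
        found.add(tuple(sorted(cycle)))      # canonical, deduplicated
    return found

for att in attractors(update, 3):
    print(att)
```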

  5. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development

    OpenAIRE

    Ziatdinov, Rushan; Musa, Sajid

    2013-01-01

    In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are actually simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system allows forming the basis for the study of computer science in secondary school.
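
    A concrete example of such a "readily memorized operation" is multiplication of a two-digit number by 11 via the sum of adjacent digits, shown below as the simple algorithm it is.

```python
# Mental shortcut for n * 11 when n has two digits: insert the digit sum.
def times_11(n):
    tens, ones = divmod(n, 10)
    middle = tens + ones
    if middle < 10:
        return tens * 100 + middle * 10 + ones
    return (tens + 1) * 100 + (middle - 10) * 10 + ones   # carry the 1

print(times_11(43), 43 * 11)    # 473 473
print(times_11(87), 87 * 11)    # 957 957
```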

  6. Discrete Curvature Theories and Applications

    KAUST Repository

    Sun, Xiang

    2016-08-25

    Discrete Differential Geometry (DDG) concerns discrete counterparts of notions and methods in differential geometry. This thesis deals with a core subject in DDG, discrete curvature theories on various types of polyhedral surfaces that are practically important for free-form architecture, sunlight-redirecting shading systems, and face recognition. Modeled as polyhedral surfaces, the shapes of free-form structures may have to satisfy different geometric or physical constraints. We study a combination of geometry and physics - the discrete surfaces that can stand on their own, as well as having proper shapes for the manufacture. These proper shapes, known as circular and conical meshes, are closely related to discrete principal curvatures. We study curvature theories that make such surfaces possible. Shading systems of freeform building skins are new types of energy-saving structures that can re-direct the sunlight. From these systems, discrete line congruences across polyhedral surfaces can be abstracted. We develop a new curvature theory for polyhedral surfaces equipped with normal congruences - a particular type of congruence defined by linear interpolation of vertex normals. The main results are a discussion of various definitions of normality, a detailed study of the geometry of such congruences, and a concept of curvatures and shape operators associated with the faces of a triangle mesh. These curvatures are compatible with both normal congruences and the Steiner formula. In addition to architecture, we consider the role of discrete curvatures in face recognition. We use geometric measure theory to introduce the notion of asymptotic cones associated with a singular subspace of a Riemannian manifold, which is an extension of the classical notion of asymptotic directions. We get a simple expression of these cones for polyhedral surfaces, as well as convergence and approximation theorems. We use the asymptotic cones as facial descriptors and demonstrate the ...

  7. Fractional charge and inter-Landau-level states at points of singular curvature.

    Science.gov (United States)

    Biswas, Rudro R; Son, Dam Thanh

    2016-08-02

    The quest for universal properties of topological phases is fundamentally important because these signatures are robust to variations in system-specific details. Aspects of the response of quantum Hall states to smooth spatial curvature are well-studied, but challenging to observe experimentally. Here we go beyond this prevailing paradigm and obtain general results for the response of quantum Hall states to points of singular curvature in real space; such points may be readily experimentally actualized. We find, using continuum analytical methods, that the point of curvature binds an excess fractional charge and sequences of quantum states split away, energetically, from the degenerate bulk Landau levels. Importantly, these inter-Landau-level states are bound to the topological singularity and have energies that are universal functions of bulk parameters and the curvature. Our exact diagonalization of lattice tight-binding models on closed manifolds demonstrates that these results continue to hold even when lattice effects are significant. An important technological implication of these results is that these inter-Landau-level states, being both energetically and spatially isolated quantum states, are promising candidates for constructing qubits for quantum computation.

  8. Area collapse algorithm computing new curve of 2D geometric objects

    Science.gov (United States)

    Buczek, Michał Mateusz

    2017-06-01

    The processing of cartographic data demands human involvement. Up-to-date algorithms try to automate a part of this process. The goal is to obtain a digital model, or additional information about the shape and topology, of the input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents the topological and geometrical characteristics of the input data. Its plot depends on the algorithms used, such as medial axis, skeletonization, erosion, thinning, area collapse and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects, for example, a polygon with a polygonal chain, or a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon, and it has a linear plot without branches. The computational process is automatic and repeatable. The requirements on the input data are discussed. The author analyzes the results based on the method used to compute the ends of the output polygonal chains, and explores additional methods to improve the results. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing the hatching of embankments is described.

  9. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  10. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG) algorithm. The new method replaces the arctangent with a slope computation, and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs), by considerably reducing the area (thus increasing the level of parallelism), while maintaining very close classification accuracy compared to the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
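
    The arctangent-free idea can be illustrated with sign tests alone: decide the orientation bin by comparing the gradient vector against precomputed bin-boundary directions, as below. This is an assumed illustration of the principle, not the paper's exact hardware-oriented formulation.

```python
# Orientation binning by boundary-direction comparisons (no arctangent).
import numpy as np

NBINS = 9                                    # unsigned orientation, 20-degree bins
BOUNDS = [(np.cos(k * np.pi / NBINS), np.sin(k * np.pi / NBINS))
          for k in range(NBINS + 1)]

def bin_no_arctan(gx, gy):
    if gy < 0 or (gy == 0 and gx < 0):       # fold into the upper half-plane
        gx, gy = -gx, -gy
    for k in range(NBINS):
        c, s = BOUNDS[k + 1]
        if c * gy - s * gx < 0:              # gradient lies below boundary k+1
            return k
    return NBINS - 1

# Cross-check against classical arctangent binning.
rng = np.random.default_rng(0)
for gx, gy in rng.normal(size=(1000, 2)):
    ref = int((np.arctan2(gy, gx) % np.pi) / (np.pi / NBINS)) % NBINS
    assert bin_no_arctan(gx, gy) == ref
print("all bins match")
```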

  11. New accountant job market reform by computer algorithm: an experimental study

    Directory of Open Access Journals (Sweden)

    Hirose Yoshitaka

    2017-01-01

    Full Text Available The purpose of this study is to examine the matching of new accountants with accounting firms in Japan. A notable feature of the present study is that it brings a computer algorithm to the job-hiring task. Job recruitment activities for new accountants in Japan are one-time, short-term struggles, and their rules change every year; accordingly, many have searched for new rules to replace the current process. This study proposes modifying these job recruitment activities by combining computer and human efforts. Furthermore, the study formulates the job recruitment activities using a model and conducts experiments. As a result, the Deferred Acceptance (DA) algorithm achieves a high truth-telling percentage, a stable matching percentage, and greater efficiency compared with the previous approach. This suggests the potential of the Deferred Acceptance algorithm as a replacement for current approaches. In terms of matching accuracy and stability, the DA algorithm is superior to the current methods and should be adopted.
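
    For reference, the candidate-proposing Deferred Acceptance algorithm itself fits in a short function; the sketch below assumes one position per firm and complete preference lists, a simplification of the setting studied above.

```python
# Gale-Shapley Deferred Acceptance: candidates propose, firms tentatively accept.
def deferred_acceptance(cand_prefs, firm_prefs):
    rank = {f: {c: i for i, c in enumerate(p)} for f, p in firm_prefs.items()}
    match = {}                                 # firm -> candidate
    nxt = {c: 0 for c in cand_prefs}           # next firm each candidate tries
    free = list(cand_prefs)
    while free:
        c = free.pop()
        f = cand_prefs[c][nxt[c]]
        nxt[c] += 1
        if f not in match:
            match[f] = c                       # tentatively accept
        elif rank[f][c] < rank[f][match[f]]:
            free.append(match[f])              # displace the weaker match
            match[f] = c
        else:
            free.append(c)                     # rejected; proposes again later
    return match

cand_prefs = {"c1": ["f1", "f2"], "c2": ["f1", "f2"]}
firm_prefs = {"f1": ["c2", "c1"], "f2": ["c1", "c2"]}
print(deferred_acceptance(cand_prefs, firm_prefs))   # {'f1': 'c2', 'f2': 'c1'}
```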

  12. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed ...

  13. Curvature force and dark energy

    International Nuclear Information System (INIS)

    Balakin, Alexander B; Pavon, Diego; Schwarz, Dominik J; Zimdahl, Winfried

    2003-01-01

    A curvature self-interaction of the cosmic gas is shown to mimic a cosmological constant or other forms of dark energy, such as a rolling tachyon condensate or a Chaplygin gas. Any given Hubble rate and deceleration parameter can be traced back to the action of an effective curvature force on the gas particles. This force self-consistently reacts back on the cosmological dynamics. The links between an imperfect fluid description, a kinetic description with effective antifriction forces and curvature forces, which represent a non-minimal coupling of gravity to matter, are established

  14. Principal Curvature Measures Estimation and Application to 3D Face Recognition

    KAUST Repository

    Tang, Yinhang

    2017-04-06

    This paper presents an effective 3D face keypoint detection, description and matching framework based on three principal curvature measures. These measures give a unified definition of principal curvatures for both smooth and discrete surfaces. They can be reasonably computed based on normal cycle theory and geometric measure theory. The strong theoretical basis of these measures provides us a solid discrete estimation method on real 3D face scans represented as triangle meshes. Based on these estimated measures, the proposed method can automatically detect a set of sparse and discriminating 3D facial feature points. The local facial shape around each 3D feature point is comprehensively described by histograms of these principal curvature measures. To guarantee the pose invariance of these descriptors, three principal curvature vectors of these principal curvature measures are employed to assign the canonical directions. Similarity comparison between faces is accomplished by matching all these curvature-based local shape descriptors using the sparse representation-based reconstruction method. The proposed method was evaluated on three public databases, i.e. FRGC v2.0, Bosphorus, and Gavab. Experimental results demonstrated that the three principal curvature measures contain strong complementarity for 3D facial shape description, and their fusion can largely improve the recognition performance. Our approach achieves rank-one recognition rates of 99.6, 95.7, and 97.9% on the neutral subset, expression subset, and the whole FRGC v2.0 databases, respectively. This indicates that our method is robust to moderate facial expression variations. Moreover, it also achieves very competitive performance on the pose subset (over 98.6% except Yaw 90°) and the occlusion subset (98.4%) of the Bosphorus database. Even in the case of extreme pose variations like profiles, it also significantly outperforms the state-of-the-art approaches with a recognition rate of 57.1%. The ...

  15. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include:
    - A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction.
    - Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models.
    - The MPC algorithms based on neural multi-models (inspired by the idea of predictive control).
    - The MPC algorithms with neural approximation with no on-line linearization.
    - The MPC algorithms with guaranteed stability and robustness.
    - Cooperation between the MPC algorithms and set-point optimization.
    Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  16. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book:
    - Presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on parallel computation, and a number of application examples
    - Covers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objects
    - Discusses applications including scattering from airborne targets, scattering from red ...

  17. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
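
    The recursion-equation view of Grover's algorithm mentioned above can be reproduced in a few lines. The sketch below tracks only two amplitudes (marked and unmarked items), applying the oracle sign flip and the inversion about the mean exactly per iteration; the function name is illustrative, and the closed-form optimal iteration count is the standard one.

```python
import math

def grover_amplitudes(n, m, iterations):
    """Exact two-variable recursion for the amplitudes in Grover's algorithm
    (n items, m marked): oracle sign flip + inversion about the mean."""
    a = b = 1.0 / math.sqrt(n)                 # marked / unmarked amplitudes
    for _ in range(iterations):
        mean = (m * (-a) + (n - m) * b) / n    # mean after the oracle flips marked signs
        a, b = 2 * mean + a, 2 * mean - b      # diffusion: amplitude -> 2*mean - amplitude
    return a, b

n, m = 1 << 12, 1
k_opt = int(round(math.pi / 4 * math.sqrt(n / m)))   # standard optimal iteration count
a, _ = grover_amplitudes(n, m, k_opt)
print(k_opt, m * a * a)                              # success probability, close to 1
```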

  18. An Algorithm for Computing Screened Coulomb Scattering in Geant4

    OpenAIRE

    Mendenhall, Marcus H.; Weller, Robert A.

    2004-01-01

    An algorithm has been developed for the Geant4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lens-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screenin...

  19. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav; Filová , Lenka; Richtarik, Peter

    2018-01-01

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.

  20. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav

    2018-01-17

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.

  1. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    Directory of Open Access Journals (Sweden)

    Khalid M. Hosny

    2012-01-01

    Full Text Available An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments, where the latter are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with an 87% reduction in computational complexity. A fast 1D cascade algorithm was also employed to add further complexity reduction. A comparison with existing methods was performed, where the numerical experiments and the complexity analysis confirmed the efficiency of the proposed method, especially with images and objects of large sizes.
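
    The building blocks of this scheme, the 3D geometric moments, are easy to sketch. The fragment below is an assumption-laden sketch: it samples voxel centres rather than integrating monomials exactly, and implements none of the paper's symmetry or cascade speed-ups; it only shows how all moments m_pqr up to a given order follow from separable power tables.

```python
import numpy as np

def geometric_moments_3d(vol, max_order):
    """All 3D geometric moments m_pqr (p, q, r <= max_order) of a cubic voxel
    volume, using separable power tables and a single einsum contraction."""
    n = vol.shape[0]
    c = (2 * np.arange(n) - (n - 1)) / n                        # voxel centres in (-1, 1)
    powers = np.stack([c ** k for k in range(max_order + 1)])   # (max_order+1, n)
    return np.einsum('pi,qj,rk,ijk->pqr', powers, powers, powers, vol)

vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0                  # a centred solid cube
m = geometric_moments_3d(vol, 4)
print(m[0, 0, 0])                            # zeroth moment = occupied voxel count = 4096
```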

  2. Environmental influences on DNA curvature

    DEFF Research Database (Denmark)

    Ussery, David; Higgins, C.F.; Bolshoy, A.

    1999-01-01

    DNA curvature plays an important role in many biological processes. To study environmental influences on DNA curvature we compared the anomalous migration on polyacrylamide gels of ligation ladders of 11 specifically-designed oligonucleotides. At low temperatures (25 degrees C and below) most..., whilst spermine enhanced the anomalous migration of a different set of sequences. Sequences with a GGC motif exhibited greater curvature than predicted by the presently-used angles for the nearest-neighbour wedge model and are especially sensitive to Mg2+. The data have implications for models... for DNA curvature and for environmentally-sensitive DNA conformations in the regulation of gene expression...

  3. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix; the PSNR of the images reconstructed by the FGI and PGI algorithms is then similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI reconstructions. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNR of the PGI and CGI reconstructions decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize reconstruction denoising with a higher denoising capability than the CGI algorithm. The FGI algorithm thus improves both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
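
    A minimal numerical sketch of the pseudo-inverse reconstruction described above, assuming an idealized noise-free bucket detector and using real cosine/sine Fourier illumination patterns so the measurement matrix has full rank; all sizes are illustrative.

```python
import numpy as np

n = 8                                        # object of n*n pixels
N = n * n
rng = np.random.default_rng(0)
x = rng.random(N)                            # unknown object, flattened

# Preset cosine/sine (real discrete Fourier) illumination patterns, one row
# per measurement; together they form a full-rank N x N measurement matrix.
k = np.arange(N)
cos_rows = np.cos(2 * np.pi * np.outer(np.arange(N // 2 + 1), k) / N)
sin_rows = np.sin(2 * np.pi * np.outer(np.arange(1, N // 2), k) / N)
A = np.vstack([cos_rows, sin_rows])          # shape (N, N)

y = A @ x                                    # noise-free bucket-detector values
x_hat = np.linalg.pinv(A) @ y                # pseudo-inverse reconstruction
print(np.max(np.abs(x_hat - x)))             # ~1e-12: exact up to round-off
```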

  4. On the non-Gaussian correlation of the primordial curvature perturbation with vector fields

    DEFF Research Database (Denmark)

    Kumar Jain, Rajeev; Sloth, Martin Snoager

    2013-01-01

    We compute the three-point cross-correlation function of the primordial curvature perturbation generated during inflation with two powers of a vector field in a model where conformal invariance is broken by a direct coupling of the vector field with the inflaton. If the vector field is identified with the electromagnetic field, this correlation would be a non-Gaussian signature of primordial magnetic fields generated during inflation. We find that the signal is maximized for the flattened configuration where the wave number of the curvature perturbation is twice that of the vector field, and in this limit...

  5. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    Science.gov (United States)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
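
    The Metropolis core that serial and parallel annealers share fits in a few lines. The sketch below is a generic serial version with a cell-exchange move on a toy one-dimensional placement; the cooling schedule and move counts are illustrative, and none of the hypercube mapping or tree broadcasting from the paper is modelled.

```python
import math, random

def anneal(cost, propose, state, t0=10.0, alpha=0.95, sweeps=100, moves=200):
    """Serial Metropolis core shared by annealing placers: always accept
    downhill moves, accept uphill moves with probability exp(-delta/T)."""
    cur = cost(state)
    t = t0
    for _ in range(sweeps):
        for _ in range(moves):
            cand = propose(state)
            delta = cost(cand) - cur
            if delta <= 0 or random.random() < math.exp(-delta / t):
                state, cur = cand, cur + delta
        t *= alpha                            # geometric cooling schedule
    return state, cur

# toy placement: order 5 cells on a line to minimise total net length
nets = [(0, 3), (1, 2), (2, 4), (0, 4), (1, 3)]
cost = lambda perm: sum(abs(perm.index(a) - perm.index(b)) for a, b in nets)
def propose(perm):                            # cell-exchange move
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

print(anneal(cost, propose, [0, 1, 2, 3, 4]))
```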

  6. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  7. A sub-cubic time algorithm for computing the quartet distance between two general trees

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas

    2011-01-01

    Background: When inferring phylogenetic trees, different algorithms may give different trees. To study such effects a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results: We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions: We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close...

  8. An effective algorithm for computing global sensitivity indices (EASI)

    International Nuclear Information System (INIS)

    Plischke, Elmar

    2010-01-01

    We present an algorithm named EASI that estimates first order sensitivity indices from given data using Fast Fourier Transformations. Hence it can be used as a post-processing module for pre-computed model evaluations. Ideas for the estimation of higher order sensitivity indices are also discussed.
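
    A compact sketch of the idea, reconstructed from the description above rather than from the paper's code: the given sample is re-ordered so that the factor of interest traces a triangle wave, and the first-order index is read off the power of the first few FFT harmonics of the re-ordered output; the harmonic count is an illustrative choice.

```python
import numpy as np

def easi_first_order(x, y, harmonics=6):
    """EASI-style first-order sensitivity index from given (x, y) samples:
    re-order y so x traces a triangle wave, then sum the output power at
    the first few harmonics and normalise by the total variance."""
    order = np.argsort(x)
    perm = np.concatenate([order[0::2], order[1::2][::-1]])  # up, then down
    yp = y[perm] - y.mean()
    power = np.abs(np.fft.rfft(yp)) ** 2
    n = len(y)
    return 2.0 * power[1:1 + harmonics].sum() / (n * n * yp.var())

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-np.pi, np.pi, (2, 4096))
y = np.sin(x1) + 0.3 * np.sin(x2) ** 2
print(easi_first_order(x1, y), easi_first_order(x2, y))  # x1 clearly dominates
```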

  9. A simple algorithm for computing positively weighted straight skeletons of monotone polygons

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-01-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376

  10. A simple algorithm for computing positively weighted straight skeletons of monotone polygons.

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-02-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon.

  11. A highly efficient parallel algorithm for solving the neutron diffusion nodal equations on shared-memory computers

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    Modern parallel computer architectures offer an enormous potential for reducing CPU and wall-clock execution times of large-scale computations commonly performed in various applications in science and engineering. Recently, several authors have reported their efforts in developing and implementing parallel algorithms for solving the neutron diffusion equation on a variety of shared- and distributed-memory parallel computers. Testing of these algorithms for a variety of two- and three-dimensional meshes showed significant speedup of the computation. Even for very large problems (i.e., three-dimensional fine meshes) executed concurrently on a few nodes in serial (nonvector) mode, however, the measured computational efficiency is very low (40 to 86%). In this paper, the authors present a highly efficient (∼85 to 99.9%) algorithm for solving the two-dimensional nodal diffusion equations on the Sequent Balance 8000 parallel computer. Also presented is a model for the performance, represented by the efficiency, as a function of problem size and the number of participating processors. The model is validated through several tests and then extrapolated to larger problems and more processors to predict the performance of the algorithm in more computationally demanding situations

  12. Manifolds of positive scalar curvature

    Energy Technology Data Exchange (ETDEWEB)

    Stolz, S [Department of Mathematics, University of Notre Dame, Notre Dame (United States)

    2002-08-15

    This lecture gives a survey on the problem of finding a positive scalar curvature metric on a closed manifold. The Gromov-Lawson-Rosenberg conjecture and its relation to the Baum-Connes conjecture are discussed, and the problem of finding a positive Ricci curvature metric on a closed manifold is explained.

  13. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport.

  14. Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits.

    Science.gov (United States)

    Gámez Serna, Citlalli; Ruichek, Yassine

    2017-06-14

    A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature together with speed limits to automatically adjust the vehicle's speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, 'curve analysis extraction' and 'speed limits database creation' are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently increase safety and comfort for the passenger.
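
    The curvature-to-speed step at the heart of such a method can be sketched as follows; the helper names are hypothetical, the three-point Menger curvature stands in for the paper's curve analysis, and the lateral-acceleration bound and speed cap are arbitrary illustrative values.

```python
import math

def menger_curvature(p1, p2, p3):
    """Curvature of the circle through three consecutive path points
    (a local planar approximation of projected GPS fixes, in metres)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    cross = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))   # 2 * triangle area
    a, b, c = math.dist(p1, p2), math.dist(p2, p3), math.dist(p1, p3)
    return 2.0 * cross / (a * b * c) if a * b * c else 0.0

def curve_speed(kappa, a_lat_max=2.0, v_limit=25.0):
    """Ideal curve speed: lateral-acceleration bound, capped by the speed limit."""
    return v_limit if kappa <= 0 else min(v_limit, math.sqrt(a_lat_max / kappa))

pts = [(0.0, 0.0), (10.0, 1.0), (20.0, 4.0)]   # three consecutive fixes
kappa = menger_curvature(*pts)
print(kappa, curve_speed(kappa))               # ~0.019 1/m -> ~10.3 m/s
```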

  15. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    Science.gov (United States)

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  16. Agent assisted interactive algorithm for computationally demanding multiobjective optimization problems

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2015-01-01

    We generalize the applicability of interactive methods for solving computationally demanding, that is, time-consuming, multiobjective optimization problems. For this purpose we propose a new agent assisted interactive algorithm. It employs a computationally inexpensive surrogate problem and four different agents that intelligently update the surrogate based on the preferences specified by a decision maker. In this way, we decrease the waiting times imposed on the decision maker du...

  17. Multi-objective optimization of HVAC system with an evolutionary computation algorithm

    International Nuclear Information System (INIS)

    Kusiak, Andrew; Tang, Fan; Xu, Guanglin

    2011-01-01

    A data-mining approach for the optimization of a HVAC (heating, ventilation, and air conditioning) system is presented. A predictive model of the HVAC system is derived by data-mining algorithms, using a dataset collected from an experiment conducted at a research facility. To minimize the energy while maintaining the corresponding IAQ (indoor air quality) within a user-defined range, a multi-objective optimization model is developed. The solutions of this model are set points of the control system derived with an evolutionary computation algorithm. The controllable input variables - supply air temperature and supply air duct static pressure set points - are generated to reduce the energy use. The results produced by the evolutionary computation algorithm show that the control strategy saves energy by optimizing operations of an HVAC system. -- Highlights: → A data-mining approach for the optimization of a heating, ventilation, and air conditioning (HVAC) system is presented. → The data used in the project has been collected from an experiment conducted at an energy research facility. → The approach presented in the paper leads to accomplishing significant energy savings without compromising the indoor air quality. → The energy savings are accomplished by computing set points for the supply air temperature and the supply air duct static pressure.

  18. Quantitative analysis of spinal curvature in 3D: application to CT images of normal spine

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2008-04-07

    The purpose of this study is to present a framework for quantitative analysis of spinal curvature in 3D. In order to study the properties of such complex 3D structures, we propose two descriptors that capture the characteristics of spinal curvature in 3D. The descriptors are the geometric curvature (GC) and curvature angle (CA), which are independent of the orientation and size of spine anatomy. We demonstrate the two descriptors that characterize the spinal curvature in 3D on 30 computed tomography (CT) images of normal spine and on a scoliotic spine. The descriptors are determined from 3D vertebral body lines, which are obtained by two different methods. The first method is based on the least-squares technique that approximates the manually identified vertebra centroids, while the second method searches for vertebra centroids in an automated optimization scheme, based on computer-assisted image analysis. Polynomial functions of the fourth and fifth degree were used for the description of normal and scoliotic spinal curvature in 3D, respectively. The mean distance to vertebra centroids was 1.1 mm (±0.6 mm) for the first and 2.1 mm (±1.4 mm) for the second method. The distributions of GC and CA values were obtained along the 30 images of normal spine at each vertebral level and show that maximal thoracic kyphosis (TK), thoracolumbar junction (TJ) and maximal lumbar lordosis (LL) on average occur at T3/T4, T12/L1 and L4/L5, respectively. The main advantage of GC and CA is that the measurements are independent of the orientation and size of the spine, thus allowing objective intra- and inter-subject comparisons. The positions of maximal TK, TJ and maximal LL can be easily identified by observing the GC and CA distributions at different vertebral levels. The obtained courses of the GC and CA for the scoliotic spine were compared to the distributions of GC and CA for the normal spines. The significant difference in values indicates that the descriptors of GC and

  19. Quantitative analysis of spinal curvature in 3D: application to CT images of normal spine

    International Nuclear Information System (INIS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2008-01-01

    The purpose of this study is to present a framework for quantitative analysis of spinal curvature in 3D. In order to study the properties of such complex 3D structures, we propose two descriptors that capture the characteristics of spinal curvature in 3D. The descriptors are the geometric curvature (GC) and curvature angle (CA), which are independent of the orientation and size of spine anatomy. We demonstrate the two descriptors that characterize the spinal curvature in 3D on 30 computed tomography (CT) images of normal spine and on a scoliotic spine. The descriptors are determined from 3D vertebral body lines, which are obtained by two different methods. The first method is based on the least-squares technique that approximates the manually identified vertebra centroids, while the second method searches for vertebra centroids in an automated optimization scheme, based on computer-assisted image analysis. Polynomial functions of the fourth and fifth degree were used for the description of normal and scoliotic spinal curvature in 3D, respectively. The mean distance to vertebra centroids was 1.1 mm (±0.6 mm) for the first and 2.1 mm (±1.4 mm) for the second method. The distributions of GC and CA values were obtained along the 30 images of normal spine at each vertebral level and show that maximal thoracic kyphosis (TK), thoracolumbar junction (TJ) and maximal lumbar lordosis (LL) on average occur at T3/T4, T12/L1 and L4/L5, respectively. The main advantage of GC and CA is that the measurements are independent of the orientation and size of the spine, thus allowing objective intra- and inter-subject comparisons. The positions of maximal TK, TJ and maximal LL can be easily identified by observing the GC and CA distributions at different vertebral levels. The obtained courses of the GC and CA for the scoliotic spine were compared to the distributions of GC and CA for the normal spines. The significant difference in values indicates that the descriptors of GC and CA
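
    The geometric curvature (GC) of a polynomial vertebral body line follows directly from the classical space-curve formula κ = |r′ × r″| / |r′|³. The sketch below fits per-coordinate polynomials to vertebra centroids and evaluates κ along the fit; the parametrisation, degree and synthetic data are illustrative, not the authors' implementation.

```python
import numpy as np

def spinal_curvature(centroids, degree=4, samples=200):
    """Geometric curvature along a 3D vertebral body line obtained by
    least-squares polynomial fits of the vertebra centroids."""
    centroids = np.asarray(centroids, float)
    t = np.linspace(0.0, 1.0, len(centroids))   # simple parametrisation
    ts = np.linspace(0.0, 1.0, samples)
    d1, d2 = [], []
    for i in range(3):                          # fit x(t), y(t), z(t) separately
        c = np.polyfit(t, centroids[:, i], degree)
        d1.append(np.polyval(np.polyder(c), ts))
        d2.append(np.polyval(np.polyder(c, 2), ts))
    d1, d2 = np.stack(d1, axis=1), np.stack(d2, axis=1)
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3

z = np.linspace(0.0, 1.0, 17)                   # 17 synthetic vertebral levels
centroids = np.stack([0.05 * np.sin(3 * z), 0.03 * np.cos(2 * z), z], axis=1)
print(spinal_curvature(centroids).max())
```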

  20. Integration of length and curvature in haptic perception.

    Science.gov (United States)

    Panday, Virjanand; Tiest, Wouter M Bergmann; Kappers, Astrid M L

    2014-01-24

    We investigated if and how length and curvature information are integrated when an object is explored in one hand. Subjects were asked to explore four types of objects between thumb and index finger. Objects differed in either length, curvature, both length and curvature correlated as in a circle, or anti-correlated. We found that when both length and curvature are present, performance is significantly better than when only one of the two cues is available. Therefore, we conclude that there is integration of length and curvature. Moreover, if the two cues are correlated in a circular cross-section instead of in an anti-correlated way, performance is better than predicted by a combination of two independent cues. We conclude that integration of curvature and length is highly efficient when the cues in the object are combined as in a circle, which is the most common combination of curvature and length in daily life.

  1. Computing return times or return periods with rare event algorithms

    Science.gov (United States)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities, rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, saving several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
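
    The block-maximum estimator mentioned above is straightforward to sketch: split the series into blocks, sort the block maxima in descending order, and assign the m-th largest maximum the return time -τ/ln(1 - m/M). Below is a hedged sketch under those assumptions, with an Ornstein-Uhlenbeck demo; all sizes are illustrative.

```python
import numpy as np

def return_times_from_blocks(series, dt, block_len):
    """Return-time curve from block maxima: the m-th largest block maximum
    a_(m) is assigned r(a_(m)) = -tau / ln(1 - m/M), tau = block duration."""
    n_blocks = len(series) // block_len
    tau = block_len * dt
    maxima = np.sort(series[:n_blocks * block_len]
                     .reshape(n_blocks, block_len).max(axis=1))[::-1]
    m = np.arange(1, n_blocks)                  # m = n_blocks would give ln(0)
    return maxima[:-1], -tau / np.log1p(-m / n_blocks)

# demo on an Ornstein-Uhlenbeck process, dx = -x dt + sqrt(2) dW
rng = np.random.default_rng(2)
dt, n = 0.01, 200_000
x = np.empty(n); x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(2 * dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] * (1 - dt) + noise[i]
levels, r = return_times_from_blocks(x, dt, block_len=2_000)
print(levels[:3], r[:3])                        # highest levels, longest return times
```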

  2. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line array detector. The recent development of the X-ray flat panel detector has made fast CT imaging feasible and practical. This paper therefore describes the arrangement of a new detection system, which uses the existing high resolution (127 μm pixel size) flat panel detector in MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat panel detector based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. The project is therefore divided into two major tasks: first, to develop the image reconstruction algorithm, and second, to integrate the X-ray imaging components into one CT system. The image reconstruction algorithm using the filtered back-projection method is developed and compared to other techniques. MATLAB is used for the simulations and computations in this project. (Author)
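
    For orientation, filtered back-projection itself fits in a short function: ramp-filter each projection in the Fourier domain, then smear the filtered profiles back over the image grid. The sketch below uses nearest-neighbour interpolation, a Ram-Lak filter and an illustrative geometry; it is not the MINT implementation. A point source serves as a self-test.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal filtered back-projection: Ram-Lak (ramp) filtering in the
    Fourier domain, then nearest-neighbour back-smearing over the grid."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    img = np.zeros((n_det, n_det))
    centre = n_det // 2
    ys, xs = np.mgrid[:n_det, :n_det] - centre
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        s = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + centre
        inside = (s >= 0) & (s < n_det)
        img[inside] += proj[s[inside]]
    return img * np.pi / (2 * len(angles_deg))

# self-test: a single point source reconstructs to a peak at its location
n, angles = 64, np.arange(0.0, 180.0, 1.0)
x0, y0 = 10, -5                                 # offsets from the image centre
sino = np.zeros((len(angles), n))
for i, th in enumerate(np.deg2rad(angles)):
    sino[i, int(round(x0 * np.cos(th) + y0 * np.sin(th))) + n // 2] = 1.0
rec = fbp_reconstruct(sino, angles)
print(np.unravel_index(rec.argmax(), rec.shape))  # expect about (27, 42)
```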

  3. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    Science.gov (United States)

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
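
    The three classification principles can be turned into a toy classifier. Everything numeric below (energy threshold, the energy-comparability test, the spectral-centroid cutoff) is invented for illustration; the paper's tuned decision rules are richer than this sketch.

```python
import numpy as np

def classify_emg(frame, fs=1000.0, e_thresh=1e3):
    """Toy decision logic in the spirit of the three principles above.
    frame is (3, n_samples): ch 0 = right frontalis, 1 = left temporalis,
    2 = right temporalis. All numeric thresholds are illustrative only."""
    e = np.sum(frame.astype(float) ** 2, axis=1)        # per-channel energy
    if e.max() < e_thresh:
        return None                                     # rest: no cursor action
    if min(e[1], e[2]) > e_thresh and e[1] > 0.5 * e[2] and e[2] > 0.5 * e[1]:
        return "left-click"                             # both temporalis: double clench
    ch = int(np.argmax(e))
    if ch == 1:
        return "left"                                   # left jaw clench
    if ch == 2:
        return "right"                                  # right jaw clench
    # frontalis dominant: split eyebrows up/down by spectral centroid
    spec = np.abs(np.fft.rfft(frame[0] - frame[0].mean())) ** 2
    freqs = np.fft.rfftfreq(frame.shape[1], 1.0 / fs)
    centroid = float((freqs * spec).sum() / spec.sum())
    return "up" if centroid < 80.0 else "down"          # illustrative cutoff
```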

  4. Right thoracic curvature in the normal spine

    Directory of Open Access Journals (Sweden)

    Masuda Keigo

    2011-01-01

    Full Text Available Abstract. Background: Trunk asymmetry and vertebral rotation, at times observed in the normal spine, resemble the characteristics of adolescent idiopathic scoliosis (AIS). Right thoracic curvature has also been reported in the normal spine. If it is determined that the features of right thoracic side curvature in the normal spine are the same as those observed in AIS, these findings might provide a basis for elucidating the etiology of this condition. For this reason, we investigated right thoracic curvature in the normal spine. Methods: For normal spinal measurements, 1,200 patients who underwent posteroanterior chest radiographs were evaluated. These consisted of 400 children (ages 4-9), 400 adolescents (ages 10-19) and 400 adults (ages 20-29), with each group comprised of both genders. The exclusion criteria were obvious chest and spinal diseases. As side curvature is minimal in normal spines and the range at which curvature is measured is difficult to ascertain, first the typical curvature range in scoliosis patients was determined and then the Cobb angle in normal spines was measured using the same range as the scoliosis curve, from T5 to T12. Right thoracic curvature was given a positive value. The curve pattern in each group was organized into three categories: neutral (from -1 degree to +1 degree), right (> +1 degree), and left (< -1 degree). Results: In the child group, 120 subjects showed left curvature, 125 neutral and 155 right. In the adolescent group, 70 showed left curvature, 114 neutral and 216 right. In the adult group, 46 showed left curvature, 102 neutral and 252 right. The curvature pattern shifts to the right side in the adolescent group. Conclusions: Based on standing chest radiographic measurements, a right thoracic curvature was observed in normal spines after adolescence.

  5. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    Science.gov (United States)

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data to an unrecognizable encryption and converts the unrecognizable data back into its original decryption form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, and is a breakthrough in basic biological operations using a molecular computer. In order to achieve this, we propose three DNA-based algorithms for a parallel subtractor, parallel comparator, and parallel modular arithmetic that formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that cryptosystems using public keys are perhaps insecure and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.

  6. Fixed-point image orthorectification algorithms for reduced computational cost

    Science.gov (United States)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication of the inverse. Computing the inverse exactly would require iteration, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation.
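
    The division-to-multiplication substitution can be demonstrated in integer arithmetic. The sketch below normalises the denominator into [0.5, 1), applies the classical linear reciprocal approximation 1/d ≈ 2.9142 - 2d (a few percent worst-case error, used here as a stand-in for the thesis's approximations), and finishes with a multiply and a shift.

```python
def fixed_point_divide(num, den, frac_bits=16):
    """num/den with no divide: normalise den into [0.5, 1) in fixed point,
    approximate 1/d linearly as 2.9142 - 2d, then multiply and shift back."""
    one = 1 << frac_bits
    d, shift = den, 0
    while d >= one:                 # normalise down
        d >>= 1; shift += 1
    while d < one >> 1:             # normalise up
        d <<= 1; shift -= 1
    recip = int(2.9142 * one) - 2 * d        # ~ one * (1/d), max error ~3.5%
    return (num * recip) >> (frac_bits + shift)

one = 1 << 16
q = fixed_point_divide(355 * one, 113 * one)
print(q / one, 355 / 113)           # ~3.19 vs 3.1416: inside the error bound
```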

  7. A fast algorithm for computer aided collimation gamma camera (CACAO)

    Science.gov (United States)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.

    2000-08-01

    The computer aided collimation gamma camera is aimed at breaking down the resolution-sensitivity trade-off of the conventional parallel hole collimator. It uses larger and longer holes, having an added linear movement at the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering and rotation is described. Examples of reconstruction are given. This work shows that a simple and fast algorithm, based on a diagonal dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.

  8. Computational algorithm for lifetime exposure to antimicrobials in pigs using register data − the LEA algorithm

    DEFF Research Database (Denmark)

    Birkegård, Anna Camilla; Dalhoff Andersen, Vibe; Hisham Beshara Halasa, Tariq

    2017-01-01

    Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach, data from national registers on antimicrobial purchases, movements of pigs and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet period. Subsequently, the algorithm estimates the antimicrobial exposure...

  9. Curvature and torsion in growing actin networks

    International Nuclear Information System (INIS)

    Shaevitz, Joshua W; Fletcher, Daniel A

    2008-01-01

    Intracellular pathogens such as Listeria monocytogenes and Rickettsia rickettsii move within a host cell by polymerizing a comet-tail of actin fibers that ultimately pushes the cell forward. This dense network of cross-linked actin polymers typically exhibits a striking curvature that causes bacteria to move in gently looping paths. Theoretically, tail curvature has been linked to details of motility by considering force and torque balances from a finite number of polymerizing filaments. Here we track beads coated with a prokaryotic activator of actin polymerization in three dimensions to directly quantify the curvature and torsion of bead motility paths. We find that bead paths are more likely to have low rather than high curvature at any given time. Furthermore, path curvature changes very slowly in time, with an autocorrelation decay time of 200 s. Paths with a small radius of curvature, therefore, remain so for an extended period resulting in loops when confined to two dimensions. When allowed to explore a three-dimensional (3D) space, path loops are less evident. Finally, we quantify the torsion in the bead paths and show that beads do not exhibit a significant left- or right-handed bias to their motion in 3D. These results suggest that paths of actin-propelled objects may be attributed to slow changes in curvature, possibly associated with filament debranching, rather than a fixed torque

  10. A heuristic algorithm for computing the Poincaré series of the invariants of binary forms

    OpenAIRE

    Djoković, Dragomir Ž.

    2006-01-01

    We propose a heuristic algorithm for fast computation of the Poincaré series P_n(t) of the invariants of binary forms of degree n, viewed as rational functions. The algorithm is based on certain polynomial identities which remain to be proved rigorously. By using it, we have computed the P_n(t) for n ≤ 30.

  11. Quantum and classical parallelism in parity algorithms for ensemble quantum computers

    International Nuclear Information System (INIS)

    Stadelhofer, Ralf; Suter, Dieter; Banzhaf, Wolfgang

    2005-01-01

    The determination of the parity of a string of N binary digits is a well-known problem in classical as well as quantum information processing, which can be formulated as an oracle problem. It has been established that quantum algorithms require at least N/2 oracle calls. We present an algorithm that reaches this lower bound and is also optimal in terms of additional gate operations required. We discuss its application to pure and mixed states. Since it can be applied directly to thermal states, it does not suffer from signal loss associated with pseudo-pure-state preparation. For ensemble quantum computers, the number of oracle calls can be further reduced by a factor 2^k, with k ∈ {1, 2, ..., log_2(N/2)}, provided the signal-to-noise ratio is sufficiently high. This additional speed-up is linked to (classical) parallelism of the ensemble quantum computer. Experimental realizations are demonstrated on a liquid-state NMR quantum computer.

  12. Digital Geometry Algorithms Theoretical Foundations and Applications to Computational Imaging

    CERN Document Server

    Barneva, Reneta

    2012-01-01

    Digital geometry emerged as an independent discipline in the second half of the last century. It deals with geometric properties of digital objects and is developed with the unambiguous goal to provide rigorous theoretical foundations for devising new advanced approaches and algorithms for various problems of visual computing. Different aspects of digital geometry have been addressed in the literature. This book is the first one that explicitly focuses on the presentation of the most important digital geometry algorithms. Each chapter provides a brief survey on a major research area related to the general volume theme, description and analysis of related fundamental algorithms, as well as new original contributions by the authors. Every chapter contains a section in which interesting open problems are addressed.

  13. Application of a fast skyline computation algorithm for serendipitous searching problems

    Science.gov (United States)

    Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary

    2018-02-01

    Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information of non-skyline entries must be stored since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels to accelerate tree construction and traversal. In this study, we presented the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
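
    For reference, the skyline itself has a two-line definition: keep an entry unless another entry is at least as good on every attribute and strictly better on at least one. A naive quadratic sketch follows (the JR-tree exists precisely to beat this baseline):

```python
def skyline(points):
    """Skyline under 'larger is better' on every attribute: keep a point
    unless some other point dominates it (naive O(n^2) scan)."""
    def dominates(p, q):
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]

data = [(9, 1), (7, 7), (3, 9), (5, 5), (1, 2)]
print(skyline(data))   # (5,5) and (1,2) are dominated by (7,7) and drop out
```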

  14. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1992-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers need algorithms, such as a neural net representation, that do not exhibit load-balancing problems.

  15. Stochastic approach for round-off error analysis in computing application to signal processing algorithms

    International Nuclear Information System (INIS)

    Vignes, J.

    1986-01-01

    Any result of algorithms provided by a computer always contains an error resulting from floating-point arithmetic round-off error propagation. Furthermore, signal processing algorithms are also generally performed with data containing errors. The permutation-perturbation method, also known under the name CESTAC (controle et estimation stochastique d'arrondi de calcul), is a very efficient practical method for evaluating these errors and consequently for estimating the exact significant decimal figures of any result of algorithms performed on a computer. The stochastic approach of this method, its probabilistic proof, and the perfect agreement between the theoretical and practical aspects are described in this paper.
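
    The flavour of the permutation-perturbation idea can be sketched by perturbing only the inputs' last bits and measuring the spread of the results; CESTAC proper randomises every intermediate rounding as well, so the sketch below is a crude approximation, using the usual log10(|mean|/std) digit estimate.

```python
import math, random, statistics

def significant_digits(func, *args, runs=5):
    """Crude permutation-perturbation estimate: jitter the inputs' last bits,
    rerun, and report log10(|mean|/std) as the count of trustworthy decimal
    digits. (CESTAC proper also randomises every intermediate rounding.)"""
    jitter = lambda v: v * (1.0 + random.choice((-1.0, 1.0)) * 2.0 ** -52)
    results = [func(*[jitter(a) for a in args]) for _ in range(runs)]
    mean, std = statistics.fmean(results), statistics.pstdev(results)
    if std == 0.0:
        return 15.9                       # all runs agree to full double precision
    return max(0.0, math.log10(abs(mean) / std)) if mean else 0.0

print(significant_digits(lambda a, b: (a + b) - a, 1e16, 1.0))  # cancellation: ~0 digits
print(significant_digits(lambda a, b: a * b, 3.0, 7.0))         # ~full precision
```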

  16. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)

    2009-10-15

    The maximum likelihood-expectation maximization (ML-EM) is the statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited due to the computational burden of the iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The time delay on computations for projection, errors between measured and estimated data, and backprojection in an iteration were measured. Total time included the latency in data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by delays in CPU-based computing after certain iterations. On the other hand, the GPU-based computation provided very small variation in time delay per iteration due to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could be easily modified for other imaging geometries.

  17. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    International Nuclear Information System (INIS)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung

    2009-01-01

    The maximum likelihood-expectation maximization (ML-EM) is the statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited due to the computational burden of the iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The time delay on computations for projection, errors between measured and estimated data, and backprojection in an iteration were measured. Total time included the latency in data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by delays in CPU-based computing after certain iterations. On the other hand, the GPU-based computation provided very small variation in time delay per iteration due to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could be easily modified for other imaging geometries.
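
    The per-iteration work that the paper moves to the GPU is the classic multiplicative ML-EM update. A plain NumPy sketch follows (dense toy system matrix, no GPU, illustrative sizes):

```python
import numpy as np

def ml_em(A, y, n_iter=32):
    """Multiplicative ML-EM update x <- x / (A^T 1) * A^T (y / (A x)); each
    iteration is one forward projection plus one backprojection."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity (normalisation) image
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / sens               # backprojection + update
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 50))                       # toy dense system matrix
x_true = rng.random(50)
y = rng.poisson(50 * (A @ x_true)) / 50.0       # noisy emission data
print(np.round(ml_em(A, y)[:5], 2), np.round(x_true[:5], 2))
```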

  18. An algorithm to compute the canonical basis of an irreducible Uq(g)-module

    OpenAIRE

    de Graaf, W. A.

    2002-01-01

    An algorithm is described to compute the canonical basis of an irreducible module over a quantized enveloping algebra of a finite-dimensional semisimple Lie algebra. The algorithm works for modules that are constructed as a submodule of a tensor product of modules with known canonical bases.

  19. Algorithmic differentiation of pragma-defined parallel regions differentiating computer programs containing OpenMP

    CERN Document Server

    Förster, Michael

    2014-01-01

    Numerical programs often use parallel programming techniques such as OpenMP to compute the program's output values as efficiently as possible. In addition, derivative values of these output values with respect to certain input values play a crucial role. To achieve code that computes not only the output values simultaneously but also the derivative values, this work introduces several source-to-source transformation rules. These rules are based on a technique called algorithmic differentiation. The main focus of this work lies on the important reverse mode of algorithmic differentiation. The inh...

  20. Collineations of the curvature tensor in general relativity

    Indian Academy of Sciences (India)

    Curvature collineations for the curvature tensor, constructed from a fundamental Bianchi Type-V metric, are studied. We are concerned with a symmetry property of space-time which is called curvature collineation, and we briefly discuss the physical and kinematical properties of the models.

  1. Haptic perception of object curvature in Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Jürgen Konczak

    2008-07-01

    Full Text Available The haptic perception of the curvature of an object is essential for adequate object manipulation and critical for our guidance of actions. This study investigated how the ability to perceive the curvature of an object is altered by Parkinson's disease (PD). Eight healthy subjects and 11 patients with mild to moderate PD had to judge, without vision, the curvature of a virtual "box" created by a robotic manipulandum. Their hands were either moved passively along a defined curved path or they actively explored the curved surface of a virtual wall. The curvature was either concave or convex (bulging to the left or right) and was judged in two locations of the hand workspace--a left workspace location, where the curved hand path was associated with curved shoulder and elbow joint paths, and a right workspace location in which these joint paths were nearly linear. After exploring the curvature of the virtual object, subjects had to judge whether the curvature was concave or convex. Based on these data, thresholds for curvature sensitivity were established. The main findings of the study are: First, 9 out of 11 PD patients (82%) showed elevated thresholds for detecting convex curvatures in at least one test condition. The respective median threshold for the PD group was increased by 343% when compared to the control group. Second, when distal hand paths became less associated with proximal joint paths (right workspace), haptic acuity was reduced substantially in both groups. Third, sensitivity to hand trajectory curvature was not improved during active exploration in either group. Our data demonstrate that PD is associated with a decreased acuity of the haptic sense, which may occur already at an early stage of the disease.

  2. A simpler and elegant algorithm for computing fractal dimension in ...

    Indian Academy of Sciences (India)

    Chaotic systems are now frequently encountered in almost all branches of sciences. Dimension of such systems provides an important measure for easy characterization of dynamics of the systems. Conventional algorithms for computing dimension of such systems in higher dimensional state space face an unavoidable ...

  3. 3D skin surface reconstruction from a single image by merging global curvature and local texture using the guided filtering for 3D haptic palpation.

    Science.gov (United States)

    Lee, K; Kim, M; Kim, K

    2018-05-11

    Skin surface evaluation has been studied using various imaging techniques. However, all these studies had limited impact because they were performed using visual examination only. To improve on this scenario with haptic feedback, we propose 3D reconstruction of the skin surface using a single image. Unlike extant 3D skin surface reconstruction algorithms, we utilize the local texture and global curvature regions, combining the results for reconstruction. The first part entails the reconstruction of global curvature, achieved by bilateral filtering that removes noise on the surface while maintaining the edge (i.e., furrow) to obtain the overall curvature. The second part entails the reconstruction of local texture, representing the fine wrinkles of the skin, using an advanced form of bilateral filtering. The final image is then composed by merging the two reconstructed images. We tested the curvature reconstruction part by comparing the resulting curvatures with measured values from real phantom objects, while local texture reconstruction was verified by measuring skin surface roughness. We then show reconstruction results of our proposed algorithm on various real skin surfaces. The experimental results demonstrate that our approach is a promising technology to reconstruct an accurate skin surface from a single skin image. We proposed 3D skin surface reconstruction using only a single camera. We highlighted the utility of global curvature, which has not been considered important in the past. Thus, we proposed a new method for 3D reconstruction that can be used for 3D haptic palpation, dividing the concepts of local and global regions. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
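
    The global-curvature step relies on bilateral filtering, which is compact enough to sketch: each pixel becomes a weighted average of its neighbours, with weights decaying in both spatial distance and intensity difference. The parameters below are illustrative, and the paper's guided/advanced variants are not reproduced.

```python
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=4):
    """Each pixel becomes a weighted mean of its neighbours; the weight is a
    product of a spatial Gaussian and an intensity-difference Gaussian, so
    smooth regions are averaged while strong edges (furrows) survive."""
    h, w = img.shape
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode='edge')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            wgt = spatial[dy + radius, dx + radius] * \
                  np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += wgt * shifted
            norm += wgt
    return out / norm

rng = np.random.default_rng(0)
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)) + 0.05 * rng.normal(size=(64, 64))
print(np.std(img - bilateral_filter(img)))      # noise removed, ramp preserved
```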

  4. Longitudinal surface curvature effect in magnetohydrodynamics

    International Nuclear Information System (INIS)

    Bodas, N.G.

    1975-01-01

    The two-dimensional motion of an incompressible and electrically conducting fluid past an electrically insulated body surface (having curvature) is studied for a given O(1) basic flow and magnetic field, when (i) the applied magnetic field is aligned with the velocity in the basic flow, and (ii) the applied magnetic field is within the body surface. (O(1) and O(Re^(-1/2)) mean the first and second order approximations, respectively, in an expansion scheme in powers of Re^(-1/2), Re being the Reynolds number.) The technique of matched asymptotic expansions is used to solve the problem. The governing partial differential equations to O(Re^(-1/2)) boundary layer approximation are found to give similarity solutions for a family of surface curvature and pressure gradient distributions in case (i), and for uniform basic flow with analytic surface curvature distributions in case (ii). The equations are solved numerically. In case (i) it is seen that the effect of the magnetic field on the skin-friction correction due to the curvature is very small. Also the magnetic field at the wall is reduced by the curvature on the convex side. In case (ii) the magnetic field significantly increases the skin-friction correction due to the curvature. The effect of the magnetic field on the O(1) and O(Re^(-1/2)) skin friction coefficients increases with the increase of the electrical conductivity of the fluid. Also, at higher values of the magnetic pressure, moderate changes in the electrical conductivity do not influence the correction to the skin-friction significantly. (Auth.)

  5. Straight-line string with curvature

    International Nuclear Information System (INIS)

    Solov'ev, L.D.

    1995-01-01

    Classical and quantum solutions for the relativistic straight-line string with arbitrary dependence on the world surface curvature are obtained. They differ from the case of the usual Nambu-Goto interaction by the behaviour of the Regge trajectory which in general can be non-linear. A regularization of the action is considered and a comparison with relativistic point with curvature is made. 5 refs

  6. Curvature of random walks and random polygons in confinement

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Montemayor, A; Ziegler, U

    2013-01-01

    The purpose of this paper is to study the curvature of equilateral random walks and polygons that are confined in a sphere. Curvature is one of several basic geometric properties that can be used to describe random walks and polygons. We show that confinement affects curvature quite strongly, and in the limit case where the confinement diameter equals the edge length the expected curvature value doubles from its unconfined value of π/2 to π. To study curvature a simple model of an equilateral random walk in spherical confinement in dimensions 2 and 3 is introduced. For this simple model we derive explicit integral expressions for the expected value of the total curvature in both dimensions. These expressions are functions that depend only on the radius R of the confinement sphere. We then show that the values obtained by numeric integration of these expressions agree with numerical average curvature estimates obtained from simulations of random walks. Finally, we compare the confinement effect on curvature of random walks with random polygons. (paper)
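
    The quoted limit behaviour is easy to probe numerically. The sketch below estimates the mean turning angle of an equilateral random walk confined to a ball of radius R, using naive per-step rejection as a stand-in for the paper's confinement model (so the numbers are only indicative).

        import numpy as np

        def confined_walk(n_steps, R, rng):
            """Equilateral random walk in a ball of radius R (naive rejection per step)."""
            pts = [np.zeros(3)]
            while len(pts) < n_steps + 1:
                step = rng.standard_normal(3)
                step /= np.linalg.norm(step)      # unit edge length
                cand = pts[-1] + step
                if np.linalg.norm(cand) <= R:     # reject steps leaving the ball
                    pts.append(cand)
            return np.array(pts)

        def total_curvature(pts):
            """Sum of turning angles between consecutive edges."""
            e = np.diff(pts, axis=0)
            cosang = np.einsum('ij,ij->i', e[:-1], e[1:]).clip(-1.0, 1.0)
            return np.arccos(cosang).sum()

        rng = np.random.default_rng(0)
        n = 50
        for R in (1.0, 2.0, 10.0):
            mean_angle = np.mean([total_curvature(confined_walk(n, R, rng)) / (n - 1)
                                  for _ in range(2000)])
            print(f"R={R}: mean turning angle = {mean_angle:.3f} (unconfined ~ {np.pi/2:.3f})")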

  7. The curvature calculation mechanism based on simple cell model.

    Science.gov (United States)

    Yu, Haiyang; Fan, Xingyu; Song, Aiqi

    2017-07-20

    A conclusion has not yet been reached on how exactly the human visual system detects curvature. This paper demonstrates how orientation-selective simple cells can be used to construct curvature-detecting neural units. Through fixed arrangements, multiple plurality cells were constructed to simulate curvature cells with an output proportional to their curvature. In addition, this paper offers a solution to the problem of narrow detection range under fixed resolution by selecting an output value across multiple resolutions. Curvature cells can be treated as concrete models of an end-stopped mechanism, and they can be used to further understand "curvature-selective" characteristics and to explain basic psychophysical findings and perceptual phenomena in current studies.

  8. A depth-first search algorithm to compute elementary flux modes by linear programming.

    Science.gov (United States)

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

    The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible, and enumeration remains challenging even for moderately-sized models. We developed a depth-first search algorithm that uses linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
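
    The core idea, a depth-first search over reaction choices combined with an LP feasibility test, can be sketched briefly. The toy stoichiometric matrix below is an assumption, and no minimality check is applied, so the sketch enumerates feasible supports rather than true EFMs; it only illustrates how LP prunes infeasible branches.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical toy network: 2 metabolites, 4 irreversible reactions.
        S = np.array([[1.0, -1.0, -1.0,  0.0],
                      [0.0,  1.0,  0.0, -1.0]])
        n_rxn = S.shape[1]

        def support_feasible(active):
            """LP test: is there a nonzero steady-state flux using only `active` reactions?"""
            if not active:
                return False
            cols = sorted(active)
            res = linprog(c=np.zeros(len(cols)),
                          A_ub=-np.ones((1, len(cols))), b_ub=[-1.0],  # total flux >= 1
                          A_eq=S[:, cols], b_eq=np.zeros(S.shape[0]),
                          bounds=[(0.0, None)] * len(cols), method="highs")
            return res.status == 0

        def dfs(idx, active, found):
            """Depth-first search over include/exclude decisions for each reaction."""
            if idx == n_rxn:
                if support_feasible(active):
                    found.append(frozenset(active))
                return
            dfs(idx + 1, active | {idx}, found)
            dfs(idx + 1, active, found)

        modes = []
        dfs(0, set(), modes)
        print(sorted(modes, key=len))   # feasible supports; EFMs are the minimal ones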

  9. Fast and accurate algorithm for the computation of complex linear canonical transforms.

    Science.gov (United States)

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-09-01

    A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.
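
    The chirp-Fourier-chirp factorization at the heart of the algorithm is compact to state. The sketch below evaluates the CLCT integral directly in O(N^2) as an illustration; the paper's contribution is the sampling analysis that makes an N log N FFT-based version accurate. The grids and the example matrix are assumptions.

        import numpy as np

        def lct(f, M, dt):
            """Direct O(N^2) quadrature of the linear canonical transform with matrix
            M = ((a, b), (c, d)), ad - bc = 1 and b != 0 (entries may be complex).
            Kernel: exp(i*pi*(a*t^2 - 2*u*t + d*u^2)/b) / sqrt(i*b)."""
            (a, b), (_, d) = M
            N = len(f)
            t = (np.arange(N) - N // 2) * dt
            u = t                                            # same output grid, for simplicity
            pre = np.exp(1j * np.pi * (a / b) * t ** 2)      # input chirp multiplication
            post = np.exp(1j * np.pi * (d / b) * u ** 2)     # output chirp multiplication
            F = np.exp(-2j * np.pi * np.outer(u, t) / b)     # Fourier-like kernel
            return post * (F @ (pre * f)) * dt / np.sqrt(1j * b)

        # Example: a lossy, Gaussian-aperture-like system via a complex parameter b.
        dt, N = 0.1, 256
        t = (np.arange(N) - N // 2) * dt
        g = np.exp(-np.pi * t ** 2)
        out = lct(g, M=((1.0, 1.0 + 0.1j), (0.0, 1.0)), dt=dt)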

  10. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix

    International Nuclear Information System (INIS)

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-01-01

    We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer
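
    The paper's algorithm itself is not in standard libraries, but the LOBPCG method it is compared against is available in SciPy, and a short usage sketch conveys the block-eigensolver setting. The toy matrix and the Jacobi-style preconditioner below are assumptions.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lobpcg

        n, k = 2000, 20
        diag = np.arange(1, n + 1, dtype=float)
        A = sp.diags(diag)                       # toy sparse Hermitian matrix
        M = sp.diags(1.0 / diag)                 # simple (Jacobi) preconditioner

        rng = np.random.default_rng(0)
        X = rng.standard_normal((n, k))          # random initial block of k vectors
        vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
        print(np.sort(vals)[:5])                 # approximations to the 5 smallest eigenvalues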

  11. Integration of length and curvature in haptic perception

    NARCIS (Netherlands)

    Panday, V.; Bergmann Tiest, W.M.; Kappers, A.M.L.

    2014-01-01

    We investigated if and how length and curvature information are integrated when an object is explored in one hand. Subjects were asked to explore four types of objects between thumb and index finger. Objects differed in either length, curvature, both length and curvature correlated as in a circle,

  12. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower for the two-step algorithm, and the results indicate that the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.

  13. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    Science.gov (United States)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for orientation; they are applied for testing the correctness of tie-point detection and the time of computations, and for assessing difficulties in their implementation. The BRISK, FAST, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  14. Medical imaging in clinical applications algorithmic and computer-based approaches

    CERN Document Server

    Bhateja, Vikrant; Hassanien, Aboul

    2016-01-01

    This volume comprises 21 selected chapters, including two overview chapters devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches as well as to different techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in medical imaging applications based on algorithmic and computer-based approaches and to utilize them in real-world clinical applications. The book is divided into four parts: Part I: Clinical Applications of Medical Imaging; Part II: Classification and Clustering; Part III: Computer Aided Diagnosis (CAD) Tools and Case Studies; and Part IV: Bio-inspiring based Computer Aided Diagnosis Techniques.

  15. Anatomical study of the radius and center of curvature of the distal femoral condyle

    KAUST Repository

    Kosel, Jürgen; Giouroudi, Ioanna; Scheffer, Cornie; Dillon, Edwin Mark; Erasmus, Pieter J.

    2010-01-01

    In this anatomical study, the anteroposterior curvature of the surface of 16 cadaveric distal femurs was examined in terms of radii and center point. Those two parameters attract high interest due to their significance for total knee arthroplasty. Basically, two different conclusions have been drawn in foregoing studies: (1) The curvature shows a constant radius and (2) the curvature shows a variable radius. The investigations were based on a new method combining three-dimensional laser-scanning and planar geometrical analyses. This method is aimed at providing high accuracy and high local resolution. The high-precision laser scanning enables the exact reproduction of the distal femurs - including their cartilage tissue - as a three-dimensional computer model. The surface curvature was investigated on intersection planes that were oriented perpendicularly to the surgical epicondylar line. Three planes were placed at the central part of each condyle. The intersection of either plane with the femur model was approximated with the help of a b-spline, yielding three b-splines on each condyle. The radii and center points of the circles, approximating the local curvature of the b-splines, were then evaluated. The results from all three b-splines were averaged in order to increase the reliability of the method. The results show the variation in the surface curvatures of the investigated samples of condyles. These variations are expressed in the pattern of the center points and the radii of the curvatures. The standard deviations of the radii for a 90 deg arc on the posterior condyle range from 0.6 mm up to 5.1 mm, with an average of 2.4 mm laterally and 2.2 mm medially. No correlation was found between the curvature of the lateral and medial condyles. Within the range of the investigated 16 samples, the conclusion can be drawn that the condyle surface curvature is not constant and different for all specimens when viewed along the surgical epicondylar axis. For the portion
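
    The step of recovering radii and center points from sampled intersection curves can be illustrated with an algebraic least-squares circle fit (the Kasa method); the sketch below is a generic version on synthetic arc data, not the authors' code.

        import numpy as np

        def fit_circle(x, y):
            """Algebraic least-squares circle fit (Kasa method).
            Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) in least squares."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x ** 2 + y ** 2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            r = np.sqrt(c + cx ** 2 + cy ** 2)
            return (cx, cy), r

        # Noisy samples from a 90-degree arc (center (10, -5), radius 22).
        rng = np.random.default_rng(3)
        theta = np.linspace(0, np.pi / 2, 60)
        x = 10 + 22 * np.cos(theta) + rng.normal(0, 0.05, theta.size)
        y = -5 + 22 * np.sin(theta) + rng.normal(0, 0.05, theta.size)
        print(fit_circle(x, y))   # approximately ((10, -5), 22)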

  16. Fast algorithms for computing defects and their derivatives in the Regge calculus

    International Nuclear Information System (INIS)

    Brewin, Leo

    2011-01-01

    Any practical attempt to solve the Regge equations, these being a large system of non-linear algebraic equations, will almost certainly employ a Newton-Raphson-like scheme. In such cases, it is essential that efficient algorithms be used when computing the defect angles and their derivatives with respect to the leg lengths. The purpose of this paper is to present details of such an algorithm.

  17. Computer aided surface representation

    Energy Technology Data Exchange (ETDEWEB)

    Barnhill, R E

    1987-11-01

    The aims of this research are the creation of new surface forms and the determination of geometric and physical properties of surfaces. The full sweep from constructive mathematics through the implementation of algorithms and the interactive computer graphics display of surfaces is utilized. Both three-dimensional and multi-dimensional surfaces are considered. Particular emphasis is given to the scientific computing solution of Department of Energy problems. The methods that we have developed and that we are proposing to develop allow applications such as: Producing smooth contour maps from measured data, such as weather maps. Modeling the heat distribution inside a furnace from sample measurements. Terrain modeling based on satellite pictures. The investigation of new surface forms includes the topics of triangular interpolants, multivariate interpolation, surfaces defined on surfaces and monotone and/or convex surfaces. The geometric and physical properties considered include contours, the intersection of surfaces, curvatures as an interrogation tool, and numerical integration.

  18. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  19. Prediction of the Critical Curvature for LX-17 with the Time of Arrival Data from DNS

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Jin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fried, Laurence E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moss, William C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-10

    We extract the detonation shock front velocity, curvature, and acceleration from time-of-arrival data measured at grid points from direct numerical simulations of a 50 mm rate-stick lit by a disk source, with the ignition and growth reaction model and a JWL equation of state calibrated for LX-17. We computed the quasi-steady (D, κ) relation based on the extracted properties and predicted the critical curvatures of LX-17. We also proposed an explicit formula that contains the failure turning point, obtained from optimization, for the (D, κ) relation of LX-17.
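
    Extracting front velocity and curvature from gridded arrival times follows from level-set identities: the front is a level set of t(x, y), its normal speed is D = 1/|grad t|, and its curvature is kappa = div(grad t/|grad t|). A minimal finite-difference sketch on an assumed grid (not the authors' code):

        import numpy as np

        def front_velocity_and_curvature(t, dx):
            """Level-set estimates from an arrival-time field t(x, y) on a square grid:
            normal speed D = 1/|grad t|, curvature kappa = div(grad t / |grad t|)."""
            ty, tx = np.gradient(t, dx)          # d/dy along axis 0, d/dx along axis 1
            norm = np.hypot(tx, ty) + 1e-30
            D = 1.0 / norm
            nx, ny = tx / norm, ty / norm        # unit normal components
            kappa = np.gradient(nx, dx, axis=1) + np.gradient(ny, dx, axis=0)
            return D, kappa

        # Toy check: a circular front expanding at unit speed, t = r, has D = 1, kappa = 1/r.
        dx = 0.01
        x = np.arange(1.0, 2.0, dx)
        X, Y = np.meshgrid(x, x)
        t = np.hypot(X, Y)
        D, kappa = front_velocity_and_curvature(t, dx)
        print(D[50, 50], kappa[50, 50], 1.0 / t[50, 50])   # ~1.0, then ~1/r twice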

  20. Evolution of the curvature perturbations during warm inflation

    International Nuclear Information System (INIS)

    Matsuda, Tomohiro

    2009-01-01

    This paper considers warm inflation as an interesting application of multi-field inflation. The delta-N formalism is used for calculating the evolution of the curvature perturbations during warm inflation. Although the perturbations considered in this paper decay after the horizon exit, the corrections to the curvature perturbations sourced by these perturbations can remain and dominate the curvature perturbations at large scales. In addition to the typical evolution of the curvature perturbations, an inhomogeneous diffusion rate is considered for warm inflation, which may lead to significant non-Gaussianity of the spectrum.

  1. Weyl tensors for asymmetric complex curvatures

    International Nuclear Information System (INIS)

    Oliveira, C.G.

    Considering a second rank Hermitian field tensor and a general Hermitian connection, the associated complex curvature tensor is constructed. The Weyl tensor that corresponds to this complex curvature is determined. The formalism is applied to the Weyl unitary field theory and to the Moffat gravitational theory. (Author)

  2. Computationally Efficient DOA Tracking Algorithm in Monostatic MIMO Radar with Automatic Association

    Directory of Open Access Journals (Sweden)

    Huaxin Yu

    2014-01-01

    Full Text Available We consider the problem of tracking the direction of arrivals (DOA) of multiple moving targets in monostatic multiple-input multiple-output (MIMO) radar. A low-complexity DOA tracking algorithm in monostatic MIMO radar is proposed. The proposed algorithm obtains DOA estimation via the difference between the previous and current covariance matrix of the reduced-dimension transformation signal, and it reduces the computational complexity and realizes automatic association in DOA tracking. Error analysis and the Cramér-Rao lower bound (CRLB) of DOA tracking are derived in the paper. The proposed algorithm not only can be regarded as an extension of the array-signal-processing DOA tracking algorithm in Zhang et al. (2008), but is also an improved version of that algorithm, with better DOA tracking performance. The simulation results demonstrate the effectiveness of the proposed algorithm. Our work provides the technical support for the practical application of MIMO radar.

  3. Investigating the Multi-memetic Mind Evolutionary Computation Algorithm Efficiency

    Directory of Open Access Journals (Sweden)

    M. K. Sakharov

    2017-01-01

    Full Text Available In solving practically significant problems of global optimization, the objective function is often of high dimensionality and computational complexity, and of nontrivial landscape as well. Studies show that one optimization method is often not enough to solve such problems efficiently; hybridization of several optimization methods is necessary. One of the most promising contemporary trends in this field are memetic algorithms (MA), which can be viewed as a combination of population-based search for a global optimum and procedures for local refinement of solutions (memes), provided by a synergy. Since there are relatively few theoretical studies concerning the MA configuration that is advisable for black-box optimization problems, many researchers tend toward adaptive algorithms, which select the most efficient local optimization methods for the particular domains of the search space. The article proposes a multi-memetic modification of the simple SMEC algorithm, using random hyper-heuristics. It presents the software implementation and the memes used (the Nelder-Mead method, the method of random hyper-sphere surface search, and the Hooke-Jeeves method), and conducts a comparative study of the efficiency of the proposed algorithm depending on the set and the number of memes. The study has been carried out using the Rastrigin, Rosenbrock, and Zakharov multidimensional test functions. Computational experiments have been carried out for all possible combinations of memes and for each meme individually. According to the results of the study, conducted by the multi-start method, the combinations of memes comprising the Hooke-Jeeves method were successful. These results confirm the rapid convergence of this method to a local optimum in comparison with the other memes, since all methods perform at most a fixed number of iterations. The analysis of the average number of iterations shows that using the most efficient sets of memes allows us to find the optimal

  4. Programming Non-Trivial Algorithms in the Measurement Based Quantum Computation Model

    Energy Technology Data Exchange (ETDEWEB)

    Alsing, Paul [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Fanto, Michael [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Lott, Capt. Gordon [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Tison, Christoper C. [United States Air Force Research Laboratory, Wright-Patterson Air Force Base

    2014-01-01

    We provide a set of prescriptions for implementing a quantum circuit model algorithm as a measurement based quantum computing (MBQC) algorithm [1, 2] via a large cluster state. As means of illustration we draw upon our numerical modeling experience to describe a large graph state capable of searching a logical 8 element list (a non-trivial version of Grover's algorithm [3] with feedforward). We develop several prescriptions based on analytic evaluation of cluster states and graph state equations which can be generalized into any circuit model operations. Such a resulting cluster state will be able to carry out the desired operation with appropriate measurements and feed forward error correction. We also discuss the physical implementation and the analysis of the principal 3-qubit entangling gate (Toffoli) required for a non-trivial feedforward realization of an 8-element Grover search algorithm.

  5. A computational algorithm addressing how vessel length might depend on vessel diameter

    Science.gov (United States)

    Jing Cai; Shuoxin Zhang; Melvin T. Tyree

    2010-01-01

    The objective of this method paper was to examine a computational algorithm that may reveal how vessel length might depend on vessel diameter within any given stem or species. The computational method requires the assumption that vessels remain approximately constant in diameter over their entire length. When this method is applied to three species or hybrids in the...

  6. Curvature constraints from the causal entropic principle

    International Nuclear Information System (INIS)

    Bozek, Brandon; Albrecht, Andreas; Phillips, Daniel

    2009-01-01

    Current cosmological observations indicate a preference for a cosmological constant that is drastically smaller than what can be explained by conventional particle physics. The causal entropic principle (Bousso et al.) provides an alternative approach to anthropic attempts to predict our observed value of the cosmological constant by calculating the entropy created within a causal diamond. We have extended this work to use the causal entropic principle to predict the preferred curvature within the 'multiverse'. We have found that values larger than ρ_k = 40ρ_m are disfavored by more than 99.99%, with the peak value at ρ_Λ = 7.9x10^-123 and ρ_k = 4.3ρ_m for open universes. For universes that allow only positive curvature or both positive and negative curvature, we find a correlation between curvature and dark energy that leads to an extended region of preferred values. Our universe is found to be disfavored to an extent depending on the priors on curvature. We also provide a comparison to previous anthropic constraints on open universes and discuss future directions for this work.

  7. A Novel Cloud Computing Algorithm of Security and Privacy

    Directory of Open Access Journals (Sweden)

    Chih-Yung Chen

    2013-01-01

    Full Text Available The emergence of cloud computing has simplified large-scale deployment of distributed systems for software suppliers; however, when issuing application programs in a shared cloud service to different users, the management of material becomes more complex. Therefore, in a multi-type cloud service trust environment, what enterprises facing cloud computing worry about most is the issue of security, while individual users worry about whether private material is at risk of outflow. This research analyzes several different construction patterns of cloud computing, together with relevant cases of cloud computing deployment security of varying quality, and finally proposes an optimized secure deployment construction for cloud computing and a security mechanism for material protection, namely the Global Authentication Register System (GARS), to reduce the risk of cloud material outflow. We implemented a system simulation to test the GARS algorithm for availability, security, and performance. Through analysis of the experimental data, the cloud computing security and privacy solutions derived from this research can provide effective protection of cloud information security. Moreover, we have proposed cloud computing information-security recommendations that would assist related units in the development of cloud computing security practice.

  8. Image processing algorithm of computer-aided diagnosis in lung cancer screening by CT

    International Nuclear Information System (INIS)

    Yamamoto, Shinji

    2004-01-01

    In this paper, an image processing algorithm for computer-aided diagnosis of lung cancer by X-ray CT is described, which has been developed by my research group over the past 10 years or so. CT lung images gathered at the mass screening stage are almost all normal, and lung cancer nodules will be found at a rate of less than 10%. To pick up such very rare nodules with high accuracy, a very sensitive detection algorithm is required, one that can detect local and very slight variations in the image. On the other hand, such a sensitive detection algorithm has the adverse effect that many normal shadows will be detected as abnormal shadows. In this paper I describe how to reconcile these conflicting requirements and realize a practical computer-aided diagnosis tool with the image processing algorithm developed by my research group. In particular, I focus my description on the principle and characteristics of the Quoit filter, which has been newly developed by my group as a highly sensitive filter. (author)

  9. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
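
    A classic example from the linear-array model treated in the book is odd-even transposition sort, in which n alternating compare-exchange rounds sort n items; below is a short sequential simulation of the idea (an illustration of the model, not text from the book).

        def odd_even_transposition_sort(a):
            """Simulates the linear-array parallel sort: in each round, all pairs
            starting at even (or odd) indices compare-exchange; n rounds suffice."""
            a = list(a)
            n = len(a)
            for rnd in range(n):
                start = rnd % 2                     # alternate even and odd phases
                for i in range(start, n - 1, 2):    # these pairs are independent,
                    if a[i] > a[i + 1]:             # so hardware runs them in parallel
                        a[i], a[i + 1] = a[i + 1], a[i]
            return a

        print(odd_even_transposition_sort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]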

  10. Effect of Novel Amplitude/Phase Binning Algorithm on Commercial Four-Dimensional Computed Tomography Quality

    International Nuclear Information System (INIS)

    Olsen, Jeffrey R.; Lu Wei; Hubenschmidt, James P.; Nystrom, Michelle M.; Klahr, Paul; Bradley, Jeffrey D.; Low, Daniel A.; Parikh, Parag J.

    2008-01-01

    Purpose: Respiratory motion is a significant source of anatomic uncertainty in radiotherapy planning and can result in errors of portal size and the subsequent radiation dose. Although four-dimensional computed tomography allows for more accurate analysis of the respiratory cycle, breathing irregularities during data acquisition can cause considerable image distortions. The aim of this study was to examine the effect of respiratory irregularities on four-dimensional computed tomography, and to evaluate a novel image reconstruction algorithm using percentile-based tagging of the respiratory cycle. Methods and Materials: Respiratory-correlated helical computed tomography scans were acquired for 11 consecutive patients. The inspiration and expiration data sets were reconstructed using the default phase-based method, as well as a novel respiration percentile-based method with patient-specific metrics to define the ranges of the reconstruction. The image output was analyzed in a blinded fashion for the phase- and percentile-based reconstructions to determine the prevalence and severity of the image artifacts. Results: The percentile-based algorithm resulted in a significant reduction in artifact severity compared with the phase-based algorithm, although the overall artifact prevalence did not differ between the two algorithms. The magnitude of differences in respiratory tag placement between the phase- and percentile-based algorithms correlated with the presence of image artifacts. Conclusion: The results of our study have indicated that our novel four-dimensional computed tomography reconstruction method could be useful in detecting clinically relevant image distortions that might otherwise go unnoticed and in reducing the image distortion associated with some respiratory irregularities. Additional work is necessary to assess the clinical impact on areas of possible irregular breathing.
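
    The percentile-based tagging idea reduces to ranking each respiratory-trace sample by its amplitude percentile over the acquisition and binning by percentile range instead of fixed phase angle. A schematic sketch with an assumed synthetic trace (not the clinical software):

        import numpy as np
        from scipy.stats import rankdata

        rng = np.random.default_rng(0)
        ts = np.linspace(0, 60, 1500)                         # 60 s of respiratory trace
        trace = np.sin(2 * np.pi * ts / 4.2) + 0.15 * rng.standard_normal(ts.size)

        # Amplitude percentile of every sample, computed over the whole acquisition.
        percentile = 100.0 * (rankdata(trace) - 1) / (trace.size - 1)

        # Percentile-based bins (ten 10%-wide bins here) replace fixed phase bins.
        bin_index = np.clip((percentile // 10).astype(int), 0, 9)
        for b in range(10):
            print(f"bin {b}: {np.sum(bin_index == b)} samples")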

  11. 3D face recognition with asymptotic cones based principal curvatures

    KAUST Repository

    Tang, Yinhang

    2015-05-01

    The classical curvatures of smooth surfaces (Gaussian, mean and principal curvatures) have been widely used in 3D face recognition (FR). However, facial surfaces resulting from 3D sensors are discrete meshes. In this paper, we present a general framework and define three principal curvatures on discrete surfaces for the purpose of 3D FR. These principal curvatures are derived from the construction of asymptotic cones associated to any Borel subset of the discrete surface. They describe the local geometry of the underlying mesh. The first two of them correspond to the classical principal curvatures in the smooth case. We isolate the third principal curvature, which carries meaningful geometric shape information. The three principal curvatures at different Borel subset scales give multi-scale local facial surface descriptors. We combine the proposed principal curvatures with the LNP-based facial descriptor and SRC for recognition. The identification and verification experiments demonstrate the practicability and accuracy of the third principal curvature and the fusion of multi-scale Borel subset descriptors on 3D faces from FRGC v2.0.

  12. 3D face recognition with asymptotic cones based principal curvatures

    KAUST Repository

    Tang, Yinhang; Sun, Xiang; Huang, Di; Morvan, Jean-Marie; Wang, Yunhong; Chen, Liming

    2015-01-01

    The classical curvatures of smooth surfaces (Gaussian, mean and principal curvatures) have been widely used in 3D face recognition (FR). However, facial surfaces resulting from 3D sensors are discrete meshes. In this paper, we present a general framework and define three principal curvatures on discrete surfaces for the purpose of 3D FR. These principal curvatures are derived from the construction of asymptotic cones associated to any Borel subset of the discrete surface. They describe the local geometry of the underlying mesh. The first two of them correspond to the classical principal curvatures in the smooth case. We isolate the third principal curvature, which carries meaningful geometric shape information. The three principal curvatures at different Borel subset scales give multi-scale local facial surface descriptors. We combine the proposed principal curvatures with the LNP-based facial descriptor and SRC for recognition. The identification and verification experiments demonstrate the practicability and accuracy of the third principal curvature and the fusion of multi-scale Borel subset descriptors on 3D faces from FRGC v2.0.

  13. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.

    Science.gov (United States)

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns different priority values to every task node based on the scheduling order of the task nodes as affected by the constraint relations among them, and the task node list is generated according to these priority values. To address the scheduling order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high quality performance objective.
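
    The longest-path priority at the heart of such list schedulers can be illustrated with a standard upward-rank computation on a task DAG; the toy graph and single-machine schedule below are assumptions for illustration, not the DDEP algorithm itself.

        from functools import lru_cache

        # Hypothetical task DAG: task -> (compute cost, {successor: communication cost}).
        tasks = {
            "A": (2, {"B": 1, "C": 3}),
            "B": (3, {"D": 2}),
            "C": (1, {"D": 1}),
            "D": (4, {}),
        }

        preds = {t: set() for t in tasks}
        for p, (_, succ) in tasks.items():
            for s in succ:
                preds[s].add(p)

        @lru_cache(maxsize=None)
        def essential_path(t):
            """Longest compute-plus-communication path from t to an exit task."""
            cost, succ = tasks[t]
            if not succ:
                return cost
            return cost + max(comm + essential_path(s) for s, comm in succ.items())

        # List scheduling: among ready tasks, schedule the longest-path task first.
        done, order = set(), []
        while len(done) < len(tasks):
            ready = [t for t in tasks if t not in done and preds[t] <= done]
            nxt = max(ready, key=essential_path)
            order.append(nxt)
            done.add(nxt)

        print(order)   # ['A', 'B', 'C', 'D'] for this toy graph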

  14. An iterative algorithm for solving the multidimensional neutron diffusion nodal method equations on parallel computers

    International Nuclear Information System (INIS)

    Kirk, B.L.; Azmy, Y.Y.

    1992-01-01

    In this paper the one-group, steady-state neutron diffusion equation in two-dimensional Cartesian geometry is solved using the nodal integral method. The discrete variable equations comprise loosely coupled sets of equations representing the nodal balance of neutrons, as well as neutron current continuity along rows or columns of computational cells. An iterative algorithm that is more suitable for solving large problems concurrently is derived based on the decomposition of the spatial domain and is accelerated using successive overrelaxation. This algorithm is very well suited for parallel computers, especially since the spatial domain decomposition occurs naturally, so that the number of iterations required for convergence does not depend on the number of processors participating in the calculation. Implementation of the authors' algorithm on the Intel iPSC/2 hypercube and Sequent Balance 8000 parallel computer is presented, and measured speedup and efficiency for test problems are reported. The results suggest that the efficiency of the hypercube quickly deteriorates when many processors are used, while the Sequent Balance retains very high efficiency for a comparable number of participating processors. This leads to the conjecture that message-passing parallel computers are not as well suited for this algorithm as shared-memory machines

  15. On the implementation of the Ford-Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Full Text Available Algorithms for optimization in networks and directed graphs find broad application in solving practical tasks. However, with the large-scale introduction of information technologies into human activity, requirements on input data volumes and solution speed have tightened. Although a large number of algorithms for various models of computers and computing systems have by now been studied and implemented, solving key optimization problems at realistic task dimensions remains difficult. In this regard, the search for new and more efficient computing structures, as well as updates of known algorithms, are of great current interest. This work considers an implementation of the maximum-flow search algorithm on a directed graph for the Multiple Instruction and Single Data (MISD) computer system developed at BMSTU. A key feature of this architecture is deep hardware support for operations over sets and data structures. Storage of and access to them are realized on a specialized structure-processing processor (SP), which is capable of performing at the hardware level such operations as add, delete, search, intersect, complete, merge, and others. The advantage of such a system is the possibility of executing the parts of computing tasks that access the sets and data structures in parallel with arithmetic and logical processing of information. Previous works presented the general principles of organizing the computing process and the features of programs implemented in the MISD system, described the structure and operating principles of the structure-processing processor, showed the general principles of solving graph tasks in such a system, and studied the efficiency of the resulting algorithms experimentally. This work gives the command formats of the SP processor, offers a technique to update the algorithms implemented in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
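
    For reference, the conventional (non-MISD) form of the algorithm fits in a few lines. The sketch below is the standard Edmonds-Karp variant of Ford-Fulkerson on an adjacency-matrix graph, not the hardware-assisted version discussed in the paper.

        from collections import deque

        def max_flow(cap, s, t):
            """Edmonds-Karp: repeatedly augment along shortest residual paths.
            Mutates the capacity matrix `cap` in place."""
            n = len(cap)
            flow = 0
            while True:
                parent = [-1] * n
                parent[s] = s
                q = deque([s])
                while q and parent[t] == -1:          # BFS in the residual graph
                    u = q.popleft()
                    for v in range(n):
                        if parent[v] == -1 and cap[u][v] > 0:
                            parent[v] = u
                            q.append(v)
                if parent[t] == -1:                   # no augmenting path remains
                    return flow
                # Bottleneck capacity along the path, then residual updates.
                bottleneck, v = float("inf"), t
                while v != s:
                    u = parent[v]
                    bottleneck = min(bottleneck, cap[u][v])
                    v = u
                v = t
                while v != s:
                    u = parent[v]
                    cap[u][v] -= bottleneck
                    cap[v][u] += bottleneck
                    v = u
                flow += bottleneck

        cap = [[0, 3, 2, 0],
               [0, 0, 1, 3],
               [0, 0, 0, 2],
               [0, 0, 0, 0]]
        print(max_flow(cap, 0, 3))   # 5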

  16. Iterative schemes for parallel Sn algorithms in a shared-memory computing environment

    International Nuclear Information System (INIS)

    Haghighat, A.; Hunter, M.A.; Mattis, R.E.

    1995-01-01

    Several two-dimensional spatial domain partitioning S_n transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may result in large parallel efficiencies (>60%) on C90. It is also demonstrated that for a realistic shielding problem, the use of the negative flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency

  17. THE USE OF COMPUTER VISION ALGORITHMS FOR AUTOMATIC ORIENTATION OF TERRESTRIAL LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    J. S. Markiewicz

    2016-06-01

    Full Text Available The paper presents analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for orientation; they are applied for testing the correctness of tie-point detection and the time of computations, and for assessing difficulties in their implementation. The BRISK, FAST, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  18. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

  19. Higher-order curvature terms and extended inflation

    International Nuclear Information System (INIS)

    Wang Yun

    1990-01-01

    We consider higher-order curvature terms in the context of the Brans-Dicke theory of gravity, and investigate the effects of these terms on extended inflationary theories. We find that the higher-order curvature terms tend to speed up inflation, although the original extended-inflation solutions are stable when these terms are small. Analytical solutions are found for two extreme cases: when the higher-order curvature terms are small, and when they dominate. A conformal transformation is employed in solving the latter case, and some of the subtleties in this technique are discussed. We note that percolation is less likely to occur when the higher-order curvature terms are present. An upper bound on α is expected if we are to avoid excessive and inadequate percolation of true-vacuum bubbles

  20. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  1. Constraining inverse curvature gravity with supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Mena, Olga; Santiago, Jose; /Fermilab; Weller, Jochen; /University Coll., London /Fermilab

    2005-10-01

    We show that the current accelerated expansion of the Universe can be explained without resorting to dark energy. Models of generalized modified gravity, with inverse powers of the curvature, can have late time accelerating attractors without conflicting with solar system experiments. We have solved the Friedman equations for the full dynamical range of the evolution of the Universe. This allows us to perform a detailed analysis of Supernovae data in the context of such models that results in an excellent fit. Hence, inverse curvature gravity models represent an example of phenomenologically viable models in which the current acceleration of the Universe is driven by curvature instead of dark energy. If we further include constraints on the current expansion rate of the Universe from the Hubble Space Telescope and on the age of the Universe from globular clusters, we obtain that the matter content of the Universe is 0.07 ≤ Ω_m ≤ 0.21 (95% confidence). Hence the inverse curvature gravity models considered can not explain the dynamics of the Universe just with a baryonic matter component.

  2. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  3. Performance of multiobjective computational intelligence algorithms for the routing and wavelength assignment problem

    Directory of Open Access Journals (Sweden)

    Jorge Patiño

    2016-01-01

    Full Text Available This paper presents a performance evaluation of computational intelligence algorithms based on multiobjective theory for the solution of the Routing and Wavelength Assignment (RWA) problem in optical networks. The study evaluates the Firefly Algorithm, the Differential Evolution algorithm, the Simulated Annealing algorithm, and two versions of the Particle Swarm Optimization algorithm. The paper provides a description of the multiobjective algorithms; then, an evaluation of the performance of the multiobjective algorithms versus mono-objective approaches when dealing with different traffic loads, different numbers of wavelengths, and wavelength conversion over the NSFNet topology is presented. Simulation results show that mono-objective algorithms properly solve the RWA problem for low values of data traffic and low numbers of wavelengths. However, the multiobjective approaches adapt better to online traffic when the number of wavelengths available in the network increases, as well as when wavelength conversion is implemented in the nodes.

  4. Novel tilt-curvature coupling in lipid membranes

    Science.gov (United States)

    Terzi, M. Mert; Deserno, Markus

    2017-08-01

    On mesoscopic scales, lipid membranes are well described by continuum theories whose main ingredients are the curvature of a membrane's reference surface and the tilt of its lipid constituents. In particular, Hamm and Kozlov [Eur. Phys. J. E 3, 323 (2000)] have shown how to systematically derive such a tilt-curvature Hamiltonian based on the elementary assumption of a thin fluid elastic sheet experiencing internal lateral pre-stress. Performing a dimensional reduction, they not only derive the basic form of the effective surface Hamiltonian but also express its emergent elastic couplings as trans-membrane moments of lower-level material parameters. In the present paper, we argue, though, that their derivation unfortunately missed a coupling term between curvature and tilt. This term arises because, as one moves along the membrane, the curvature-induced change of transverse distances contributes to the area strain—an effect that was believed to be small but nevertheless ends up contributing at the same (quadratic) order as all other terms in their Hamiltonian. We illustrate the consequences of this amendment by deriving the monolayer and bilayer Euler-Lagrange equations for the tilt, as well as the power spectra of shape, tilt, and director fluctuations. A particularly curious aspect of our new term is that its associated coupling constant is the second moment of the lipid monolayer's lateral stress profile—which within this framework is equal to the monolayer Gaussian curvature modulus, κ̄_m. On the one hand, this implies that many theoretical predictions now contain a parameter that is poorly known (because the Gauss-Bonnet theorem limits access to the integrated Gaussian curvature); on the other hand, the appearance of κ̄_m outside of its Gaussian curvature provenance opens opportunities for measuring it by more conventional means, for instance by monitoring a membrane's undulation spectrum at short scales.

  5. Current algorithms for computed electron beam dose planning

    International Nuclear Information System (INIS)

    Brahme, A.

    1985-01-01

    Two- and sometimes three-dimensional computer algorithms for electron beam irradiation are capable of taking all irregularities of the body cross-section and the properties of the various tissues into account. This is achieved by dividing the incoming broad beams into a number of narrow pencil beams, the penetration of which can be described by essentially one-dimensional formalisms. The constituent pencil beams are most often described by Gaussian, experimentally or theoretically derived distributions. The accuracy of different dose planning algorithms is discussed in some detail based on their ability to take the different physical interaction processes of high energy electrons into account. It is shown that those programs that take the deviations from the simple Gaussian model into account give the best agreement with experimental results. With such programs a dosimetric relative accuracy of about 5% is generally achieved except in the most complex inhomogeneity configurations. Finally, the present limitations and possible future developments of electron dose planning are discussed. (orig.)
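
    In its simplest one-dimensional form, the pencil-beam idea reduces to superposing Gaussian kernels across the open field. A minimal sketch with assumed beam width and field size (not a clinical algorithm):

        import numpy as np

        def broad_beam_profile(x, field_half_width, sigma):
            """Lateral dose profile of a broad beam as a sum of Gaussian pencil beams
            spread uniformly across the open field (1D, single depth)."""
            centers = np.linspace(-field_half_width, field_half_width, 400)
            pencils = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))
            profile = pencils.sum(axis=1)
            return profile / profile.max()       # normalize to the central axis

        x = np.linspace(-60, 60, 601)            # lateral position in mm
        dose = broad_beam_profile(x, field_half_width=25.0, sigma=6.0)
        print(dose[300], dose[425])              # ~1.0 at center; ~0.5 at the field edge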

  6. The curvature function in general relativity

    International Nuclear Information System (INIS)

    Hall, G S; MacNay, Lucy

    2006-01-01

    A function, here called the curvature function, is defined, which is constructed explicitly from the type (0, 4) curvature tensor. Although such a function may be defined for any manifold admitting a metric, attention is here concentrated on this function on a spacetime. Some properties of this function are explored and compared with a previous discussion of it given by Petrov

  7. Fast Ss-Ilm a Computationally Efficient Algorithm to Discover Socially Important Locations

    Science.gov (United States)

    Dokuz, A. S.; Celik, M.

    2017-11-01

    Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm by modifying the SS-ILM algorithm to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important locations discovery process by up to 20%.

  8. FAST SS-ILM: A COMPUTATIONALLY EFFICIENT ALGORITHM TO DISCOVER SOCIALLY IMPORTANT LOCATIONS

    Directory of Open Access Journals (Sweden)

    A. S. Dokuz

    2017-11-01

    Full Text Available Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm by modifying the SS-ILM algorithm to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important locations discovery process by up to 20%.

  9. A coordinate descent MM algorithm for fast computation of sparse logistic PCA

    KAUST Repository

    Lee, Seokho; Huang, Jianhua Z.

    2013-01-01

    Sparse logistic principal component analysis was proposed in Lee et al. (2010) for exploratory analysis of binary data. Relying on the joint estimation of multiple principal components, the algorithm therein is computationally too demanding

  10. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems... for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application...
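
    One concrete CP instance of the kind derived in the paper: for min_x 0.5*||Kx - b||^2 + lambda*||x||_1, the dual prox is a scaled shift and the primal prox is soft-thresholding. The sketch below follows the generic CP iteration on an assumed toy problem, not the paper's CT code.

        import numpy as np

        def soft(v, t):
            """Soft-thresholding: prox of t*||.||_1."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def chambolle_pock_lasso(K, b, lam, n_iter=500):
            """CP primal-dual iteration for min_x 0.5*||Kx - b||^2 + lam*||x||_1."""
            L = np.linalg.norm(K, 2)           # operator norm ||K||
            tau = sigma = 0.99 / L             # step sizes with tau*sigma*L^2 < 1
            theta = 1.0
            x = np.zeros(K.shape[1])
            x_bar = x.copy()
            y = np.zeros(K.shape[0])
            for _ in range(n_iter):
                # Dual step: prox of sigma*F* with F(z) = 0.5*||z - b||^2.
                y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
                # Primal step: prox of tau*G with G(x) = lam*||x||_1.
                x_new = soft(x - tau * (K.T @ y), tau * lam)
                # Over-relaxation of the primal variable.
                x_bar = x_new + theta * (x_new - x)
                x = x_new
            return x

        rng = np.random.default_rng(0)
        K = rng.standard_normal((40, 100))
        x_true = np.zeros(100); x_true[::20] = 3.0
        b = K @ x_true + 0.01 * rng.standard_normal(40)
        print(np.round(chambolle_pock_lasso(K, b, lam=1.0), 2)[:25])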

  11. Computation of the optical properties of turbid media from slope and curvature of spatially resolved reflectance curves

    International Nuclear Information System (INIS)

    Jäger, Marion; Foschum, Florian; Kienle, Alwin

    2013-01-01

    The optical properties of turbid media were calculated from the curvature at the radial distance ρ₀ and the slope at the radial distance ρ* of simulated spatially resolved reflectance curves (ρ₀ and ρ* denote a decrease of the spatially resolved reflectance curve of 0.75 and 2.4 orders of magnitude, respectively, relative to the reflectance value at 1.2 mm). We found correlations between the curvature at ρ₀ and the reduced scattering coefficient, as well as between the slope at ρ* and the absorption coefficient, and used these two correlations for the determination of the optical properties. The calculation of the reduced scattering coefficient from the curvature at ρ₀ is practically independent of the absorption coefficient. Knowing the reduced scattering coefficient within a certain accuracy then allows the determination of the absorption coefficient from the slope at ρ*. Additionally, we investigated the performance of an artificial neural network that uses these derivatives as input data for the determination of the optical properties. Our artificial neural network was capable of learning the mapping between the derivatives and the optical properties, and the results for the determined optical properties improved in comparison to the method explained above. Finally, the procedure was compared to an artificial neural network that was trained without using the derivatives. (note)
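
    As a concrete illustration (not from the record; the synthetic curve and array names below are hypothetical), the slope and curvature of a simulated log-reflectance curve can be extracted with finite differences:

```python
import numpy as np

# Hypothetical simulated data: radial distances (mm) and reflectance R(rho).
rho = np.linspace(0.5, 15.0, 500)
reflectance = np.exp(-1.1 * rho) / rho**2           # stand-in for a transport simulation

log_R = np.log10(reflectance)
drop = log_R[np.argmin(np.abs(rho - 1.2))] - log_R  # decades below the value at 1.2 mm

slope = np.gradient(log_R, rho)                     # first derivative of the curve
curvature = np.gradient(slope, rho)                 # second derivative of the curve

i0 = np.argmin(np.abs(drop - 0.75))                 # rho_0: 0.75 decades down
i_star = np.argmin(np.abs(drop - 2.4))              # rho*: 2.4 decades down
print(curvature[i0], slope[i_star])                 # inputs to the two correlations
```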

  12. A practical O(n log² n) time algorithm for computing the triplet distance on binary trees

    DEFF Research Database (Denmark)

    Sand, Andreas; Pedersen, Christian Nørgaard Storm; Mailund, Thomas

    2013-01-01

    The triplet distance is a distance measure that compares two rooted trees on the same set of leaves by enumerating all subsets of three leaves and counting how often the induced topologies of the trees are equal or different. We present an algorithm that computes the triplet distance between two rooted binary trees in time O(n log² n). The algorithm is related to an algorithm for computing the quartet distance between two unrooted binary trees in time O(n log n). While the quartet distance algorithm has a very severe overhead in the asymptotic time complexity that makes it impractical compared...
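
    A naive reference implementation (useful for checking a fast implementation on small inputs; the nested-tuple tree encoding is an assumption of this sketch, and the running time is far worse than O(n log² n)):

```python
from itertools import combinations

def leaves(t):
    """Leaf labels of a rooted binary tree encoded as nested 2-tuples."""
    return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])

def lca_depth(t, a, b, depth=0):
    """Depth of the lowest common ancestor of leaves a and b."""
    left, right = set(leaves(t[0])), set(leaves(t[1]))
    if a in left and b in left:
        return lca_depth(t[0], a, b, depth + 1)
    if a in right and b in right:
        return lca_depth(t[1], a, b, depth + 1)
    return depth                                   # a and b split at this node

def topology(t, a, b, c):
    """Which pair of {a, b, c} groups together: the pair with the deepest LCA."""
    return max([(a, b), (a, c), (b, c)], key=lambda p: lca_depth(t, *p))

def triplet_distance(t1, t2):
    return sum(topology(t1, *trip) != topology(t2, *trip)
               for trip in combinations(sorted(leaves(t1)), 3))

t1 = ((('a', 'b'), 'c'), 'd')
t2 = ((('a', 'c'), 'b'), 'd')
print(triplet_distance(t1, t2))                    # -> 1 (only {a, b, c} differs)
```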

  13. Inverse curvature flows in asymptotically Robertson Walker spaces

    Science.gov (United States)

    Kröner, Heiko

    2018-04-01

    In this paper we consider inverse curvature flows in a Lorentzian manifold N which is the topological product of the real numbers with a closed Riemannian manifold and equipped with a Lorentzian metric having a future singularity, so that N is asymptotically Robertson Walker. The flow speeds are future directed and given by 1/F, where F is a homogeneous degree-one curvature function of class (K*) of the principal curvatures, e.g. the n-th root of the Gauss curvature. We prove long-time existence of these flows and show that the flow hypersurfaces converge to smooth functions when they are rescaled with a proper factor, which results from the asymptotics of the metric.
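
    In symbols, the evolution law described above reads (a transcription of the verbal description, with ν the future-directed unit normal and κ₁, …, κₙ the principal curvatures):

```latex
\frac{\partial x}{\partial t} = \frac{1}{F(\kappa_1,\dots,\kappa_n)}\,\nu ,
\qquad \text{e.g. } F = K^{1/n} = \Big(\prod_{i=1}^{n} \kappa_i\Big)^{1/n} .
```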

  14. Fast GPU-based computation of the sensitivity matrix for a PET list-mode OSEM algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Nassiri, Moulay Ali; Carrier, Jean-Francois [Montreal Univ., QC (Canada). Dept. de Radio-Oncologie; Hissoiny, Sami [Ecole Polytechnique de Montreal, QC (Canada). Dept. de Genie Informatique et Genie Logiciel; Despres, Philippe [Quebec Univ. (Canada). Dept. de Radio-Oncologie

    2011-07-01

    One of the obstacles to introducing a list-mode PET reconstruction algorithm for routine clinical use is the long computation time required for the sensitivity matrix calculation. This matrix must be computed for each study because it depends on the object attenuation map. During the last decade, studies have shown that 3D list-mode OSEM reconstruction algorithms can be performed effectively and considerably accelerated by GPU devices. However, most of that preliminary work (1) was done for pre-clinical PET systems, in which the number of LORs is small compared to modern human PET systems, and (2) assumed that the sensitivity matrix is pre-calculated. The time required to compute this matrix can, however, be longer than the reconstruction time itself. The objective of this work is to investigate the performance of sensitivity matrix calculations in terms of computation time with modern GPUs, for clinical fully 3D LM-OSEM on modern PET scanners. For this purpose, sensitivity matrix calculations and full list-mode OSEM reconstruction for human PET systems were implemented on GPUs using the CUDA framework. The system matrices were built on-the-fly using the multi-ray Siddon algorithm. The time to compute the sensitivity matrix for 288 x 288 x 57 arrays using 3 tangential LORs was 29 seconds. The 3D LM-OSEM algorithm, including the sensitivity matrix calculation, was performed for the same LORs in 71 seconds for 62 million events, 6 frames and 1 iteration. This work lets us envision fast reconstructions for advanced PET applications such as dynamic studies and parametric image reconstruction. (orig.)

  15. Curvature driven instabilities in toroidal plasmas

    International Nuclear Information System (INIS)

    Andersson, P.

    1986-11-01

    The electromagnetic ballooning mode, the curvature driven trapped electron mode and the toroidally induced ion temperature gradient mode have been studied. Eigenvalue equations have been derived and solved both numerically and analytically. For electromagnetic ballooning modes the effects of convective damping, finite Larmor radius, higher order curvature terms, and temperature gradients have been investigated. A fully toroidal fluid ion model has been developed. It is shown that a necessary and sufficient condition for an instability below the MHD limit is the presence of an ion temperature gradient. Analytical dispersion relations giving results in good agreement with numerical solutions are also presented. The curvature driven trapped electron modes are found to be unstable for virtually all parameters, with growth rates of the order of the diamagnetic drift frequency. Studies have been made using both a gyrokinetic ion description and the fully toroidal ion model. Both analytical and numerical results are presented and are found to be in good agreement. The toroidally induced ion temperature gradient modes are found to have a behavior similar to that of the curvature driven trapped electron modes and can, in the electrostatic limit, be described by a simple quadratic dispersion equation. (author)

  16. Computation of Quasi-Periodic Normally Hyperbolic Invariant Tori: Algorithms, Numerical Explorations and Mechanisms of Breakdown

    Science.gov (United States)

    Canadell, Marta; Haro, Àlex

    2017-12-01

    We present several algorithms for computing normally hyperbolic invariant tori carrying quasi-periodic motion of a fixed frequency in families of dynamical systems. The algorithms are based on a KAM scheme presented in Canadell and Haro (J Nonlinear Sci, 2016. doi: 10.1007/s00332-017-9389-y), to find the parameterization of the torus with prescribed dynamics by detuning parameters of the model. The algorithms use different hyperbolicity and reducibility properties and, in particular, also compute the invariant bundles and Floquet transformations. We implement these methods in several 2-parameter families of dynamical systems, to compute quasi-periodic arcs, that is, the parameters for which 1D normally hyperbolic invariant tori with a given fixed frequency do exist. The implementation lets us perform the continuations up to the tip of the quasi-periodic arcs, for which the invariant curves break down. Three different mechanisms of breakdown are analyzed, using several observables, leading to several conjectures.

  17. Algorithm of calculation of multicomponent system eutectics using electronic digital computer

    International Nuclear Information System (INIS)

    Posypajko, V.I.; Stratilatov, B.V.; Pervikova, V.I.; Volkov, V.Ya.

    1975-01-01

    A computer algorithm is proposed for determining low-temperature equilibrium regions for existing phases. The algorithm has been used in calculating nonvariant parameters (the melting temperatures of eutectics and the concentrations of their components) for a series of ternary systems, among which are K ‖ Cl, WO4, SO4 (x1 = K2WO4; x2 = K2SO4); Ag, Cd, Pb ‖ Cl (x1 = CdCl2; x2 = PbCl2); and K ‖ F, Cl, I (x1 = KF; x2 = KI). The proposed method of calculating eutectics permits planning the subsequent experiments for determining the eutectic parameters of multicomponent systems and forecasting chemical interaction in such systems. The algorithm can be used in calculating systems containing any number of components.

  18. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  19. An algorithm for computing screened Coulomb scattering in GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Mendenhall, Marcus H. [Vanderbilt University Free Electron Laser Center, P.O. Box 351816 Station B, Nashville, TN 37235-1816 (United States)]. E-mail: marcus.h.mendenhall@vanderbilt.edu; Weller, Robert A. [Department of Electrical Engineering and Computer Science, Vanderbilt University, P.O. Box 351821 Station B, Nashville, TN 37235-1821 (United States)]. E-mail: robert.a.weller@vanderbilt.edu

    2005-01-01

    An algorithm has been developed for the GEANT4 Monte Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lenz-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening, which is good for nuclear straggling and implantation problems. This will allow many of the applications of the TRIM and SRIM codes to be extended into the much more general GEANT4 framework where nuclear and other effects can be included.

  1. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

    Science.gov (United States)

    Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

    2013-02-01

    Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data, without the necessity of assuming a noise model. These methods have previously been combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise-induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph-based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path-finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest-path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph-based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of...
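
    The shortest-path variant can be sketched with ordinary Dijkstra search once every edge carries the weight w = -log(p) for some bootstrap-derived connection weight p; the graph encoding below is a hypothetical illustration, not the authors' implementation:

```python
import heapq
import math

def most_probable_path(graph, src, dst):
    """Dijkstra with edge weights w = -log(p): the shortest path then
    maximizes the product of per-edge connection probabilities p."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                         # reconstruct the winning path
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1], math.exp(-d)  # path and its probability
        if d > dist.get(u, math.inf):
            continue
        for v, p in graph.get(u, []):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None, 0.0

# Toy voxel graph; edges carry bootstrap-derived connection probabilities.
graph = {'A': [('B', 0.9), ('C', 0.5)], 'B': [('D', 0.8)], 'C': [('D', 0.9)]}
print(most_probable_path(graph, 'A', 'D'))   # (['A', 'B', 'D'], 0.72)
```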

  2. GDP growth and the yield curvature

    DEFF Research Database (Denmark)

    Møller, Stig Vinther

    2014-01-01

    This paper examines the forecastability of GDP growth using information from the term structure of yields. In contrast to previous studies, the paper shows that the curvature of the yield curve contributes much more forecasting power than the slope of the yield curve. The yield curvature also predicts bond returns, implying a common element to time-variation in expected bond returns and expected GDP growth.

  3. Computer Algorithms in the Search for Unrelated Stem Cell Donors

    Directory of Open Access Journals (Sweden)

    David Steiner

    2012-01-01

    Full Text Available Hematopoietic stem cell transplantation (HSCT) is a medical procedure in the field of hematology and oncology, most often performed for patients with certain cancers of the blood or bone marrow. Many patients have no suitable HLA-matched donor within their family, so physicians must activate a “donor search process” by interacting with national and international donor registries, which search their databases for adult unrelated donors or cord blood units (CBU). Information and communication technologies play a key role in the donor search process in donor registries, both nationally and internationally. One of the major challenges for donor registry computer systems is the development of a reliable search algorithm. This work discusses the top-down design of such algorithms and current practice. Based on our experience with systems used by several stem cell donor registries, we highlight typical pitfalls in the implementation of an algorithm and the underlying data structure.

  4. 5G Network Communication, Caching, and Computing Algorithms Based on the Two‐Tier Game Model

    Directory of Open Access Journals (Sweden)

    Sungwook Kim

    2018-02-01

    Full Text Available In this study, we developed hybrid control algorithms in smart base stations (SBSs) along with devised communication, caching, and computing techniques. In the proposed scheme, SBSs are equipped with computing power and data storage to collectively offload the computation from mobile user equipment and to cache the data from clouds. To combine the communication, caching, and computing algorithms in a refined manner, game theory is adopted to characterize competitive and cooperative interactions. The main contribution of our proposed scheme is to illuminate the ultimate synergy behind a fully integrated approach, while providing excellent adaptability and flexibility to satisfy the different performance requirements. Simulation results demonstrate that the proposed approach can outperform existing schemes by approximately 5% to 15% in terms of bandwidth utilization, access delay, and system throughput.

  5. A new algorithm to compute conjectured supply function equilibrium in electricity markets

    International Nuclear Information System (INIS)

    Diaz, Cristian A.; Villar, Jose; Campos, Fco Alberto; Rodriguez, M. Angel

    2011-01-01

    Several types of market equilibrium approaches, such as Cournot, Conjectural Variation (CVE), Supply Function (SFE) or Conjectured Supply Function (CSFE), have been used to model electricity markets for the medium and long term. Among them, CSFE has been proposed as a generalization of the classic Cournot. It computes the equilibrium considering the reaction of the competitors against changes in their strategy, combining several characteristics of both CVE and SFE. Unlike linear SFE approaches, strategies are linearized only at the equilibrium point, using their first-order Taylor approximation. But to solve CSFE, the slope or the intercept of the linear approximations must be given, which has proved to be very restrictive. This paper proposes a new algorithm to compute CSFE. Unlike previous approaches, the main contribution is that the competitors' strategies for each generator are initially unknown (both slope and intercept) and are endogenously computed by this new iterative algorithm. To show the applicability of the proposed approach, it has been applied to several case examples, where its qualitative behavior has been analyzed in detail. (author)

  6. Face recognition based on depth maps and surface curvature

    Science.gov (United States)

    Gordon, Gaile G.

    1991-09-01

    This paper explores the representation of the human face by features based on the curvature of the face surface. Curvature captures many features necessary to accurately describe the face, such as the shape of the forehead, jawline, and cheeks, which are not easily detected from standard intensity images. Moreover, the value of curvature at a point on the surface is also viewpoint invariant. Until recently, range data of high enough resolution and accuracy to perform useful curvature calculations on the scale of the human face had been unavailable. Although several researchers have worked on the problem of interpreting range data from curved (although usually highly geometrically structured) surfaces, the main approaches have centered on segmentation by signs of mean and Gaussian curvature, which have not proved sufficient in themselves for the case of the human face. This paper details the calculation of principal curvature for a particular data set, the calculation of general surface descriptors based on curvature, and the calculation of face-specific descriptors based both on curvature features and a priori knowledge about the structure of the face. These face-specific descriptors can be incorporated into many different recognition strategies. A system that implements one such strategy, depth template comparison, giving recognition rates between 80% and 90%, is described.
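
    For a depth map z(x, y), the standard Monge-patch formulas give Gaussian, mean, and principal curvatures directly from image derivatives; a NumPy sketch (unit pixel spacing assumed, with no noise smoothing, which real range data would need):

```python
import numpy as np

def surface_curvatures(z):
    """Gaussian (K), mean (H) and principal curvatures of a depth map z,
    via the Monge-patch formulas with unit grid spacing."""
    zy, zx = np.gradient(z)          # np.gradient returns d/d(axis0), d/d(axis1)
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    w = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / w**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * w**1.5)
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))
    return K, H, H + disc, H - disc  # K, H, k1, k2

# Sanity check on a sphere cap of radius 10: K ~ 1/100, H ~ -1/10 here.
yy, xx = np.mgrid[-3:3:121j, -3:3:121j]
K, H, k1, k2 = surface_curvatures(np.sqrt(100.0 - xx**2 - yy**2))
print(K[60, 60], H[60, 60])
```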

  7. INVESTIGATION OF CURVES SET BY CUBIC DISTRIBUTION OF CURVATURE

    Directory of Open Access Journals (Sweden)

    S. A. Ustenko

    2014-03-01

    Full Text Available Purpose. Further development of the geometric modeling of curvilinear contours of different objects, based on a specified cubic curvature distribution and set values of curvature at the boundary points. Methodology. We investigate a flat section of a curvilinear contour generated under the condition that a cubic curvature distribution is set. The curve begins and ends at given points, where the angles of tangent slope and the curvatures are also determined. The curvature equation of this curve was obtained, depending on the section length and the coefficient c of the cubic curvature distribution, and the resulting equation was analyzed. The conditions under which inflection points of the curve appear were also investigated. One should find an interval of parameter change (depending on the input data and the section length) that places the inflection point of the curvature graph outside the borders of the curve section. The dependence of the angle of tangent slope at an arbitrary point of the curve was determined, and recommendations were given for solving a system of integral equations that allows finding the length of the curve section and the coefficient c of the cubic curvature distribution. Findings. As a result of the curve research, it was found that the criterion for their selection can be taken as the absence of inflection points of the curvature on the observed section. Analysis of the influence of the parameter c on the graph of the angle of tangent slope showed that, regardless of its value, the same rate of increase of the tangent slope angle is provided. Originality. The approach to geometric modeling of curves based on a cubic curvature distribution with given values at the boundary points is improved by eliminating inflection points from the observed section of curvilinear contours. Practical value. Curves obtained using the proposed method can be used for geometric modeling of curvilinear...

  8. Generalized Curvature-Matter Couplings in Modified Gravity

    Directory of Open Access Journals (Sweden)

    Tiberiu Harko

    2014-07-01

    Full Text Available In this work, we review a plethora of modified theories of gravity with generalized curvature-matter couplings. An explicit nonminimal coupling, for instance between an arbitrary function of the scalar curvature R and the Lagrangian density of matter, induces a non-vanishing covariant derivative of the energy-momentum tensor, implying non-geodesic motion and, consequently, the appearance of an extra force. Applied to the cosmological context, these curvature-matter couplings lead to interesting phenomenology, where one can obtain a unified description of the cosmological epochs. We also consider the possibility that the behavior of the galactic flat rotation curves can be explained in the framework of curvature-matter coupling models, where the extra terms in the gravitational field equations modify the equations of motion of test particles and induce a supplementary gravitational interaction. In addition, these models are extremely useful for describing dark energy-dark matter interactions and for explaining the late-time cosmic acceleration.
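
    Schematically, one common form among the reviewed models (conventions vary; f1 and f2 are arbitrary functions of R and λ is the coupling constant) is the action together with the resulting non-conservation law:

```latex
S = \int \Big[ \tfrac{1}{2} f_1(R) + \big(1 + \lambda f_2(R)\big)\, \mathcal{L}_m \Big] \sqrt{-g}\; \mathrm{d}^4 x ,
\qquad
\nabla^{\mu} T_{\mu\nu} = \frac{\lambda f_2'(R)}{1 + \lambda f_2(R)}\,
\big( g_{\mu\nu}\, \mathcal{L}_m - T_{\mu\nu} \big) \nabla^{\mu} R .
```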

  9. Constant curvature black holes in Einstein AdS gravity: Euclidean action and thermodynamics

    Science.gov (United States)

    Guilleminot, Pablo; Olea, Rodrigo; Petrov, Alexander N.

    2018-03-01

    We compute the Euclidean action for constant curvature black holes (CCBHs), as an attempt to associate thermodynamic quantities to these solutions of Einstein anti-de Sitter (AdS) gravity. CCBHs are gravitational configurations obtained by identifications along isometries of a D-dimensional globally AdS space, such that the Riemann tensor remains constant. Here, these solutions are interpreted as extended objects, which contain a (D−2)-dimensional de Sitter brane as a subspace. Nevertheless, the computation of the free energy for these solutions shows that they do not obey standard thermodynamic relations.

  10. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Full Text Available Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on manual experience is still applied at present, because it is difficult to determine, in an integrated way, the relational data for the forming shape, processing path, and process parameters used to drive automated equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, with a substantial reduction in calculation time. Thus, the application of the simplified deformation simulation method was further explored for the case of multiple rolling loading paths. Moreover, it was also utilized to calculate the local line rolling forming of a typical complex curvature plate of a ship. The research findings indicated that the simplified deformation simulation method is an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.

  11. A curvature theory for discrete surfaces based on mesh parallelity

    KAUST Repository

    Bobenko, Alexander Ivanovich

    2009-12-18

    We consider a general theory of curvatures of discrete surfaces equipped with edgewise parallel Gauss images, and where mean and Gaussian curvatures of faces are derived from the faces' areas and mixed areas. Remarkably these notions are capable of unifying notable previously defined classes of surfaces, such as discrete isothermic minimal surfaces and surfaces of constant mean curvature. We discuss various types of natural Gauss images, the existence of principal curvatures, constant curvature surfaces, Christoffel duality, Koenigs nets, contact element nets, s-isothermic nets, and interesting special cases such as discrete Delaunay surfaces derived from elliptic billiards. © 2009 Springer-Verlag.

  12. Bringing Algorithms to Life: Cooperative Computing Activities Using Students as Processors.

    Science.gov (United States)

    Bachelis, Gregory F.; And Others

    1994-01-01

    Presents cooperative computing activities in which each student plays the role of a switch or processor and acts out algorithms. Includes binary counting, finding the smallest card in a deck, sorting by selection and merging, adding and multiplying large numbers, and sieving for primes. (16 references) (Author/MKR)
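
    For instance, the "sieving for primes" activity acts out the ordinary sieve of Eratosthenes, one student per number; in code form (a minimal sketch, not taken from the article):

```python
def sieve(n):
    """Sieve of Eratosthenes: each 'student' (index) is crossed out
    when a smaller prime claims it as a multiple."""
    alive = [True] * (n + 1)
    alive[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if alive[p]:
            for m in range(p * p, n + 1, p):
                alive[m] = False
    return [i for i, a in enumerate(alive) if a]

print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```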

  13. Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony Optimization Algorithm

    OpenAIRE

    Lingna He; Qingshui Li; Linan Zhu

    2012-01-01

    In order to replace traditional Internet software usage patterns and enterprise management modes, this paper proposes a new business computation mode: cloud computing. Resource scheduling strategy is a key technology in cloud computing. Based on a study of the cloud computing system structure and its mode of operation, the key research addresses the work-scheduling process and resource allocation problems in cloud computing using the ant colony algorithm. Detailed analysis and design of the...

  14. Image Structure-Preserving Denoising Based on Difference Curvature Driven Fractional Nonlinear Diffusion

    Directory of Open Access Journals (Sweden)

    Xuehui Yin

    2015-01-01

    Full Text Available Traditional integer-order partial differential equation and gradient-regularization based image denoising techniques often suffer from the staircase effect, speckle artifacts, and the loss of image contrast and texture details. To address these issues, in this paper a difference curvature driven fractional anisotropic diffusion for image noise removal is presented, which uses two techniques, fractional calculus and difference curvature, to describe the intensity variations in images. The fractional-order derivative information of an image deals well with image textures and achieves a good tradeoff between eliminating speckle artifacts and restraining the staircase effect. The difference curvature, constructed from the second-order derivatives along the gradient direction of an image and perpendicular to it, can effectively distinguish between ramps and edges. A Fourier transform technique is also proposed to compute the fractional-order derivative. Experimental results demonstrate that the proposed denoising model can avoid speckle artifacts and the staircase effect and preserve important features such as curvy edges, straight edges, ramps, corners, and textures, clearly outperforming traditional integer-order methods. The experimental results also reveal that our proposed model yields a good visual effect and better values of MSSIM and PSNR.
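
    The difference curvature itself takes only a few lines to compute; a sketch following the definition quoted above (the eps guard against zero gradients is an implementation detail of this sketch):

```python
import numpy as np

def difference_curvature(u, eps=1e-12):
    """D = | |u_nn| - |u_tt| |: u_nn is the second derivative along the
    gradient direction, u_tt the one perpendicular to it."""
    uy, ux = np.gradient(u)
    uxy, uxx = np.gradient(ux)
    uyy, _ = np.gradient(uy)
    g2 = ux**2 + uy**2 + eps
    u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
    u_tt = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2
    return np.abs(np.abs(u_nn) - np.abs(u_tt))   # large at edges, small on ramps
```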

  15. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    Science.gov (United States)

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data-fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
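
    The two-stage loop has the same overall shape as the well-known ASD-POCS scheme; a schematic dense-matrix sketch (the relaxation constant and the way the TV step is tied to the size of the POCS update are illustrative assumptions, not the authors' exact adaptive rules):

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Ascent direction of isotropic TV via forward differences."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux**2 + uy**2 + eps)
    dx, dy = ux / mag, uy / mag
    return -((dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0)))

def pocs_tv(A, b, shape, n_outer=50, n_tv=20, alpha=0.2):
    """Alternate (1) a relaxed least-squares step with non-negativity and
    (2) TV descent whose step size is scaled by the POCS update size."""
    x = np.zeros(shape)
    lam = 1.0 / np.linalg.norm(A, 2) ** 2           # relaxation for the data step
    for _ in range(n_outer):
        x_prev = x.copy()
        x += lam * (A.T @ (b - A @ x.ravel())).reshape(shape)
        np.maximum(x, 0.0, out=x)                   # non-negativity projection
        dp = np.linalg.norm(x - x_prev)             # size of the POCS update
        for _ in range(n_tv):
            g = tv_gradient(x)
            x -= alpha * dp * g / (np.linalg.norm(g) + 1e-12)
    return x
```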

  16. Use of Monte Carlo computation in benchmarking radiotherapy treatment planning system algorithms

    International Nuclear Information System (INIS)

    Lewis, R.D.; Ryde, S.J.S.; Seaby, A.W.; Hancock, D.A.; Evans, C.J.

    2000-01-01

    Radiotherapy treatments are becoming more complex, often requiring the dose to be calculated in three dimensions and sometimes involving the application of non-coplanar beams. The ability of treatment planning systems to accurately calculate dose under a range of these and other irradiation conditions requires evaluation. Practical assessment of such arrangements can be problematical, especially when a heterogeneous medium is used. This work describes the use of Monte Carlo computation as a benchmarking tool to assess the dose distribution of external photon beam plans obtained in a simple heterogeneous phantom by several commercially available 3D and 2D treatment planning system algorithms. For comparison, practical measurements were undertaken using film dosimetry. The dose distributions were calculated for a variety of irradiation conditions designed to show the effects of surface obliquity, inhomogeneities and missing tissue above tangential beams. The results show maximum dose differences of 47% between some planning algorithms and film at a point 1 mm below a tangentially irradiated surface. Overall, the dose distribution obtained from film was most faithfully reproduced by the Monte Carlo N-Particle results illustrating the potential of Monte Carlo computation in evaluating treatment planning system algorithms. (author)

  17. Translating solitons to symplectic and Lagrangian mean curvature flows

    International Nuclear Information System (INIS)

    Han Xiaoli; Li Jiayu

    2007-05-01

    In this paper, we construct finite blow-up examples for symplectic mean curvature flows and we study symplectic translating solitons. We prove that there are no translating solitons with |α| ≤ α₀ to the symplectic mean curvature flow or to the almost calibrated Lagrangian mean curvature flow, for some α₀. (author)

  18. Dataflow-Based Mapping of Computer Vision Algorithms onto FPGAs

    Directory of Open Access Journals (Sweden)

    Ivan Corretjer

    2007-01-01

    Full Text Available We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF, which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.

  19. Rayleigh’s quotient–based damage detection algorithm: Theoretical concepts, computational techniques, and field implementation strategies

    DEFF Research Database (Denmark)

    NJOMO WANDJI, Wilfried

    2017-01-01

    Three damage identification levels are targeted: existence, location, and severity. The proposed algorithm is analytically developed from dynamics theory and the virtual energy principle. Some computational techniques are proposed for carrying out the computations, including discretization, integration, derivation, and suitable...

  20. Curvature reduces bending strains in the quokka femur

    Directory of Open Access Journals (Sweden)

    Kyle McCabe

    2017-03-01

    Full Text Available This study explores how curvature in the quokka femur may help to reduce bending strain during locomotion. The quokka is a small wallaby, but the curvature of the femur and the muscles active during stance phase are similar to those of most quadrupedal mammals. Our hypothesis is that the action of hip extensor and ankle plantarflexor muscles during stance phase places cranial bending strains that act to reduce the caudal curvature of the femur. Knee extensors and biarticular muscles that span the femur longitudinally create caudal bending strains in the caudally curved (concave caudal side) bone. These opposing strains can balance each other and result in less strain on the bone. We test this idea by comparing the performance of a normally curved finite element model of the quokka femur to a digitally straightened version of the same bone. The normally curved model is indeed less strained than the straightened version. To further examine the relationship between curvature and the strains in the femoral models, we also tested an extra-curved and a reverse-curved version with the same loads. There appears to be a linear relationship between the curvature and the strains experienced by the models. These results demonstrate that longitudinal curvature in bones may be a manipulable mechanism whereby bone can induce a strain gradient to oppose strains induced by habitual loading.

  1. Sequence periodicity in nucleosomal DNA and intrinsic curvature.

    Science.gov (United States)

    Nair, T Murlidharan

    2010-05-17

    Most eukaryotic DNA contained in the nucleus is packaged by wrapping DNA around histone octamers. Histones are ubiquitous and bind most regions of chromosomal DNA. In order to achieve smooth wrapping of the DNA around the histone octamer, the DNA duplex should be able to deform and should possess intrinsic curvature. The deformability of DNA is a result of the non-parallelness of base pair stacks. The stacking interaction between base pairs is sequence dependent. The higher the stacking energy, the more rigid the DNA helix; thus it is natural to expect that sequences involved in wrapping around the histone octamer should be unstacked and possess intrinsic curvature. Intrinsic curvature has been shown to be dictated by the periodic recurrence of certain dinucleotides. Several genome-wide studies directed towards mapping of nucleosome positions have revealed periodicity associated with certain stretches of sequences. In the current study, these sequences have been analyzed with a view to understanding their sequence-dependent structures. Higher order DNA structures and the distribution of molecular bend loci associated with 146-base nucleosome core DNA sequences from C. elegans and chicken have been analyzed using the theoretical model for DNA curvature. The curvature dispersion, calculated by cyclically permuting the sequences, revealed that the molecular bend loci were delocalized throughout the nucleosome core region and had varying degrees of intrinsic curvature. The higher order structures associated with nucleosomes of C. elegans and chicken calculated from the sequences revealed heterogeneity with respect to the deviation of the DNA axis. The results point to the possibility of context-dependent curvature of varying degrees being associated with nucleosomal DNA.
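
    Periodicity of the kind discussed here can be probed with a simple spectral check (a generic sketch, not the curvature model used in the study; the sequence and dinucleotide choice are placeholders):

```python
import numpy as np

def dinucleotide_spectrum(seq, dinuc="AA"):
    """Power spectrum of dinucleotide occurrences along a sequence; a peak
    near a period of ~10-10.5 bases is the nucleosome-type helical signal."""
    hits = np.array([float(seq[i:i + 2] == dinuc) for i in range(len(seq) - 1)])
    hits -= hits.mean()
    power = np.abs(np.fft.rfft(hits)) ** 2
    periods = 1.0 / np.fft.rfftfreq(hits.size)[1:]   # bases per cycle
    return periods, power[1:]

# Toy sequence with an AA dinucleotide planted every 10 bases.
periods, power = dinucleotide_spectrum(("AA" + "GCTGCTCT") * 20)
print(periods[np.argmax(power)])                     # ~10
```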

  2. Lecture notes on mean curvature flow, barriers and singular perturbations

    CERN Document Server

    Bellettini, Giovanni

    2013-01-01

    The aim of the book is to study some aspects of geometric evolutions, such as mean curvature flow and anisotropic mean curvature flow of hypersurfaces. We analyze the origin of such flows and their geometric and variational nature. Some of the most important aspects of mean curvature flow are described, such as the comparison principle and its use in the definition of suitable weak solutions. The anisotropic evolutions, which can be considered as a generalization of mean curvature flow, are studied from the viewpoint of Finsler geometry. Concerning singular perturbations, we discuss the convergence of the Allen–Cahn (or Ginzburg–Landau) type equations to (possibly anisotropic) mean curvature flow before the onset of singularities in the limit problem. We study such kinds of asymptotic problems also in the static case, showing convergence to prescribed curvature-type problems.

  3. Fast fourier algorithms in spectral computation and analysis of vibrating machines

    International Nuclear Information System (INIS)

    Farooq, U.; Hafeez, T.; Khan, M.Z.; Amir, M.

    2001-01-01

    In this work we discuss Fourier series and their history, relationships among various Fourier mappings, Fourier coefficients, transforms, inverse transforms, integrals, analyses, and discrete and fast algorithms for data processing and analysis of vibrating systems; also the evaluation of the magnitude of the source signal at transmission time, the related coefficient matrix, and the intensity and magnitude at the receiving stations. Matrix computation of the Fourier transform is explained, and applications are presented. The fast Fourier transform, a new computational scheme, has been tested with an example. The work also includes digital programs for obtaining the frequency contents of a time function. It is explained how fast Fourier transform (FFT) algorithms have decreased computational work by several orders of magnitude, splitting the spectrum of a signal into two parts (even and odd modes) at every successive step. This fast quantitative processing for discrete Fourier transform computations, as well as signal splitting and combination, provides an efficient and reliable tool for spectral analyses. Fourier series decompose the given variable into a sum of oscillatory functions, each having a specific frequency. These frequencies, with their corresponding amplitudes and phase angles, constitute the frequency contents of the original time functions. Signal decomposition and combination may be carried out by the principle of superposition and convolution, even for signals of different frequencies. Considerable information about a machine or a structure can be derived from variable speed and frequency tests. (author)
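
    The even/odd splitting described above is exactly the radix-2 Cooley-Tukey recursion; a minimal version for illustration:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT: split the signal into even and odd
    samples at every step; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print([round(abs(c), 3) for c in fft([1, 1, 1, 1, 0, 0, 0, 0])])
```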

  4. Dynamic curvature sensing employing ionic-polymer–metal composite sensors

    International Nuclear Information System (INIS)

    Bahramzadeh, Yousef; Shahinpoor, Mohsen

    2011-01-01

    A dynamic curvature sensor based on an ionic-polymer–metal composite (IPMC) is presented for curvature monitoring of deployable/inflatable dynamic space structures. Monitoring curvature variation is of high importance in various engineering structures, including shape monitoring of deployable/inflatable space structures in which the structural boundaries undergo a dynamic deployment process. The high sensitivity of IPMCs to applied deformations, as well as their flexibility, makes IPMCs a promising candidate for sensing dynamic curvature changes. Herein, we explore the dynamic response of an IPMC sensor strip with respect to controlled curvature deformations subjected to different forms of input functions. Using a specially designed experimental setup, the voltage recovery effect, phase delay, and rate dependency of the output voltage signal of an IPMC curvature sensor are analyzed. Experimental results show that the IPMC sensor maintains the linearity, sensitivity, and repeatability required for curvature sensing. In addition, in order to describe dynamic phenomena such as the rate dependency of the IPMC sensor, a chemo-electro-mechanical model based on the Poisson–Nernst–Planck (PNP) equation for the kinetics of ion diffusion is presented. By solving the governing partial differential equations, the frequency response of the IPMC sensor is derived. The physical model is able to describe the dynamic properties of the IPMC sensor and the dependency of the signal on the rate of excitation
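
    In its standard form (a generic statement; the paper's parameters and boundary conditions are not reproduced here), the PNP system couples a drift-diffusion equation for the mobile cation concentration C to Poisson's equation for the electric potential φ:

```latex
\frac{\partial C}{\partial t} = \nabla \cdot \Big( D\, \nabla C + \frac{z F D}{R T}\, C\, \nabla \phi \Big),
\qquad
\nabla \cdot \big( \varepsilon\, \nabla \phi \big) = - F z \big( C - C^{-} \big),
```

    with D the ionic diffusivity, z the charge number, F the Faraday constant, and C⁻ the fixed anion concentration.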

  5. Use of a genetic algorithm to solve two-fluid flow problems on an NCUBE multiprocessor computer

    International Nuclear Information System (INIS)

    Pryor, R.J.; Cline, D.D.

    1992-01-01

    A method of solving the two-phase fluid flow equations using a genetic algorithm on an NCUBE multiprocessor computer is presented. The topics discussed are the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the algorithm on the NCUBE computer. The efficiency of the implementation is investigated using a pipe blowdown problem. Effects of varying the genetic parameters and the number of processors are presented.
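
    In outline, such an approach replaces a conventional nonlinear solver with a population search over candidate solution vectors whose fitness is the residual of the discretized flow equations; a generic serial sketch (the residual function, bounds, and operator choices are placeholders, and the parallel NCUBE decomposition is not shown):

```python
import random

def genetic_minimize(residual, n_vars, pop_size=64, n_gen=200,
                     mut_rate=0.1, lo=-1.0, hi=1.0):
    """Generic GA: evolve real-valued vectors to minimize `residual`."""
    pop = [[random.uniform(lo, hi) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=residual)                      # fitness = residual norm
        survivors = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_vars) if n_vars > 1 else 1
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < mut_rate:          # mutation
                i = random.randrange(n_vars)
                child[i] += random.gauss(0.0, 0.1 * (hi - lo))
            children.append(child)
        pop = survivors + children
    return min(pop, key=residual)

# Toy usage: minimize a quadratic "residual" over two unknowns.
print(genetic_minimize(lambda v: (v[0] - 0.5)**2 + (v[1] + 0.25)**2, 2))
```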

  6. What can Numerical Computation do for the History of Science? (Study of an Orbit Drawn by Newton on a Letter to Hooke)

    Science.gov (United States)

    Stuchi, Teresa; Cardozo Dias, P.

    2013-05-01

    On a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically that Hooke's method is a second-order symplectic area-preserving algorithm, while the method of curvature is a first-order algorithm without special features; we then integrate the Hamiltonian equations. Integration by the method of curvature can also be done by exploring geometric properties of curves. We compare three methods: Hooke's method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton's drawing.
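
    A kick-drift-kick (leapfrog) integrator reproduces the impulse construction analyzed in the record: the body coasts along straight segments and receives discrete velocity kicks, and the resulting map is second order and area preserving. The constant-magnitude central force matches the force in Newton's drawing; the step size and initial conditions below are illustrative:

```python
import math

def accel(x, y, g=1.0):
    """Constant-magnitude central attraction toward the origin."""
    r = math.hypot(x, y)
    return -g * x / r, -g * y / r

def orbit_leapfrog(x, y, vx, vy, dt, n_steps):
    """Kick-drift-kick: half kick, straight coast, half kick."""
    pts = [(x, y)]
    ax, ay = accel(x, y)
    for _ in range(n_steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # drift (straight segment)
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        pts.append((x, y))
    return pts

pts = orbit_leapfrog(1.0, 0.0, 0.0, 0.8, 0.05, 500)  # traces a precessing oval
```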

  7. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    Science.gov (United States)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.

  8. The Computational Complexity, Parallel Scalability, and Performance of Atmospheric Data Assimilation Algorithms

    Science.gov (United States)

    Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)

    2001-01-01

    The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best-estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables; therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient, in terms of wall clock time, and scalable parallel implementations of the algorithms.

  9. COOBBO: A Novel Opposition-Based Soft Computing Algorithm for TSP Problems

    Directory of Open Access Journals (Sweden)

    Qingzheng Xu

    2014-12-01

    Full Text Available In this paper, we propose a novel definition of the opposite path. Its core feature is that the sequence of candidate paths and the distances between adjacent nodes in the tour are considered simultaneously. In a sense, the candidate path and its corresponding opposite path have the same (or at least similar) distance to the optimal path in the current population. Based on an accepted framework for employing opposition-based learning, Oppositional Biogeography-Based Optimization using the Current Optimum, called the COOBBO algorithm, is introduced to solve traveling salesman problems. We demonstrate its performance on eight benchmark problems and compare it with other optimization algorithms. Simulation results illustrate that the excellent performance of our proposed algorithm is attributed to the distinct definition of the opposite path. In addition, its great strength lies in exploitation for enhancing the solution accuracy, not exploration for improving the population diversity. Finally, by comparing different versions of COOBBO, another conclusion is that each successful opposition-based soft computing algorithm needs to adjust and maintain a good balance between the backward adjacent node and the forward adjacent node.

  10. An adaptive multi-spline refinement algorithm in simulation based sailboat trajectory optimization using onboard multi-core computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2016-06-01

    Full Text Available A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation based trajectory optimization is studied in the context of “high-performance sailing”. The algorithm uses a new discrete space of continuously differentiable functions, called multi-splines, as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation based trajectory optimization problems. These computers can be considered micro high performance computing (HPC) platforms: they offer high performance while remaining energy and cost efficient. The simulation based approach can potentially give highly accurate results, since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box represented performance measure and use of OpenCL.

  11. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Science.gov (United States)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted in the development of specialized computer architecture for the algorithmic execution of an avionics system guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on the critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  12. Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School

    Science.gov (United States)

    Avancena, Aimee Theresa; Nishihara, Akinori

    2014-01-01

    Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…

  13. Influence of Coanda surface curvature on performance of bladeless fan

    Science.gov (United States)

    Li, Guoqi; Hu, Yongjun; Jin, Yingzi; Setoguchi, Toshiaki; Kim, Heuy Dong

    2014-10-01

    The unique Coanda surface has a great influence on the performance of a bladeless fan. However, there are few studies explaining the relationship between performance and Coanda surface curvature at present. In order to gain a qualitative understanding of the effect of the curvature on the performance of a bladeless fan, numerical studies are performed in this paper. Firstly, three-dimensional numerical simulation is done with the Fluent software. For the purpose of obtaining detailed information of the flow field around the Coanda surface, two-dimensional numerical simulation is also conducted. Five types of Coanda surfaces with different curvature are designed, and their flow behaviour and performance are analyzed and compared with those of the prototype. The analysis indicates that the curvature of the Coanda surface is strongly related to blowing performance, and an optimal curvature is found among the studied models. The simulations also show a distinctive low-pressure region. With increasing curvature in the Y direction, several low-pressure regions gradually enlarge, then begin to merge slowly, and finally form a large area of low pressure. From the analyses of streamlines and velocity angle, it is found that the magnitude of the curvature affects the flow direction, and a reasonable curvature can induce fluid flow close to the wall. This makes the curvature of the streamlines consistent with that of the Coanda surface and causes the fluid to move in the most suitable direction. This study will provide useful information for performance improvements of bladeless fans.

  14. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  15. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research.

  16. Distributed mean curvature on a discrete manifold for Regge calculus

    International Nuclear Information System (INIS)

    Conboye, Rory; Miller, Warner A; Ray, Shannon

    2015-01-01

    The integrated mean curvature of a simplicial manifold is well understood in both Regge Calculus and Discrete Differential Geometry. However, a well motivated pointwise definition of curvature requires a careful choice of the volume over which to uniformly distribute the local integrated curvature. We show that hybrid cells formed using both the simplicial lattice and its circumcentric dual emerge as a remarkably natural structure for the distribution of this local integrated curvature. These hybrid cells form a complete tessellation of the simplicial manifold, contain a geometric orthonormal basis, and are also shown to give a pointwise mean curvature with a natural interpretation as the fractional rate of change of the normal vector. (paper)

  17. Distributed mean curvature on a discrete manifold for Regge calculus

    Science.gov (United States)

    Conboye, Rory; Miller, Warner A.; Ray, Shannon

    2015-09-01

    The integrated mean curvature of a simplicial manifold is well understood in both Regge Calculus and Discrete Differential Geometry. However, a well motivated pointwise definition of curvature requires a careful choice of the volume over which to uniformly distribute the local integrated curvature. We show that hybrid cells formed using both the simplicial lattice and its circumcentric dual emerge as a remarkably natural structure for the distribution of this local integrated curvature. These hybrid cells form a complete tessellation of the simplicial manifold, contain a geometric orthonormal basis, and are also shown to give a pointwise mean curvature with a natural interpretation as the fractional rate of change of the normal vector.

  18. Substrate curvature gradient drives rapid droplet motion.

    Science.gov (United States)

    Lv, Cunjing; Chen, Chao; Chuang, Yin-Chuan; Tseng, Fan-Gang; Yin, Yajun; Grey, Francois; Zheng, Quanshui

    2014-07-11

    Making small liquid droplets move spontaneously on solid surfaces is a key challenge in lab-on-chip and heat exchanger technologies. Here, we report that a substrate curvature gradient can accelerate micro- and nanodroplets to high speeds on both hydrophilic and hydrophobic substrates. Experiments for microscale water droplets on tapered surfaces show a maximum speed of 0.42 m/s, 2 orders of magnitude higher than with a wettability gradient. We show that the total free energy and driving force exerted on a droplet are determined by the substrate curvature and substrate curvature gradient, respectively. Using molecular dynamics simulations, we predict nanoscale droplets moving spontaneously at over 100 m/s on tapered surfaces.

  19. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    Science.gov (United States)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, handling and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications, including software interfacing, probability distributions, grey-level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration and ground scene classification. An Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is also described.

  20. Effect of nano-scale curvature on the intrinsic blood coagulation system

    Science.gov (United States)

    Kushida, Takashi; Saha, Krishnendu; Subramani, Chandramouleeswaran; Nandwana, Vikas; Rotello, Vincent M.

    2014-11-01

    The intrinsic coagulation activity of silica nanoparticles strongly depends on their surface curvature. Nanoparticles with higher surface curvature do not denature blood coagulation factor XII on its surface, providing a coagulation `silent' surface, while nanoparticles with lower surface curvature show denaturation and concomitant coagulation. Electronic supplementary information (ESI) available: Physical properties and scanning electron micrographs (SEM) of silica NPs, intrinsic coagulation activity after 3 h. See DOI: 10.1039/c4nr04128c

  1. Effect of Plate Curvature on Blast Response of Structural Steel Plates

    Science.gov (United States)

    Veeredhi, Lakshmi Shireen Banu; Ramana Rao, N. V.; Veeredhi, Vasudeva Rao

    2018-04-01

    In the present work an attempt is made, through simulation studies, to determine the effect of plate curvature on the blast response of a door structure made of ASTM A515 grade 50 steel plates. A door structure with dimensions of 5.142 m × 2.56 m × 10 mm, subjected to blast load, is analyzed for six different radii of curvature: infinity (flat plate), 16.63, 10.81, 8.26, 6.61 and 5.56 m. A stand-off distance of 11 m is considered for all cases. Results showed that the door structure with the smallest radius of curvature experienced the least plastic deformation and yielding when compared to a door with a larger radius of curvature and the same projected area. From the present investigation, it is observed that, as the radius of curvature of the plate increases, the deformation mode gradually shifts from indentation mode to flexural mode. The plates with infinite and 16.63 m radii of curvature underwent flexural deformation, the plates with 6.61 and 5.56 m radii of curvature underwent indentation deformation, and a mixed mode consisting of both flexural and indentation deformation was seen in the plates with radii of curvature of 10.81 and 8.26 m. As the radius of curvature of the plate decreases, the ability of the plate to mitigate the effect of the blast loads increases: the plate with the smallest radius of curvature deflects most of the blast energy and exhibits the least indentation deformation. The most significant observation of the present investigation is that the strain energy absorbed by the steel plate is reduced to one-third when the radius of curvature is approximately equal to the stand-off distance, which could therefore be the critical radius of curvature.

  2. On Riemannian manifolds (M^n, g) of quasi-constant curvature

    International Nuclear Information System (INIS)

    Rahman, M.S.

    1995-07-01

    A Riemannian manifold (M^n, g) of quasi-constant curvature is defined. It is shown that an (M^n, g) in association with another class of manifolds gives rise, under certain conditions, to a manifold of quasi-constant curvature. Some observations are made on how a manifold of quasi-constant curvature accounts for a pseudo Ricci-symmetric manifold and a quasi-umbilical hypersurface. (author). 10 refs

  3. Evaluation and mitigation of potential errors in radiochromic film dosimetry due to film curvature at scanning.

    Science.gov (United States)

    Palmer, Antony L; Bradley, David A; Nisbet, Andrew

    2015-03-08

    This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film-measured doses with treatment planning system-calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple-channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single-channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier-type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat-film scanning. This effect has been overlooked to date in the literature.

  4. What can numerical computation do for the history of science? (a study of an orbit drawn by Newton in a letter to Hooke)

    International Nuclear Information System (INIS)

    Dias, Penha Maria Cardozo; Stuchi, T J

    2013-01-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing. (paper)

  5. What can numerical computation do for the history of science? (a study of an orbit drawn by Newton in a letter to Hooke)

    Science.gov (United States)

    Cardozo Dias, Penha Maria; Stuchi, T. J.

    2013-11-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
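    To make the comparison tangible, here is a hedged sketch (not the authors' code) contrasting a Hooke-style "kick then drift" stepping, implemented for simplicity as the first-order symplectic Euler map, with plain explicit Euler for a constant-magnitude attracting central force; the step size, initial conditions and energy check are illustrative assumptions.

```python
# Illustrative only: symplectic "kick then drift" stepping in the spirit of
# Hooke's construction vs. explicit Euler, for a constant-magnitude central
# force F = -g * r/|r|. All parameters below are arbitrary assumptions.
import numpy as np

def accel(r, g=1.0):
    return -g * r / np.linalg.norm(r)       # constant magnitude, toward center

def kick_drift_step(r, v, h):
    v = v + h * accel(r)                    # impulsive velocity change (kick)
    r = r + h * v                           # straight-line motion (drift)
    return r, v

def euler_step(r, v, h):
    return r + h * v, v + h * accel(r)

def energy(r, v, g=1.0):                    # E = |v|^2/2 + g*|r| for this force
    return 0.5 * v @ v + g * np.linalg.norm(r)

r1 = r2 = np.array([1.0, 0.0]); v1 = v2 = np.array([0.0, 0.8])
for _ in range(2000):
    r1, v1 = kick_drift_step(r1, v1, 0.01)
    r2, v2 = euler_step(r2, v2, 0.01)
print(energy(r1, v1), energy(r2, v2))       # the symplectic map drifts far less
```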

  6. Cholera toxin B subunit induces local curvature on lipid bilayers

    DEFF Research Database (Denmark)

    Pezeshkian, Weria; Nåbo, Lina J.; Ipsen, John H.

    2017-01-01

    CTxB induces a local membrane curvature that is essential for its clathrin-independent uptake. Using all-atom molecular dynamics, we show that CTxB induces local curvature, with the radius of curvature around 36 nm. The main feature of the CTxB molecular structure that causes membrane bending is the protruding alpha helices in the middle of the protein. Our study points to a generic protein design principle for generating local membrane curvature through specific binding to lipid anchors.

  7. Curvature collineations for the field of gravitational waves

    International Nuclear Information System (INIS)

    Singh, K.P.; Singh, Gulab

    1981-01-01

    It has been shown that the space-times formed from a plane-fronted gravity wave and from a plane sandwich wave with constant polarisation admit proper curvature collineation in general. The curvature collineation vectors have been determined explicitly. (author)

  8. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    Science.gov (United States)

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms need only be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
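    The dataflow idea is easy to demonstrate outside Copernicus. The sketch below is emphatically not the Copernicus API; it only shows how explicitly declared dependencies let a generic executor launch every ready task in parallel, the property the abstract highlights. Task names and the thread-pool backend are assumptions.

```python
# Generic dataflow executor sketch (not Copernicus): run each task as soon
# as all of its declared inputs are available.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_dataflow(tasks, deps, workers=4):
    """tasks: name -> callable(inputs dict); deps: name -> set of dependency names."""
    done, running = {}, {}
    with ThreadPoolExecutor(workers) as pool:
        while len(done) < len(tasks):
            for name in tasks:                       # submit every ready task
                if name not in done and name not in running \
                        and deps[name] <= done.keys():
                    inputs = {d: done[d] for d in deps[name]}
                    running[name] = pool.submit(tasks[name], inputs)
            finished = wait(running.values(), return_when=FIRST_COMPLETED).done
            for name, fut in list(running.items()):
                if fut in finished:
                    done[name] = fut.result()
                    del running[name]
    return done

# Hypothetical workflow: two independent "simulations", then an "analysis".
tasks = {"sim_a": lambda inp: 1.0,
         "sim_b": lambda inp: 2.0,
         "analyze": lambda inp: inp["sim_a"] + inp["sim_b"]}
deps = {"sim_a": set(), "sim_b": set(), "analyze": {"sim_a", "sim_b"}}
print(run_dataflow(tasks, deps)["analyze"])          # 3.0
```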

  9. Pattern recognition algorithms for data mining scalability, knowledge discovery and soft granular computing

    CERN Document Server

    Pal, Sankar K

    2004-01-01

    Pattern Recognition Algorithms for Data Mining addresses different pattern recognition (PR) tasks in a unified framework with both theoretical and experimental results. Tasks covered include data condensation, feature selection, case generation, clustering/classification, and rule generation and evaluation. This volume presents various theories, methodologies, and algorithms, using both classical approaches and hybrid paradigms. The authors emphasize large datasets with overlapping, intractable, or nonlinear boundary classes, and datasets that demonstrate granular computing in soft frameworks. Organized into eight chapters, the book begins with an introduction to PR, data mining, and knowledge discovery concepts. The authors analyze the tasks of multi-scale data condensation and dimensionality reduction, then explore the problem of learning with support vector machine (SVM). They conclude by highlighting the significance of granular computing for different mining tasks in a soft paradigm.

  10. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections; it has a low computational complexity and a relatively high spatial resolution, but few implementations run it as a parallel operation with a matched projection/backprojection scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  11. Remarks on the boundary curve of a constant mean curvature topological disc

    DEFF Research Database (Denmark)

    Brander, David; Lopéz, Rafael

    2017-01-01

    We discuss some consequences of the existence of the holomorphic quadratic Hopf differential on a conformally immersed constant mean curvature topological disc with analytic boundary. In particular, we derive a formula for the mean curvature as a weighted average of the normal curvature of the boundary curve.

  12. NUMERICAL INVESTIGATION OF CURVATURE AND TORSION EFFECTS ON WATER FLOW FIELD IN HELICAL RECTANGULAR CHANNELS

    Directory of Open Access Journals (Sweden)

    A. H. ELBATRAN

    2015-07-01

    Full Text Available Helical channels have a wide range of applications in petroleum engineering, nuclear, heat exchanger, chemical, mineral and polymer industries. They are used in separation processes for fluids of different densities. The centrifugal force, free surface and geometrical effects of the helical channel make the flow pattern complicated, so it is very difficult to perform physical experiments to predict channel performance. Computational Fluid Dynamics (CFD) can be a suitable alternative for studying the flow pattern characteristics in helical channels. The different ranges of dimensional parameters, such as curvature and torsion, often cause various flow regimes in the helical channels. In this study, the effects of physical parameters such as curvature, torsion, Reynolds number, Froude number and Dean number on the characteristics of turbulent flow in helical rectangular channels have been investigated numerically, using the finite-volume RANSE code Fluent of the Ansys Workbench 10.1 (UTM licensed). The physical parameters were reported for a range of curvature (δ) of 0.16 to 0.51 and torsion (λ) of 0.032 to 0.1. The numerical results of this study showed that decreasing the channel curvature and increasing the channel torsion increase the flow velocity inside the channel and change the shape of the water free surface at given Dean, Reynolds and Froude numbers.

  13. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    Science.gov (United States)

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that utilize manual sorting and mechanical automatic grading. To design an optimal algorithm, a regression formula and R² value were investigated by performing a regression analysis for each of total length, body width, thickness, view area, and actual volume against abalone weights. The R² value between the actual volume and abalone weight was 0.999, showing a relatively high correlation. As a result, to easily estimate the actual volumes of abalones based on computer vision, the volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula was derived to estimate the volumes of abalones through linear regression analysis between the calculated and actual volumes. The final automatic abalone grading algorithm is designed using the abalone volume estimation regression formula derived from test results, and the regression formula between actual volumes and abalone weights. For abalones weighing from 16.51 to 128.01 g, cross-validation of the algorithm's performance indicates root mean square and worst-case prediction errors of 2.8 g and ±8 g, respectively. © 2015 Institute of Food Technologists®
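    A hedged reconstruction of the two-step estimate described above, with an assumed semi-axis convention and made-up calibration numbers (the paper's fitted coefficients are not reproduced): a half-ellipsoid volume proxy from 2D measurements, then a linear regression from volume to weight.

```python
# Sketch only: half-oblate-ellipsoid volume proxy plus volume-to-weight
# regression. Semi-axis convention and calibration data are assumptions.
import numpy as np

def half_ellipsoid_volume(length, width, thickness):
    # half of (4/3)*pi*a*b*c with assumed semi-axes a = L/2, b = W/2, c = T
    return (2.0 / 3.0) * np.pi * (length / 2) * (width / 2) * thickness

# Hypothetical calibration pairs (volume in cm^3, weight in g):
vols = np.array([15.0, 30.0, 60.0, 90.0, 120.0])
wts = np.array([17.0, 33.0, 64.0, 95.0, 126.0])
slope, intercept = np.polyfit(vols, wts, 1)       # weight ~ slope*vol + intercept

est = slope * half_ellipsoid_volume(9.0, 6.0, 2.5) + intercept
print(f"estimated weight: {est:.1f} g")
```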

  14. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  15. Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI

    DEFF Research Database (Denmark)

    Bron, Esther E.; Smits, Marion; van der Flier, Wiesje M.

    2015-01-01

    Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform… This study evaluated algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease… of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume…

  16. Efficient Algorithms for Computing the Triplet and Quartet Distance Between Trees of Arbitrary Degree

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Mailund, Thomas

    2013-01-01

    The triplet and quartet distances are distance measures to compare two rooted and two unrooted trees, respectively. The leaves of the two trees should have the same set of n labels. The distances are defined by enumerating all subsets of three labels (triplets) and four labels (quartets), respectively, and counting how often the induced topologies in the two input trees are different. In this paper we present efficient algorithms for computing these distances. We show how to compute the triplet distance in time O(n log n) and the quartet distance in time O(d n log n), where d is the maximal degree of any node in the two trees. Within the same time bounds, our framework also allows us to compute the parameterized triplet and quartet distances, where a parameter is introduced to weight resolved (binary) topologies against unresolved (non-binary) topologies. The previous best algorithm…

  17. Efficient quantum algorithm for computing n-time correlation functions.

    Science.gov (United States)

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.

  18. Efficient frequent pattern mining algorithm based on node sets in cloud computing environment

    Science.gov (United States)

    Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.

    2017-11-01

    The ultimate goal of data mining is to extract hidden information that is useful for decision making from the large databases collected by an organization. Data mining involves many tasks, and mining frequent itemsets is one of the most important for transactional databases. These databases hold data at very large scale, so mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, we propose a system that mines frequent itemsets in an optimized way in terms of memory and time, using cloud computing to parallelize the process and providing the application as a service. The complete framework uses a proven efficient algorithm, FIN, which works on Nodesets and a POC (pre-order coding) tree. To evaluate the performance of the system, we conduct experiments comparing the efficiency of the same algorithm applied standalone and in a cloud computing environment on a real data set of traffic accidents. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
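    For orientation only, the sketch below shows what "mining frequent itemsets" computes, using a deliberately naive support counter; it is neither the Nodeset/POC-tree FIN algorithm nor the cloud deployment the paper evaluates, and the support threshold is an assumption.

```python
# Naive frequent-itemset baseline (not FIN): count the support of every
# candidate itemset up to max_size and keep those meeting min_support.
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support=2, max_size=3):
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for combo in combinations(items, k):
                counts[combo] += 1   # support = number of containing transactions
    return {s: c for s, c in counts.items() if c >= min_support}

db = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}]
print(frequent_itemsets(db))         # e.g. ('a',): 3, ('a', 'c'): 2, ...
```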

  19. Cosmic curvature from de Sitter equilibrium cosmology.

    Science.gov (United States)

    Albrecht, Andreas

    2011-10-07

    I show that the de Sitter equilibrium cosmology generically predicts observable levels of curvature in the Universe today. The predicted value of the curvature, Ω_k, depends only on the ratio of the density of nonrelativistic matter to cosmological constant density ρ_m(0)/ρ_Λ and the value of the curvature from the initial bubble that starts the inflation, Ω_k(B). The result is independent of the scale of inflation, the shape of the potential during inflation, and many other details of the cosmology. Future cosmological measurements of ρ_m(0)/ρ_Λ and Ω_k will open up a window on the very beginning of our Universe and offer an opportunity to support or falsify the de Sitter equilibrium cosmology.

  20. Radion stabilization in higher curvature warped spacetime

    Energy Technology Data Exchange (ETDEWEB)

    Das, Ashmita [Indian Institute of Technology, Department of Physics, Guwahati, Assam (India); Mukherjee, Hiya; Paul, Tanmoy; SenGupta, Soumitra [Indian Association for the Cultivation of Science, Department of Theoretical Physics, Kolkata (India)

    2018-02-15

    We consider a five dimensional AdS spacetime in the presence of a higher curvature term of the form F(R) = R + αR² in the bulk. In this model, we examine the possibility of modulus stabilization from the ghost-free scalar degrees of freedom of higher curvature gravity. Our result reveals that the model stabilizes itself, and the mechanism of modulus stabilization can be argued from a geometric point of view. We determine the region of the parametric space for which the modulus (or radion) can be stabilized. We also show how the mass and coupling parameters of the radion field are modified due to the higher curvature term, leading to modifications of its phenomenological implications on the visible 3-brane. (orig.)

  1. A new efficient algorithm for computing the imprecise reliability of monotone systems

    International Nuclear Information System (INIS)

    Utkin, Lev V.

    2004-01-01

    Reliability analysis of complex systems with partial information about the reliability of components, and under different conditions of component independence, may be carried out by means of imprecise probability theory, which provides a unified framework (natural extension, lower and upper previsions) for computing the system reliability. However, applying imprecise probabilities to reliability analysis leads to complex optimization problems that must be solved to obtain the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose the optimization problems is proposed in the paper. This algorithm makes it practical to carry out reliability analysis of monotone systems under partial and heterogeneous information about the reliability of components, and under conditions of component independence or lack of information about independence. A numerical example illustrates the algorithm.

  2. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete statement of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  3. Robust modal curvature features for identifying multiple damage in beams

    Science.gov (United States)

    Ostachowicz, Wiesław; Xu, Wei; Bai, Runbo; Radzieński, Maciej; Cao, Maosen

    2014-03-01

    Curvature mode shape is an effective feature for damage detection in beams. However, it is susceptible to measurement noise, which easily impairs its advantage of sensitivity to damage. To deal with this deficiency, this study formulates an improved curvature mode shape for multiple damage detection in beams based on integrating a wavelet transform (WT) and a Teager energy operator (TEO). The improved curvature mode shape, termed the WT-TEO curvature mode shape, has inherent capabilities of immunity to noise and sensitivity to damage. The proposed method is experimentally validated by identifying multiple cracks in cantilever steel beams with the mode shapes acquired using a scanning laser vibrometer. The results demonstrate that the improved curvature mode shape can identify multiple damage accurately and reliably, and is fairly robust to measurement noise.
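    A minimal sketch of two ingredients named above, a curvature mode shape by central differences and the discrete Teager energy operator; the wavelet-denoising stage of the full WT-TEO feature is omitted, and the synthetic mode shape and damage model are illustrative assumptions.

```python
# Sketch: central-difference curvature mode shape + discrete TEO
# psi[n] = x[n]^2 - x[n-1]*x[n+1]; the WT stage is omitted here.
import numpy as np

def curvature_mode_shape(phi, dx):
    kappa = np.zeros_like(phi)
    kappa[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    return kappa

def teager_energy(x):
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

x = np.linspace(0.0, 1.0, 201)
phi = np.sin(np.pi * x)                   # first mode of a simply supported beam
phi[100:] += 1e-4 * (x[100:] - x[100])    # small slope change mimicking a crack
feature = teager_energy(curvature_mode_shape(phi, x[1] - x[0]))
print(np.argmax(np.abs(feature)))         # peaks at index 100, the damage site
```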

  4. Systems approach to modeling the Token Bucket algorithm in computer networks

    Directory of Open Access Journals (Sweden)

    Ahmed N. U.

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study issues such as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
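    For reference, a minimal sketch of the classic Token Bucket policer that the paper models dynamically; this is not the authors' state-space formulation, and the rate and depth values are illustrative assumptions.

```python
# Classic token bucket: tokens refill at a fixed rate up to the bucket
# depth; a packet conforms if enough tokens are available to pay its cost.
import time

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth        # tokens/s, max tokens
        self.tokens, self.last = depth, time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:                    # conforming packet
            self.tokens -= cost
            return True
        return False                               # non-conforming: drop or mark

tb = TokenBucket(rate=100.0, depth=10)
accepted = sum(tb.allow() for _ in range(1000))    # back-to-back burst
print(f"accepted {accepted} of 1000 packets")      # roughly the bucket depth
```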

  5. Mathematical models and algorithms for the computer program 'WOLF'

    International Nuclear Information System (INIS)

    Halbach, K.

    1975-12-01

    The computer program FLOW finds the nonrelativistic self-consistent set of two-dimensional ion trajectories and electric fields (including space charges from ions and electrons) for a given set of initial and boundary conditions for the particles and fields. The combination of FLOW with the optimization code PISA gives the program WOLF, which finds the shape of the emitter which is consistent with the plasma forming it, and in addition varies physical characteristics such as electrode position, shapes, and potentials so that some performance characteristics are optimized. The motivation for developing these programs was the desire to design optimum ion source extractor/accelerator systems in a systematic fashion. The purpose of this report is to explain and derive the mathematical models and algorithms which approximate the real physical processes. It serves primarily to document the computer programs. 10 figures

  6. A Robust Algorithm to Determine the Topology of Space from the Cosmic Microwave Background Radiation

    OpenAIRE

    Weeks, Jeffrey R.

    2001-01-01

    Satellite measurements of the cosmic microwave background radiation will soon provide an opportunity to test whether the universe is multiply connected. This paper presents a new algorithm for deducing the topology of the universe from the microwave background data. Unlike an older algorithm, the new algorithm gives the curvature of space and the radius of the last scattering surface as outputs, rather than requiring them as inputs. The new algorithm is also more tolerant of erro...

  7. Efficiently computing exact geodesic loops within finite steps.

    Science.gov (United States)

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resulting loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm that iteratively evolves an initial closed path on a given mesh into an exact geodesic loop within finitely many steps. Our proposed algorithm takes only O(k) space and, experimentally, O(mk) time, where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and can be applied directly to triangular meshes without solving any differential equation with a numerical solver; it can run at interactive speed, e.g., in the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  8. Integral computer-generated hologram via a modified Gerchberg-Saxton algorithm

    International Nuclear Information System (INIS)

    Wu, Pei-Jung; Lin, Bor-Shyh; Chen, Chien-Yue; Huang, Guan-Syun; Deng, Qing-Long; Chang, Hsuan T

    2015-01-01

    An integral computer-generated hologram, which modulates the phase function of an object based on a modified Gerchberg–Saxton algorithm and compiles a digital cryptographic diagram with phase synthesis, is proposed in this study. Once the diagram is deciphered by position demultiplexing, multi-angle elemental images can be reconstructed. Furthermore, an integral CGH with a depth of 225 mm and a visual angle of ±11° is projected through the lens array. (paper)
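    As background, here is the standard (unmodified) Gerchberg-Saxton loop for phase retrieval; the paper's modification and its integral-imaging position multiplexing are not reproduced, and the uniform source and square target amplitudes are illustrative assumptions.

```python
# Standard Gerchberg-Saxton iteration: alternately impose the known source
# and target amplitudes while keeping the evolving phase.
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    field = source_amp * np.exp(1j * 2 * np.pi * rng.random(source_amp.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))     # impose target amplitude
        near = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(near))  # impose source amplitude
    return np.angle(field)                                # hologram phase

src = np.ones((64, 64))                                   # uniform illumination
tgt = np.zeros((64, 64)); tgt[24:40, 24:40] = 1.0         # square far-field target
phase = gerchberg_saxton(src, tgt)
print(phase.shape)
```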

  9. Development of algorithm for continuous generation of a computer game in terms of usability and optimization of developed code in computer science

    Directory of Open Access Journals (Sweden)

    Tibor Skala

    2018-03-01

    Full Text Available As hardware and software have become increasingly available and continuously developed, they contribute globally to technological improvements in every field of technology and the arts. Digital tools for creating and processing graphical content are highly developed and designed to shorten the time required for content creation, in this case animation. Since contemporary animation has experienced a surge in visual styles and visualization methods, programming is built into nearly everything currently in use. A variety of algorithms and software act as the brain and moving force behind any idea created for a specific purpose and applicability in society. Art and technology combined form a direct, targeted medium for publishing and marketing in every industry, including those not closely related to visually oriented work. The quality and consistency of an algorithm also depend on its design and on its proper integration into the system it powers. The development of an endless algorithm and its effective use are demonstrated through a computer game. To present the effect of various parameters, in the final phase of the game's development the endless algorithm was tested with a varying number of key input parameters (achieved time, score reached, and pace of the game).

  10. The speed-curvature power law of movements: a reappraisal.

    Science.gov (United States)

    Zago, Myrka; Matic, Adam; Flash, Tamar; Gomez-Marin, Alex; Lacquaniti, Francesco

    2018-01-01

    Several types of curvilinear movements approximately obey the so-called 2/3 power law, according to which the angular speed varies proportionally to the 2/3 power of the curvature. The origin of the law is debated, but it is generally thought to depend on physiological mechanisms. However, a recent paper (Marken and Shaffer, Exp Brain Res 88:685-690, 2017) claims that this power law is simply a statistical artifact, being a mathematical consequence of the way speed and curvature are calculated. Here we reject this hypothesis by showing that the speed-curvature power law of biological movements is non-trivial. First, we confirm that the power exponent varies with the shape of human drawing movements and with environmental factors. Second, we report experimental data from Drosophila larvae demonstrating that the power law does not depend on how curvature is calculated. Third, we prove that the law can be violated by means of several mathematical and physical examples. Finally, we discuss biological constraints that may underlie speed-curvature power laws discovered in empirical studies.
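    The standard empirical test of the law is easy to sketch: compute speed and curvature from a sampled trajectory and fit the exponent in v = k·κ^(-β) by log-log regression (β = 1/3 corresponds to the 2/3 law for angular speed). The elliptic test trajectory below is an assumption; traced with its affine parameterization it satisfies the law exactly, so the fit recovers β = 1/3.

```python
# Fit the speed-curvature exponent from a sampled planar trajectory.
import numpy as np

t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)           # elliptic "drawing" movement

dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
speed = np.hypot(dx, dy)                          # v = |r'(t)|
kappa = np.abs(dx * ddy - dy * ddx) / speed**3    # curvature of a plane curve

beta, _ = np.polyfit(np.log(kappa), -np.log(speed), 1)
print(f"fitted exponent beta ~ {beta:.3f} (one-third under the 2/3 law)")
```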

  11. Development of a computationally efficient algorithm for attitude estimation of a remote sensing satellite

    Science.gov (United States)

    Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad

    2017-09-01

    This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, a gyro, a magnetometer, a sun sensor and a star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden: assuming n observation vectors, an inverse of a 3n×3n matrix is required for the gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method processes the measurements separately at each sampling time for the gain computation, so the inverse of a 3n×3n matrix is replaced by an inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drifts over time can reduce the pointing accuracy; therefore, a calibration algorithm is utilized for estimating the main gyro parameters.
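    A schematic of the idea behind Murrell's variant as summarized above: process each 3-vector measurement sequentially, so every gain computation inverts only a 3×3 innovation covariance instead of one 3n×3n matrix. The state dimension, measurement models and noise values below are placeholder assumptions, not the satellite's models.

```python
# Sequential (per-measurement) Kalman update: one 3x3 inverse per sensor.
import numpy as np

def sequential_update(x, P, measurements):
    """measurements: list of (z, H, R) with z a 3-vector, H 3xN, R 3x3."""
    for z, H, R in measurements:
        S = H @ P @ H.T + R                    # 3x3 innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # gain from a 3x3 inverse only
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

n = 6                                          # e.g. attitude error + gyro bias
x, P = np.zeros(n), 0.1 * np.eye(n)
rng = np.random.default_rng(0)
meas = [(rng.normal(size=3), rng.normal(size=(3, n)), 0.01 * np.eye(3))
        for _ in range(3)]                     # stand-ins for three sensors
x, P = sequential_update(x, P, meas)
print(np.round(x, 3))
```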

  12. Extrinsic and intrinsic curvatures in thermodynamic geometry

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini Mansoori, Seyed Ali, E-mail: shossein@bu.edu [Department of Physics, Boston University, 590 Commonwealth Ave., Boston, MA 02215 (United States); Department of Physics, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Mirza, Behrouz, E-mail: b.mirza@cc.iut.ac.ir [Department of Physics, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Sharifian, Elham, E-mail: e.sharifian@ph.iut.ac.ir [Department of Physics, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of)

    2016-08-10

    We investigate the intrinsic and extrinsic curvatures of a certain hypersurface in thermodynamic geometry of a physical system and show that they contain useful thermodynamic information. For an anti-Reissner–Nordström-(A)de Sitter black hole (Phantom), the extrinsic curvature of a constant Q hypersurface has the same sign as the heat capacity around the phase transition points. The intrinsic curvature of the hypersurface can also be divergent at the critical points but has no information about the sign of the heat capacity. Our study explains the consistent relationship holding between the thermodynamic geometry of the KN-AdS black holes and those of the RN (J-zero hypersurface) and Kerr black holes (Q-zero hypersurface) ones [1]. This approach can easily be generalized to an arbitrary thermodynamic system.

  13. Extrinsic and intrinsic curvatures in thermodynamic geometry

    International Nuclear Information System (INIS)

    Hosseini Mansoori, Seyed Ali; Mirza, Behrouz; Sharifian, Elham

    2016-01-01

    We investigate the intrinsic and extrinsic curvatures of a certain hypersurface in thermodynamic geometry of a physical system and show that they contain useful thermodynamic information. For an anti-Reissner–Nordström-(A)de Sitter black hole (Phantom), the extrinsic curvature of a constant Q hypersurface has the same sign as the heat capacity around the phase transition points. The intrinsic curvature of the hypersurface can also be divergent at the critical points but has no information about the sign of the heat capacity. Our study explains the consistent relationship holding between the thermodynamic geometry of the KN-AdS black holes and those of the RN (J-zero hypersurface) and Kerr black holes (Q-zero hypersurface) ones [1]. This approach can easily be generalized to an arbitrary thermodynamic system.

  14. Fast parallel molecular algorithms for DNA-based computation: solving the elliptic curve discrete logarithm problem over GF(2^n).

    Science.gov (United States)

    Li, Kenli; Zou, Shuting; Xv, Jin

    2008-01-01

    Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty of solving the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation, and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations.
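    To fix ideas about the underlying field arithmetic (in ordinary software, not the molecular implementation), here is a bit-level multiplication in GF(2^n); the reduction polynomial shown, x^8 + x^4 + x^3 + x + 1 for the AES field, is an illustrative choice.

```python
# Shift-and-add multiplication in GF(2^n); bit i of red_poly is the
# coefficient of x^i in the irreducible reduction polynomial (bit n set).
def gf2n_mul(a, b, n=8, red_poly=0x11B):
    result = 0
    for _ in range(n):
        if b & 1:
            result ^= a          # addition in GF(2) is XOR
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= red_poly        # reduce modulo the irreducible polynomial
    return result

assert gf2n_mul(0x53, 0xCA) == 0x01   # a known inverse pair in the AES field
```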

  15. Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams

    Science.gov (United States)

    Besold, Tarek R.; Kühnberger, Kai-Uwe; Plaza, Enric

    2017-10-01

    Concept blending - a cognitive process which allows for the combination of certain elements (and their relations) from originally distinct conceptual spaces into a new unified space combining these previously separate elements, and enables reasoning and inference over the combination - is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.

  16. Weakly and strongly polynomial algorithms for computing the maximum decrease in uniform arc capacities

    Directory of Open Access Journals (Sweden)

    Ghiyasvand Mehdi

    2016-01-01

    Full Text Available In this paper, a new problem on a directed network is presented. Let D be a feasible network such that all arc capacities are equal to U. Given a t > 0, the network D with arc capacities U - t is called the t-network. The goal of the problem is to compute the largest t such that the t-network is feasible. First, we present a weakly polynomial time algorithm to solve this problem, which runs in O(log(nU)) maximum flow computations, where n is the number of nodes. Then, an O(m²n) time approach is presented, where m is the number of arcs. Both the weakly and strongly polynomial algorithms are inspired by McCormick and Ervolina (1994).
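    The weakly polynomial idea can be sketched as a bisection on t in which each probe is answered by a feasibility test; in the paper each test is a maximum flow computation, while the `is_feasible` oracle below is a placeholder to be backed by an actual max-flow routine, and the toy numbers are assumptions.

```python
# Bisection for the largest t with a feasible t-network (capacities U - t).
def largest_feasible_t(U, is_feasible, eps=1e-6):
    lo, hi = 0.0, float(U)               # t = U removes all capacity
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if is_feasible(mid):             # e.g. answered by a max-flow check
            lo = mid                     # feasible: try a larger decrease
        else:
            hi = mid
    return lo

# Toy oracle: pretend demands stay satisfiable while capacities exceed 3.7.
print(round(largest_feasible_t(10, lambda t: 10 - t >= 3.7), 3))   # ~6.3
```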

  17. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    Science.gov (United States)

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
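    The classical recursion itself is compact; the sketch below computes the conditional number-correct score distribution from per-item success probabilities at a fixed proficiency. It covers only the original Lord-Wingersky case, not the article's generalization to real-number item scores, and the probabilities are illustrative.

```python
# Lord-Wingersky recursion: fold items in one at a time, shifting the score
# distribution by one for a correct response.
import numpy as np

def lord_wingersky(p):
    """p[i] = P(correct on item i | theta). Returns P(score = 0..n | theta)."""
    dist = np.array([1.0])                # zero items: score 0 with probability 1
    for pi in p:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1 - pi)       # item answered incorrectly
        new[1:] += dist * pi              # item answered correctly
        dist = new
    return dist

probs = [0.9, 0.7, 0.5]                   # item probabilities at a fixed theta
print(lord_wingersky(probs))              # sums to 1 over scores 0..3
```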

  18. Positive spatial curvature does not falsify the landscape

    Science.gov (United States)

    Horn, B.

    2017-12-01

    We present a simple cosmological model where the quantum tunneling of a scalar field rearranges the energetics of the matter sector, sending a stable static ancestor vacuum with positive spatial curvature into an inflating solution with positive curvature. This serves as a proof of principle that an observation of positive spatial curvature does not falsify the hypothesis that our current observer patch originated from false vacuum tunneling in a string or field theoretic landscape. This poster submission is a summary of the work, and was presented at the 3rd annual ICPPA held in Moscow from October 2 to 5, 2017, by Prof. Rostislav Konoplich on behalf of the author.

  19. Curvature perturbation and waterfall dynamics in hybrid inflation

    International Nuclear Information System (INIS)

    Abolhasani, Ali Akbar; Firouzjahi, Hassan; Sasaki, Misao

    2011-01-01

    We investigate the parameter spaces of hybrid inflation model with special attention paid to the dynamics of waterfall field and curvature perturbations induced from its quantum fluctuations. Depending on the inflaton field value at the time of phase transition and the sharpness of the phase transition inflation can have multiple extended stages. We find that for models with mild phase transition the induced curvature perturbation from the waterfall field is too large to satisfy the COBE normalization. We investigate the model parameter space where the curvature perturbations from the waterfall quantum fluctuations vary between the results of standard hybrid inflation and the results obtained here

  20. Curvature perturbation and waterfall dynamics in hybrid inflation

    Energy Technology Data Exchange (ETDEWEB)

    Abolhasani, Ali Akbar [Department of Physics, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Firouzjahi, Hassan [School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Sasaki, Misao, E-mail: abolhasani@mail.ipm.ir, E-mail: firouz@mail.ipm.ir, E-mail: misao@yukawa.kyoto-u.ac.jp [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan)

    2011-10-01

    We investigate the parameter spaces of hybrid inflation model with special attention paid to the dynamics of waterfall field and curvature perturbations induced from its quantum fluctuations. Depending on the inflaton field value at the time of phase transition and the sharpness of the phase transition inflation can have multiple extended stages. We find that for models with mild phase transition the induced curvature perturbation from the waterfall field is too large to satisfy the COBE normalization. We investigate the model parameter space where the curvature perturbations from the waterfall quantum fluctuations vary between the results of standard hybrid inflation and the results obtained here.

  1. Management algorithm for images of hepatic incidentalomas, renal and adrenal detected by computed tomography

    International Nuclear Information System (INIS)

    Montero Gonzalez, Allan

    2012-01-01

    A literature review has been carried out on diagnostic and follow-up imaging algorithms for incidentalomas of solid abdominal organs (liver, kidneys and adrenal glands) detected by computed tomography (CT). The criteria have been unified and updated for an effective diagnosis. The proposed algorithms are presented in simplified form. The imaging techniques are specified for each pathology, showing the advantages and disadvantages of their use and justifying their application in daily practice.

  2. Numeric algorithms for parallel processors computer architectures with applications to the few-groups neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.

    1987-01-01

    A numeric algorithm and an associated computer code were developed for the rapid solution of the finite-difference representation of the few-group neutron-diffusion equations on parallel computers. Applications of the numeric algorithm on both SIMD (vector pipeline) and MIMD/SIMD (multi-CPU/vector pipeline) architectures were explored. The algorithm was successfully implemented in the two-group, 3-D neutron diffusion computer code named DIFPAR3D (DIFfusion PARallel 3-Dimension). Numerical-solution techniques used in the code include the Chebyshev polynomial acceleration technique in conjunction with the power method of outer iteration. For inner iterations, a parallel form of red-black (cyclic) line SOR with automated determination of group-dependent relaxation factors and the iteration numbers required to achieve the specified inner iteration error tolerance is incorporated. The code employs a macroscopic depletion model with trace capability for selected fission products' transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the DIFPAR3D code for realistic simulation of power reactor cores. The physics models used were proven acceptable in separate benchmarking studies.
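    A minimal sketch of the red-black (checkerboard) relaxation named above, shown as point SOR on a generic 2D five-point Poisson stencil rather than the code's group-dependent line SOR; the relaxation factor, grid and right-hand side are assumptions. Points of one color have no same-color stencil neighbors, which is what lets each half-sweep be updated in parallel.

```python
# Red-black point SOR for laplace(phi) = rhs on a unit-square grid.
import numpy as np

def red_black_sor(phi, rhs, h, omega=1.7, sweeps=200):
    n, m = phi.shape
    for _ in range(sweeps):
        for color in (0, 1):              # same-color points are independent
            for i in range(1, n - 1):
                for j in range(1, m - 1):
                    if (i + j) % 2 != color:
                        continue
                    gs = 0.25 * (phi[i-1, j] + phi[i+1, j] + phi[i, j-1]
                                 + phi[i, j+1] - h * h * rhs[i, j])
                    phi[i, j] += omega * (gs - phi[i, j])
    return phi

n = 17
phi, rhs = np.zeros((n, n)), -np.ones((n, n))    # Poisson: -laplace(phi) = 1
phi = red_black_sor(phi, rhs, h=1.0 / (n - 1))
print(round(phi[n // 2, n // 2], 4))             # ~0.074 at the square's center
```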

  3. Bacterial cell curvature through mechanical control of cell growth

    DEFF Research Database (Denmark)

    Cabeen, M.; Charbon, Godefroid; Vollmer, W.

    2009-01-01

    The cytoskeleton is a key regulator of cell morphogenesis. Crescentin, a bacterial intermediate filament-like protein, is required for the curved shape of Caulobacter crescentus and localizes to the inner cell curvature. Here, we show that crescentin forms a single filamentous structure … that collapses into a helix when detached from the cell membrane, suggesting that it is normally maintained in a stretched configuration. Crescentin causes an elongation rate gradient around the circumference of the sidewall, creating a longitudinal cell length differential and hence curvature. Such curvature … can be produced by physical force alone when cells are grown in circular microchambers. Production of crescentin in Escherichia coli is sufficient to generate cell curvature. Our data argue for a model in which physical strain borne by the crescentin structure anisotropically alters the kinetics …

  4. Discrimination of curvature from motion during smooth pursuit eye movements and fixation.

    Science.gov (United States)

    Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2017-09-01

    Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task, subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance; oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perception.

  5. The role of curvature in silica mesoporous crystals

    KAUST Repository

    Miyasaka, Keiichi; Bennett, Alfonso Garcia; Han, Lu; Han, Yu; Xiao, Changhong; Fujita, Nobuhisa; Castle, Toen; Sakamoto, Yasuhiro; Che, Shunai; Terasaki, Osamu

    2012-02-08

    Silica mesoporous crystals (SMCs) offer a unique opportunity to study micellar mesophases. Replication of non-equilibrium mesophases into porous silica structures allows the characterization of surfactant phases under a variety of chemical and physical perturbations, through methods not typically accessible to liquid crystal chemists. A pertinent example is the use of electron microscopy and crystallography, as discussed herein, for determining the fundamental role of amphiphile curvature, namely mean curvature and Gaussian curvature, which have been extensively studied in fields such as polymers, liquid crystals and biological membranes. The present work aims to highlight some current studies of interface curvature in SMCs, in which electron microscopy and electron crystallography (EC) are used to understand the geometry of the silica wall surface in bicontinuous and cage-type mesostructures through the investigation of electrostatic potential maps. Additionally, we show that by altering the synthesis conditions during the preparation of SMCs, it is possible to isolate particles during micellar mesophase transformations in the cubic bicontinuous system, allowing us to view and study epitaxial relations under the specific synthesis conditions. By studying the relationship between mesoporous structure, interface curvature and micellar mesophases using electron microscopy and EC, we hope not only to bring new insights into the formation mechanism of these unique materials but also to contribute a new way of understanding periodic liquid crystal systems. © 2012 The Royal Society.

  7. Single Lipid Molecule Dynamics on Supported Lipid Bilayers with Membrane Curvature

    Directory of Open Access Journals (Sweden)

    Philip P. Cheney

    2017-03-01

    The plasma membrane is a highly compartmentalized, dynamic material, and this organization is essential for a wide variety of cellular processes. Nanoscale domains allow proteins to organize for cell signaling, endo- and exocytosis, and other essential processes. Even in the absence of proteins, lipids can organize into domains as a result of a variety of chemical and physical interactions. One feature of membranes that affects lipid domain formation is membrane curvature. To directly test the role of curvature in lipid sorting, we measured the accumulation of two similar lipids, 1,2-Dihexadecanoyl-sn-glycero-3-phosphoethanolamine (DHPE) and hexadecanoic acid (HDA), using a supported lipid bilayer assembled over a nanopatterned surface to obtain regions of membrane curvature. Both lipids contain 16-carbon saturated tails and a head-group tag for fluorescence microscopy measurements. The accumulation of lipids at curvatures ranging from 28 nm to 55 nm radii was measured, and fluorescein-labeled DHPE accumulated more than fluorescein-labeled HDA at regions of membrane curvature. We then tested whether single biotinylated DHPE molecules sense curvature using single-particle tracking methods. As with the accumulation of fluorescein-labeled DHPE at curvature, the dynamics of single biotinylated DHPE molecules was also affected by membrane curvature, and highly confined motion was observed.

  8. New curvature-torsion relations through decomposition of the Bianchi identities

    International Nuclear Information System (INIS)

    Davies, J.B.

    1988-01-01

    The Bianchi identities relating asymmetric curvature to torsion are obtained as a new set of equations governing second-order curvature tensors. The usual contribution of symmetric curvature to the gravitational field is found to be a subset of these identities, though with an added contribution due to torsion gradients. The antisymmetric curvature two-tensor is shown to be related to the divergence of the torsion. Using a model of particle-antiparticle pair production, identification of certain torsion components with electroweak fields is proposed. These components obey equations, similar to Maxwell's, that are subsets of the linear Bianchi identities. These results are shown to be consistent with gauge and other previous analyses.

  9. Parallel scientific computing theory, algorithms, and applications of mesh based and meshless methods

    CERN Document Server

    Trobec, Roman

    2015-01-01

    This book concentrates on the synergy between computer science and numerical analysis. It is written to provide computer scientists, engineers and other experts who have to solve real problems with a firm understanding of the described approaches. The meshless solution approach is described in more detail, including the required algorithms and the methods needed to design an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes…

  10. Effects on Buildings of Surface Curvature Caused by Underground Coal Mining

    Directory of Open Access Journals (Sweden)

    Haifeng Hu

    2016-08-01

    Ground curvature caused by underground mining is one of the deformation quantities that most visibly affects buildings. To study the influence of surface curvature on buildings and to predict the movement and deformation of buildings caused by ground curvature, a prediction model based on the influence function for mining subsidence was used to establish the relationship between surface curvature and wall deformation. A prediction model of wall deformation was then established, and the surface curvature was obtained from mining subsidence prediction software. Five prediction lines were set up in the wall from bottom to top, and the predicted deformation of each line was used to calculate the crack positions in the wall, yielding the crack prediction model. The model was verified by a case study from a coal mine in Shanxi, China. The results show that when the ground curvature is positive, the crack in the wall is shaped like a “V”; when the ground curvature is negative, the crack is shaped like a “∧”. These conclusions provide the basis for a damage evaluation method for buildings in coal mine areas.

  11. Advanced Curvature Deformable Mirrors

    Science.gov (United States)

    2010-09-01

    Christ Ftaclas, Aglae Kellerer and Mark Chun, Institute for Astronomy, University of Hawaii, 640 North A‘ohoku Place, #209, Hilo, HI 96720-2700.

  12. Effect of nano-scale curvature on the intrinsic blood coagulation system

    Science.gov (United States)

    Kushida, Takashi; Saha, Krishnendu; Subramani, Chandramouleeswaran; Nandwana, Vikas; Rotello, Vincent M.

    2014-01-01

    The intrinsic coagulation activity of silica nanoparticles strongly depends on their surface curvature. Nanoparticles with higher surface curvature do not denature blood coagulation factor XII on their surface, providing a coagulation ‘silent’ surface, while nanoparticles with lower surface curvature show denaturation and concomitant coagulation. PMID:25341004

  13. A geometric construction of the Riemann scalar curvature in Regge calculus

    International Nuclear Information System (INIS)

    McDonald, Jonathan R; Miller, Warner A

    2008-01-01

    The Riemann scalar curvature plays a central role in Einstein's geometric theory of gravity. We describe a new geometric construction of this scalar curvature invariant at an event (vertex) in a discrete spacetime geometry. This allows one to constructively measure the scalar curvature using only clocks and photons. Given recent interest in discrete pre-geometric models of quantum gravity, we believe it is ever so important to reconstruct the curvature scalar with respect to a finite number of communicating observers. This derivation makes use of a new fundamental lattice cell built from elements inherited from both the original simplicial (Delaunay) spacetime and its circumcentric dual (Voronoi) lattice. The orthogonality properties between these two lattices yield an expression for the vertex-based scalar curvature which is strikingly similar to the corresponding hinge-based expression in Regge calculus (deficit angle per unit Voronoi dual area). In particular, we show that the scalar curvature is simply a vertex-based weighted average of deficits per weighted average of dual areas.
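
    Read schematically, the closing statement corresponds to an expression of the following form, where ε_h is the deficit angle and A*_h the Voronoi dual area of a hinge h meeting the vertex v; the weights w_h and the overall normalization are as defined in the paper itself, so this is a transcription of the abstract's wording rather than the paper's exact formula:

```latex
R_v \;\sim\; \frac{\sum_{h \ni v} w_h\,\varepsilon_h}{\sum_{h \ni v} w_h\,A^{*}_h},
\qquad\text{cf. the hinge-based Regge density } R_h \propto \frac{\varepsilon_h}{A^{*}_h}.
```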

  15. An ART iterative reconstruction algorithm for computed tomography of diffraction enhanced imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Zhang Li; Huang Zhifeng; Kang Kejun; Chen Zhiqiang; Fang Qiaoguang; Zhu Peiping

    2009-01-01

    X-ray diffraction enhanced imaging (DEI) has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. In this paper, we propose an Algebraic Reconstruction Technique (ART) iterative algorithm for computed tomography of diffraction enhanced imaging (DEI-CT). An Ordered Subsets (OS) technique is used to accelerate the ART reconstruction. Few-view reconstruction is also studied, and a partial differential equation (PDE) type filter, with edge-preserving and denoising properties, is used to improve image quality and eliminate artifacts. The proposed algorithm is validated with both numerical simulations and an experiment at the Beijing synchrotron radiation facility (BSRF). (authors)
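
    A minimal sketch of the reconstruction core, assuming a generic Kaczmarz step with ordered-subsets interleaving (the authors' DEI-specific system matrix and PDE filter are not reproduced; `relax` and the subset layout are illustrative choices):

```python
import numpy as np

def art_os(A, p, n_subsets=10, n_iters=20, relax=0.5):
    """ART with Ordered-Subsets acceleration.

    A : (n_rays, n_pixels) system matrix; p : measured projection data.
    Each Kaczmarz step moves the image estimate toward the hyperplane
    of one ray equation; cycling through interleaved subsets of rays
    speeds up convergence considerably.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for subset in subsets:
            for i in subset:
                if row_norms[i] == 0.0:
                    continue                      # skip empty rays
                residual = p[i] - A[i] @ x
                x += relax * (residual / row_norms[i]) * A[i]
    return x
```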

  16. Model-independent Constraints on Cosmic Curvature and Opacity

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Guo-Jian; Li, Zheng-Xiang; Xia, Jun-Qing; Zhu, Zong-Hong [Department of Astronomy, Beijing Normal University, Beijing 100875 (China); Wei, Jun-Jie, E-mail: gjwang@mail.bnu.edu.cn, E-mail: zxli918@bnu.edu.cn, E-mail: xiajq@bnu.edu.cn, E-mail: zhuzh@bnu.edu.cn, E-mail: jjwei@pmo.ac.cn [Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008 (China)

    2017-09-20

    In this paper, we propose to estimate the spatial curvature of the universe and the cosmic opacity in a model-independent way with expansion rate measurements, H(z), and type Ia supernovae (SNe Ia). On the one hand, using the nonparametric smoothing method of Gaussian processes, we reconstruct a function H(z) from opacity-free expansion rate measurements. Then, we integrate H(z) to obtain the distance modulus μ_H, which depends on the cosmic curvature. On the other hand, distances of SNe Ia can be determined by their photometric observations and thus are opacity-dependent. In our analysis, by confronting the distance moduli μ_H with those obtained from SNe Ia, we achieve estimations of both the spatial curvature and the cosmic opacity without any assumptions about the cosmological model. It should be noted that the light curve fitting parameters, which account for the distance estimation of SNe Ia, are determined in a global fit together with the cosmic opacity and spatial curvature, to remove the dependence of these parameters on cosmology. In addition, we investigate whether the inclusion of different priors for the present expansion rate (H_0: the global estimation, 67.74 ± 0.46 km s^-1 Mpc^-1, and the local measurement, 73.24 ± 1.74 km s^-1 Mpc^-1) influences the reconstructed H(z) and the subsequent estimations of the spatial curvature and cosmic opacity. Results show that, in general, a spatially flat and transparent universe is preferred by the observations. Moreover, the priors for H_0 matter a great deal. Finally, we find a strong degeneracy between the curvature and the opacity.
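
    The central distance construction is compact enough to sketch. The snippet below assumes H(z) has already been reconstructed on a grid starting at z = 0 (the Gaussian-process step is omitted) and uses the standard FRW transverse-distance formula for the curvature term:

```python
import numpy as np

C = 299_792.458  # speed of light [km/s]

def mu_from_H(z_grid, H_grid, z_eval, omega_k=0.0):
    """Distance modulus mu_H(z) from expansion-rate data H(z) [km/s/Mpc].

    Comoving distance D_C(z) = c * integral dz'/H(z') (trapezoid rule);
    the transverse distance D_M depends on the curvature Omega_k, and
    mu = 5 log10(d_L / Mpc) + 25.
    """
    dc = C * np.concatenate(([0.0], np.cumsum(
        0.5 * (1.0 / H_grid[1:] + 1.0 / H_grid[:-1]) * np.diff(z_grid))))
    dc_eval = np.interp(z_eval, z_grid, dc)       # z_grid must be increasing
    H0 = H_grid[0]                                # assumes grid starts at z = 0
    if omega_k > 0:
        k = np.sqrt(omega_k) * H0 / C
        dm = np.sinh(k * dc_eval) / k             # open universe
    elif omega_k < 0:
        k = np.sqrt(-omega_k) * H0 / C
        dm = np.sin(k * dc_eval) / k              # closed universe
    else:
        dm = dc_eval                              # flat universe
    d_lum = (1.0 + z_eval) * dm                   # luminosity distance [Mpc]
    return 5.0 * np.log10(d_lum) + 25.0
```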

  17. Long-term Results of Ventral Penile Curvature Repair in Childhood.

    Science.gov (United States)

    Golomb, Dor; Sivan, Bezalel; Livne, Pinhas M; Nevo, Amihay; Ben-Meir, David

    2018-02-01

    To assess the postpubertal outcome, in terms of recurrence and aesthetics, of ventral penile curvature repaired in infancy. Postpubertal patients treated for hypospadias and ventral penile curvature in infancy at a tertiary medical center were invited to undergo assessment of the quality of the repair. Findings were compared between patients with a straight penis after skin release and patients who required dorsal plication. The cohort included 27 patients, of mean age 16.5 years, all reported to have a straight penis after surgery. Postpubertal curvature was found in 6 of 14 patients (43%) successfully treated by skin release and in 10 of 13 patients (77%) who underwent dorsal plication (P = .087). Significant curvature (≥30 degrees) was found in 1 of 14 patients in the skin-release group and 4 of 13 in the dorsal plication group (P = .16). Rates of redo urethroplasty were 2 of 14 (14%) and 5 of 10 (50%), respectively. Patient satisfaction with the appearance of the penis did not differ significantly. Ventral penile curvature repaired in infancy often recurs after puberty. The need for dorsal plication has a trend-level association with recurrence of penile curvature in puberty; it might also be related to the degree of postpubertal penile curvature and the need for redo urethroplasty. Procedure type does not affect patient satisfaction with the postpubertal appearance of the penis. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. On the scalar curvature of self-dual manifolds

    International Nuclear Information System (INIS)

    Kim, J.

    1992-08-01

    We generalize LeBrun's explicit ''hyperbolic ansatz'' construction of self-dual metrics on connected sums of conformally flat manifolds and CP²'s through a systematic use of the theory of hyperbolic geometry and Kleinian groups. (This construction produces, for example, all self-dual manifolds with semi-free S¹-action and with either nonnegative scalar curvature or positive-definite intersection form.) We then point out a simple criterion for determining the sign of the scalar curvature of these conformal metrics. Exploiting this, we show that the sign of the scalar curvature can change on connected components of the moduli space of self-dual metrics, thereby answering a question raised by King and Kotschick. (author)

  19. On a curvature-statistics theorem

    International Nuclear Information System (INIS)

    Calixto, M; Aldaya, V

    2008-01-01

    The spin-statistics theorem in quantum field theory relates the spin of a particle to the statistics obeyed by that particle. Here we investigate an interesting correspondence or connection between curvature (κ = ±1) and quantum statistics (Fermi-Dirac and Bose-Einstein, respectively). The interrelation between the two concepts is established through vacuum coherent configurations of zero modes in quantum field theory on the compact O(3) and noncompact O(2; 1) (spatial) isometry subgroups of de Sitter and anti-de Sitter spaces, respectively. The high frequency limit is retrieved as a (zero curvature) group contraction to the Newton-Hooke (harmonic oscillator) group. We also comment on the physical significance of the vacuum energy density and the cosmological constant problem.

  1. A new fast algorithm for the evaluation of regions of interest and statistical uncertainty in computed tomography

    International Nuclear Information System (INIS)

    Huesman, R.H.

    1984-01-01

    A new algorithm for region of interest evaluation in computed tomography is described. Region of interest evaluation is a technique used to improve quantitation of the tomographic imaging process by summing (or averaging) the reconstructed quantity throughout a volume of particular significance. An important application of this procedure arises in the analysis of dynamic emission computed tomographic data, in which the uptake and clearance of radiotracers are used to determine the blood flow and/or physiological function of tissue within the significant volume. The new algorithm replaces the conventional technique of repeated image reconstructions with one in which projected regions are convolved and then used to form multiple vector inner products with the raw tomographic data sets. Quantitation of regions of interest is made without the need for reconstruction of tomographic images. The computational advantage of the new algorithm over conventional methods is between factors of 20 and 500 for typical applications encountered in medical science studies. The greatest benefit is the ease with which the statistical uncertainty of the result is computed: the entire covariance matrix for the evaluation of regions of interest can be calculated with relatively few operations. (author)
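
    Because filtered backprojection is linear, the ROI value can be written as a single inner product between the raw data and a precomputed weight vector, the projected and filtered ROI indicator. A schematic of that idea, assuming a linear FBP-type reconstruction and statistically independent projection bins for the uncertainty:

```python
import numpy as np

def roi_value(sinogram, w):
    """ROI estimate directly from projection data.

    w is the ROI indicator image forward-projected and convolved with
    the reconstruction kernel, computed once per region; no image
    reconstruction is needed for subsequent (e.g. dynamic) data sets.
    """
    return float(np.vdot(w, sinogram))

def roi_variance(w, var_per_bin):
    """Uncertainty of the ROI estimate: Var = sum_i w_i^2 sigma_i^2,
    valid for statistically independent projection bins."""
    w = np.asarray(w).ravel()
    return float(np.sum(w**2 * np.asarray(var_per_bin).ravel()))
```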

  2. Intensity-Curvature Measurement Approaches for the Diagnosis of Magnetic Resonance Imaging Brain Tumors

    Directory of Open Access Journals (Sweden)

    Carlo Ciulla

    2015-11-01

    This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches, with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to add information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray-level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension, perpendicular to the image plane, provided through the classic-curvature and the intensity-curvature functional.

  3. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata, in which the pseudorandom numbers appear in parallel on each clock cycle. The properties of these new pseudorandom number generators are studied extensively using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown that these particular cellular automata can form the basis of efficient VLSI architectures for the computations involved in Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design-for-testability circuitry.
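
    The generator idea is easy to sketch. Below is a one-dimensional cellular automaton with periodic boundaries using Wolfram's rule 30 as a stand-in (the thesis's exact rules and hybrid constructions are not given in the abstract), in which every cell yields a fresh bit on each clock cycle:

```python
import numpy as np

def ca_prng(n_cells=64, n_steps=1000, rule=30):
    """Pseudorandom words from a one-dimensional cellular automaton.

    Every cell is updated simultaneously from its left and right
    neighbours (periodic boundary), so one fresh pseudorandom bit per
    cell appears in parallel on each step, the property exploited for
    VLSI and Monte Carlo applications.
    """
    rule_table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.zeros(n_cells, dtype=np.uint8)
    state[n_cells // 2] = 1                       # single-seed initial state
    for _ in range(n_steps):
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = rule_table[(left << 2) | (state << 1) | right]
        yield state.copy()                        # one parallel word of bits
```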

  4. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], and introductions to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]); this includes well-orderings (the Buchberger algorithm [B1], [B2]) and tangent cone orderings (the Mora algorithm [M1], [MPT]) as special cases. It is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].

  5. Computational Comparison of Several Greedy Algorithms for the Minimum Cost Perfect Matching Problem on Large Graphs

    DEFF Research Database (Denmark)

    Wøhlk, Sanne; Laporte, Gilbert

    2017-01-01

    The aim of this paper is to computationally compare several algorithms for the Minimum Cost Perfect Matching Problem on an undirected complete graph. Our work is motivated by the need to solve large instances of the Capacitated Arc Routing Problem (CARP) arising in the optimization of garbage … collection in Denmark. Common heuristics for the CARP involve the optimal matching of the odd-degree nodes of a graph. The algorithms used in the comparison include the CPLEX solution of an exact formulation, the LEDA matching algorithm, a recent implementation of the Blossom algorithm, as well as six …
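
    For contrast with the exact methods being compared, one of the simplest greedy schemes can be written down directly; the paper's six greedy algorithms are not specified in the abstract, so this generic cheapest-available-edge variant is only indicative:

```python
def greedy_matching(cost):
    """Greedy heuristic for min-cost perfect matching on a complete
    graph with an even number of nodes: scan edges in nondecreasing
    cost order and accept an edge whenever both endpoints are still
    free. No optimality guarantee, unlike the Blossom algorithm.
    """
    n = len(cost)
    edges = sorted((cost[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    matched, pairs = [False] * n, []
    for c, i, j in edges:
        if not matched[i] and not matched[j]:
            matched[i] = matched[j] = True
            pairs.append((i, j))
    return pairs
```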

  6. Statistical mechanics of surfaces with curvature dependent action

    International Nuclear Information System (INIS)

    Jonsson, T.

    1987-01-01

    We review recent results about discretized random surfaces whose action (energy) depends on the extrinsic curvature. The surface tension scales to zero at an appropriate critical point if the coupling constant of the curvature term is taken to infinity. At this critical point one expects to be able to construct a continuum theory of smooth surfaces. (orig.)

  7. A 1 + 5-dimensional gravitational-wave solution. Curvature singularity and spacetime singularity

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yu-Zhu [Tianjin University, Department of Physics, Tianjin (China); Li, Wen-Du [Tianjin University, Department of Physics, Tianjin (China); Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Dai, Wu-Sheng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Nankai University and Tianjin University, LiuHui Center for Applied Mathematics, Tianjin (China)

    2017-12-15

    We obtain a 1 + 5-dimensional cylindrical gravitational-wave solution of the Einstein equation that contains two curvature singularities. We then show that one of the curvature singularities can be removed by an extension of the spacetime. The result exemplifies that a curvature singularity is not always a spacetime singularity; in other words, a curvature singularity cannot serve as a criterion for spacetime singularities. (orig.)

  8. Computational complexity of algorithms for sequence comparison, short-read assembly and genome alignment.

    Science.gov (United States)

    Baichoo, Shakuntala; Ouzounis, Christos A

    A multitude of algorithms for sequence comparison, short-read assembly and whole-genome alignment have been developed in the general context of molecular biology, to support technology development for high-throughput sequencing, numerous applications in genome biology and fundamental research on comparative genomics. The computational complexity of these algorithms has been reported in the original research papers, yet this often neglected property has never been reviewed systematically and for a wider audience. We provide a review of the space and time complexity of key sequence analysis algorithms and highlight their properties in a comprehensive manner, in order to identify potential opportunities for further research in algorithm or data structure optimization. The complexity aspect is poised to become pivotal as we face challenges related to the continuous increase of genomic data on unprecedented scales and complexity in the foreseeable future, when robust biological simulation at the cell level and above becomes a reality. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Numerical and Theoretical Investigations Concerning the Continuous-Surface-Curvature Effect in Compressor Blades

    Directory of Open Access Journals (Sweden)

    Yin Song

    2014-12-01

    Though the importance of curvature continuity for compressor blade performance has been recognized, two major questions remain: the respective effects of curvature continuity at the leading-edge blend point and on the main surface, and the contradiction between traditional theory and experimental observations regarding the effect of novel leading-edge shapes with smaller curvature discontinuity and a sharper nose. In this paper, an optimization method to design continuous-curvature blade profiles that deviate little from datum blades is proposed, and numerical and theoretical analyses are carried out to investigate the effect of curvature continuity on blade performance. The results show that curvature continuity at the leading-edge blend point helps to eliminate the separation bubble, thus improving blade performance. Main-surface curvature continuity is also beneficial, although its effects are much smaller than those of blend-point curvature continuity. Furthermore, two factors are observed to control the leading-edge spike: the curvature discontinuity at the blend point, which dominates at small incidences, and the nose curvature, which dominates at large incidences. To the authors’ knowledge, such mechanisms have not been reported before, and they can help to solve the sharp-leading-edge paradox.

  10. Cloud identification using genetic algorithms and massively parallel computation

    Science.gov (United States)

    Buckles, Bill P.; Petry, Frederick E.

    1996-01-01

    As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, those for similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used, although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for the Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). Given that one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated before, so these results are encouraging even though less impressive than the cloud experiment. Successful completion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of genetic algorithm (GA) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user …
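
    The report gives no listing; a serial skeleton of the kind of GA involved, under common textbook choices (binary chromosomes, tournament selection, one-point crossover, bit-flip mutation), with the fitness function, the classifier encoding and the MasPar parallel evaluation all left out as assumptions of the sketch:

```python
import numpy as np

def genetic_algorithm(fitness, n_genes, pop_size=200, generations=100,
                      p_mut=0.01, seed=0):
    """Serial GA skeleton: binary chromosomes, tournament selection,
    one-point crossover, bit-flip mutation. pop_size is assumed even;
    a massively parallel version would evaluate `fitness` for the
    whole population at once.
    """
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_genes))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(scores[a] > scores[b], a, b)]   # tournaments
        children = parents.copy()
        cuts = rng.integers(1, n_genes, pop_size // 2)
        for k, c in enumerate(cuts):                           # one-point crossover
            i, j = 2 * k, 2 * k + 1
            children[i, c:], children[j, c:] = parents[j, c:].copy(), parents[i, c:].copy()
        flips = rng.random(children.shape) < p_mut             # bit-flip mutation
        pop = np.where(flips, 1 - children, children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(scores.argmax())]
```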

  11. A theoretically exact reconstruction algorithm for helical cone-beam differential phase-contrast computed tomography

    International Nuclear Information System (INIS)

    Li Jing; Sun Yi; Zhu Peiping

    2013-01-01

    Differential phase-contrast computed tomography (DPC-CT) reconstruction problems are usually solved with parallel-, fan- or cone-beam algorithms. For rod-shaped objects, the x-ray beam cannot cover all slices of the sample at the same time, so if a rod-shaped sample is to be reconstructed by the above algorithms, one must alternately translate and rotate the sample, which lowers efficiency. Helical cone-beam CT can significantly improve scanning efficiency for rod-shaped objects over the other algorithms. In this paper, we propose a theoretically exact filtered-backprojection algorithm for helical cone-beam DPC-CT, which can be applied to reconstruct the refractive index decrement distribution of a sample directly from two-dimensional differential phase-contrast images. Numerical simulations are conducted to verify the proposed algorithm. Our work provides a potential solution for inspecting rod-shaped samples with DPC-CT, which may become applicable as DPC-CT equipment evolves. (paper)

  12. Computational issues in alternating projection algorithms for fixed-order control design

    DEFF Research Database (Denmark)

    Beran, Eric Bengt; Grigoriadis, K.

    1997-01-01

    Alternating projection algorithms have recently been introduced to solve fixed-order controller design problems described by linear matrix inequalities and non-convex coupling rank constraints. In this work, extensive numerical experimentation using proposed benchmark fixed-order control design examples is used to indicate the computational efficiency of the method. These results indicate that the proposed alternating projections are effective in obtaining low-order controllers for small and medium order problems …

  13. Amphipathic motifs in BAR domains are essential for membrane curvature sensing

    DEFF Research Database (Denmark)

    Bhatia, Vikram K; Madsen, Kenneth L; Bolinger, Pierre-Yves

    2009-01-01

    BAR (Bin/Amphiphysin/Rvs) domains and amphipathic alpha-helices (AHs) are believed to be sensors of membrane curvature, thus facilitating the assembly of protein complexes on curved membranes. Here, we used quantitative fluorescence microscopy to compare the binding of both motifs on single nanosized liposomes of different diameters and therefore membrane curvature. Characterization of members of the three BAR domain families showed, surprisingly, that the crescent-shaped BAR dimer with its positively charged concave face is not able to sense membrane curvature. Mutagenesis on BAR domains showed … that membrane curvature sensing critically depends on the N-terminal AH, and furthermore that BAR domains sense membrane curvature through hydrophobic insertion in lipid packing defects and not through electrostatics. Consequently, amphipathic motifs, such as AHs, that are often associated with BAR domains …

  14. A computational environment for long-term multi-feature and multi-algorithm seizure prediction.

    Science.gov (United States)

    Teixeira, C A; Direito, B; Costa, R P; Valderrama, M; Feldwisch-Drentrup, H; Nikolopoulos, S; Le Van Quyen, M; Schelter, B; Dourado, A

    2010-01-01

    The daily life of epilepsy patients is constrained by the possibility of occurrence of seizures. Until now, seizures cannot be predicted with sufficient sensitivity and specificity. Most seizure prediction studies have focused on a small number of patients and frequently assume unrealistic hypotheses. This paper adopts the view that for the appropriate development of reliable predictors one should consider long-term recordings and several features and algorithms integrated in one software tool. A computational environment, based on Matlab®, is presented, aiming to be an innovative tool for seizure prediction. It results from the need for a powerful and flexible tool for long-term EEG/ECG analysis by multiple features and algorithms. After being extracted, features can be subjected to several reduction and selection methods and then used for prediction. Predictions can be conducted based on optimized thresholds or by applying computational intelligence methods. One important aspect is the integrated evaluation of the seizure prediction characteristics of the developed predictors.

  15. Highly efficient computer algorithm for identifying layer thickness of atomically thin 2D materials

    Science.gov (United States)

    Lee, Jekwan; Cho, Seungwan; Park, Soohyun; Bae, Hyemin; Noh, Minji; Kim, Beom; In, Chihun; Yang, Seunghoon; Lee, Sooun; Seo, Seung Young; Kim, Jehyun; Lee, Chul-Ho; Shim, Woo-Young; Jo, Moon-Ho; Kim, Dohun; Choi, Hyunyong

    2018-03-01

    The fields of layered-material research, such as transition-metal dichalcogenides (TMDs), have demonstrated that optical, electrical and mechanical properties strongly depend on the layer number N. Thus, efficient and accurate determination of N is the most crucial step before device fabrication. The existing experimental technique using an optical microscope is the most widely used one to identify N. However, a critical drawback of this approach is that it relies on extensive laboratory experience to estimate N; it requires a very time-consuming image-searching task assisted by human eyes, plus secondary measurements such as atomic force microscopy and Raman spectroscopy to confirm N. In this work, we introduce a computer algorithm based on image analysis of a quantized optical contrast. We show that our algorithm applies to a wide variety of layered materials, including graphene, MoS2, and WS2, regardless of substrate. The algorithm consists of two parts. First, it sets up an appropriate boundary between target flakes and substrate. Second, to compute N, it automatically calculates the optical contrast using an adaptive RGB estimation process for each target, which results in a matrix of integer Ns and returns a map of N onto the target flake position. Using conventional desktop computational power, the time taken to display the final N matrix was 1.8 s on average for an image of 1280 by 960 pixels, with an accuracy of 90% (six estimation errors among 62 samples) when compared to the other methods. To show the effectiveness of our algorithm, we also applied it to TMD flakes transferred onto optically transparent c-axis sapphire substrates and obtained a similar accuracy of 94% (two estimation errors among 34 samples).
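
    A stripped-down version of the second part (contrast-to-N quantization) might look as follows; the paper's boundary detection and adaptive RGB estimation are replaced here by a user-supplied substrate mask and a calibrated per-layer contrast step, both assumptions of this sketch:

```python
import numpy as np

def layer_map(img_rgb, substrate_mask, contrast_per_layer):
    """Map quantized optical contrast to an integer layer number N.

    img_rgb : float array (H, W, 3); substrate_mask : bool array marking
    bare-substrate pixels. Relative contrast against the mean substrate
    colour is averaged over RGB, divided by the calibrated single-layer
    contrast step, and rounded to the nearest integer.
    """
    substrate = img_rgb[substrate_mask].mean(axis=0)             # mean substrate RGB
    contrast = ((substrate - img_rgb) / substrate).mean(axis=2)  # flakes darker than substrate
    n = np.rint(contrast / contrast_per_layer).astype(int)
    n[substrate_mask] = 0                                        # force N = 0 on substrate
    return np.clip(n, 0, None)
```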

  16. On the curvature of transmitted intensity plots in broad beam studies

    International Nuclear Information System (INIS)

    El-Kateb, A.H.

    2000-01-01

    Transmission of a broad beam of gamma rays of 81- and 356-keV energies from ¹³³Ba is studied singly and dually. This study is the first to deal with the curvature of the intensity plots. The targets are dextrose solutions with percentage concentrations up to 0.125 and soil containing water with concentrations up to 0.319. The logarithmic intensity plots are expressed in terms of a polynomial in the concentration. The curvatures of the plots are measured and calculated on the basis of theoretical mass attenuation coefficients. The results are discussed in conjunction with buildup factors and the probabilities of photoelectric and Compton interactions. The curvatures show maxima when incoherent interaction prevails; this is clearly demonstrated for the single 356-keV energy and for the dual 81- and 356-keV applied energies. A comparison is made between the measured and calculated curvatures. The concept of curvature is also applied to published results for narrow-beam geometry. Correspondingly, this is the first study to introduce curvature, instead of buildup, as a measure of transmitted collided photons

  17. Evolution of curvature perturbation in generalized gravity theories

    International Nuclear Information System (INIS)

    Matsuda, Tomohiro

    2009-01-01

    Using cosmological perturbation theory in terms of the δN formalism, we find a simple formulation for the evolution of the curvature perturbation in generalized gravity theories. Compared with the standard gravity theory, a crucial difference appears at the end-boundary of the inflationary stage, which is due to the non-ideal form of the energy-momentum tensor that depends explicitly on the curvature scalar. Recent work shows that an ultraviolet-complete quantum theory of gravity (Horava-Lifshitz gravity) can be approximated by a generalized gravity action. Our paper may be an important step in understanding the evolution of the curvature perturbation during inflation, where the energy-momentum tensor may not take the ideal form due to corrections from the fundamental theory.
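
    For reference, the textbook δN expansion on which such formulations are built, with ζ the curvature perturbation on uniform-density slices and N the number of e-folds from an initial flat slice (the paper's modification enters through the end-of-inflation boundary, not through this relation itself):

```latex
\zeta \;=\; \delta N
 \;=\; \sum_I \frac{\partial N}{\partial \phi_I}\,\delta\phi_I
 \;+\; \frac{1}{2}\sum_{I,J} \frac{\partial^2 N}{\partial \phi_I\,\partial \phi_J}\,
       \delta\phi_I\,\delta\phi_J \;+\;\cdots
```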

  18. Vibration Analysis of Circular Arch Element Using Curvature

    Directory of Open Access Journals (Sweden)

    H. Saffari

    2008-01-01

    In this paper, a finite element technique was used to determine the natural frequencies and mode shapes of a circular arch element, based on the curvature, which can fully represent the bending energy; through the equilibrium equations, the shear and axial strain energies were incorporated into the formulation. The treatment of general boundary conditions does need consideration when the element is formulated in terms of curvature. This is achieved by introducing a transformation matrix between nodal curvatures and nodal displacements. The equation of motion for the element was obtained from the Lagrangian equation. Four examples are presented in order to verify the element formulation and its analytical capability.

  19. A major QTL controls susceptibility to spinal curvature in the curveback guppy

    Directory of Open Access Journals (Sweden)

    Dreyer Christine

    2011-01-01

    Background: Understanding the genetic basis of heritable spinal curvature would benefit medicine and aquaculture. Heritable spinal curvature among otherwise healthy children (i.e. idiopathic scoliosis and Scheuermann kyphosis) accounts for more than 80% of all spinal curvatures and imposes a substantial healthcare cost through bracing, hospitalizations, surgery, and chronic back pain. In aquaculture, the prevalence of heritable spinal curvature can reach as high as 80% of a stock, and thus imposes a substantial cost through production losses. The genetic basis of heritable spinal curvature is unknown, so the objective of this work is to identify quantitative trait loci (QTL) affecting heritable spinal curvature in the curveback guppy. Prior work with curveback has demonstrated phenotypic parallels to human idiopathic-type scoliosis, suggesting shared biological pathways for the deformity. Results: A major-effect QTL that acts in a recessive manner and accounts for curve susceptibility was detected in an initial mapping cross on LG 14. In a second cross, we confirmed this susceptibility locus and fine-mapped it to a 5 cM region that explains 82.6% of the total phenotypic variance. Conclusions: We identify a major QTL that controls susceptibility to curvature. This locus contains over 100 genes, including MTNR1B, a candidate gene for human idiopathic scoliosis. The identification of genes associated with heritable spinal curvature in the curveback guppy has the potential to elucidate the biological basis of spinal curvature among humans and economically important teleosts.

  20. Exploiting Genomic Knowledge in Optimising Molecular Breeding Programmes: Algorithms from Evolutionary Computing

    Science.gov (United States)

    O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.

    2012-01-01

    Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279

  1. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2014-09-01

    Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied, with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example this is the minimum-time ski line, represented as a piecewise-linear function (a method for eliminating infeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity), and possible extensions of the basic algorithm are described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy- and cost-efficient (which is crucial in embedded systems, e.g., trajectory planners of autonomous robots). The presented algorithms can be applied to many trajectory optimization problems, including those having a black-box performance measure.
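
    The graph-search core is compact. A serial sketch under simplifying assumptions: the node state is a gate position only, whereas the paper's planner also carries the simulated dynamics; `step_time` stands in for the black-box segment simulation:

```python
def min_time_trajectory(stage_nodes, step_time):
    """Dynamic programming over a layered grid graph.

    stage_nodes[k] lists the candidate positions at stage k;
    step_time(u, v) returns the simulated time of the straight segment
    u -> v. The cheapest stage-by-stage path approximates the global
    minimum-time piecewise-linear trajectory.
    """
    best = {v: 0.0 for v in stage_nodes[0]}       # cost to reach each node
    back = [{} for _ in stage_nodes]              # predecessor links
    for k in range(1, len(stage_nodes)):
        new_best = {}
        for v in stage_nodes[k]:
            t, u = min(((best[u] + step_time(u, v), u)
                        for u in stage_nodes[k - 1]), key=lambda s: s[0])
            new_best[v], back[k][v] = t, u
        best = new_best
    final = min(best, key=best.get)               # cheapest last-stage node
    path, v = [final], final
    for k in range(len(stage_nodes) - 1, 0, -1):  # backtrack the optimal line
        v = back[k][v]
        path.append(v)
    return path[::-1], best[final]
```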

  2. Geometry-specific scaling of detonation parameters from front curvature

    International Nuclear Information System (INIS)

    Jackson, Scott I.; Short, Mark

    2011-01-01

    It has previously been asserted that classical detonation curvature theory predicts that the critical diameter and the diameter-effect curve of a cylindrical high-explosive charge should scale with twice the thickness of an analogous two-dimensional explosive slab. The varied agreement of experimental results with this expectation has led some to question the ability of curvature-based concepts to predict detonation propagation in non-ideal explosives. This study addresses such claims by showing that the expected scaling relationship (hereafter referred to as d = 2w) is not consistent with curvature-based Detonation Shock Dynamics (DSD) theory.

  3. Public and private space curvature in Robertson-Walker universes.

    Science.gov (United States)

    Rindler, W.

    1981-05-01

    The question is asked: what space curvature would a fundamental observer in an ideal Robertson-Walker universe obtain by direct local spatial measurements, i.e., without reference to the motion pattern of the other galaxies? The answer is that he obtains the curvature K̃ of his “private” space generated by all the geodesics orthogonal to his world line at the moment in question, and that K̃ is related to the usual curvature K = k/R² of the “public” space of galaxies by K̃ = K + H²/c², where H is Hubble's parameter.

  4. Studying biomolecule localization by engineering bacterial cell wall curvature.

    Directory of Open Access Journals (Sweden)

    Lars D Renner

    In this article we describe two techniques for exploring the relationship between bacterial cell shape and the intracellular organization of proteins. First, we created microchannels in a layer of agarose to reshape live bacterial cells and predictably control their mean cell wall curvature, and quantified the influence of curvature on the localization and distribution of proteins in vivo. Second, we used agarose microchambers to reshape bacteria whose cell wall had been chemically and enzymatically removed. By combining microstructures of different geometries with fluorescence microscopy, we determined the relationship between bacterial shape and localization for two different membrane-associated proteins: (i) the cell-shape-related protein MreB of Escherichia coli, which is positioned along the long axis of the rod-shaped cell; and (ii) the negative-curvature-sensing cell division protein DivIVA of Bacillus subtilis, which is positioned primarily at cell division sites. Our studies of intracellular organization in live cells of E. coli and B. subtilis demonstrate that MreB is largely excluded from areas of high negative curvature, whereas DivIVA localizes preferentially to regions of high negative curvature. These studies highlight a unique approach for studying the relationship between cell shape and intracellular organization in intact, live bacteria.

  5. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. This is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because search comes as a subroutine in many important algorithms. The quantum database search of Grover achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for a single and for multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
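
    A toy statevector simulation makes the quadratic speedup concrete (single marked item; the GRK partial-search variant is not reproduced here):

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm for one marked item among N = 2^n.

    Each iteration applies the oracle (a phase flip on the marked
    amplitude) and the diffusion operator (inversion about the mean);
    about (pi/4) sqrt(N) iterations concentrate the probability on the
    marked item, versus about N/2 classical queries on average.
    """
    N = 2 ** n_qubits
    n_iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    amp = np.full(N, 1.0 / np.sqrt(N))            # uniform superposition
    for _ in range(n_iters):
        amp[marked] *= -1.0                       # oracle: phase flip
        amp = 2.0 * amp.mean() - amp              # inversion about the mean
    return int(np.argmax(amp ** 2)), n_iters      # most probable outcome
```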

  6. Codimension two branes and distributional curvature

    International Nuclear Information System (INIS)

    Traschen, Jennie

    2009-01-01

    In general relativity, there is a well-developed formalism for working with the approximation that a gravitational source is concentrated on a shell, or codimension one surface. In contrast, there are obstacles to concentrating sources on surfaces of higher codimension, for example, a string in a spacetime of dimension greater than or equal to four. Here it is shown that, by giving up some of the generality of the codimension one case, curvature can be concentrated on submanifolds of codimension two. A class of metrics is identified such that (1) the scalar curvature and Ricci densities exist as distributions with support on a codimension two submanifold, and (2) using the Einstein equation, the distributional curvature corresponds to a concentrated stress-energy with equation of state p = -ρ, where p is the isotropic pressure tangent to the submanifold and ρ is the energy density. This is the appropriate stress-energy to describe a self-gravitating brane governed by an area action, or a braneworld de Sitter cosmology. The possibility of having a different equation of state arise from a wider class of metrics is discussed.

  7. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    Science.gov (United States)

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  8. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    Science.gov (United States)

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem, a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
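
    The quantum part of Simon's algorithm returns n-bit strings y satisfying y·s = 0 (mod 2) for the hidden period s; the classical post-processing then solves this linear system over GF(2). The following is a hedged sketch of that post-processing step (not the authors' optical implementation), with the null-space search done by brute force, which is adequate for the small n of such experiments.

```python
import numpy as np

def recover_secret(ys, n):
    """Recover the hidden string s from measured bitstrings y with y . s = 0 (mod 2).

    ys: list of n-bit outcomes (as integers); n - 1 independent ones suffice.
    Assumes a unique nonzero solution exists (Simon's promise).
    """
    A = np.array([[(y >> (n - 1 - j)) & 1 for j in range(n)] for y in ys],
                 dtype=np.uint8)
    # Test every nonzero candidate s against all equations (fine for small n).
    for s in range(1, 2 ** n):
        bits = np.array([(s >> (n - 1 - j)) & 1 for j in range(n)], dtype=np.uint8)
        if not np.any((A @ bits) % 2):
            return s
    return 0

# Two-qubit logical version (as in the experiment): hidden s = 11,
# so the algorithm's measurements can only yield y in {00, 11}.
print(format(recover_secret([0b11], n=2), "02b"))  # -> 11
```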

  9. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    Science.gov (United States)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how the techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the resulting algorithms.
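
    For intuition, a solvent here is a matrix X satisfying X² + A₁X + A₀ ≡ 0 (mod p). The sketch below finds all solvents by exhaustive search over GF(p); it is not the paper's lambda-matrix algorithm, only a ground-truth baseline usable for tiny matrix sizes, since it enumerates p^(n²) candidates.

```python
import itertools
import numpy as np

def solvents_brute_force(A1, A0, p):
    """All solvents X of the monic polynomial X^2 + A1 X + A0 = 0 over GF(p),
    found by exhaustion. Exponential in the matrix size: a check, not a method.
    """
    n = A0.shape[0]
    A1, A0 = np.asarray(A1) % p, np.asarray(A0) % p
    found = []
    for entries in itertools.product(range(p), repeat=n * n):
        X = np.array(entries).reshape(n, n)
        if not ((X @ X + A1 @ X + A0) % p).any():
            found.append(X)
    return found

# Over GF(3): X^2 + 2I = (X - I)(X - 2I) mod 3, so I and 2I are solvents,
# along with every other involution mod 3 (e.g. the swap matrix).
A1 = np.zeros((2, 2), dtype=int)
A0 = 2 * np.eye(2, dtype=int)
print(len(solvents_brute_force(A1, A0, 3)))
```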

  10. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS are two highly successful applications of modern linear algebra in computer science and engineering. They constitute essential technologies that account for the immense growth and…
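
    For reference, the baseline that such acceleration techniques start from is plain power iteration on the Google matrix. A minimal sketch (with uniform teleportation and dangling-page handling, not the thesis's accelerated variants):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
    """PageRank by power iteration. adj[i][j] = 1 if page i links to page j.
    Dangling pages distribute their rank uniformly.
    """
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    out = A.sum(axis=1)
    # Row-stochastic transition matrix; dangling rows become uniform.
    P = np.where(out[:, None] > 0, A / np.maximum(out, 1.0)[:, None], 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = damping * (P.T @ r) + (1.0 - damping) / n
        if np.abs(r_next - r).sum() < tol:   # L1 convergence test
            return r_next
        r = r_next
    return r

# Tiny 4-page web: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}, 3 -> {2}.
adj = [[0, 1, 1, 0],
       [0, 0, 1, 0],
       [1, 0, 0, 0],
       [0, 0, 1, 0]]
print(pagerank(adj).round(4))
```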

  11. Quantitative analysis and prediction of curvature in leucine-rich repeat proteins.

    Science.gov (United States)

    Hindle, K Lauren; Bella, Jordi; Lovell, Simon C

    2009-11-01

    Leucine-rich repeat (LRR) proteins form a large and diverse family. They have a wide range of functions, most of which involve the formation of protein-protein interactions. All known LRR structures form curved solenoids, although there is large variation in their curvature. It is this curvature that determines the shape and dimensions of the inner space available for ligand binding. Unfortunately, large-scale parameters such as the overall curvature of a protein domain are extremely difficult to predict. Here, we present a quantitative analysis of the determinants of curvature of this family. Individual repeats typically range in length between 20 and 30 residues and have a variety of secondary structures on their convex side. The observed curvature of the LRR domains correlates poorly with the lengths of their individual repeats. We have, therefore, developed a scoring function based on the secondary structure of the convex side of the protein that allows prediction of the overall curvature with a high degree of accuracy. We also demonstrate the effectiveness of this method in selecting a suitable template for comparative modeling. We have developed an automated, quantitative protocol that can be used to accurately predict the curvature of leucine-rich repeat proteins of unknown structure from sequence alone. This protocol is available as an online resource at http://www.bioinf.manchester.ac.uk/curlrr/.

  12. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems.

    Directory of Open Access Journals (Sweden)

    Hajara Idris

    Full Text Available The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate together with a checkpoint-based rollback recovery strategy. Checkpointing aims to reduce the amount of work that is lost upon failure of the system by periodically saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show that there is an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance of the two algorithms was measured in terms of three main scheduling performance metrics: makespan, throughput and average turnaround time.
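
    To make the checkpointing half of the approach concrete, the sketch below gives a Monte Carlo estimate of job completion time on a resource with exponentially distributed failures, rolling back to the last checkpoint on each failure. It illustrates only the recovery strategy, not the paper's ACO scheduler, and the checkpoint cost and rates are made-up parameters.

```python
import random

def expected_completion_time(job_len, failure_rate, ckpt_interval,
                             ckpt_cost=0.1, trials=10_000):
    """Mean completion time (same units as job_len) under checkpointing.
    On each failure, work since the last checkpoint is lost and redone.
    """
    total = 0.0
    for _ in range(trials):
        t, done = 0.0, 0.0
        next_fail = random.expovariate(failure_rate)
        while done < job_len:
            target = min(done + ckpt_interval, job_len)
            need = target - done
            if t + need <= next_fail:
                # Segment finishes before the next failure; pay checkpoint cost.
                t += need + (ckpt_cost if target < job_len else 0.0)
                done = target
            else:
                # Failure: progress since the last checkpoint is lost.
                t = next_fail
                next_fail = t + random.expovariate(failure_rate)
        total += t
    return total / trials

# 100-hour job, mean time between failures of 50 hours:
print(expected_completion_time(100, 1 / 50, ckpt_interval=5))    # modest overhead
print(expected_completion_time(100, 1 / 50, ckpt_interval=100))  # restart from scratch
```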

  13. An algorithm to compute a rule for division problems with multiple references

    Directory of Open Access Journals (Sweden)

    Sánchez Sánchez, Francisca J.

    2012-01-01

    Full Text Available In this paper we consider an extension of the classic division problem with claims: the division problem with multiple references. Hinojosa et al. (2012) provide a solution for this type of problem. The aim of this work is to extend their results by proposing an algorithm that calculates allocations based on these results. All computational details are provided in the paper.

  14. Spinal curvature and characteristics of postural change in pregnant women.

    Science.gov (United States)

    Okanishi, Natsuko; Kito, Nobuhiro; Akiyama, Mitoshi; Yamamoto, Masako

    2012-07-01

    Pregnant women often report complaints due to physiological and postural changes. Postural changes during pregnancy may cause low back pain and pelvic girdle pain. This study aimed to compare the characteristics of postural change in pregnant women with those in non-pregnant women. Prospective case-control study. Pregnancy care center. Fifteen women at 17-34 weeks of pregnancy comprised the study group, while 10 non-pregnant female volunteers comprised the control group. Standing posture was evaluated in the sagittal plane with static digital pictures. Two angles were measured by image analysis software: (1) between the trunk and pelvis; and (2) between the trunk and lower extremity. Spinal curvature was measured with Spinal Mouse® to calculate the means of sacral inclination, thoracic and lumbar curvature and inclination. The principal components were calculated until eigenvalues surpassed 1. Three distinct factors with eigenvalues of 1.00-2.49 were identified, consistent with lumbosacral spinal curvature and inclination, thoracic spine curvature, and inclination of the body. These factors accounted for 77.2% of the total variance in posture variables. Eleven pregnant women showed postural characteristics of lumbar kyphosis and sacral posterior inclination. Body inclination showed a variety of patterns compared with those in healthy women. Spinal curvature demonstrated a tendency toward lumbar kyphosis in pregnant women. Pregnancy may cause changes in spinal curvature and posture, which may in turn lead to relevant symptoms. Our data provide a basis for investigating the effects of spinal curvature and postural changes on symptoms during pregnancy.

  15. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    Directory of Open Access Journals (Sweden)

    Shaat Musbah

    2010-01-01

    Full Text Available Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in the PU bands as well as the active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm achieves near-optimal performance with low computational complexity and proves the efficiency of using FBMC in the CR context.
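
    The structure of the underlying optimization is capacity maximization under a total-power budget and per-subcarrier caps derived from the PU interference limits. Below is a hedged sketch of a generic solver for that structure, capped water-filling by bisection on the water level; the channel gains and caps are illustrative, and this is not the paper's specific suboptimal algorithm.

```python
import numpy as np

def capped_waterfilling(gains, p_total, caps, iters=100):
    """Maximize sum(log2(1 + p_i * g_i)) s.t. sum(p_i) <= p_total and
    0 <= p_i <= caps[i]. Bisection on the water level mu, where
    p_i = clip(mu - 1/g_i, 0, caps[i]).
    """
    gains, caps = np.asarray(gains, float), np.asarray(caps, float)

    def alloc(level):
        return np.clip(level - 1.0 / gains, 0.0, caps)

    lo, hi = 0.0, 1.0 / gains.min() + p_total + caps.max()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if alloc(mid).sum() > p_total:
            hi = mid
        else:
            lo = mid
    p = alloc(lo)
    return p, np.log2(1.0 + p * gains).sum()

# Four subcarriers with decreasing gains and a uniform interference cap:
p, capacity = capped_waterfilling([2.0, 1.0, 0.5, 0.25], p_total=4.0,
                                  caps=[1.5] * 4)
print(p.round(3), round(capacity, 3))   # strong subcarriers hit their caps
```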

  16. Waterfall field in hybrid inflation and curvature perturbation

    International Nuclear Information System (INIS)

    Gong, Jinn-Ouk; Sasaki, Misao

    2011-01-01

    We study carefully the contribution of the waterfall field to the curvature perturbation at the end of hybrid inflation. In particular we clarify the parameter dependence analytically under reasonable assumptions on the model parameters. After calculating the mode function of the waterfall field, we use the δN formalism and confirm the previously obtained result that the power spectrum is very blue with the index 4 and is absolutely negligible on large scales. However, we also find that the resulting curvature perturbation is highly non-Gaussian and hence we calculate the bispectrum. We find that the bispectrum is at leading order independent of momentum and exhibits its peak at the equilateral limit, though it is unobservably small on large scales. We also present the one-point probability distribution function of the curvature perturbation

  17. Waterfall field in hybrid inflation and curvature perturbation

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Jinn-Ouk [Instituut-Lorentz for Theoretical Physics, Universiteit Leiden, 2333 CA Leiden (Netherlands); Sasaki, Misao, E-mail: jgong@lorentz.leidenuniv.nl, E-mail: misao@yukawa.kyoto-u.ac.jp [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan)

    2011-03-01

    We study carefully the contribution of the waterfall field to the curvature perturbation at the end of hybrid inflation. In particular we clarify the parameter dependence analytically under reasonable assumptions on the model parameters. After calculating the mode function of the waterfall field, we use the δN formalism and confirm the previously obtained result that the power spectrum is very blue with the index 4 and is absolutely negligible on large scales. However, we also find that the resulting curvature perturbation is highly non-Gaussian and hence we calculate the bispectrum. We find that the bispectrum is at leading order independent of momentum and exhibits its peak at the equilateral limit, though it is unobservably small on large scales. We also present the one-point probability distribution function of the curvature perturbation.

  18. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
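
    For reference, the serial recurrence the paper decomposes is I(x, y) = i(x, y) + I(x−1, y) + I(x, y−1) − I(x−1, y−1), after which any rectangular sum costs four lookups. A plain software version follows (the paper's contribution is a row-parallel hardware decomposition, not this code):

```python
import numpy as np

def integral_image(img):
    """Integral image: I[y, x] = sum of img[0:y+1, 0:x+1]."""
    img = np.asarray(img, dtype=np.int64)
    I = np.zeros_like(img)
    for y in range(img.shape[0]):
        row_sum = 0
        for x in range(img.shape[1]):
            row_sum += img[y, x]                      # running sum of this row
            I[y, x] = row_sum + (I[y - 1, x] if y else 0)
    return I

def box_sum(I, top, left, bottom, right):
    """Sum over img[top:bottom+1, left:right+1] with four lookups."""
    total = I[bottom, right]
    if top:
        total -= I[top - 1, right]
    if left:
        total -= I[bottom, left - 1]
    if top and left:
        total += I[top - 1, left - 1]
    return total

img = np.arange(16).reshape(4, 4)
I = integral_image(img)
print(box_sum(I, 1, 1, 2, 2), img[1:3, 1:3].sum())    # both 30
```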

  19. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  20. Concurrent validity of an automated algorithm for computing the center of pressure excursion index (CPEI).

    Science.gov (United States)

    Diaz, Michelle A; Gibbons, Mandi W; Song, Jinsup; Hillstrom, Howard J; Choe, Kersti H; Pasquale, Maria R

    2018-01-01

    Center of Pressure Excursion Index (CPEI), a parameter computed from the distribution of plantar pressures during the stance phase of barefoot walking, has been used to assess dynamic foot function. The original custom program developed to calculate CPEI required the oversight of a user who could manually correct for certain exceptions to the computational rules. A new fully automatic program has been developed to calculate CPEI with an algorithm that accounts for these exceptions. The purpose of this paper is to compare the resulting CPEI values computed by these two programs on plantar pressure data from both asymptomatic and pathologic subjects. If comparable, the new program offers significant benefits: reduced potential for variability due to rater discretion and faster CPEI calculation. CPEI values were calculated from barefoot plantar pressure distributions during comfortably paced walking in 61 healthy asymptomatic adults, 19 diabetic adults with moderate hallux valgus, and 13 adults with mild hallux valgus. Right foot data for each subject were analyzed with linear regression and a Bland-Altman plot. The automated algorithm yielded CPEI values that were linearly related to those of the original program (R² = 0.99), with close agreement between the two computation methods. Results of this analysis suggest that the new automated algorithm may be used to calculate CPEI on both healthy and pathologic feet.

  1. Differential geometric structures of stream functions: incompressible two-dimensional flow and curvatures

    International Nuclear Information System (INIS)

    Yamasaki, K; Iwayama, T; Yajima, T

    2011-01-01

    The Okubo-Weiss field, frequently used for partitioning incompressible two-dimensional (2D) fluids into coherent and incoherent regions, corresponds to the Gaussian curvature of the stream function. Therefore, we consider the differential geometric structures of stream functions and calculate the Gaussian curvatures of some basic flows. We find the following. (I) The vorticity corresponds to the mean curvature of the stream function. Thus, the stream-function surface for an irrotational flow and that for a parallel shear flow correspond to the minimal surface and a developable surface, respectively. (II) The relationship between the coherency and the magnitude of the vorticity is interpreted by the curvatures. (III) Using the Gaussian curvature, stability of single and double point vortex streets is analyzed. The results of this analysis are compared with the well-known linear stability analysis. (IV) Conformal mapping in fluid mechanics is the physical expression of the geometric fact that the sign of the Gaussian curvature does not change in conformal mapping. These findings suggest that the curvatures of stream functions are useful for understanding the geometric structure of an incompressible 2D flow.
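
    The correspondence is easy to check numerically: treating ψ(x, y) as a surface z = ψ, its Gaussian curvature K carries the sign structure of the Okubo-Weiss partition while the mean curvature H tracks the vorticity. A finite-difference sketch of that check follows (illustrative of the correspondence, not the paper's derivation):

```python
import numpy as np

def surface_curvatures(psi, dx):
    """Gaussian (K) and mean (H) curvature of the graph z = psi(x, y),
    via the standard Monge-patch formulas, by finite differences.
    """
    py, px = np.gradient(psi, dx)        # axis 0 is y, axis 1 is x
    pyy, _ = np.gradient(py, dx)
    pxy, pxx = np.gradient(px, dx)
    w = 1.0 + px**2 + py**2
    K = (pxx * pyy - pxy**2) / w**2
    H = ((1 + py**2) * pxx - 2 * px * py * pxy + (1 + px**2) * pyy) / (2 * w**1.5)
    return K, H

# Cellular flow psi = sin(x) sin(y): vortex cores have K > 0 (coherent),
# hyperbolic stagnation points have K < 0 (incoherent).
x = np.linspace(0, 2 * np.pi, 256)
X, Y = np.meshgrid(x, x)
K, H = surface_curvatures(np.sin(X) * np.sin(Y), x[1] - x[0])
print(f"coherent fraction: {(K > 0).mean():.2f}")
```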

  2. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr). Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala

  3. Marcus canonical integral for non-Gaussian processes and its computation: pathwise simulation and tau-leaping algorithm.

    Science.gov (United States)

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2013-03-14

    The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute the thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that the method is very promising.
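
    The pathwise idea is simple to sketch for a compound-Poisson-driven SDE: evolve the drift between jumps, and at each jump of size J apply the Marcus map, i.e. flow along dφ/ds = g(φ)J for s ∈ [0, 1], which is what preserves the ordinary chain rule. The code below is a hedged illustration of that scheme (Euler drift, RK4 for the Marcus flow), not the paper's implementation or its tau-leaping variant.

```python
import numpy as np

rng = np.random.default_rng(0)

def marcus_map(x, jump, g, substeps=100):
    """Apply one Marcus jump: integrate dphi/ds = g(phi) * jump over s in [0, 1]."""
    h = 1.0 / substeps
    for _ in range(substeps):
        k1 = g(x) * jump
        k2 = g(x + 0.5 * h * k1) * jump
        k3 = g(x + 0.5 * h * k2) * jump
        k4 = g(x + h * k3) * jump
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def simulate_marcus(f, g, x0, T, jump_rate, jump_std):
    """Pathwise simulation of dX = f(X) dt + g(X) o dL (Marcus sense),
    with L a compound Poisson process with Gaussian jump sizes.
    """
    t, x = 0.0, x0
    while True:
        dt = rng.exponential(1.0 / jump_rate)
        if t + dt > T:
            return x + f(x) * (T - t)           # drift up to the final time
        x += f(x) * dt                          # drift between jumps
        x = marcus_map(x, rng.normal(0.0, jump_std), g)
        t += dt

# Multiplicative noise g(x) = x: the Marcus map is then exactly x -> x * exp(J),
# consistent with the Newton-Leibnitz chain rule.
print(simulate_marcus(lambda x: -0.5 * x, lambda x: x,
                      x0=1.0, T=1.0, jump_rate=5.0, jump_std=0.2))
```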

  4. The Riemann-Lovelock curvature tensor

    International Nuclear Information System (INIS)

    Kastor, David

    2012-01-01

    In order to study the properties of Lovelock gravity theories in low dimensions, we define the kth-order Riemann-Lovelock tensor as a certain quantity with a total of 4k indices, which is kth order in the Riemann curvature tensor and shares its basic algebraic and differential properties. We show that the kth-order Riemann-Lovelock tensor is determined by its traces in dimensions 2k ≤ D < 4k. In D = 2k + 1 this identity implies that all solutions of pure kth-order Lovelock gravity are 'Riemann-Lovelock' flat. It is verified that the static, spherically symmetric solutions of these theories, which are missing-solid-angle spacetimes, indeed satisfy this flatness property. This generalizes results from Einstein gravity in D = 3, which corresponds to the k = 1 case. We speculate about some possible further consequences of Riemann-Lovelock curvature. (paper)

  5. Truth in advertising: Reporting performance of computer programs, algorithms and the impact of architecture

    Directory of Open Access Journals (Sweden)

    Scott Hazelhurst

    2010-11-01

    Full Text Available The level of detail and precision that appears in the experimental methodology sections of computer science papers is usually much lower than in the natural science disciplines. This is partially justified by the different nature of the experiments. The experimental evidence presented here shows that the time taken by the same algorithm varies so significantly on different CPUs that, without knowing the exact model of CPU, it is difficult to compare the results. This is placed in context by analysing a cross-section of experimental results reported in the literature. The reporting of experimental results is sometimes insufficient to allow experiments to be replicated, and in some cases is insufficient to support the claims made for the algorithms. New standards for reporting algorithm results are suggested.
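
    In the spirit of the suggested reporting standards, a timing harness can record the hardware and software context alongside the numbers. A minimal sketch follows (note that reliable CPU-model detection is OS-specific, e.g. /proc/cpuinfo on Linux, so platform.processor() is only a best effort):

```python
import json
import platform
import statistics
import time

def benchmark(func, *args, repeats=5):
    """Time func(*args) and bundle the result with the context needed
    to make the number comparable across machines.
    """
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        times.append(time.perf_counter() - start)
    return {
        "median_seconds": statistics.median(times),
        "all_runs_seconds": times,
        "cpu": platform.processor() or platform.machine(),
        "platform": platform.platform(),
        "python": platform.python_version(),
    }

# Example: sorting a million descending integers.
report = benchmark(sorted, list(range(1_000_000, 0, -1)))
print(json.dumps(report, indent=2))
```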

  6. Curvature-driven acceleration: a utopia or a reality?

    International Nuclear Information System (INIS)

    Das, Sudipta; Banerjee, Narayan; Dadhich, Naresh

    2006-01-01

    The present work shows that a combination of nonlinear contributions from the Ricci curvature in the Einstein field equations can drive a late-time acceleration of the expansion of the universe. The transition from the decelerated to the accelerated phase of expansion takes place smoothly, without having to resort to a study of asymptotic behaviour. This result emphasizes the need for a thorough and critical examination of models with nonlinear contributions from the curvature.

  7. Curvature-driven acceleration: a utopia or a reality?

    Energy Technology Data Exchange (ETDEWEB)

    Das, Sudipta [Relativity and Cosmology Research Centre, Department of Physics, Jadavpur University, Calcutta-700 032 (India); Banerjee, Narayan [Relativity and Cosmology Research Centre, Department of Physics, Jadavpur University, Calcutta-700 032 (India); Dadhich, Naresh [Inter University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007 (India)

    2006-06-21

    The present work shows that a combination of nonlinear contributions from the Ricci curvature in the Einstein field equations can drive a late-time acceleration of the expansion of the universe. The transition from the decelerated to the accelerated phase of expansion takes place smoothly, without having to resort to a study of asymptotic behaviour. This result emphasizes the need for a thorough and critical examination of models with nonlinear contributions from the curvature.

  8. Atrial Fibrillation Screening in Nonmetropolitan Areas Using a Telehealth Surveillance System With an Embedded Cloud-Computing Algorithm: Prospective Pilot Study

    Science.gov (United States)

    Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien

    2017-01-01

    Background: Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. Objective: The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. Methods: We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Results: Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician’s ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. Conclusions: AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. PMID:28951384

  9. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    Science.gov (United States)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience ran only on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which has cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm was to be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  10. Connections and curvatures on complex Riemannian manifolds

    International Nuclear Information System (INIS)

    Ganchev, G.; Ivanov, S.

    1991-05-01

    Characteristic connection and characteristic holomorphic sectional curvatures are introduced on a complex Riemannian manifold (not necessarily with holomorphic metric). For the class of complex Riemannian manifolds with holomorphic characteristic connection a classification of the manifolds with (pointwise) constant holomorphic characteristic curvature is given. It is shown that the conformal geometry of complex analytic Riemannian manifolds can be naturally developed on the class of locally conformal holomorphic Riemannian manifolds. Complex Riemannian manifolds locally conformal to the complex Euclidean space are characterized with zero conformal fundamental tensor and zero conformal characteristic tensor. (author). 12 refs

  11. Berry Curvature in Magnon-Phonon Hybrid Systems.

    Science.gov (United States)

    Takahashi, Ryuji; Nagaosa, Naoto

    2016-11-18

    We study theoretically the Berry curvature of the magnon induced by the hybridization with the acoustic phonons via the spin-orbit and dipolar interactions. We first discuss the magnon-phonon hybridization via the dipolar interaction, and show that the dispersions have gapless points in momentum space, some of which form a loop. Next, when both spin-orbit and dipolar interactions are considered, we show anisotropic texture of the Berry curvature and its divergence with and without gap closing. Realistic evaluation of the consequent anomalous velocity is given for yttrium iron garnet.

  12. A prolongation-projection algorithm for computing the finite real variety of an ideal

    NARCIS (Netherlands)

    J.B. Lasserre; M. Laurent (Monique); P. Rostalski

    2009-01-01

    We provide a real algebraic symbolic-numeric algorithm for computing the real variety $V_R(I)$ of an ideal $I$, assuming it is finite while $V_C(I)$ may not be. Our approach uses sets of linear functionals on $R[X]$, vanishing on a given set of polynomials generating $I$ and their

  13. A prolongation-projection algorithm for computing the finite real variety of an ideal

    NARCIS (Netherlands)

    J.B. Lasserre; M. Laurent (Monique); P. Rostalski

    2008-01-01

    We provide a real algebraic symbolic-numeric algorithm for computing the real variety $V_R(I)$ of an ideal $I$, assuming it is finite while $V_C(I)$ may not be. Our approach uses sets of linear functionals on $R[X]$, vanishing on a given set of polynomials generating $I$ and their

  14. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  15. Algorithms for limited-view computed tomography: an annotated bibliography and a challenge

    International Nuclear Information System (INIS)

    Rangayyan, R.; Dhawan, A.P.; Gordon, R.

    1985-01-01

    In many applications of computed tomography, it may not be possible to acquire projection data at all angles, as required by the most commonly used algorithm of convolution backprojection. In such a limited-data situation, we face an ill-posed problem in attempting to reconstruct an image from an incomplete set of projections. Many techniques have been proposed to tackle this situation, employing diverse theories such as signal recovery, image restoration, constrained deconvolution, and constrained optimization, as well as novel schemes such as iterative object-dependent algorithms incorporating a priori knowledge and use of multispectral radiation. The authors present an overview of such techniques and offer a challenge to all readers to reconstruct images from a set of limited-view data provided here
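
    Many of the iterative approaches alluded to above descend from row-action methods such as the algebraic reconstruction technique (ART). As a minimal, hedged illustration of reconstructing from incomplete projections with a constraint, here is a Kaczmarz sweep with nonnegativity (a generic sketch, not any specific algorithm from the bibliography):

```python
import numpy as np

def art_reconstruct(A, b, iters=200, relax=0.5):
    """ART/Kaczmarz for A x ~ b, each row of A being one ray sum.
    Nonnegativity is enforced after each sweep as a priori knowledge.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1).astype(float)
    for _ in range(iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.maximum(x, 0.0)   # prior knowledge: attenuation is nonnegative
    return x

# Toy 2x2 image seen by only three rays: an underdetermined, limited view,
# so the reconstruction matches the ray sums but not necessarily the truth.
A = np.array([[1, 1, 0, 0],    # sum of row 0
              [0, 0, 1, 1],    # sum of row 1
              [1, 0, 1, 0]])   # sum of column 0
true_img = np.array([1.0, 2.0, 3.0, 4.0])
print(art_reconstruct(A, A @ true_img).round(2))
```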

  16. A curvature theory for discrete surfaces based on mesh parallelity

    KAUST Repository

    Bobenko, Alexander Ivanovich; Pottmann, Helmut; Wallner, Johannes

    2009-01-01

    We consider a general theory of curvatures of discrete surfaces equipped with edgewise parallel Gauss images, and where mean and Gaussian curvatures of faces are derived from the faces' areas and mixed areas. Remarkably these notions are capable

  17. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for Circular Current Loops in Cylindrical Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-24

    A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for circular current loops is described. An algorithm for computing the vector potential is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations (especially for the field derivatives) are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. Since cel can evaluate complete elliptic integrals of a fairly general type, in some cases the elliptic integrals can be evaluated without first reducing them to forms containing standard Legendre forms. The algorithms avoid the numerical difficulties that many of the textbook solutions have for points near the axis because of explicit factors of 1/r or 1/r² in some of the expressions.
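
    As a cross-check for such calculations, the loop field can also be written in standard Legendre forms, with parameter m = k² = 4ar/((a+r)² + z²), and evaluated with SciPy. As the report notes, expressions of this shape are delicate near the axis, which is handled below by switching to the exact on-axis limit. This is a textbook-formula sketch, not the report's cel-based algorithm.

```python
import numpy as np
from scipy.special import ellipe, ellipk

MU0 = 4e-7 * np.pi

def loop_field(a, I, r, z, axis_eps=1e-9):
    """B_r, B_z (tesla) of a circular loop of radius a (m), current I (A),
    at cylindrical point (r, z), from the Legendre-form textbook solution.
    """
    if r < axis_eps:                       # exact on-axis limit (B_r = 0)
        return 0.0, MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)
    m = 4.0 * a * r / ((a + r) ** 2 + z**2)          # parameter m = k^2
    K, E = ellipk(m), ellipe(m)
    front = MU0 * I / (2.0 * np.pi * np.sqrt((a + r) ** 2 + z**2))
    denom = (a - r) ** 2 + z**2
    Bz = front * (K + (a**2 - r**2 - z**2) / denom * E)
    Br = front * (z / r) * (-K + (a**2 + r**2 + z**2) / denom * E)
    return Br, Bz

# Near-axis evaluation agrees with the on-axis formula:
print(loop_field(1.0, 100.0, 1e-12, 0.5))
print(loop_field(1.0, 100.0, 1e-3, 0.5))
```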

  18. First contact: understanding the relationship between hominoid incisor curvature and diet.

    Science.gov (United States)

    Deane, Andrew

    2009-03-01

    Accurately interpreting fossil primate dietary behaviour is necessary to fully understand a species' ecology and connection to its environment. Traditional methods developed to infer diet from hominoid teeth successfully group taxa into broad dietary categories (i.e., folivore, frugivore) but often fail to represent the range of dietary variability characteristic of living apes. This oversimplification is not only a consequence of poor resolution, but may also reflect the use of similar fallback resources by closely related taxa with dissimilar diets. This study demonstrates that additional dietary specificity can be achieved using a morphometric approach to hominoid incisor curvature. High-resolution polynomial curve fitting (HR-PCF) was used to quantify the incisor curvatures of closely related hominoid taxa that have dissimilar diets but similar morphological adaptations to specific keystone resources (e.g., Gorilla gorilla beringei vs. G. g. gorilla). Given the key role of incisors in food processing, it is reasonable to assume that these teeth will be at least partially influenced by the unique selective pressures imposed by the mechanical loading specific to individual diets. Results from this study identify a strong correlation between hominoid dietary proportions and incisor linear dimensions and curvature, indicating that more pronounced incisor curvature is positively correlated with higher levels of frugivory. Hard-object frugivores have the greatest mesiodistal and cervico-incisal curvature and dedicated folivores have the least curved incisors. Mixed folivore/frugivores are morphological intermediates between dedicated folivores and hard- and soft-object frugivores. Mesiodistal curvature varied only in the degree of curvature; however, cervico-incisal curvature was shown to differ qualitatively between more frugivorous and more folivorous taxa. In addition to identifying a greater range of dietary variability among hominoids, this study also

  19. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (such as algorithmic quantum coherent information) and derive relevant properties for them. Then we show that the quantum capacity based on the semi-computability concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  20. Curvature-Continuous 3D Path-Planning Using QPMI Method

    Directory of Open Access Journals (Sweden)

    Seong-Ryong Chang

    2015-06-01

    Full Text Available It is impossible to achieve vertex movement and rapid velocity control in aerial robots and aerial vehicles because of momentum from the air. A continuous-curvature path ensures that such robots and vehicles can fly with stable and continuous movements. General continuous path-planning methods use spline interpolation, for example B-spline and Bézier curves. However, these methods cannot be directly applied to continuous path planning in a 3D space. These methods use a subset of the waypoints to decide curvature, and some waypoints are not included in the planned path. This paper proposes a method for constructing a curvature-continuous path in 3D space that includes every waypoint. The movements in each axis, x, y and z, are separated by the parameter u. Waypoint groups are formed, each with its own continuous path derived using quadratic polynomial interpolation. The membership function then combines each continuous path into one continuous path. The continuity of the path is verified, and the curvature-continuous path is produced using the proposed method.
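
    A hedged sketch of the scheme as described: parameterize each axis by u, fit a quadratic through every group of three consecutive waypoints, and blend neighboring quadratics with a membership weight so the combined path passes through every waypoint. Linear blending is used below for simplicity; the paper's membership function is what guarantees full curvature continuity, so this is a reading of the method, not the authors' implementation.

```python
import numpy as np

def _eval_quad(q, uu):
    # q rows are [a, b, c] per axis: a*u^2 + b*u + c.
    return q[0] * uu**2 + q[1] * uu + q[2]

def qpmi_like_path(waypoints, samples_per_seg=20):
    """Blend per-group quadratics into one path through all waypoints.
    waypoints: (n, 3) array-like of x, y, z; u takes integer values at waypoints.
    """
    P = np.asarray(waypoints, dtype=float)
    n = len(P)
    u = np.arange(n, dtype=float)
    # One quadratic per interior waypoint k, through waypoints k-1, k, k+1.
    quads = [np.polyfit(u[k - 1:k + 2], P[k - 1:k + 2], 2)
             for k in range(1, n - 1)]
    path = []
    for k in range(n - 1):                      # segment from waypoint k to k+1
        qa = quads[max(k - 1, 0)]               # quadratic centered on the left
        qb = quads[min(k, len(quads) - 1)]      # quadratic centered on the right
        ts = np.linspace(0.0, 1.0, samples_per_seg, endpoint=(k == n - 2))
        for t in ts:
            uu = u[k] + t
            path.append((1 - t) * _eval_quad(qa, uu) + t * _eval_quad(qb, uu))
    return np.array(path)

wps = [(0, 0, 0), (1, 2, 1), (3, 3, 2), (5, 2, 2), (6, 0, 3)]
path = qpmi_like_path(wps)
print(path.shape, path[0], path[-1])    # starts and ends exactly at the waypoints
```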