WorldWideScience

Sample records for delaunay triangulation algorithm

  1. A Sweepline Algorithm for Generalized Delaunay Triangulations

    DEFF Research Database (Denmark)

    Skyum, Sven

    We give a deterministic O(n log n) sweepline algorithm to construct the generalized Voronoi diagram for n points in the plane, or rather its dual, the generalized Delaunay triangulation. The algorithm uses no transformations and it is developed solely from the sweepline paradigm together...

  2. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected O((N/B) log_{M/B}(N/B)) I/Os for triangulating N points in the plane, where...

  3. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detec...... of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches....
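
    As a rough illustration of the first two steps described above (keypoint detection followed by a Delaunay triangulation of each feature set), the sketch below uses OpenCV's SIFT detector and SciPy's Delaunay triangulation. The paper's own implementation is in Matlab, and the isomorphism test between the two triangulations is not shown here; the function name and the n_best parameter are assumptions for this example.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def sift_points_and_triangulation(image_path, n_best=200):
    """Detect SIFT keypoints in one image and triangulate their locations.

    image_path and n_best are illustrative parameters, not from the paper.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    # keep the strongest responses so the triangulation stays manageable
    idx = np.argsort([-k.response for k in keypoints])[:n_best]
    pts = np.array([keypoints[i].pt for i in idx])
    return pts, descriptors[idx], Delaunay(pts)
```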

  4. Constructing Delaunay triangulations along space-filling curves

    NARCIS (Netherlands)

    Buchin, K.; Fiat, A.; Sanders, P.

    2009-01-01

    Incremental construction con BRIO using a space-filling curve order for insertion is a popular algorithm for constructing Delaunay triangulations. So far, it has only been analyzed for the case that a worst-case optimal point location data structure is used, which is often avoided in implementations.

  5. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since such characters have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.
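
    A minimal sketch of the stroke-width idea (triangle altitude as a width estimate) follows, using SciPy's unconstrained Delaunay triangulation as a stand-in for the constrained triangulation used in the paper; the segmentation and grouping steps are omitted and the function name is ours.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_width_estimates(points):
    """Altitude onto the longest side of each triangle, as a crude stroke-width proxy."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)          # unconstrained; the paper uses a constrained DT
    widths = []
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        twice_area = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        longest = max(np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c))
        widths.append(twice_area / longest)   # altitude = 2 * area / base
    return np.array(widths)
```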

  6. A Novel Model of Conforming Delaunay Triangulation for Sensor Network Configuration

    Directory of Open Access Journals (Sweden)

    Yan Ma

    2015-01-01

    Full Text Available Delaunay refinement is a technique for generating unstructured meshes of triangles for sensor network configuration in engineering practice. A new method for solving the Delaunay triangulation problem is proposed in this paper, which is called the endpoint triangle’s circumcircle model (ETCM). Compared with the original fractional node refinement algorithms, the proposed algorithm achieves good refinement stability with the least time cost. Simulations are performed on five aspects: refinement stability, the number of additional nodes, time cost, mesh quality after introducing additional nodes, and the aspect ratio improved by a single additional node. All experimental results show the advantages of the proposed algorithm over the existing algorithms and sufficiently confirm the algorithm analysis.

  7. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
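
    The walking strategy with a sampled starting point ("jump-and-walk") can be sketched in 2D on top of scipy.spatial.Delaunay as below. This is an illustration of the idea rather than the authors' implementation; the simple straight walk shown here can loop on degenerate inputs, and the sample size is only a rough analogue of the n^(1/(d+1)) rate from the analysis.

```python
import numpy as np
from scipy.spatial import Delaunay

def orient(a, b, c):
    """Twice the signed area of triangle (a, b, c)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def jump_and_walk(tri, pts, q, seed=0):
    """Find the triangle of `tri` (a 2D scipy Delaunay) containing query point q."""
    rng = np.random.default_rng(seed)
    n = len(pts)
    k = max(1, int(round(n ** (1.0 / 3.0))))          # ~n^(1/(d+1)) samples for d = 2
    samples = rng.choice(n, size=min(k, n), replace=False)
    start = samples[np.argmin(np.linalg.norm(pts[samples] - q, axis=1))]
    simplex = int(np.flatnonzero((tri.simplices == start).any(axis=1))[0])
    while True:                                       # straight walk towards q
        corners = [pts[v] for v in tri.simplices[simplex]]
        for i in range(3):
            p1, p2 = corners[(i + 1) % 3], corners[(i + 2) % 3]
            # cross the edge opposite corner i if q lies on its far side
            if orient(p1, p2, q) * orient(p1, p2, corners[i]) < 0:
                nxt = int(tri.neighbors[simplex][i])
                break
        else:
            return simplex                            # q is inside (or on) this triangle
        if nxt == -1:
            return -1                                 # walked off the hull: q is outside
        simplex = nxt
```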

  8. Kinetic and dynamic Delaunay tetrahedralizations in three dimensions

    Science.gov (United States)

    Schaller, Gernot; Meyer-Hermann, Michael

    2004-09-01

    We describe algorithms to implement fully dynamic and kinetic three-dimensional unconstrained Delaunay triangulations, where the time evolution of the triangulation is not only governed by moving vertices but also by a changing number of vertices. We use three-dimensional simplex flip algorithms, a stochastic visibility walk algorithm for point location and in addition, we propose a new simple method of deleting vertices from an existing three-dimensional Delaunay triangulation while maintaining the Delaunay property. As an example, we analyse the performance in various cases of practical relevance. The dual Dirichlet tessellation can be used to solve differential equations on an irregular grid, to define partitions in cell tissue simulations, for collision detection etc.

  9. Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation

    OpenAIRE

    Romanoni, Andrea; Matteucci, Matteo

    2016-01-01

    Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power is a limited resource and a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, by carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we ...

  10. Visualization research of 3D radiation field based on Delaunay triangulation

    International Nuclear Information System (INIS)

    Xie Changji; Chen Yuqing; Li Shiting; Zhu Bo

    2011-01-01

    Based on the characteristics of the three dimensional partition, the triangulation of discrete data sets is improved by the method of point-by-point insertion. The discrete radiation field data obtained by theoretical calculation or actual measurement are restructured, and the continuous distribution of the radiation field data is obtained. Finally, the 3D virtual scene of the nuclear facilities is built with VR simulation techniques, and the visualization of the 3D radiation field is achieved by visualization mapping techniques. It is shown that the method combining VR and Delaunay triangulation can greatly improve the quality and efficiency of 3D radiation field visualization. (authors)

  11. Classification and Filtering of Constrained Delaunay Triangulation for Automated Building Aggregation

    Directory of Open Access Journals (Sweden)

    GUO Peipei

    2016-08-01

    Full Text Available Building aggregation is an important part of research on large-scale map generalization. A triangulation-based approach is proposed from the perspective of shape features, and six measure parameters of triangles in a constrained Delaunay triangulation are defined. First, the six measure parameters are used to determine which triangles are retained and which are erased. Then, the contours of the retained triangles, treated as bridge areas between buildings, are automatically identified and right-angle processed. Next, the buildings are aggregated, with right-angle features retained, by merging the bridge areas with the connecting buildings. Finally, the approach is verified on actual data. Experimental results show that it is efficient and practical.

  12. A Delaunay Triangulation Approach For Segmenting Clumps Of Nuclei

    International Nuclear Information System (INIS)

    Wen, Quan; Chang, Hang; Parvin, Bahram

    2009-01-01

    Cell-based fluorescence imaging assays have the potential to generate massive amount of data, which requires detailed quantitative analysis. Often, as a result of fixation, labeled nuclei overlap and create a clump of cells. However, it is important to quantify phenotypic read out on a cell-by-cell basis. In this paper, we propose a novel method for decomposing clumps of nuclei using high-level geometric constraints that are derived from low-level features of maximum curvature computed along the contour of each clump. Points of maximum curvature are used as vertices for Delaunay triangulation (DT), which provides a set of edge hypotheses for decomposing a clump of nuclei. Each hypothesis is subsequently tested against a constraint satisfaction network for a near optimum decomposition. The proposed method is compared with other traditional techniques such as the watershed method with/without markers. The experimental results show that our approach can overcome the deficiencies of the traditional methods and is very effective in separating severely touching nuclei.

  13. The Extraction of Road Boundary from Crowdsourcing Trajectory Using Constrained Delaunay Triangulation

    Directory of Open Access Journals (Sweden)

    YANG Wei

    2017-02-01

    Full Text Available Accurately extracting road boundaries from crowdsourced trajectory lines is still hard work. Therefore, this study presents a new approach that uses vehicle trajectory lines to extract road boundaries. Firstly, a constrained Delaunay triangulation is constructed within the interpolated track lines to calculate road boundary descriptors using triangle edge length and Voronoi cells. A road boundary recognition model is established by integrating the two boundary descriptors. Then, based on seed polygons, a region-growing method is proposed to extract the road boundary. Finally, taxi GPS traces in Beijing were used to verify the validity of the novel method, and the results also showed that our method is suitable for GPS traces with disparate densities, complex road structures and different time intervals.

  14. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    Science.gov (United States)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  15. Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network

    Science.gov (United States)

    Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan

    2011-01-01

    Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide the value for all coverage measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792
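
    One plausible way to derive such 'fat'/'healthy'/'thin' labels from a Delaunay triangulation is sketched below: each sensor is classified by the mean length of its incident Delaunay edges relative to the sensing radius. The thresholds low and high are assumptions for illustration, not the paper's definitions, and the empty-area computation is not shown.

```python
import numpy as np
from scipy.spatial import Delaunay

def classify_sensors(positions, sensing_radius, low=0.8, high=1.6):
    """Label each sensor 'fat', 'healthy' or 'thin' from its Delaunay edge lengths.

    low/high are assumed thresholds relative to the sensing radius, not the
    paper's definitions.
    """
    positions = np.asarray(positions, dtype=float)
    tri = Delaunay(positions)
    incident = {i: [] for i in range(len(positions))}
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a < b:
                    d = np.linalg.norm(positions[a] - positions[b])
                    incident[a].append(d)
                    incident[b].append(d)
    labels = {}
    for i, ds in incident.items():
        mean_d = np.mean(ds)
        if mean_d < low * sensing_radius:
            labels[i] = 'fat'        # densely packed neighbourhood
        elif mean_d > high * sensing_radius:
            labels[i] = 'thin'       # sparse area, possible coverage hole
        else:
            labels[i] = 'healthy'
    return labels
```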

  16. Indoor Trajectory Tracking Scheme Based on Delaunay Triangulation and Heuristic Information in Wireless Sensor Networks.

    Science.gov (United States)

    Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong

    2017-06-02

    Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (unstable, multipath, diffraction) of wireless signal transmissions in indoor environments. Heuristic information includes some key factors for trajectory tracking procedures. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions. The common side of adjacent triangular regions is regarded as a regional boundary. Our scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. Then, the trajectory is formed by means of a dynamic time-warping position-fingerprint-matching algorithm with heuristic information constraints. Field experiments show that the average error distance of our scheme is less than 1.5 m, and that error does not accumulate among the regions.

  17. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    Science.gov (United States)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step to realize automatic surveying and recognition. Traditional matching methods encounter some problems in digital close-range stereo photogrammetry, because the change of gray-scale or texture is not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometry and gray-scale information, a new stereo image matching algorithm is proposed in this paper considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements to image matching. Firstly, a shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetic matching measure. Secondly, the topological connection relations of matching points in the Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental result shows that the algorithm has a higher matching speed and matching accuracy than the pyramid image matching algorithm based on gray-scale correlation.

  18. The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.

    Science.gov (United States)

    Carson, Cantwell G; Levine, Jonathan S

    2016-09-01

    The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  19. A methodology for automated cartographic data input, drawing and editing using kinetic Delaunay/Voronoi diagrams

    DEFF Research Database (Denmark)

    Gold, Christopher M.; Mioc, Darka; Anton, François

    2008-01-01

    This chapter presents a methodology for automated cartographic data input, drawing and editing. This methodology is based on kinematic algorithms for point and line Delaunay triangulation and the Voronoi diagram. It allows one to automate some parts of the manual digitization process......-oriented algorithm for large data sets, and all our algorithms are based on local operations (except for basic point location). Because the deletion of individual points or line segments is a necessary part of the manual editing process, incremental insertion and deletion is used. The original concept used here

  20. A complete solution of cartographic displacement based on elastic beams model and Delaunay triangulation

    Science.gov (United States)

    Liu, Y.; Guo, Q.; Sun, Y.

    2014-04-01

    In map production and generalization, spatial conflicts inevitably arise, and their detection and resolution still require manual operation. This has become a bottleneck hindering the development of automated cartographic generalization. Displacement is the most useful contextual operator and is often used for resolving conflicts arising between two or more map objects. Automated generalization research has reported many displacement approaches, including sequential approaches and optimization approaches. As an excellent optimization approach based on energy minimization principles, the elastic beams model has been used several times to resolve displacement problems for roads and buildings. However, to realize a complete displacement solution, techniques for conflict detection and spatial context analysis should also be taken into consideration. We therefore propose a complete displacement solution based on the combined use of the elastic beams model and a constrained Delaunay triangulation (CDT) in this paper. The solution is designed as a cyclic and iterative process containing two phases: a detection phase and a displacement phase. In the detection phase, the CDT of the map is used to detect proximity conflicts, identify spatial relationships and structures, and construct an auxiliary structure, so as to support the displacement phase on the basis of elastic beams. In addition, to improve the displacement algorithm, a method for adaptive parameter setting and a new iterative strategy are put forward. Finally, we implemented our solution on a testing map generalization platform and successfully tested it against two hand-generated test datasets of roads and buildings, respectively.

  1. A new approach for categorizing pig lying behaviour based on a Delaunay triangulation method.

    Science.gov (United States)

    Nasirahmadi, A; Hensel, O; Edwards, S A; Sturm, B

    2017-01-01

    Machine vision-based monitoring of pig lying behaviour is a fast and non-intrusive approach that could be used to improve animal health and welfare. Four pens with 22 pigs in each were selected at a commercial pig farm and monitored for 15 days using top view cameras. Three thermal categories were selected relative to room setpoint temperature. An image processing technique based on Delaunay triangulation (DT) was utilized. Different lying patterns (close, normal and far) were defined regarding the perimeter of each DT triangle and the percentages of each lying pattern were obtained in each thermal category. A method using a multilayer perceptron (MLP) neural network, to automatically classify group lying behaviour of pigs into three thermal categories, was developed and tested for its feasibility. The DT features (mean value of perimeters, maximum and minimum length of sides of triangles) were calculated as inputs for the MLP classifier. The network was trained, validated and tested and the results revealed that MLP could classify lying features into the three thermal categories with high overall accuracy (95.6%). The technique indicates that a combination of image processing, MLP classification and mathematical modelling can be used as a precise method for quantifying pig lying behaviour in welfare investigations.
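
    The three DT input features named in the abstract (mean triangle perimeter, maximum and minimum side length) can be computed from detected pig centroids roughly as follows; the image processing that produces the centroids and the MLP classification step are not shown, and the function name is ours.

```python
import numpy as np
from scipy.spatial import Delaunay

def lying_pattern_features(pig_centroids):
    """Mean triangle perimeter plus max/min side length over the pigs' Delaunay triangles."""
    pig_centroids = np.asarray(pig_centroids, dtype=float)
    tri = Delaunay(pig_centroids)
    sides, perimeters = [], []
    for ia, ib, ic in tri.simplices:
        a, b, c = pig_centroids[ia], pig_centroids[ib], pig_centroids[ic]
        s = [np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)]
        sides.extend(s)
        perimeters.append(sum(s))
    return np.array([np.mean(perimeters), np.max(sides), np.min(sides)])
```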

  2. A density based algorithm to detect cavities and holes from planar points

    Science.gov (United States)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used for approximating the shape of a set of planar points. However, these algorithms cannot ensure optimal reconstruction of varied cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary through iterative removal of triangles from the Delaunay triangulation. Our algorithm is divided into two main steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure with a low-density region surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a gradient change in the compactness over the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes for varying point set densities and distributions.
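
    The compactness idea (per-point variation of incident Delaunay edge lengths) can be sketched as below; the coefficient of variation is used here as an illustrative stand-in for the paper's exact formula, and the boundary extraction itself is not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

def point_compactness(points):
    """Variation of the Delaunay edge lengths incident to each point.

    Coefficient of variation is an illustrative stand-in for the paper's
    compactness formula.
    """
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    incident = {i: [] for i in range(len(points))}
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a < b:
                    d = np.linalg.norm(points[a] - points[b])
                    incident[a].append(d)
                    incident[b].append(d)
    return np.array([np.std(incident[i]) / np.mean(incident[i])
                     for i in range(len(points))])
```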

  3. An improved three-dimension reconstruction method based on guided filter and Delaunay

    Science.gov (United States)

    Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

    Binocular stereo vision is becoming a research hotspot in the area of image processing. Based on the traditional adaptive-weight stereo matching algorithm, we improve the cost volume by averaging the AD (Absolute Difference) of the RGB color channels and adding the x-derivative of the grayscale image. Then we use a guided filter in the cost aggregation step and a weighted median filter for post-processing to address the edge problem. In order to get the location in real space, we combine the depth information with the camera calibration to project each pixel of the 2D image into 3D coordinates. We add a projection step to the region-growing algorithm for surface reconstruction: all points are projected onto a 2D plane along the normals of the point cloud, and the resulting connectivity among the points in the 2D plane is mapped back to 3D space. During the triangulation in the 2D plane, we use the Delaunay algorithm because of its optimal mesh quality. We configure OpenCV and PCL in Visual Studio for testing, and the experimental results show that the proposed algorithm computes disparity more accurately and reproduces the details of the real mesh model.

  4. Efficient Delaunay Tessellation through K-D Tree Decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, Dmitriy; Peterka, Tom

    2017-08-21

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
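
    A minimal sketch of the k-d style domain decomposition step (recursive median splits that balance points across blocks) follows; the parallel Delaunay construction and the two split-point strategies compared in the paper are not reproduced, and the function name is ours.

```python
import numpy as np

def kd_decompose(points, n_blocks):
    """Split points into n_blocks balanced blocks by recursive median cuts."""
    points = np.asarray(points, dtype=float)

    def split(idx, depth, k):
        if k == 1:
            return [idx]
        axis = depth % points.shape[1]                 # cycle through the axes
        order = idx[np.argsort(points[idx, axis])]
        cut = len(order) * (k // 2) // k               # keep blocks balanced for odd k
        return (split(order[:cut], depth + 1, k // 2) +
                split(order[cut:], depth + 1, k - k // 2))

    return split(np.arange(len(points)), 0, n_blocks)
```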

  5. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM

    Czech Academy of Sciences Publication Activity Database

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-01-01

    Roč. 45, č. 5 (2016), s. 443-461 ISSN 0175-7571 R&D Projects: GA ČR(CZ) GA13-02033S; GA MŠk(CZ) ED1.1.00/02.0109 Institutional support: RVO:67985823 Keywords : 3D object segmentation * Delaunay algorithm * principal component analysis * 3D super-resolution microscopy * nucleoids * mitochondrial DNA replication Subject RIV: BO - Biophysics Impact factor: 1.472, year: 2016

  6. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent in theory to the morphological opening/closing. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate the validity of this algorithm and its superior efficiency over the naive algorithm.
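
    For reference, the naive morphological closing of a profile with a disk structuring element, the per-sample scan that the alpha-shape-based algorithm accelerates, can be written as follows (a sketch; the sample spacing and disk radius are assumed inputs, and the function name is ours).

```python
import numpy as np

def naive_closing_envelope(z, dx, radius):
    """Morphological closing of profile z (sample spacing dx) with a disk of given radius."""
    m = int(radius / dx)
    t = np.arange(-m, m + 1) * dx
    b = np.sqrt(np.maximum(radius ** 2 - t ** 2, 0.0))   # disk structuring function
    n = len(z)
    zp = np.pad(z, m, mode='reflect')                     # reflective padding at the edges
    dilation = np.array([np.max(zp[i:i + 2 * m + 1] + b) for i in range(n)])
    dp = np.pad(dilation, m, mode='reflect')
    closing = np.array([np.min(dp[i:i + 2 * m + 1] - b) for i in range(n)])
    return closing
```

    The fast algorithm in the abstract obtains the same envelope from the alpha hull of the profile points, avoiding this explicit dilation-erosion pass.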

  7. An Adaptive Sweep-Circle Spatial Clustering Algorithm Based on Gestalt

    Directory of Open Access Journals (Sweden)

    Qingming Zhan

    2017-08-01

    Full Text Available An adaptive spatial clustering (ASC) algorithm is proposed in this study, which employs sweep-circle techniques and a dynamic threshold setting based on Gestalt theory to detect spatial clusters. The proposed algorithm can automatically discover clusters in one pass, rather than through modification of an initial model (for example, a minimum spanning tree, Delaunay triangulation, or Voronoi diagram). It can quickly identify arbitrarily-shaped clusters while adapting efficiently to the non-homogeneous density characteristics of spatial data, without the need for prior knowledge or parameters. The proposed algorithm is also well suited to streaming data with dynamic characteristics, performing spatial clustering on large data sets.

  8. Three-Dimensional TIN Algorithm for Digital Terrain Modeling

    Institute of Scientific and Technical Information of China (English)

    朱庆; 张叶廷; 李逢春

    2008-01-01

    The problem of taking an unorganized point cloud in 3D space and fitting a polyhedral surface to those points is both important and difficult. Aiming at the increasing applications of full three-dimensional digital terrain surface modeling, a new algorithm for the automatic generation of a three-dimensional triangulated irregular network from a point cloud is proposed. Based on a local topological consistency test, a combined algorithm of constrained 3D Delaunay triangulation and region growing is extended to ensure topologically correct reconstruction. This paper also introduces an efficient neighboring triangle location method that makes full use of the surface normal information. Experimental results prove that this algorithm can efficiently obtain the most reasonable reconstructed mesh surface with arbitrary topology, wherein the automatically reconstructed surface has only small topological differences from the true surface. This algorithm has potential applications to virtual environments, computer vision, and so on.

  9. A three-dimensional electrostatic particle-in-cell methodology on unstructured Delaunay-Voronoi grids

    International Nuclear Information System (INIS)

    Gatsonis, Nikolaos A.; Spirkin, Anton

    2009-01-01

    The mathematical formulation and computational implementation of a three-dimensional particle-in-cell methodology on unstructured Delaunay-Voronoi tetrahedral grids is presented. The method allows simulation of plasmas in complex domains and incorporates the duality of the Delaunay-Voronoi in all aspects of the particle-in-cell cycle. Charge assignment and field interpolation weighting schemes of zero- and first-order are formulated based on the theory of long-range constraints. Electric potential and fields are derived from a finite-volume formulation of Gauss' law using the Voronoi-Delaunay dual. Boundary conditions and the algorithms for injection, particle loading, particle motion, and particle tracking are implemented for unstructured Delaunay grids. Error and sensitivity analysis examines the effects of particles/cell, grid scaling, and timestep on the numerical heating, the slowing-down time, and the deflection times. The problem of current collection by cylindrical Langmuir probes in collisionless plasmas is used for validation. Numerical results compare favorably with previous numerical and analytical solutions for a wide range of probe radius to Debye length ratios, probe potentials, and electron to ion temperature ratios. The versatility of the methodology is demonstrated with the simulation of a complex plasma microsensor, a directional micro-retarding potential analyzer that includes a low transparency micro-grid.

  10. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    Science.gov (United States)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.

  11. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.

  12. Pre-processing for Triangulation of Probabilistic Networks

    NARCIS (Netherlands)

    Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den; Gaag, L.C. van der

    2001-01-01

    The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum
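
    In this setting, triangulating the (moralized) network graph means adding fill-in edges until the graph is chordal. The sketch below shows the standard greedy min-fill heuristic as a baseline for what such a triangulation looks like; it is not the pre-processing method of the paper.

```python
from itertools import combinations

def triangulate_min_fill(adj):
    """Greedy min-fill heuristic: add edges until the graph is chordal.

    `adj` maps each vertex to a set of neighbours; the added (fill-in) edges
    are returned.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    fill, remaining = [], set(adj)
    while remaining:
        def cost(v):                                   # fill-in edges needed to eliminate v
            nbrs = adj[v] & remaining
            return sum(1 for a, b in combinations(nbrs, 2) if b not in adj[a])
        v = min(remaining, key=cost)                   # eliminate the cheapest vertex
        nbrs = adj[v] & remaining
        for a, b in combinations(nbrs, 2):
            if b not in adj[a]:                        # make v's remaining neighbours a clique
                adj[a].add(b)
                adj[b].add(a)
                fill.append((a, b))
        remaining.remove(v)
    return fill
```

    For example, applied to the 4-cycle adj = {'A': {'B', 'D'}, 'B': {'A', 'C'}, 'C': {'B', 'D'}, 'D': {'A', 'C'}}, the function returns a single chord, which is the kind of fill-in edge a junction-tree construction needs.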

  13. Surface meshing with curvature convergence

    KAUST Repository

    Li, Huibin; Zeng, Wei; Morvan, Jean-Marie; Chen, Liming; Gu, Xianfengdavid

    2014-01-01

    Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.

  15. Exact computation of the Voronoi Diagram of spheres in 3D, its topology and its geometric invariants

    DEFF Research Database (Denmark)

    Anton, François; Mioc, Darka; Santos, Marcelo

    2011-01-01

    In this paper, we are addressing the exact computation of the Delaunay graph (or quasi-triangulation) and the Voronoi diagram of spheres using Wu’s algorithm. Our main contribution is first a methodology for automated derivation of invariants of the Delaunay empty circumcircle predicate for spheres...... and the Voronoi vertex of four spheres, then the application of this methodology to get all geometrical invariants that intervene in this problem and the exact computation of the Delaunay graph and the Voronoi diagram of spheres. To the best of our knowledge, there does not exist a comprehensive treatment...... of the exact computation with geometrical invariants of the Delaunay graph and the Voronoi diagram of spheres. Starting from the system of equations defining the zero-dimensional algebraic set of the problem, we are following Wu’s algorithm to transform the initial system into an equivalent Wu characteristic...

  16. Multi-region unstructured volume segmentation using tetrahedron filling

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Sean Jamerson [Los Alamos National Laboratory; Dillard, Scott E [Los Alamos National Laboratory; Thoma, Dan J [MDI, INSTITUTES; Hlawitschka, Mario [UC DAVIS; Hamann, Bernd [UC DAVIS

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.

  17. Triangulating and guarding realistic polygons

    NARCIS (Netherlands)

    Aloupis, G.; Bose, P.; Dujmovic, V.; Gray, C.M.; Langerman, S.; Speckmann, B.

    2008-01-01

    We propose a new model of realistic input: k-guardable objects. An object is k-guardable if its boundary can be seen by k guards in the interior of the object. In this abstract, we describe a simple algorithm for triangulating k-guardable polygons. Our algorithm, which is easily implementable, takes

  18. Numerical convergence of discrete exterior calculus on arbitrary surface meshes

    KAUST Repository

    Mohamed, Mamdouh S.

    2018-02-13

    Discrete exterior calculus (DEC) is a structure-preserving numerical framework for partial differential equations solution, particularly suitable for simplicial meshes. A longstanding and widespread assumption has been that DEC requires special (Delaunay) triangulations, which complicated the mesh generation process especially for curved surfaces. This paper presents numerical evidence demonstrating that this restriction is unnecessary. Convergence experiments are carried out for various physical problems using both Delaunay and non-Delaunay triangulations. Signed diagonal definition for the key DEC operator (Hodge star) is adopted. The errors converge as expected for all considered meshes and experiments. This relieves the DEC paradigm from unnecessary triangulation limitation.

  19. A linear time algorithm for finding the convex ropes between two vertices of a simple polygon without triangulation

    International Nuclear Information System (INIS)

    Phan Thanh An

    2008-06-01

    The convex rope problem, posed by Peshkin and Sanderson in IEEE J. Robotics Automat, 2 (1986) pp. 53-58, is to find the counterclockwise and clockwise convex ropes starting at the vertex a and ending at the vertex b of a simple polygon, where a is on the boundary of the convex hull of the polygon and b is visible from infinity. In this paper, we present a linear time algorithm for solving this problem without resorting to a linear-time triangulation algorithm and without resorting to a convex hull algorithm for the polygon. The counterclockwise (clockwise, respectively) convex rope consists of two polylines obtained in a basic incremental strategy described in convex hull algorithms for the polylines forming the polygon from a to b. (author)

  20. New method of three-dimensional reconstruction from two-dimensional MR data sets

    International Nuclear Information System (INIS)

    Wrazidlo, W.; Schneider, S.; Brambs, H.J.; Richter, G.M.; Kauffmann, G.W.; Geiger, B.; Fischer, C.

    1989-01-01

    In medical diagnosis and therapy, cross-sectional images are obtained by means of US, CT, or MR imaging. The authors propose a new solution to the problem of constructing a shape over a set of cross-sectional contours from two-dimensional (2D) MR data sets. The authors' method reduces the problem of constructing a shape over the cross sections to one of constructing a sequence of partial shapes, each of them connecting two cross sections lying on adjacent planes. The solution makes use of the Delaunay triangulation, which is isomorphic in that specific situation. The authors compute this Delaunay triangulation. Shape reconstruction is then achieved section by section by pruning the Delaunay triangulations.

  1. Surface Coverage in Wireless Sensor Networks Based on Delaunay Tetrahedralization

    International Nuclear Information System (INIS)

    Ribeiro, M G; Neves, L A; Zafalon, G F D; Valêncio, C; Pinto, A R; Nascimento, M Z

    2015-01-01

    In this work a new method for sensor deployment on 3D surfaces is presented. The method is structured in several steps. The first one discretizes the relief of interest with the Delaunay algorithm. The tetrahedra and their related values (the spatial coordinates of each vertex and the faces) were the input for the construction of the 3D Voronoi diagram. Each circumcenter was calculated as a candidate position for a sensor node: the corresponding circular coverage area was calculated based on a radius r. The r value can be adjusted to simulate different kinds of sensors. The Dijkstra algorithm and a selection method were applied to eliminate candidate positions with overlapped coverage areas or beyond the surface of interest. Performance evaluation measures were defined using coverage area and communication as criteria. The results were relevant, since the mean coverage rates achieved on three different surfaces were between 91% and 100%.
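
    For illustration, the circumcenter of a single Delaunay tetrahedron (the candidate sensor position mentioned above) can be computed as below; the surface discretization, the Dijkstra-based selection step and the coverage evaluation are not shown, and the function name is ours.

```python
import numpy as np

def tet_circumcenter(a, b, c, d):
    """Circumcenter of the tetrahedron (a, b, c, d): a candidate sensor position."""
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    A = 2.0 * np.array([b - a, c - a, d - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a, d @ d - a @ a])
    return np.linalg.solve(A, rhs)       # fails only for degenerate (flat) tetrahedra
```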

  2. Triangulation positioning system network

    Directory of Open Access Journals (Sweden)

    Sfendourakis Marios

    2017-01-01

    Full Text Available This paper presents ongoing work on localization and positioning through a triangulation procedure for a Fixed Sensors Network - FSN. The FSN has to work as a system. As the triangulation problem becomes highly complicated in cases with large numbers of sensors and transmitters, an adequate grid topology is needed in order to tackle the detection complexity. For that reason a network grid topology is presented, and areas that are problematic and need further analysis are examined. In order to deal with problems of saturation and False Triangulations - FTRNs, the network system will have to find adequate methods in every sub-area of the Area Of Interest - AOI. Also, concepts like sensor blindness and overall network blindness are presented. All these concepts affect the network detection rate and its performance, and ought to be considered so that the overall network performance is not degraded. Network performance should be monitored continuously, with the right algorithms and methods. It is also shown that as the number of TRNs and FTRNs increases, the Detection Complexity - DC increases. It is hoped that with further research all the characteristics of a triangulation system network for positioning will be obtained and the system will be able to perform autonomously with a high detection rate.

  3. Application of Delaunay tessellation for the characterization of solute-rich clusters in atom probe tomography

    International Nuclear Information System (INIS)

    Lefebvre, W.; Philippe, T.; Vurpillot, F.

    2011-01-01

    This work presents an original method for cluster selection in Atom Probe Tomography designed to be applied to large datasets. It is based on the calculation of the Delaunay tessellation generated by the distribution of atoms of a selected element. It requires a single input parameter from the user. Furthermore, no prior knowledge of the material is needed. The sensitivity of the proposed Delaunay cluster selection is demonstrated by its application on simulated APT datasets. A strong advantage of the proposed methodology is that it is reinforced by the availability of an analytical model for the distribution of Delaunay cell circumspheres, which is used to control the accuracy of the cluster selection procedure. Another advantage of the Delaunay cluster selection is the direct calculation of a sharp envelope for each identified cluster or precipitate, which leads to a more appropriate morphology of the objects as they are reconstructed in the APT dataset. -- Research Highlights: →Original method for cluster selection in Atom Probe Tomography. →Delaunay tessellation generated by the distribution of solute atoms. →Direct calculation of a sharp envelope for each identified cluster or precipitate. →Delaunay cluster selection demonstrated by its application on simulated APT datasets.

  4. Triangulating and guarding realistic polygons

    NARCIS (Netherlands)

    Aloupis, G.; Bose, P.; Dujmovic, V.; Gray, C.M.; Langerman, S.; Speckmann, B.

    2014-01-01

    We propose a new model of realistic input: k-guardable objects. An object is k-guardable if its boundary can be seen by k guards. We show that k-guardable polygons generalize two previously identified classes of realistic input. Following this, we give two simple algorithms for triangulating

  5. Simulating triangulations. Graphs, manifolds and (quantum) spacetime

    International Nuclear Information System (INIS)

    Krueger, Benedikt

    2016-01-01

    Triangulations, which can intuitively be described as a tessellation of space into simplicial building blocks, are structures that arise in various different branches of physics: They can be used for describing complicated and curved objects in a discretized way, e.g., in foams, gels or porous media, or for discretizing curved boundaries for fluid simulations or dissipative systems. Interpreting triangulations as (maximal planar) graphs makes it possible to use them in graph theory or statistical physics, e.g., as small-world networks, as networks of spins or in biological physics as actin networks. Since one can find an analogue of the Einstein-Hilbert action on triangulations, they can even be used for formulating theories of quantum gravity. Triangulations also have important applications in mathematics, especially in discrete topology. Despite their wide occurrence in different branches of physics and mathematics, there are still some fundamental open questions about triangulations in general. It is a priori unknown how many triangulations there are for a given set of points or a given manifold, or even whether there are exponentially many triangulations or more, a question that relates to a well-defined behavior of certain quantum geometry models. Another major unknown question is whether elementary steps transforming triangulations into each other, which are used in computer simulations, are ergodic. Using triangulations as a model for spacetime, it is not clear whether there is a meaningful continuum limit that can be identified with the usual and well-tested theory of general relativity. Within this thesis some of these fundamental questions about triangulations are answered by the use of Markov chain Monte Carlo simulations, which are a probabilistic method for calculating statistical expectation values, or more generally a tool for calculating high-dimensional integrals. Additionally, some details about the Wang-Landau algorithm, which is the primary used

  7. An overview of the stereo correlation and triangulation formulations used in DICe.

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three dimensional motion in space given the image coordinates and camera calibration parameters.
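
    For orientation, a standard linear (DLT) two-view triangulation of a single point from pixel coordinates and 3x4 projection matrices is sketched below; DICe's own stereo correlation and triangulation formulation is documented in the report itself and is not reproduced here.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates of the match.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous solution of A X = 0
    return X[:3] / X[3]
```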

  8. Moment analysis of the Delaunay tessellation field estimator

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2009-01-01

    The Campbell–Mecke theorem is used to derive explicit expressions for the mean and variance of Schaap and Van de Weygaert’s Delaunay tessellation field estimator. Special attention is paid to Poisson processes.
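
    For context, the Delaunay tessellation field estimator assigns each sample point a density inversely proportional to the total volume (area, in two dimensions) of its incident Delaunay simplices. A minimal 2D sketch follows, assuming unit masses unless weights are supplied; the moment analysis of the paper is not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

def dtfe_density_2d(points, masses=None):
    """DTFE density at each sample point: rho_i = (d+1) * m_i / area of point i's star."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    masses = np.ones(n) if masses is None else np.asarray(masses, dtype=float)
    star_area = np.zeros(n)
    for ia, ib, ic in Delaunay(points).simplices:
        a, b, c = points[ia], points[ib], points[ic]
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        star_area[[ia, ib, ic]] += area
    return 3.0 * masses / star_area      # d + 1 = 3 in two dimensions
```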

  9. A grand-canonical ensemble of randomly triangulated surfaces

    International Nuclear Information System (INIS)

    Jurkiewicz, J.; Krzywicki, A.; Petersson, B.

    1986-01-01

    An algorithm is presented generating the grand-canonical ensemble of discrete, randomly triangulated Polyakov surfaces. The algorithm is used to calculate the susceptibility exponent, which controls the existence of the continuum limit of the considered model, for the dimensionality of the embedding space ranging from 0 to 20. (orig.)

  10. Invariants of the Dirichlet/Voronoi Tilings of Hyperspheres in Rn and their Dual Delone/Delaunay Graphs

    DEFF Research Database (Denmark)

    Antón Castro, Francesc/François

    2015-01-01

    In this paper, we are addressing the geometric and topological invariants that arise in the exact computation of the Delone (Delaunay) graph and the Dirichlet/Voronoi tiling of N-dimensional hyperspheres using Ritt-Wu's algorithm. Our main contribution is a methodology for automated derivation...... of geometric and topological invariants of the Dirichlet tiling of (N + 1)-dimensional hyperspheres and its dual Delone graph from the invariants of the Dirichlet tiling of N-dimensional hyperspheres and its dual Delone graph (starting from N = 3)....

  11. Invariants of the dirichlet/voronoi tilings of hyperspheres in RN and their dual delone/delaunay graphs

    DEFF Research Database (Denmark)

    Anton, François

    In this paper, we are addressing the geometric and topological invariants that arise in the exact computation of the Delone (Delaunay) graph and the Dirichlet/Voronoi tiling of n-dimensional hyperspheres using Ritt-Wu's algorithm. Our main contribution is a methodology for automated derivation...... of geometric and topological invariants of the Dirichlet tiling of (N + 1)-dimensional hyperspheres and its dual Delone graph from the invariants of the Dirichlet tiling of N-dimensional hyperspheres and its dual Delone graph (starting from N = 3)....

  12. AUTOMATIC MESH GENERATION OF 3-D GEOMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    刘剑飞

    2003-01-01

    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate mesh for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by ball-packing method, (2) connect nodes to generate mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.
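
    As a small illustration of step (2), the Delaunay connection of pre-existing nodes into tetrahedra can be done with SciPy; the random points below merely stand in for nodes produced by ball packing, and the boundary-recovery and mesh-improvement steps are not shown.

        import numpy as np
        from scipy.spatial import Delaunay

        nodes = np.random.rand(200, 3)              # stand-in for ball-packing nodes
        tets = Delaunay(nodes)                      # 3-D Delaunay triangulation
        print(tets.simplices.shape)                 # (number of tetrahedra, 4) vertex indices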

  13. Triangulation Made Easy

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P

    2009-12-23

    We describe a simple and efficient algorithm for two-view triangulation of 3D points from approximate 2D matches based on minimizing the L2 reprojection error. Our iterative algorithm improves on the one by Kanatani et al. by ensuring that in each iteration the epipolar constraint is satisfied. In the case where the two cameras are pointed in the same direction, the method provably converges to an optimal solution in exactly two iterations. For more general camera poses, two iterations are sufficient to achieve convergence to machine precision, which we exploit to devise a fast, non-iterative method. The resulting algorithm amounts to little more than solving a quadratic equation, and involves a fixed, small number of simple matrix-vector operations and no conditional branches. We demonstrate that the method computes solutions that agree to very high precision with those of Hartley and Sturm's original polynomial method, while achieving higher numerical stability and 1-4 orders of magnitude greater speed.

  14. A general and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale

    Science.gov (United States)

    Decherchi, Sergio; Rocchia, Walter

    2013-01-01

    We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection of the highly symmetric structure of fullerene, which fails when attempted by MSMS and has problems in EDTSurf. In terms of timings, NanoShaper builds the Skin surface three times faster than the single threaded version in Lindow et al. on a 100,000 atoms protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver. Its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after Hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a middle-range workstation. PMID:23577073

  15. A flocking algorithm for multi-agent systems with connectivity preservation under hybrid metric-topological interactions.

    Science.gov (United States)

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-01-01

    In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.

  16. A flocking algorithm for multi-agent systems with connectivity preservation under hybrid metric-topological interactions.

    Directory of Open Access Journals (Sweden)

    Chenlong He

    Full Text Available In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.
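
    A plausible reading of the range-limited Delaunay graph described above is sketched below in Python: keep a Delaunay edge only if its length does not exceed the sensing range R. This illustrates the graph construction only, not the authors' flocking controller.

        import numpy as np
        from scipy.spatial import Delaunay

        def range_limited_delaunay_neighbors(positions, R):
            tri = Delaunay(positions)
            neighbors = {i: set() for i in range(len(positions))}
            for simplex in tri.simplices:           # each Delaunay triangle
                for a in simplex:
                    for b in simplex:
                        if a < b and np.linalg.norm(positions[a] - positions[b]) <= R:
                            neighbors[a].add(b)
                            neighbors[b].add(a)
            return neighbors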

  17. Constant-work-space algorithms for geometric problems

    Directory of Open Access Journals (Sweden)

    Tetsuo Asano

    2011-07-01

    Full Text Available Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings by slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of a Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete (Jakoby and Tantau 2003), this constrained problem can be solved in quadratic time using only constant work-space.
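
    To make the constant-work-space idea concrete, here is a deliberately naive Python sketch that enumerates the Delaunay triangles of a planar point set (assumed in general position) using only a handful of scalar variables beyond the read-only input. The paper's algorithms are far more refined; the point of the sketch is merely that no auxiliary data structure is needed.

        import numpy as np

        def in_circumcircle(a, b, c, p):
            # incircle predicate: positive if p lies inside the circumcircle of the ccw triangle (a, b, c)
            mat = np.array([
                [a[0] - p[0], a[1] - p[1], (a[0] - p[0])**2 + (a[1] - p[1])**2],
                [b[0] - p[0], b[1] - p[1], (b[0] - p[0])**2 + (b[1] - p[1])**2],
                [c[0] - p[0], c[1] - p[1], (c[0] - p[0])**2 + (c[1] - p[1])**2]])
            return np.linalg.det(mat) > 0

        def delaunay_triangles(points):
            n = len(points)
            for i in range(n):
                for j in range(i + 1, n):
                    for k in range(j + 1, n):
                        a, b, c = points[i], points[j], points[k]
                        # orient the candidate triangle counter-clockwise
                        if (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) < 0:
                            b, c = c, b
                        if not any(in_circumcircle(a, b, c, points[m])
                                   for m in range(n) if m not in (i, j, k)):
                            yield (i, j, k)         # empty circumcircle: a Delaunay triangle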

  18. A computational geometry approach to pore network construction for granular packings

    Science.gov (United States)

    van der Linden, Joost H.; Sufian, Adnan; Narsilio, Guillermo A.; Russell, Adrian R.; Tordesillas, Antoinette

    2018-03-01

    Pore network construction provides the ability to characterize and study the pore space of inhomogeneous and geometrically complex granular media in a range of scientific and engineering applications. Various approaches to the construction have been proposed, however subtle implementational details are frequently omitted, open access to source code is limited, and few studies compare multiple algorithms in the context of a specific application. This study presents, in detail, a new pore network construction algorithm, and provides a comprehensive comparison with two other, well-established Delaunay triangulation-based pore network construction methods. Source code is provided to encourage further development. The proposed algorithm avoids the expensive non-linear optimization procedure in existing Delaunay approaches, and is robust in the presence of polydispersity. Algorithms are compared in terms of structural, geometrical and advanced connectivity parameters, focusing on the application of fluid flow characteristics. Sensitivity of the various networks to permeability is assessed through network (Stokes) simulations and finite-element (Navier-Stokes) simulations. Results highlight strong dependencies of pore volume, pore connectivity, throat geometry and fluid conductance on the degree of tetrahedra merging and the specific characteristics of the throats targeted by the merging algorithm. The paper concludes with practical recommendations on the applicability of the three investigated algorithms.

  19. The use of triangulation in qualitative research.

    Science.gov (United States)

    Carter, Nancy; Bryant-Lukosius, Denise; DiCenso, Alba; Blythe, Jennifer; Neville, Alan J

    2014-09-01

    Triangulation refers to the use of multiple methods or data sources in qualitative research to develop a comprehensive understanding of phenomena (Patton, 1999). Triangulation also has been viewed as a qualitative research strategy to test validity through the convergence of information from different sources. Denzin (1978) and Patton (1999) identified four types of triangulation: (a) method triangulation, (b) investigator triangulation, (c) theory triangulation, and (d) data source triangulation. The current article will present the four types of triangulation followed by a discussion of the use of focus groups (FGs) and in-depth individual (IDI) interviews as an example of data source triangulation in qualitative inquiry.

  20. Applications of Voronoi and Delaunay Diagrams in the solution of the geodetic boundary value problem

    Directory of Open Access Journals (Sweden)

    C. A. B. Quintero

    Full Text Available Voronoi and Delaunay structures are presented as discretization tools to be used in numerical surface integration aiming at the computation of solutions to geodetic problems, when under the integral there is a non-analytical function (e.g., gravity anomaly and height). In the Voronoi approach, the target area is partitioned into polygons which contain the observed points, and no interpolation is necessary, only the original data is used. In the Delaunay approach, the observed points are vertices of triangular cells and the value for a cell is interpolated at its barycenter. If the amount and distribution of the observed points are adequate, a gridding operation is not required and the numerical surface integration is carried out point-wise. Even when the amount and distribution of the observed points are not enough, the structures of Voronoi and Delaunay can combine grid with observed points in order to preserve the integrity of the original information. Both schemes are applied to the computation of Stokes' integral, the terrain correction, the indirect effect and the gradient of the gravity anomaly, in the State of Rio de Janeiro, Brazil.
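
    The Delaunay variant of the integration scheme can be sketched in a few lines of Python: each triangle contributes its area times the integrand interpolated at its barycentre (the mean of the vertex values for a linear interpolant). The data below are placeholders; the Voronoi variant and the geodetic kernels are not reproduced here.

        import numpy as np
        from scipy.spatial import Delaunay

        def integrate_over_delaunay(points_xy, values):
            tri = Delaunay(points_xy)
            total = 0.0
            for simplex in tri.simplices:
                p = points_xy[simplex]              # 3 x 2 array of triangle vertices
                area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                                 - (p[1, 1] - p[0, 1]) * (p[2, 0] - p[0, 0]))
                total += area * values[simplex].mean()   # linear interpolant at the barycentre
            return total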

  1. Depth Measurement Based on Infrared Coded Structured Light

    Directory of Open Access Journals (Sweden)

    Tong Jia

    2014-01-01

    Full Text Available Depth measurement is a challenging problem in computer vision research. In this study, we first design a new grid pattern and develop a sequence coding and decoding algorithm to process the pattern. Second, we propose a linear fitting algorithm to derive the linear relationship between the object depth and pixel shift. Third, we obtain depth information on an object based on this linear relationship. Moreover, 3D reconstruction is implemented based on the Delaunay triangulation algorithm. Finally, we utilize the regularity of the error curves to correct the system errors and improve the measurement accuracy. The experimental results show that the accuracy of depth measurement is related to the step length of the moving object.

  2. Triangulated categories (AM-148)

    CERN Document Server

    Neeman, Amnon

    2014-01-01

    The first two chapters of this book offer a modern, self-contained exposition of the elementary theory of triangulated categories and their quotients. The simple, elegant presentation of these known results makes these chapters eminently suitable as a text for graduate students. The remainder of the book is devoted to new research, providing, among other material, some remarkable improvements on Brown's classical representability theorem. In addition, the author introduces a class of triangulated categories--the "well generated triangulated categories"--and studies their properties. This

  3. Random discrete Morse theory and a new library of triangulations

    DEFF Research Database (Denmark)

    Benedetti, Bruno; Lutz, Frank Hagen

    2014-01-01

    We introduce random discrete Morse theory as a computational scheme to measure the complexity of a triangulation. The idea is to try to quantify the frequency of discrete Morse matchings with few critical cells. Our measure will depend on the topology of the space, but also on how nicely the space is triangulated. The scheme we propose looks for optimal discrete Morse functions with an elementary random heuristic. Despite its naiveté, this approach turns out to be very successful even in the case of huge inputs. In our view, the existing libraries of examples in computational topology are “too easy” for testing algorithms based on discrete Morse theory. We propose a new library containing more complicated (and thus more meaningful) test examples.

  4. ConnectViz: Accelerated Approach for Brain Structural Connectivity Using Delaunay Triangulation.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2016-03-01

    nodes and the edges. The framework is very efficient in providing greater interactivity as a way of representing the nodes and the edges intuitively, all achieved at a considerably interactive speed for instantaneous mapping of the datasets' features. Uniquely, the connectomic algorithm performed remarkably fast with normal hardware requirement specifications.

  5. Visualization of 2-D and 3-D fields from its value in a finite number of points

    International Nuclear Information System (INIS)

    Dari, E.A.; Venere, M.J.

    1990-01-01

    This work describes a method for the visualization of two- and three-dimensional fields, given their values at a finite number of points. These data can originate from experimental measurements, numerical results, or any other source. For the field interpolation, the space is divided into simplices (triangles or tetrahedrons), using the Watson algorithm to obtain the Delaunay triangulation. Inside each simplex, linear interpolation is assumed. The visualization is accomplished by means of Finite Elements post-processors, capable of handling unstructured meshes, which were also developed by the authors. (Author)
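
    The interpolation scheme described here (Delaunay simplices with linear interpolation inside each) is readily reproduced with SciPy, which builds the triangulation internally; the sketch below is generic and does not use the Watson algorithm or the authors' post-processors.

        import numpy as np
        from scipy.interpolate import LinearNDInterpolator

        pts = np.random.rand(500, 2)                   # measurement locations (placeholder)
        vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])   # field values at those locations
        field = LinearNDInterpolator(pts, vals)        # Delaunay + barycentric interpolation
        print(field(0.5, 0.5))                         # interpolated field value at a query point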

  6. Algorithm that mimics human perceptual grouping of dot patterns

    NARCIS (Netherlands)

    Papari, G.; Petkov, N.; Gregorio, MD; DiMaio,; Frucci, M; Musio, C

    2005-01-01

    We propose an algorithm that groups points similarly to how human observers do. It is simple, totally unsupervised and able to find clusters of complex and not necessarily convex shape. Groups are identified as the connected components of a Reduced Delaunay Graph (RDG) that we define in this paper.
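
    One simple way to realize this kind of grouping is to take the Delaunay triangulation, prune edges judged too long, and return the connected components; the pruning rule below (edges longer than a multiple of the mean edge length) is only a placeholder for the paper's Reduced Delaunay Graph criterion.

        import numpy as np
        from scipy.spatial import Delaunay
        from scipy.sparse import lil_matrix
        from scipy.sparse.csgraph import connected_components

        def cluster_points(points, factor=2.0):
            tri = Delaunay(points)
            edges = set()
            for s in tri.simplices:
                for a, b in ((s[0], s[1]), (s[1], s[2]), (s[0], s[2])):
                    edges.add((min(a, b), max(a, b)))
            lengths = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}
            cutoff = factor * np.mean(list(lengths.values()))
            adj = lil_matrix((len(points), len(points)))
            for (a, b), d in lengths.items():
                if d <= cutoff:                     # keep only "short" Delaunay edges
                    adj[a, b] = adj[b, a] = 1
            n_groups, labels = connected_components(adj, directed=False)
            return n_groups, labels                 # cluster count and per-point labels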

  7. Investigation of point triangulation methods for optimality and performance in Structure from Motion systems

    DEFF Research Database (Denmark)

    Structure from Motion (SFM) systems are composed of cameras and structure in the form of 3D points and other features. It is most often that the structure components outnumber the cameras by a great margin. It is not uncommon to have a configuration with 3 cameras observing more than 500 3D points...... an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in a SFM system is based on the minimization of some norm of the reprojection error between...

  8. Method for Optimal Sensor Deployment on 3D Terrains Utilizing a Steady State Genetic Algorithm with a Guided Walk Mutation Operator Based on the Wavelet Transform

    Science.gov (United States)

    Unaldi, Numan; Temel, Samil; Asari, Vijayan K.

    2012-01-01

    One of the most critical issues of Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors in order to achieve maximum coverage on a terrain. The optimal sensor deployment which enables one to minimize the consumed energy, communication time and manpower for the maintenance of the network has attracted interest with the increased number of studies conducted on the subject in the last decade. Most of the studies in the literature today are proposed for two-dimensional (2D) surfaces; however, real world sensor deployments often arise in three-dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains, in which the sensor movements are carried out within the mutation phase of the genetic algorithms (GAs), is proposed. The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN via deploying a limited number of sensors on a 3D surface by utilizing a probabilistic sensing model and Bresenham's line-of-sight (LOS) algorithm. In addition, the method followed in this paper is novel to the literature, the performance of the proposed algorithm is compared with the Delaunay Triangulation (DT) method as well as a standard genetic-algorithm-based method, and the results reveal that the proposed method is a more powerful and more successful method for sensor deployment on 3D terrains. PMID:22666078
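
    The line-of-sight ingredient mentioned above can be illustrated with a short Bresenham-style visibility test on a height grid; the grid, heights, sensor height and cell indices below are placeholders and this is not the authors' implementation.

        def bresenham(x0, y0, x1, y1):
            # integer grid cells visited by the line from (x0, y0) to (x1, y1)
            cells, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
            sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
            err = dx + dy
            while True:
                cells.append((x0, y0))
                if (x0, y0) == (x1, y1):
                    return cells
                e2 = 2 * err
                if e2 >= dy:
                    err += dy
                    x0 += sx
                if e2 <= dx:
                    err += dx
                    y0 += sy

        def has_line_of_sight(height, sensor, target, sensor_h=1.0):
            (x0, y0), (x1, y1) = sensor, target
            path = bresenham(x0, y0, x1, y1)
            z0 = height[x0][y0] + sensor_h          # sight line start (sensor elevation)
            z1 = height[x1][y1]                     # sight line end (target elevation)
            for i, (x, y) in enumerate(path[1:-1], start=1):
                t = i / (len(path) - 1)             # fraction of the way along the path
                if height[x][y] > z0 + t * (z1 - z0):
                    return False                    # terrain blocks the ray
            return True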

  9. Nonequilibrium phase transition in directed small-world-Voronoi-Delaunay random lattices

    International Nuclear Information System (INIS)

    Lima, F.W.S.

    2016-01-01

    On directed small-world-Voronoi-Delaunay random lattices in two dimensions with quenched connectivity disorder we study the critical properties of the dynamics evolution of public opinion in social influence networks using a simple spin-like model. The system is treated by applying Monte Carlo simulations. We show that directed links on these random lattices may lead to phase diagram with first- and second-order social phase transitions out of equilibrium. (paper)

  10. Evolutionary computation applied to the reconstruction of 3-D surface topography in the SEM.

    Science.gov (United States)

    Kodama, Tetsuji; Li, Xiaoyuan; Nakahira, Kenji; Ito, Dai

    2005-10-01

    A genetic algorithm has been applied to the line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method solves the topographical surface reconstruction problem as one of combinatorial optimization. To extend this optimization approach for three-dimensional (3-D) surface topography, this paper considers the use of a string coding where a 3-D surface topography is represented by a set of coordinates of vertices. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data to capture the fundamental features of the surface being probed by an electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing algorithms. Experimental results on SE images are presented.

  11. On the stretch factor of convex Delaunay graphs

    Directory of Open Access Journals (Sweden)

    Prosenjit Bose

    2010-06-01

    Full Text Available Let C be a compact and convex set in the plane that contains the origin in its interior, and let S be a finite set of points in the plane. The Delaunay graph DG_C(S) of S is defined to be the dual of the Voronoi diagram of S with respect to the convex distance function defined by C. We prove that DG_C(S) is a t-spanner for S, for some constant t that depends only on the shape of the set C. Thus, for any two points p and q in S, the graph DG_C(S) contains a path between p and q whose Euclidean length is at most t times the Euclidean distance between p and q.

  12. Mixed Methods, Triangulation, and Causal Explanation

    Science.gov (United States)

    Howe, Kenneth R.

    2012-01-01

    This article distinguishes a disjunctive conception of mixed methods/triangulation, which brings different methods to bear on different questions, from a conjunctive conception, which brings different methods to bear on the same question. It then examines a more inclusive, holistic conception of mixed methods/triangulation that accommodates…

  13. Hamiltonian Cycles on Random Eulerian Triangulations

    DEFF Research Database (Denmark)

    Guitter, E.; Kristjansen, C.; Nielsen, Jakob Langgaard

    1998-01-01

    . Considering the case n -> 0, this implies that the system of random Eulerian triangulations equipped with Hamiltonian cycles describes a c=-1 matter field coupled to 2D quantum gravity as opposed to the system of usual random triangulations equipped with Hamiltonian cycles which has c=-2. Hence, in this case...

  14. Generation of triangulated random surfaces by the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analog of the Polyakov string is considered. An algorithm is proposed which enables one to study the model by the Monte Carlo method in the grand canonical ensemble. Preliminary results on the determination of the critical index γ are presented

  15. Recent development of micro-triangulation for magnet fiducialisation

    CERN Document Server

    Vlachakis, Vasileios; Mainaud Durand, Helene; CERN. Geneva. ATS Department

    2016-01-01

    The micro-triangulation method is proposed as an alternative for magnet fiducialisation. The main objective is to measure horizontal and vertical angles to fiducial points and stretched wires, utilising theodolites equipped with cameras. This study aims to develop various methods, algorithms and software tools to enable the data acquisition and processing. In this paper, we present the first test measurement as an attempt to demonstrate the feasibility of the method and to evaluate the accuracy. The preliminary results are very promising, with accuracy always better than 20 μm for the wire position, and of about 40 μm/m for the wire orientation, compared with a coordinate measuring machine.

  16. Numerical Validation of the Delaunay Normalization and the Krylov-Bogoliubov-Mitropolsky Method

    Directory of Open Access Journals (Sweden)

    David Ortigosa

    2014-01-01

    Full Text Available A scalable second-order analytical orbit propagator programme based on modern and classical perturbation methods is being developed. As a first step in the validation and verification of part of our orbit propagator programme, we only consider the perturbation produced by zonal harmonic coefficients in the Earth’s gravity potential, so that it is possible to analyze the behaviour of the mathematical expressions involved in Delaunay normalization and the Krylov-Bogoliubov-Mitropolsky method in depth and determine their limits.

  17. Lines of landscape organisation

    DEFF Research Database (Denmark)

    Løvschal, Mette

    2015-01-01

    This paper offers a landscape analysis of the earliest linear landscape boundaries on Skovbjerg Moraine, Denmark, during the first millennium BC. Using Delaunay triangulation as well as classic distribution analyses, it demonstrates that landscape boundaries articulated already established use-pa...

  18. Comparison On Matching Methods Used In Pose Tracking For 3D Shape Representation

    Directory of Open Access Journals (Sweden)

    Khin Kyu Kyu Win

    2017-01-01

    Full Text Available In this work three different algorithms, Brute Force, Delaunay Triangulation and k-d Tree, are analyzed and compared for matching in 3D shape representation. It is intended for developing the pose tracking of moving objects in video surveillance. To determine the 3D pose of moving objects, some tracking systems may require full 3D pose estimation of arbitrarily shaped objects in real time. In order to perform 3D pose estimation in real time, each step in the tracking algorithm must be computationally efficient. This paper presents a method comparison for the computationally efficient registration of 3D shapes, including free-form surfaces. Matching of free-form surfaces is carried out using the geometric point matching algorithm ICP. Several aspects of the ICP algorithm are investigated and analyzed using a specified surface setup. The surface setup processed in this system is represented by simple geometric primitives dealing with objects of free-form shape. The considered representation is a cloud of points.
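
    A minimal single ICP iteration of the kind compared in the paper can be written with a k-d tree for nearest-neighbour matching and the standard SVD (Kabsch) solution for the rigid transform; this is a generic textbook sketch, not the specific surface setup evaluated above.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target):
            # 1) match every source point to its closest target point
            _, idx = cKDTree(target).query(source)
            matched = target[idx]
            # 2) best-fit rigid transform (Kabsch): rotation R and translation t
            mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
            H = (source - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                # guard against a reflection
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            return source @ R.T + t, R, t           # transformed source cloud, R, t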

  19. Triangulation in rewriting

    NARCIS (Netherlands)

    Oostrom, V. van; Zantema, Hans

    2012-01-01

    We introduce a process, dubbed triangulation, turning any rewrite relation into a confluent one. It is more direct than usual completion, in the sense that objects connected by a peak are directly oriented rather than their normal forms. We investigate conditions under which this process preserves

  20. Tetrahedral meshing via maximal Poisson-disk sampling

    KAUST Repository

    Guo, Jianwei; Yan, Dongming; Chen, Li; Zhang, Xiaopeng; Deussen, Oliver; Wonka, Peter

    2016-01-01

    -distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform

  1. A TQFT of Turaev-Viro type on shaped triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Kashaev, Rinat [Geneva Univ. (Switzerland); Luo, Feng [Rutgers Univ., Piscataway, NJ (United States). Dept. of Mathematics; Vartanov, Grigory [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2012-10-15

    A shaped triangulation is a finite triangulation of an oriented pseudo three manifold where each tetrahedron carries dihedral angles of an ideal hyperbolic tetrahedron. To each shaped triangulation, we associate a quantum partition function in the form of an absolutely convergent state integral which is invariant under shaped 3-2 Pachner moves and invariant with respect to shape gauge transformations generated by total dihedral angles around internal edges through the Neumann-Zagier Poisson bracket. Similarly to Turaev-Viro theory, the state variables live on edges of the triangulation but take their values on the whole real axis. The tetrahedral weight functions are composed of three hyperbolic gamma functions in a way that they enjoy a manifest tetrahedral symmetry. We conjecture that for shaped triangulations of closed 3-manifolds, our partition function is twice the absolute value squared of the partition function of Teichmüller TQFT defined by Andersen and Kashaev. This is similar to the known relationship between the Turaev-Viro and the Witten-Reshetikhin-Turaev invariants of three manifolds. We also discuss interpretations of our construction in terms of three-dimensional supersymmetric field theories related to triangulated three-dimensional manifolds.

  2. A TQFT of Turaev-Viro type on shaped triangulations

    International Nuclear Information System (INIS)

    Kashaev, Rinat; Luo, Feng

    2012-10-01

    A shaped triangulation is a finite triangulation of an oriented pseudo three manifold where each tetrahedron carries dihedral angles of an ideal hyperbolic tetrahedron. To each shaped triangulation, we associate a quantum partition function in the form of an absolutely convergent state integral which is invariant under shaped 3-2 Pachner moves and invariant with respect to shape gauge transformations generated by total dihedral angles around internal edges through the Neumann-Zagier Poisson bracket. Similarly to Turaev-Viro theory, the state variables live on edges of the triangulation but take their values on the whole real axis. The tetrahedral weight functions are composed of three hyperbolic gamma functions in a way that they enjoy a manifest tetrahedral symmetry. We conjecture that for shaped triangulations of closed 3-manifolds, our partition function is twice the absolute value squared of the partition function of Teichmüller TQFT defined by Andersen and Kashaev. This is similar to the known relationship between the Turaev-Viro and the Witten-Reshetikhin-Turaev invariants of three manifolds. We also discuss interpretations of our construction in terms of three-dimensional supersymmetric field theories related to triangulated three-dimensional manifolds.

  3. Refueling Stop Activity Detection and Gas Station Extraction Using Crowdsourcing Vehicle Trajectory Data

    Directory of Open Access Journals (Sweden)

    YANG Wei

    2017-07-01

    Full Text Available In view of the deficiencies of current methods for surveying gas stations, an approach is proposed to extract gas stations from vehicle traces. Firstly, the spatio-temporal characteristics of individual and collective refueling behavior in trajectories are analyzed in terms of movement features and geometric patterns. Secondly, based on a Stop/Move model, a velocity-sequence linear clustering algorithm is proposed to extract refueling stop tracks. Finally, methods including Delaunay triangulation, Fourier shape recognition and semantic constraints are used to identify and extract gas stations. An experiment using 7 days of taxi GPS traces in Beijing verified the method: 482 gas stations were extracted, with a correctness rate of 93.1%.

  4. Aerial Triangulation Close-range Images with Dual Quaternion

    Directory of Open Access Journals (Sweden)

    SHENG Qinghong

    2015-05-01

    Full Text Available A new method for the aerial triangulation of close-range images based on dual quaternions is presented. A dual quaternion is used to represent the screw motion of each image bundle in space: its real part represents the angular elements of all the bundles in the close-range network, while the real and dual parts together represent the linear elements. Finally, an aerial triangulation adjustment model based on dual quaternions is established, and the elements of interior and exterior orientation and the object coordinates of the ground points are calculated. Real images and simulated images with large attitude angles are selected for the aerial triangulation experiments. The experimental results show that the new method for the aerial triangulation of close-range images based on dual quaternions can obtain higher accuracy.

  5. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    Science.gov (United States)

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  6. Strongly minimal triangulations of (S × S )#3 and (S S

    Indian Academy of Sciences (India)

    We show that there are exactly 12 such triangulations up to isomorphism, 10 of which are orientable. Keywords: stacked sphere; tight neighbourly triangulation; minimal triangulation.

  7. Generation of triangulated random surfaces by means of the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analogue of the Polyakov string is considered in the work. An algorithm is proposed which enables one to study the model by means of the Monte Carlo method in the grand canonical ensemble. Preliminary results are presented on the evaluation of the critical index γ

  8. Approximation and geometric modeling with simplex B-splines associated with irregular triangles

    NARCIS (Netherlands)

    Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.

    1991-01-01

    Bivariate quadratic simplicial B-splines defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the

  9. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.

    Science.gov (United States)

    Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J

    2016-03-01

    The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms-one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV-using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs.

  10. Observation, innovation and triangulation

    DEFF Research Database (Denmark)

    Hetmar, Vibeke

    2007-01-01

    on experiences from a pilot project in three different classrooms methodological possibilities and problems are presented and discussed: 1) educational criticism, including the concepts of positions, perspectives and connoisseurship, 2) classroom observations and 3) triangulation as a methodological tool....

  11. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    Home; Journals; Proceedings – Mathematical Sciences; Volume 115; Issue 3 ... A triangulation of a connected closed surface is called degree-regular if each of its vertices have the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  12. Looseness and Independence Number of Triangulations on Closed Surfaces

    Directory of Open Access Journals (Sweden)

    Nakamoto Atsuhiro

    2016-08-01

    Full Text Available The looseness of a triangulation G on a closed surface F², denoted by ξ(G), is defined as the minimum number k such that for any surjection c : V(G) → {1, 2, . . . , k + 3}, there is a face uvw of G with c(u), c(v) and c(w) all distinct. We shall bound ξ(G) for triangulations G on closed surfaces by the independence number of G, denoted by α(G). In particular, for a triangulation G on the sphere, we have

  13. The DEEP2 Galaxy Redshift Survey: The Voronoi-Delaunay Method Catalog of Galaxy Groups

    Energy Technology Data Exchange (ETDEWEB)

    Gerke, Brian F.; /UC, Berkeley; Newman, Jeffrey A.; /LBNL, NSD; Davis, Marc; /UC, Berkeley /UC, Berkeley, Astron.Dept.; Marinoni, Christian; /Brera Observ.; Yan, Renbin; Coil, Alison L.; Conroy, Charlie; Cooper, Michael C.; /UC, Berkeley, Astron.Dept.; Faber, S.M.; /Lick Observ.; Finkbeiner, Douglas P.; /Princeton U. Observ.; Guhathakurta, Puragra; /Lick Observ.; Kaiser, Nick; /Hawaii U.; Koo, David C.; Phillips, Andrew C.; /Lick Observ.; Weiner, Benjamin J.; /Maryland U.

    2012-02-14

    We use the first 25% of the DEEP2 Galaxy Redshift Survey spectroscopic data to identify groups and clusters of galaxies in redshift space. The data set contains 8370 galaxies with confirmed redshifts in the range 0.7 ≤ z ≤ 1.4, over one square degree on the sky. Groups are identified using an algorithm (the Voronoi-Delaunay Method) that has been shown to accurately reproduce the statistics of groups in simulated DEEP2-like samples. We optimize this algorithm for the DEEP2 survey by applying it to realistic mock galaxy catalogs and assessing the results using a stringent set of criteria for measuring group-finding success, which we develop and describe in detail here. We find in particular that the group-finder can successfully identify ~78% of real groups and that ~79% of the galaxies that are true members of groups can be identified as such. Conversely, we estimate that ~55% of the groups we find can be definitively identified with real groups and that ~46% of the galaxies we place into groups are interloper field galaxies. Most importantly, we find that it is possible to measure the distribution of groups in redshift and velocity dispersion, n(σ, z), to an accuracy limited by cosmic variance, for dispersions greater than 350 km s⁻¹. We anticipate that such measurements will allow strong constraints to be placed on the equation of state of the dark energy in the future. Finally, we present the first DEEP2 group catalog, which assigns 32% of the galaxies to 899 distinct groups with two or more members, 153 of which have velocity dispersions above 350 km s⁻¹. We provide locations, redshifts and properties for this high-dispersion subsample. This catalog represents the largest sample to date of spectroscopically detected groups at z ~ 1.

  14. Label triangulation

    International Nuclear Information System (INIS)

    May, R.P.

    1983-01-01

    Label Triangulation (LT) with neutrons allows the investigation of the quaternary structure of biological multicomponent complexes under native conditions. Provided that the complex can be fully separated into and reconstituted from its single - protonated and deuterated - components, small angle neutron scattering (SANS) can give selective information on shapes and pair distances of these components. Following basic geometrical rules, the spatial arrangement of the components can be reconstructed from these data. LT has so far been successfully applied to the small and large ribosomal subunits and the transcriptase of E. coli. (author)

  15. Los Triángulos de Delaunay como Procesamiento Previo para Extractores Difusos

    Directory of Open Access Journals (Sweden)

    Manuel Ramírez Flores

    2014-01-01

    Full Text Available The biometric information extracted from fingerprints tends to differ with each acquisition, given the uncertainty in the measurements and the presence of noise in the samples, which can cause the code words generated inside a fuzzy extractor to contain more errors than the error-correcting code can handle. As a consequence, fingerprints of the same person may be classified as non-matching during verification, or fingerprints of different individuals may appear too similar. To mitigate these effects and to overcome the difficulties of fingerprint pre-alignment, the use of Delaunay triangles was proposed, which provides local structural stability to the spatial representation of the biometric information. In that proposal, the minutiae of the fingerprint are used as vertices of the triangulations, and the resulting network is tolerant to distortions, rotations and translations. However, that proposal treats the spatial distribution of fingerprint minutiae as non-degenerate and therefore does not specify the thresholds or criteria needed to form such triangulations, which affects the performance of the fuzzy extractors. On this basis, this article presents the results obtained when testing the formation of Delaunay triangulations on fingerprint images, applying geometric thresholds and criteria, counting the matching triangles between the resulting structures, and determining the thresholds that maximize these matches.
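
    A minimal sketch of the preprocessing step discussed above: triangulate the minutiae with a Delaunay triangulation and describe each triangle by rotation- and translation-invariant quantities (here simply its sorted side lengths, with an optional maximum-side threshold). The thresholds, criteria and matching logic of the article are not reproduced.

        import numpy as np
        from scipy.spatial import Delaunay

        def delaunay_triangle_features(minutiae_xy, max_side=None):
            tri = Delaunay(minutiae_xy)
            features = []
            for a, b, c in tri.simplices:
                p, q, r = minutiae_xy[a], minutiae_xy[b], minutiae_xy[c]
                sides = sorted([np.linalg.norm(p - q),
                                np.linalg.norm(q - r),
                                np.linalg.norm(r - p)])
                if max_side is None or sides[-1] <= max_side:   # optional geometric threshold
                    features.append(sides)
            return np.array(features)               # one sorted side-length triple per triangle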

  16. Mobile 3D Viewer Supporting RFID System

    International Nuclear Information System (INIS)

    Kim, J. J.; Yang, S. W.; Choi, Y.

    2007-01-01

    As hardware capabilities of mobile devices are being rapidly enhanced, applications based upon mobile devices are also being developed in wider areas. In this paper, a prototype mobile 3D viewer with the object identification through RFID system is presented. To visualize 3D engineering data such as CAD data, we need a process to compute triangulated data from boundary based surface like B-rep solid or trimmed surfaces. Since existing rendering engines on mobile devices do not provide triangulation capability, mobile 3D programs have focused only on an efficient handling with pre-tessellated geometry. We have developed a light and fast triangulation process based on constrained Delaunay triangulation suitable for mobile devices in the previous research. This triangulation software is used as a core for the mobile 3D viewer on a PDA with RFID system that may have potentially wide applications in many areas

  17. Mobile 3D Viewer Supporting RFID System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J J; Yang, S W; Choi, Y [Chungang Univ., Seoul (Korea, Republic of)

    2007-07-01

    As hardware capabilities of mobile devices are being rapidly enhanced, applications based upon mobile devices are also being developed in wider areas. In this paper, a prototype mobile 3D viewer with the object identification through RFID system is presented. To visualize 3D engineering data such as CAD data, we need a process to compute triangulated data from boundary based surface like B-rep solid or trimmed surfaces. Since existing rendering engines on mobile devices do not provide triangulation capability, mobile 3D programs have focused only on an efficient handling with pre-tessellated geometry. We have developed a light and fast triangulation process based on constrained Delaunay triangulation suitable for mobile devices in the previous research. This triangulation software is used as a core for the mobile 3D viewer on a PDA with RFID system that may have potentially wide applications in many areas.

  18. A REST Service for Triangulation of Point Sets Using Oriented Matroids

    Directory of Open Access Journals (Sweden)

    José Antonio Valero Medina

    2014-05-01

    Full Text Available This paper describes the implementation of a prototype REST service for triangulation of point sets collected by mobile GPS receivers. The first objective of this paper is to test functionalities of an application, which exploits mobile devices’ capabilities to get data associated with their spatial location. A triangulation of a set of points provides a mechanism through which it is possible to produce an accurate representation of spatial data. Such triangulation may be used for representing surfaces by Triangulated Irregular Networks (TINs), and for decomposing complex two-dimensional spatial objects into simpler geometries. The second objective of this paper is to promote the use of oriented matroids for finding alternative solutions to spatial data processing and analysis tasks. This study focused on the particular case of the calculation of triangulations based on oriented matroids. The prototype described in this paper used a wrapper to integrate and expose several tools previously implemented in C++.

  19. On Using Particle Finite Element for Hydrodynamics Problems Solving

    Directory of Open Access Journals (Sweden)

    E. V. Davidova

    2015-01-01

    Full Text Available The aim of the present research is to develop software for the Particle Finite Element Method (PFEM) and to verify it on the model problem of viscous incompressible flow simulation in a square cavity. The Lagrangian description of the medium motion is used: the nodes of the finite element mesh move together with the fluid, which allows them to be considered as particles of the medium. Mesh cells deform during the time-stepping procedure, so it is necessary to reconstruct the mesh to keep the finite element numerical procedure stable. The meshing algorithm produces a mesh that satisfies the Delaunay criteria; it is called "the possible triangles method". This algorithm is based on the well-known Fortune method for constructing the Voronoi diagram of a set of points in the plane. A graphical representation of the possible triangles method is given. It is convenient to use a generalization of the Delaunay triangulation in order to construct meshes with polygonal cells in the case of multiple nodes lying close to the same circle. The viscous incompressible fluid flow is described by the Navier-Stokes equations and the mass conservation equation with appropriate initial and boundary conditions. A fractional steps method, which avoids non-physical oscillations of the pressure, provides the time-stepping procedure. Spatial discretization is carried out using the finite element discretization and the Bubnov-Galerkin method. For the calculation of shape functions on finite element meshes with polygonal cells,

  20. Stereo-tomography in triangulated models

    Science.gov (United States)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from the previous work to incorporate various regularization techniques into the cost function of stereo-tomography, we think extending stereo-tomography to the triangulated model will be the most straightforward way to achieve this goal. In this paper, we provided all the Fréchet derivatives of stereo-tomographic data components with respect to model components for slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinate based on the ray perturbation theory for interfaces. A sloth model representation means a sparser model representation when compared with conventional B-spline model representation. A sparser model representation leads to a smaller scale of stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving linear equations, a faster convergence rate and a lower requirement for quantity of data space. Moreover, a quantitative representation of interface strengthens the relationships among different model components, which makes the cross regularizations among these model components, such as node coordinates, scatterer coordinates and scattering angles, etc., more straightforward and easier to be implemented. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of the stereo-tomography in triangulated model. It provides a solid theoretical foundation for the real applications in the future.

  1. Lagrangian fluid dynamics using the Voronoi-Delaunay mesh

    International Nuclear Information System (INIS)

    Dukowicz, J.K.

    1981-01-01

    A Lagrangian technique for numerical fluid dynamics is described. This technique makes use of the Voronoi mesh to efficiently locate new neighbors, and it uses the dual (Delaunay) triangulation to define computational cells. This removes all topological restrictions and facilitates the solution of problems containing interfaces and multiple materials. To improve computational accuracy a mesh smoothing procedure is employed

  2. Dynamical triangulated fermionic surfaces

    International Nuclear Information System (INIS)

    Ambjoern, J.; Varsted, S.

    1990-12-01

    We perform Monte Carlo simulations of randomly triangulated random surfaces which have fermionic world-sheet scalars θ_i associated with each vertex i in addition to the usual bosonic world-sheet scalars χ_i^μ. The fermionic degrees of freedom force the internal metrics of the string to be less singular than the internal metric of the pure bosonic string. (orig.)

  3. Perceptually stable regions for arbitrary polygons.

    Science.gov (United States)

    Rocha, J

    2003-01-01

    Zou and Yan have recently developed a skeletonization algorithm of digital shapes based on a regularity/singularity analysis; they use the polygon whose vertices are the boundary pixels of the image to compute a constrained Delaunay triangulation (CDT) in order to find local symmetries and stable regions. Their method has produced good results but it is slow since its complexity depends on the number of contour pixels. This paper presents an extension of their technique to handle arbitrary polygons, not only polygons of short edges. Consequently, not only can we achieve results as good as theirs for digital images, but we can also compute skeletons of polygons of any number of edges. Since we can handle polygonal approximations of figures, the skeletons are more resilient to noise and faster to process.

  4. Triangulation, Respondent Validation, and Democratic Participation in Mixed Methods Research

    Science.gov (United States)

    Torrance, Harry

    2012-01-01

    Over the past 10 years or so the "Field" of "Mixed Methods Research" (MMR) has increasingly been exerting itself as something separate, novel, and significant, with some advocates claiming paradigmatic status. Triangulation is an important component of mixed methods designs. Triangulation has its origins in attempts to validate research findings…

  5. Measuring and Controlling Fairness of Triangulations

    KAUST Repository

    Jiang, Caigui; Gü nther, Felix; Wallner, Johannes; Pottmann, Helmut

    2016-01-01

    of fairness must take new aspects into account. We use concepts from discrete differential geometry (star-shaped Gauss images) to express fairness, and we also demonstrate how fairness can be incorporated into interactive geometric design of triangulated

  6. Path integral measure and triangulation independence in discrete gravity

    Science.gov (United States)

    Dittrich, Bianca; Steinhaus, Sebastian

    2012-02-01

    A path integral measure for gravity should also preserve the fundamental symmetry of general relativity, which is diffeomorphism symmetry. In previous work, we argued that a successful implementation of this symmetry into discrete quantum gravity models would imply discretization independence. We therefore consider the requirement of triangulation independence for the measure in (linearized) Regge calculus, which is a discrete model for quantum gravity, appearing in the semi-classical limit of spin foam models. To this end we develop a technique to evaluate the linearized Regge action associated to Pachner moves in 3D and 4D and show that it has a simple, factorized structure. We succeed in finding a local measure for 3D (linearized) Regge calculus that leads to triangulation independence. This measure factor coincides with the asymptotics of the Ponzano Regge Model, a 3D spin foam model for gravity. We furthermore discuss to which extent one can find a triangulation independent measure for 4D Regge calculus and how such a measure would be related to a quantum model for 4D flat space. To this end, we also determine the dependence of classical Regge calculus on the choice of triangulation in 3D and 4D.
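
    For reference, the (non-linearized) Regge action that such analyses start from is the standard hinge sum, written here in the usual conventions; A_h is the volume of a hinge h (an edge length in 3D, a triangle area in 4D), θ_σ(h) the interior dihedral angle at h in a simplex σ, and ε_h the deficit angle of an interior hinge:

        S_{\mathrm{Regge}} \;=\; \sum_{h} A_h \, \varepsilon_h ,
        \qquad
        \varepsilon_h \;=\; 2\pi \;-\; \sum_{\sigma \supset h} \theta_\sigma(h) .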

  7. AUTOMATIC MESH GENERATION OF 3—D GEOMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    刘剑飞

    2003-01-01

    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate meshes for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by the ball-packing method, (2) connect nodes to generate the mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.
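
    A minimal sketch of steps (1)-(2) only, with the ball-packing node creation replaced by a simple random interior sampling (an assumption for illustration); it shows how nodes can be connected by a 3-D Delaunay triangulation with SciPy, not the paper's actual mesh generator.

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(0)
        nodes = rng.uniform(0.0, 1.0, size=(500, 3))   # stand-in for ball-packing nodes

        mesh = Delaunay(nodes)            # step (2): 3-D Delaunay triangulation
        tets = mesh.simplices             # (n_tet, 4) array of vertex indices

        # Step (3), boundary retrieval, would keep the triangular faces belonging to
        # exactly one tetrahedron; step (4) would then smooth/improve the mesh.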

  8. Triangulation in Friedmann's cosmological model

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    1977-01-01

    In Friedmann's model, physical 3-space has a curvature K = constant. In the cases of greatest interest (K different from 0), triangulation for the measurement of great distances should be based on non-Euclidean geometries: Riemannian (or doubly elliptic) geometry for a closed universe and Bolyai-Lobachevsky (or hyperbolic) geometry for an open universe.

  9. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    Science.gov (United States)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  10. Quantum triangulations. Moduli spaces, strings, and quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Carfora, Mauro; Marzouli, Annalisa [Univ. degli Studi di Pavia (Italy). Dipt. Fisica Nucleare e Teorica; Istituto Nazionale di Fisica Nucleare e Teorica, Pavia (Italy)

    2012-07-01

    Research on polyhedral manifolds often points to unexpected connections between very distinct aspects of Mathematics and Physics. In particular, triangulated manifolds play quite a distinguished role in such settings as Riemann moduli space theory, strings and quantum gravity, topological quantum field theory, condensed matter physics, and critical phenomena. Not only do they provide a natural discrete analogue to the smooth manifolds on which physical theories are typically formulated, but their appearance is rather often a consequence of an underlying structure which naturally calls into play non-trivial aspects of representation theory, of complex analysis and topology in a way which makes manifest the basic geometric structures of the physical interactions involved. Yet, in most of the existing literature, triangulated manifolds are still merely viewed as a convenient discretization of a given physical theory to make it more amenable for numerical treatment. The motivation for these lecture notes is thus to provide an approachable introduction to this topic, emphasizing the conceptual aspects, and probing, through a set of case studies, the connection between triangulated manifolds and quantum physics in depth. This volume addresses applied mathematicians and theoretical physicists working in the field of quantum geometry and its applications. (orig.)

  11. Gaussian vector fields on triangulated surfaces

    DEFF Research Database (Denmark)

    Ipsen, John H

    2016-01-01

    proven to be very useful to resolve the complex interplay between in-plane ordering of membranes and membrane conformations. In the present work we have developed a procedure for realistic representations of Gaussian models with in-plane vector degrees of freedoms on a triangulated surface. The method...

  12. Accuracy enhancement of point triangulation probes for linear displacement measurement

    Science.gov (United States)

    Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun

    2000-03-01

    Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, there are several factors that must be taken into account in order to obtain high accuracy and reliability; measurement errors from inclinations of an object surface, probe signal fluctuations generated by speckle effects, power variation of a light source, electronic noises, and so on. In this paper, we propose a novel signal processing algorithm, named as EASDF (expanded average square difference function), for a newly designed PTB which is composed of an incoherent source (LED), a line scan array detector, a specially selected diffuse reflecting surface, and several optical components. The EASDF, which is a modified correlation function, is able to calculate displacement between the probe and the object surface effectively even if there are inclinations, power fluctuations, and noises.
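
    The EASDF itself is not spelled out in this record; as a hedged illustration of the underlying idea, the sketch below locates a spot displacement on a line-scan signal by minimizing the plain average square difference function (ASDF), the correlation-type score that the EASDF modifies.

        import numpy as np

        def asdf_shift(reference, signal, max_lag):
            """Return the integer lag that minimizes the mean squared difference."""
            lags = list(range(-max_lag, max_lag + 1))
            scores = [np.mean((reference - np.roll(signal, k)) ** 2) for k in lags]
            return lags[int(np.argmin(scores))]

        # Example: a Gaussian spot profile shifted by 7 detector pixels plus noise.
        x = np.arange(512)
        ref = np.exp(-0.5 * ((x - 200) / 5.0) ** 2)
        probe = np.roll(ref, 7) + 0.01 * np.random.default_rng(1).normal(size=512)
        print(asdf_shift(ref, probe, 50))   # recovers the shift (reported as lag -7)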

  13. The chromatic class and the chromatic number of the planar conjugated triangulation

    OpenAIRE

    Malinina, Natalia

    2013-01-01

    This material is dedicated to the estimation of the chromatic number and chromatic class of the conjugated triangulation (first conversion) and also of the second conversion of the planar triangulation. This paper also introduces some new hypotheses, which are equivalent to the Four Color Problem.

  14. Drug repurposing by integrated literature mining and drug–gene–disease triangulation

    DEFF Research Database (Denmark)

    Sun, Peng; Guo, Jiong; Winnenburg, Rainer

    2017-01-01

    recent developments in computational drug repositioning and introduce the utilized data sources. Afterwards, we introduce a new data fusion model based on n-cluster editing as a novel multi-source triangulation strategy, which was further combined with semantic literature mining. Our evaluation suggests...... that utilizing drug–gene–disease triangulation coupled to sophisticated text analysis is a robust approach for identifying new drug candidates for repurposing....

  15. Triangulation-based 3D surveying borescope

    Science.gov (United States)

    Pulwer, S.; Steglich, P.; Villringer, C.; Bauer, J.; Burger, M.; Franz, M.; Grieshober, K.; Wirth, F.; Blondeau, J.; Rautenberg, J.; Mouti, S.; Schrader, S.

    2016-04-01

    In this work, a measurement concept based on triangulation was developed for borescopic 3D-surveying of surface defects. The integration of such measurement system into a borescope environment requires excellent space utilization. The triangulation angle, the projected pattern, the numerical apertures of the optical system, and the viewing angle were calculated using partial coherence imaging and geometric optical raytracing methods. Additionally, optical aberrations and defocus were considered by the integration of Zernike polynomial coefficients. The measurement system is able to measure objects with a size of 50 μm in all dimensions with an accuracy of +/- 5 μm. To manage the issue of a low depth of field while using an optical high resolution system, a wavelength dependent aperture was integrated. Thereby, we are able to control depth of field and resolution of the optical system and can use the borescope in measurement mode with high resolution and low depth of field or in inspection mode with low resolution and higher depth of field. First measurements of a demonstrator system are in good agreement with our simulations.

  16. Triangulating' AMPATH: Demonstration of a multi-perspective ...

    African Journals Online (AJOL)

    For strategic planning, the Kenyan HIV/AIDS programme AMPATH (Academic Model Providing Access to Healthcare) sought to evaluate its performance in 2006. The method used for this evaluation was termed 'triangulation,' because it used information from three different sources – patients, communities, and programme ...

  17. Triangulation applied to Jan H. van Bemmel

    NARCIS (Netherlands)

    Hasman, A.; Bergemann, D.; McCray, A. T.; Talmon, J. L.; Zvárová, J.

    2006-01-01

    OBJECTIVE: To describe the person of Jan H. van Bemmel from different points of view. METHOD: Triangulation. RESULTS AND CONCLUSIONS: Jan H. van Bemmel successfully contributed to research and education in medical informatics. He inspired a lot of people in The Netherlands and internationally

  18. Tradeoffs in Design Research: Development Oriented Triangulation

    NARCIS (Netherlands)

    Koen van Turnhout; Sabine Craenmehr; Robert Holwerda; Mark Menijn; Jan-Pieter Zwart; René Bakker

    2013-01-01

    The Development Oriented Triangulation (DOT) framework in this paper can spark and focus the debate about mixed-method approaches in HCI. The framework can be used to classify HCI methods, create mixed-method designs, and to align research activities in multidisciplinary projects. The framework is

  19. GENUS STATISTICS USING THE DELAUNAY TESSELLATION FIELD ESTIMATION METHOD. I. TESTS WITH THE MILLENNIUM SIMULATION AND THE SDSS DR7

    International Nuclear Information System (INIS)

    Zhang Youcai; Yang Xiaohu; Springel, Volker

    2010-01-01

    We study the topology of cosmic large-scale structure through the genus statistics, using galaxy catalogs generated from the Millennium Simulation and observational data from the latest Sloan Digital Sky Survey Data Release (SDSS DR7). We introduce a new method for constructing galaxy density fields and for measuring the genus statistics of its isodensity surfaces. It is based on a Delaunay tessellation field estimation (DTFE) technique that allows the definition of a piece-wise continuous density field and the exact computation of the topology of its polygonal isodensity contours, without introducing any free numerical parameter. Besides this new approach, we also employ the traditional approaches of smoothing the galaxy distribution with a Gaussian of fixed width, or by adaptively smoothing with a kernel that encloses a constant number of neighboring galaxies. Our results show that the Delaunay-based method extracts the largest amount of topological information. Unlike the traditional approach for genus statistics, it is able to discriminate between the different theoretical galaxy catalogs analyzed here, both in real space and in redshift space, even though they are based on the same underlying simulation model. In particular, the DTFE approach detects with high confidence a discrepancy of one of the semi-analytic models studied here compared with the SDSS data, while the other models are found to be consistent.
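
    A hedged two-dimensional sketch of the basic DTFE estimator mentioned above (not the paper's genus-statistics pipeline): the density at each unit-mass tracer is taken as (D+1) divided by the total area of the Delaunay triangles sharing that vertex, and the field is linear inside each triangle.

        import numpy as np
        from scipy.spatial import Delaunay

        def dtfe_density_2d(points):
            """DTFE density estimate at each input point (unit-mass tracers, D = 2)."""
            tri = Delaunay(points)
            contiguous_area = np.zeros(len(points))
            for simplex in tri.simplices:
                a, b, c = points[simplex]
                area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
                contiguous_area[simplex] += area     # add the area to all 3 vertices
            return (2 + 1) / contiguous_area         # density ~ (D+1) / area

        pts = np.random.default_rng(2).uniform(size=(1000, 2))
        rho = dtfe_density_2d(pts)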

  20. Wireless Sensor Networks - Node Localization for Various Industry Problems

    International Nuclear Information System (INIS)

    Derr, Kurt; Manic, Milos

    2015-01-01

    Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS) that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrary shaped domains. AFECETS simulation results show that the algorithm 1) provides significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications

  1. Altitude, Orthocenter of a Triangle and Triangulation

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2016-03-01

    Full Text Available We introduce the altitudes of a triangle (the cevians perpendicular to the opposite sides). Using the generalized Ceva’s Theorem, we prove the existence and uniqueness of the orthocenter of a triangle [7]. Finally, we formalize in Mizar [1] some formulas [2] to calculate distance using triangulation.

  2. The relationships between stressful life events during childhood and differentiation of self and intergenerational triangulation in adulthood.

    Science.gov (United States)

    Peleg, Ora

    2014-12-01

    This study examined the relationships between stressful life events in childhood and differentiation of self and intergenerational triangulation in adulthood. The sample included 217 students (173 females and 44 males) from a college in northern Israel. Participants completed the Hebrew versions of Life Events Checklist (LEC), Differentiation of Self Inventory-Revised (DSI-R) and intergenerational triangulation (INTRI). The main findings were that levels of stressful life events during childhood and adolescence among both genders were positively correlated with the levels of fusion with others and intergenerational triangulation. The levels of positive life events were negatively related to levels of emotional reactivity, emotional cut-off and intergenerational triangulation. Levels of stressful life events in females were positively correlated with emotional reactivity. Intergenerational triangulation was correlated with emotional reactivity, emotional cut-off, fusion with others and I-position. Findings suggest that families that experience higher levels of stressful life events may be at risk for higher levels of intergenerational triangulation and lower levels of differentiation of self. © 2014 International Union of Psychological Science.

  3. Summations over equilaterally triangulated surfaces and the critical string measure

    International Nuclear Information System (INIS)

    Smit, D.J.; Lawrence Berkeley Lab., CA

    1992-01-01

    We propose a new approach to the summation over dynamically triangulated Riemann surfaces which does not rely on properties of the potential in a matrix model. Instead, we formulate a purely algebraic discretization of the critical string path integral. This is combined with a technique which assigns to each equilateral triangulation of a two-dimensional surface a Riemann surface defined over a certain finite extension of the field of rational numbers, i.e. an arithmetic surface. Thus we establish a new formulation in which the sum over randomly triangulated surfaces defines an invariant measure on the moduli space of arithmetic surfaces. It is shown that because of this it is far from obvious that this measure for large genera approximates the measure defined by the continuum theory, i.e. Liouville theory or critical string theory. In low genus this subtlety does not exist. In the case of critical string theory we explicitly compute the volume of the moduli space of arithmetic surfaces in terms of the modular height function and show that for low genus it approximates correctly the continuum measure. We also discuss a continuum limit which bears some resemblance to a double scaling limit in matrix models. (orig.)

  4. TRIANGULATION OF METHODS OF CAREER EDUCATION

    Directory of Open Access Journals (Sweden)

    Marija Turnsek Mikacic

    2015-09-01

    Full Text Available This paper is an overview of the current research in the field of career education and career planning. The presented results constitute a model based on insight into different theories and empirical studies about career planning as a building block of personal excellence. We defined the credibility, transferability and reliability of the research by means of triangulation. As data sources for triangulation we included essays of education participants and questionnaires. Qualitative analysis represented the framework for the construction of the paradigmatic model and the formulation of the final theory. We formulated a questionnaire on the basis of our own experiences in the area of the education of individuals. The quantitative analysis, based on the results of the interviews, confirms the following three hypotheses: The individuals who elaborated a personal career plan and acted accordingly changed their attitudes towards their careers and took control over their lives; in addition, they achieved a high level of self-esteem and self-confidence, in tandem with the perception of personal excellence, in contrast to the individuals who did not participate in career education and did not elaborate a career plan. We used the tools of NLP (neurolinguistic programming) as an additional technique in learning.

  5. Internet information triangulation: Design theory and prototype evaluation

    NARCIS (Netherlands)

    Wijnhoven, Alphonsus B.J.M.; Brinkhuis, Michel

    2014-01-01

    Many discussions exist regarding the credibility of information on the Internet. Similar discussions happen on the interpretation of social scientific research data, for which information triangulation has been proposed as a useful method. In this article, we explore a design theory—consisting of a

  6. A resistor interpretation of general anisotropic cardiac tissue.

    Science.gov (United States)

    Shao, Hai; Sampson, Kevin J; Pormann, John B; Rose, Donald J; Henriquez, Craig S

    2004-02-01

    This paper describes a spatial discretization scheme for partial differential equation systems that contain anisotropic diffusion. The discretization method uses unstructured finite volumes, or the boxes, that are formed as a secondary geometric structure from an underlying triangular mesh. We show how the discretization can be interpreted as a resistive circuit network, where each resistor is assigned at each edge of the triangular element. The resistor is computed as an anisotropy dependent geometric quantity of the local mesh structure. Finally, we show that under certain conditions, the discretization gives rise to negative resistors that can produce non-physical hyperpolarizations near depolarizing stimuli. We discuss how the proper choice of triangulation (anisotropic Delaunay triangulation) can ensure monotonicity (i.e. all resistors are positive).
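
    For the isotropic special case, the edge resistor reduces to the familiar cotangent weight of the triangulation, and the sign issue noted above can be checked directly: a negative conductance appears when the angles opposite an edge are too obtuse, i.e. when the mesh is not Delaunay. The sketch below illustrates only this isotropic check, not the paper's anisotropic construction.

        import numpy as np
        from collections import defaultdict

        def edge_conductances(points, triangles):
            """Cotangent edge weights of a 2-D triangulation (isotropic case)."""
            cond = defaultdict(float)
            for tri in triangles:
                for k in range(3):
                    i, j, opp = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
                    u, v = points[i] - points[opp], points[j] - points[opp]
                    cross = u[0] * v[1] - u[1] * v[0]
                    cond[tuple(sorted((i, j)))] += 0.5 * np.dot(u, v) / abs(cross)
            return dict(cond)   # negative entries flag the non-physical "negative resistors"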

  7. Relating covariant and canonical approaches to triangulated models of quantum gravity

    International Nuclear Information System (INIS)

    Arnsdorf, Matthias

    2002-01-01

    In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory

  8. Dynamically triangulated surfaces - some analytical results

    International Nuclear Information System (INIS)

    Kostov, I.K.

    1987-01-01

    We give a brief review of the analytical results concerning the model of dynamically triangulated surfaces. We will discuss the possible types of critical behaviour (depending on the dimension D of the embedding space) and the exact solutions obtained for D=0 and D=-2. The latter are important as a check of the Monte Carlo simulations applied to study the model in more physical dimensions. They also give some general insight into its critical properties.

  9. Triangulation and the importance of establishing valid methods for food safety culture evaluation.

    Science.gov (United States)

    Jespersen, Lone; Wallace, Carol A

    2017-10-01

    The research evaluates maturity of food safety culture in five multi-national food companies using method triangulation, specifically self-assessment scale, performance documents, and semi-structured interviews. Weaknesses associated with each individual method are known but there are few studies in food safety where a method triangulation approach is used for both data collection and data analysis. Significantly, this research shows that individual results taken in isolation can lead to wrong conclusions, resulting in potentially failing tactics and wasted investments. However, by applying method triangulation and reviewing results from a range of culture measurement tools it is possible to better direct investments and interventions. The findings add to the food safety culture paradigm beyond a single evaluation of food safety culture using generic culture surveys. Copyright © 2017. Published by Elsevier Ltd.

  10. Flattening of the electrocardiographic T-wave is a sign of proarrhythmic risk and a reflection of action potential triangulation

    DEFF Research Database (Denmark)

    Bhuiyan, Tanveer Ahmed; Graff, Claus; Kanters, J.K.

    2013-01-01

    Drug-induced triangulation of the cardiac action potential is associated with increased risk of arrhythmic events. It has been suggested that triangulation causes a flattening of the electrocardiographic T-wave but the relationship between triangulation, T-wave flattening and onset of arrhythmia ...

  11. Quadtree of TIN: a new algorithm of dynamic LOD

    Science.gov (United States)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs either the regular GRID structure based on quadtrees or triangle simplification methods based on triangulated irregular networks (TIN). Compared with GRID, TIN is a refined means of expressing the terrain surface, but its data structure is complex and it is difficult to realize a view-dependent level-of-detail (LOD) representation quickly. GRID is a simple way to realize terrain LOD, but it produces a higher triangle count. A new algorithm, which takes full advantage of the merits of both methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and controls the level of detail through the distance to the viewpoint and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic multi-resolution visualization of large-scale terrain in real time.

  12. The Ising model on the dynamical triangulated random surface

    International Nuclear Information System (INIS)

    Aleinov, I.D.; Migelal, A.A.; Zmushkow, U.V.

    1990-01-01

    The critical properties of the Ising model on a dynamically triangulated random surface embedded in D-dimensional Euclidean space are investigated. The strong coupling expansion method is used. The transition to the thermodynamical limit is performed by means of continued fractions.

  13. Detection of Water Contamination Events Using Fluorescence Spectroscopy and Alternating Trilinear Decomposition Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Yu

    2017-01-01

    Full Text Available Methods based on conventional indices and UV-vis measurements have been widely applied in the field of water quality abnormality detection. This paper presents a qualitative analysis approach to detect water contamination events with unknown pollutants. Fluorescence spectra were used as water quality monitoring tools, and a detection method for unknown contaminants in water based on alternating trilinear decomposition (ATLD) is proposed to analyze the excitation and emission spectra of the samples. The Delaunay triangulation interpolation method was used to pretreat the three-dimensional fluorescence spectral data in order to handle the effect of Rayleigh and Raman scattering; an ATLD model was applied to model normal water samples, and the residual matrix was obtained by subtracting the measured matrix from the model matrix; the residual sum of squares obtained from the residual matrix, compared against a threshold, was used to make a qualitative discrimination of test samples and to distinguish drinking water samples from organic pollutant samples. The results of the study indicate that ATLD modeling with three-dimensional fluorescence spectra can provide a tool for detecting unknown organic pollutants in water qualitatively. The method based on fluorescence spectra can complement methods based on conventional indices and UV-vis measurements.
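
    A hedged sketch of the pretreatment step only: scatter-affected entries of an excitation-emission matrix (EEM) are masked and re-filled by Delaunay-based linear interpolation with scipy.interpolate.griddata; the EEM array and scatter mask are assumed inputs, and the ATLD decomposition itself is not shown.

        import numpy as np
        from scipy.interpolate import griddata

        def fill_scatter(eem, scatter_mask):
            """eem: 2-D EEM array; scatter_mask: True where Rayleigh/Raman bands lie."""
            ex_idx, em_idx = np.indices(eem.shape)
            known = ~scatter_mask
            filled = eem.astype(float).copy()
            filled[scatter_mask] = griddata(
                np.column_stack([ex_idx[known], em_idx[known]]),          # known points
                eem[known],                                               # their intensities
                np.column_stack([ex_idx[scatter_mask], em_idx[scatter_mask]]),
                method='linear')   # linear interpolation over a Delaunay triangulation
            return filled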

  14. An Efficient Mesh Generation Method for Fractured Network System Based on Dynamic Grid Deformation

    Directory of Open Access Journals (Sweden)

    Shuli Sun

    2013-01-01

    Full Text Available The meshing quality of the discrete model influences the accuracy, convergence, and efficiency of the solution for fractured network systems in geological problems. However, modeling and meshing of such a fractured network system are usually tedious and difficult due to the geometric complexity of the computational domain induced by the existence and extension of fractures. The traditional meshing method for dealing with fractures usually involves a boundary recovery operation based on topological transformation, which relies on many complicated techniques and skills. This paper presents an alternative and efficient approach for meshing fractured network systems. The method first presets points on the fractures and then performs Delaunay triangulation to obtain a preliminary mesh by a point-by-point centroid insertion algorithm. The fractures are then exactly recovered by local correction with a revised dynamic grid deformation approach. A smoothing algorithm is finally applied to improve the quality of the mesh. The proposed approach is efficient, easy to implement, and applicable to the cases of initially existing fractures and extension of fractures. The method is successfully applied to the modeling of two- and three-dimensional discrete fractured network (DFN) systems in geological problems to demonstrate its effectiveness and high efficiency.

  15. Non-degenerated Ground States and Low-degenerated Excited States in the Antiferromagnetic Ising Model on Triangulations

    Science.gov (United States)

    Jiménez, Andrea

    2014-02-01

    We study the unexpected asymptotic behavior of the degeneracy of the first few energy levels in the antiferromagnetic Ising model on triangulations of closed Riemann surfaces. There are strong mathematical and physical reasons to expect that the number of ground states (i.e., degeneracy) of the antiferromagnetic Ising model on the triangulations of a fixed closed Riemann surface is exponential in the number of vertices. In the set of plane triangulations, the degeneracy equals the number of perfect matchings of the geometric duals, and thus it is exponential by a recent result of Chudnovsky and Seymour. From the physics point of view, antiferromagnetic triangulations are geometrically frustrated systems, and in such systems exponential degeneracy is predicted. We present results that contradict these predictions. We prove that for each closed Riemann surface S of positive genus, there are sequences of triangulations of S with exactly one ground state. One possible explanation of this phenomenon is that exponential degeneracy would be found in the excited states with energy close to the ground state energy. However, as our second result, we show the existence of a sequence of triangulations of a closed Riemann surface of genus 10 with exactly one ground state such that the degeneracy of each of the 1st, 2nd, 3rd and 4th excited energy levels belongs to O(n), O(n^2), O(n^3) and O(n^4), respectively.

  16. The use of Triangulation in Social Sciences Research : Can qualitative and quantitative methods be combined?

    Directory of Open Access Journals (Sweden)

    Ashatu Hussein

    2015-03-01

    Full Text Available This article refers to a study in Tanzania on fringe benefits or welfare via the work contract, where we will work both quantitatively and qualitatively. My focus is on the vital issue of combining methods or methodologies. There have been mixed views on the use of triangulation in research. Some authors argue that triangulation serves only to broaden and deepen understanding of the studied phenomenon, while others have argued that triangulation is actually used to increase the study accuracy; in this case, triangulation is one of the validity measures. Triangulation is defined as the use of multiple methods, mainly qualitative and quantitative methods, in studying the same phenomenon for the purpose of increasing study credibility. This implies that triangulation is the combination of two or more methodological approaches, theoretical perspectives, data sources, investigators and analysis methods to study the same phenomenon. However, using both qualitative and quantitative paradigms in the same study has resulted in debate, with some researchers arguing that the two paradigms differ epistemologically and ontologically. Nevertheless, both paradigms are designed to further understanding of a particular subject area of interest, and both of them have strengths and weaknesses. Thus, when combined, there is a great possibility of neutralizing the flaws of one method and strengthening the benefits of the other for better research results. Thus, to reap the benefits of the two paradigms and minimize the drawbacks of each, the combination of the two approaches has been advocated in this article. The quality of our studies on welfare to combat poverty is crucial, especially when we want our conclusions to matter in practice.

  17. Employee-satisfaction: A triangulation approach

    Directory of Open Access Journals (Sweden)

    P. J. Visser

    1997-06-01

    Full Text Available The research on employee-satisfaction was conducted in the manufacturing industry. The sample consisted of 543 employees. The methodology could be described as a "triangulation approach" where a combination of quantitative and qualitative measurements was utilised and the results of both types of measurement were integrated in the study of the construct. The research confirms existing findings that although the measurement of dimensions such as equitable rewards, working conditions, supportive colleagues, job content, etc. yields results on the level of employee-satisfaction, a single question, namely, "How satisfied are you with your job?", compares favourably with the general index. The findings also suggest the advantage of complementing the quantitative data with qualitative information. The conclusions confirm the value of a qualitative method in cross-cultural research in an African environment.

  18. Large N Limits in Tensor Models: Towards More Universality Classes of Colored Triangulations in Dimension d≥2

    Science.gov (United States)

    Bonzom, Valentin

    2016-07-01

    We review an approach which aims at studying discrete (pseudo-)manifolds in dimension d≥ 2 and called random tensor models. More specifically, we insist on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this is not always true anymore when restricting attention to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist in building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are some sort of hypermaps whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity and a phase transition between them which is interpreted as a proliferation of baby universes. While this work is written in the context of random tensors, it is almost exclusively of combinatorial nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum

  19. Putting a cap on causality violations in causal dynamical triangulations

    International Nuclear Information System (INIS)

    Ambjoern, Jan; Loll, Renate; Westra, Willem; Zohren, Stefan

    2007-01-01

    The formalism of causal dynamical triangulations (CDT) provides us with a non-perturbatively defined model of quantum gravity, where the sum over histories includes only causal space-time histories. Path integrals of CDT and their continuum limits have been studied in two, three and four dimensions. Here we investigate a generalization of the two-dimensional CDT model, where the causality constraint is partially lifted by introducing branching points with a weight g_s, and demonstrate that the system can be solved analytically in the genus-zero sector. The solution is analytic in a neighborhood around weight g_s = 0 and cannot be analytically continued to g_s = ∞, where the branching is entirely geometric and where one would formally recover standard Euclidean two-dimensional quantum gravity defined via dynamical triangulations or Liouville theory.

  20. Methodological triangulation in work life research

    DEFF Research Database (Denmark)

    Warring, Niels

    Based on examples from two research projects on preschool teachers' work, the paper will discuss potentials and challenges in methodological triangulation in work life research. Analysis of ethnographic and phenomenological inspired observations of everyday life in day care centers formed the basis...... for individual interviews and informal talks with employees. The interviews and conversations were based on a critical hermeneutic approach. The analysis of observations and interviews constituted a knowledge base as the project went in to the last phase: action research workshops. In the workshops findings from...

  1. GEOPOSITIONING PRECISION ANALYSIS OF MULTIPLE IMAGE TRIANGULATION USING LRO NAC LUNAR IMAGES

    Directory of Open Access Journals (Sweden)

    K. Di

    2016-06-01

    Full Text Available This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang’e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, is improved as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions obtained using all nine images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than most combinations of two or more images.

  2. Fates Intertwining, Cultures Connection. Impact of Delaunay Couple Research Activity on Gabriel Guevrekian’s Oeuvre

    Directory of Open Access Journals (Sweden)

    Elena V. Zabelina

    2012-08-01

    Full Text Available The article deals with the avant-garde trend of simultaneism, which first emerged in painting and then developed in textile painting, clothes design and cinematography. The article attempts to interpret simultaneism theoretically as a holistic phenomenon and determine its place in the art of the XX century. Certain manifestations of the simultaneism trend in painting and textile design have been studied well, but its manifestations in cinematography and architecture have been studied insufficiently, which prevents interpreting simultaneism as a holistic phenomenon. To study the impact of simultaneism on landscape architecture, the article uses such scientific methods as stylistic-formal analysis and comparative analysis. The article discloses the principles of simultaneism, a trend which originates at the intersection of art and science, and traces the impact of Robert and Sonia Delaunay's research activities on Guevrekian’s creative concept. The author introduces into scholarly use some sources unknown to the domestic study of art.

  3. Grey signal processing and data reconstruction in the non-diffracting beam triangulation measurement system

    Science.gov (United States)

    Meng, Hao; Wang, Zhongyu; Fu, Jihua

    2008-12-01

    The non-diffracting beam triangulation measurement system possesses the advantages of longer measurement range, higher theoretical measurement accuracy and higher resolution over the traditional laser triangulation measurement system. Unfortunately the measurement accuracy of the system is greatly degraded due to the speckle noise, the CCD photoelectric noise and the background light noise in practical applications. Hence, some effective signal processing methods must be applied to improve the measurement accuracy. In this paper a novel effective method for removing the noises in the non-diffracting beam triangulation measurement system is proposed. In the method the grey system theory is used to process and reconstruct the measurement signal. Through implementing the grey dynamic filtering based on the dynamic GM(1,1), the noises can be effectively removed from the primary measurement data and the measurement accuracy of the system can be improved as a result.
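
    As a rough illustration of the grey-system ingredient mentioned above (not the paper's full grey dynamic filter), the sketch below fits a basic GM(1,1) model to a short positive sequence and predicts the next value; the sample data are made up.

        import numpy as np

        def gm11_next(x0):
            """Predict the next value of a positive sequence with a basic GM(1,1) model."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                               # accumulated (AGO) series
            z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # least-squares fit of a, b
            k = len(x0)                                      # index of the next sample
            return (x0[0] - b / a) * (1.0 - np.exp(a)) * np.exp(-a * k)

        print(gm11_next([2.87, 3.28, 3.34, 3.39, 3.41]))     # illustrative data only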

  4. BIOINFORMATICS MODEL OF THE CARAPACE SCUTE PATTERN OF THE RED-EARED SLIDER TRACHEMYS SCRIPTA ELEGANS (WIED-NEUWIED, 1839

    Directory of Open Access Journals (Sweden)

    Andrey Kiladze

    2017-10-01

    Full Text Available The scutes located on the carapace of the red-eared slider Trachemys scripta elegans (Wied-Neuwied, 1839 have been modeled. Bioinformatics modeling of carapace’s scutes were carried out by utilizing the Voronoi decomposition and Delaunay triangulation method. These two geometric techniques allow the patterns of vertebral and costal scutes to be recreated. The proposed model may have a certain value for taxonomy as well as for estimating the symmetry of the morphological structures, which is important for the purposes of biomimetics.
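
    A minimal sketch of the two geometric constructions named above, with made-up seed coordinates standing in for scute centers: the Voronoi cells approximate scute outlines and the dual Delaunay triangulation records which scutes are neighbors.

        import numpy as np
        from scipy.spatial import Voronoi, Delaunay

        seeds = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0],   # vertebral row
                          [0.5, 1.0], [1.5, 1.0], [2.5, 1.0],               # costal (left)
                          [0.5, -1.0], [1.5, -1.0], [2.5, -1.0]])           # costal (right)

        cells = Voronoi(seeds)    # polygonal decomposition approximating the scutes
        dual = Delaunay(seeds)    # adjacency: neighboring scutes share a triangle edge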

  5. Reconstructing Surface Triangulations by Their Intersection Matrices

    Directory of Open Access Journals (Sweden)

    Arocha Jorge L.

    2015-08-01

    Full Text Available The intersection matrix of a simplicial complex has entries equal to the rank of the intersection of its facets. We prove that this matrix is enough to define, up to isomorphism, a triangulation of a surface.

  6. All roads lead to Rome - New search methods for the optimal triangulation problem

    Czech Academy of Sciences Publication Activity Database

    Ottosen, T. J.; Vomlel, Jiří

    2012-01-01

    Roč. 53, č. 9 (2012), s. 1350-1366 ISSN 0888-613X R&D Projects: GA MŠk 1M0572; GA ČR GEICC/08/E010; GA ČR GA201/09/1891 Grant - others:GA MŠk(CZ) 2C06019 Institutional support: RVO:67985556 Keywords : Bayesian networks * Optimal triangulation * Probabilistic inference * Cliques in a graph Subject RIV: BD - Theory of Information Impact factor: 1.729, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/vomlel-all roads lead to rome - new search methods for the optimal triangulation problem.pdf

  7. A simplicial algorithm for testing the integral properties of polytopes : A revision

    NARCIS (Netherlands)

    Yang, Z.F.

    1994-01-01

    Given an arbitrary polytope P in the n-dimensional Euclidean space R^n, the question is to determine whether P contains an integral point or not. We propose a simplicial algorithm to answer this question based on a specific integer labeling rule and a specific triangulation of R^n. Starting from an

  8. Quantum Computing in Decoherence-Free Subspace Constructed by Triangulation

    OpenAIRE

    Bi, Qiao; Guo, Liu; Ruda, H. E.

    2010-01-01

    A formalism for quantum computing in decoherence-free subspaces is presented. The constructed subspaces are partial triangulated to an index related to environment. The quantum states in the subspaces are just projected states which are ruled by a subdynamic kinetic equation. These projected states can be used to perform ideal quantum logical operations without decoherence.

  9. Euclidean Dynamical Triangulation revisited: is the phase transition really 1st order?

    International Nuclear Information System (INIS)

    Rindlisbacher, Tobias; Forcrand, Philippe de

    2015-01-01

    The transition between the two phases of 4D Euclidean Dynamical Triangulation (http://dx.doi.org/10.1016/0370-2693(92)90709-D) was long believed to be of second order until in 1996 first order behavior was found for sufficiently large systems (http://dx.doi.org/10.1016/0550-3213(96)00214-3, http://dx.doi.org/10.1016/S0370-2693(96)01277-4). However, one may wonder if this finding was affected by the numerical methods used: to control volume fluctuations, in both studies (http://dx.doi.org/10.1016/0550-3213(96)00214-3, http://dx.doi.org/10.1016/S0370-2693(96)01277-4) an artificial harmonic potential was added to the action and in (http://dx.doi.org/10.1016/S0370-2693(96)01277-4) measurements were taken after a fixed number of accepted instead of attempted moves which introduces an additional error. Finally the simulations suffer from strong critical slowing down which may have been underestimated. In the present work, we address the above weaknesses: we allow the volume to fluctuate freely within a fixed interval; we take measurements after a fixed number of attempted moves; and we overcome critical slowing down by using an optimized parallel tempering algorithm (http://dx.doi.org/10.1088/1742-5468/2010/01/P01020). With these improved methods, on systems of size up to N_4=64k 4-simplices, we confirm that the phase transition is 1st order. In addition, we discuss a local criterion to decide whether parts of a triangulation are in the elongated or crumpled state and describe a new correspondence between EDT and the balls in boxes model. The latter gives rise to a modified partition function with an additional, third coupling. Finally, we propose and motivate a class of modified path-integral measures that might remove the metastability of the Markov chain and turn the phase transition into 2nd order.

  10. A CDT-Based Heuristic Zone Design Approach for Economic Census Investigators

    Directory of Open Access Journals (Sweden)

    Changixu Cheng

    2015-01-01

    Full Text Available This paper addresses a special zone design problem for economic census investigators that is motivated by a real-world application. The paper presents a heuristic multikernel growth approach via Constrained Delaunay Triangulation (CDT). This approach not only solves the barrier problem but also handles polygon data in the zoning procedure. In addition, it uses a new heuristic method to speed up the zoning process greatly while preserving the required zoning quality. Finally, two special instances for the economic census were performed, highlighting the performance of this approach.

  11. Quantum Computing in Decoherence-Free Subspace Constructed by Triangulation

    Directory of Open Access Journals (Sweden)

    Qiao Bi

    2010-01-01

    Full Text Available A formalism for quantum computing in decoherence-free subspaces is presented. The constructed subspaces are partial triangulated to an index related to environment. The quantum states in the subspaces are just projected states which are ruled by a subdynamic kinetic equation. These projected states can be used to perform ideal quantum logical operations without decoherence.

  12. SOFTWARE MODULE FOR CONSTRUCTING THE INTERSECTION OF TRIANGULATED SURFACES

    Directory of Open Access Journals (Sweden)

    Vladimir V. Kurgansky

    2018-03-01

    Full Text Available An effective algorithm is proposed for implementing Boolean operations over triangulated surfaces, namely disjunction, conjunction and Boolean difference, together with its software implementation. The idea is as follows. The first step determines pairs of intersecting triangles: the intersection of the two surfaces is localized using bounding parallelepipeds before the candidate pairs are tested exactly. The second step constructs the intersection line: for each pair of intersecting triangles, the segment along which they intersect is built; thanks to the data structure introduced, "adjacent" triangles are then selected and examined for further intersecting pairs, and this process continues as long as such triangles can be detected. The triangles involved in the intersection are then retriangulated: for each triangle, all the edges along which it intersects triangles of the other surface are known, and these serve as constraint edges in a constrained triangulation of that triangle. The third step combines all surfaces into one surface, and subsurfaces are constructed along the intersection loops that were found. Since the intersection line of the surfaces was constructed in sequence, the direction of each edge is known: an edge from the intersection line is selected, the triangle that contains this edge with matching orientation is added to the subsurface under construction, the selected edge is deleted from the intersection line, and the two remaining edges of the added triangle are inserted in its place.
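
    A hedged sketch of the localization step only: candidate pairs of intersecting triangles between two surfaces are pre-filtered with axis-aligned bounding boxes before any exact triangle-triangle intersection test. The data layout (vertex arrays plus index triples) is an assumption; the exact intersection, retriangulation and loop-tracing steps are not shown.

        import numpy as np

        def aabb(points, tri):
            corners = points[np.asarray(tri)]
            return corners.min(axis=0), corners.max(axis=0)

        def candidate_pairs(pts_a, tris_a, pts_b, tris_b):
            """Indices (i, j) of triangle pairs whose bounding boxes overlap."""
            boxes_b = [aabb(pts_b, t) for t in tris_b]
            pairs = []
            for i, ta in enumerate(tris_a):
                lo_a, hi_a = aabb(pts_a, ta)
                for j, (lo_b, hi_b) in enumerate(boxes_b):
                    if np.all(lo_a <= hi_b) and np.all(lo_b <= hi_a):
                        pairs.append((i, j))    # boxes overlap: run the exact test later
            return pairs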

  13. Python and computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Doak, J. E. (Justin E.); Prasad, Lakshman

    2002-01-01

    This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.

  14. Marginal elasticity of periodic triangulated origami

    Science.gov (United States)

    Chen, Bryan; Sussman, Dan; Lubensky, Tom; Santangelo, Chris

    Origami, the classical art of folding paper, has inspired much recent work on assembling complex 3D structures from planar sheets. Origami, and more generally hinged structures with rigid panels, have special properties when all faces are triangles, due to a bulk balance of mechanical degrees of freedom and constraints. We study two families of periodic triangulated origami structures, one based on the Miura ori and one based on a kagome-like pattern due to Ron Resch. We point out the consequences of the balance of degrees of freedom and constraints for these ''metamaterial plates'' and show how the elasticity can be tuned by changing the unit cell geometry.

  15. Robotic tool positioning process using a multi-line off-axis laser triangulation sensor

    Science.gov (United States)

    Pinto, T. C.; Matos, G.

    2018-03-01

    Proper positioning of a friction stir welding head for pin insertion, driven by a closed chain robot, is important to ensure quality repair of cracks. A multi-line off-axis laser triangulation sensor was designed to be integrated to the robot, allowing relative measurements of the surface to be repaired. This work describes the sensor characteristics, its evaluation and the measurement process for tool positioning to a surface point of interest. The developed process uses a point of interest image and a measured point cloud to define the translation and rotation for tool positioning. Sensor evaluation and tests are described. Keywords: laser triangulation, 3D measurement, tool positioning, robotics.

  16. Measuring and Controlling Fairness of Triangulations

    KAUST Repository

    Jiang, Caigui

    2016-09-30

    The fairness of meshes that represent geometric shapes is a topic that has been studied extensively and thoroughly. However, the focus in such considerations often is not on the mesh itself, but rather on the smooth surface approximated by it, and fairness essentially expresses a mesh’s suitability for purposes such as visualization or simulation. This paper focusses on meshes in the architectural context, where vertices, edges, and faces of meshes are often highly visible, and any notion of fairness must take new aspects into account. We use concepts from discrete differential geometry (star-shaped Gauss images) to express fairness, and we also demonstrate how fairness can be incorporated into interactive geometric design of triangulated freeform skins.

  17. Chromatic polynomials of planar triangulations, the Tutte upper bound and chromatic zeros

    International Nuclear Information System (INIS)

    Shrock, Robert; Xu Yan

    2012-01-01

    Tutte proved that if G_pt is a planar triangulation and P(G_pt, q) is its chromatic polynomial, then |P(G_pt, τ + 1)| ⩽ (τ − 1)^(n−5), where τ = (1+√5)/2 and n is the number of vertices in G_pt. Here we study the ratio r(G_pt) = |P(G_pt, τ + 1)|/(τ − 1)^(n−5) for a variety of planar triangulations. We construct infinite recursive families of planar triangulations G_pt,m depending on a parameter m linearly related to n and show that if P(G_pt,m, q) only involves a single power of a polynomial, then r(G_pt,m) approaches zero exponentially fast as n → ∞. We also construct infinite recursive families for which P(G_pt,m, q) is a sum of powers of certain functions and show that for these, r(G_pt,m) may approach a finite nonzero constant as n → ∞. The connection between the Tutte upper bound and the observed chromatic zero(s) near to τ + 1 is investigated. We report the first known graph for which the zero(s) closest to τ + 1 is not real, but instead is a complex-conjugate pair. Finally, we discuss connections with the nonzero ground-state entropy of the Potts antiferromagnet on these families of graphs. (paper)

  18. Quantum gravity from simplices: analytical investigations of causal dynamical triangulations

    NARCIS (Netherlands)

    Benedetti, D.

    2007-01-01

    A potentially powerful approach to quantum gravity has been developed over the last few years under the name of Causal Dynamical Triangulations. Although these models can be solved exactly in a variety of ways in the case of pure gravity in (1+1) dimensions, it is difficult to extend any of the

  19. Measuring teamwork in primary care: Triangulation of qualitative and quantitative data.

    Science.gov (United States)

    Brown, Judith Belle; Ryan, Bridget L; Thorpe, Cathy; Markle, Emma K R; Hutchison, Brian; Glazier, Richard H

    2015-09-01

    This article describes the triangulation of qualitative dimensions, reflecting high functioning teams, with the results of standardized teamwork measures. The study used a mixed methods design using qualitative and quantitative approaches to assess teamwork in 19 Family Health Teams in Ontario, Canada. This article describes dimensions from the qualitative phase using grounded theory to explore the issues and challenges to teamwork. Two quantitative measures were used in the study, the Team Climate Inventory (TCI) and the Providing Effective Resources and Knowledge (PERK) scale. For the triangulation analysis, the mean scores of these measures were compared with the qualitatively derived ratings for the dimensions. The final sample for the qualitative component was 107 participants. The qualitative analysis identified 9 dimensions related to high team functioning such as common philosophy, scope of practice, conflict resolution, change management, leadership, and team evolution. From these dimensions, teams were categorized numerically as high, moderate, or low functioning. Three hundred seventeen team members completed the survey measures. Mean site scores for the TCI and PERK were 3.87 and 3.88, respectively (of 5). The TCI was associated with all dimensions except for team location, space allocation, and executive director leadership. The PERK was associated with all dimensions except team location. Data triangulation provided qualitative and quantitative evidence of what constitutes teamwork. Leadership was pivotal in forging a common philosophy and encouraging team collaboration. Teams used conflict resolution strategies and adapted to the changes they encountered. These dimensions advanced the team's evolution toward a high functioning team. (c) 2015 APA, all rights reserved.

  20. Triangulation and Mixed Methods Designs: Data Integration with New Research Technologies

    Science.gov (United States)

    Fielding, Nigel G.

    2012-01-01

    Data integration is a crucial element in mixed methods analysis and conceptualization. It has three principal purposes: illustration, convergent validation (triangulation), and the development of analytic density or "richness." This article discusses such applications in relation to new technologies for social research, looking at three…

  1. A range-based predictive localization algorithm for WSID networks

    Science.gov (United States)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks combined with RFID (WSID) networks. The Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for the network with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.
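
    As a rough illustration of the point-in-triangulation idea behind APIT-style schemes, the sketch below tests whether a target lies inside a triangle of anchor nodes using signed areas and counts the anchor triangles that contain it (their intersection is the reduced residence area). The anchor coordinates and target position are invented for the example; a real APIT implementation infers containment from signal-strength comparisons rather than known coordinates.

    from itertools import combinations

    def signed_area(p, a, b):
        """Twice the signed area of triangle (a, b, p); its sign says on which side of edge a-b the point p lies."""
        return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

    def in_triangle(p, a, b, c):
        d1, d2, d3 = signed_area(p, a, b), signed_area(p, b, c), signed_area(p, c, a)
        return not (min(d1, d2, d3) < 0 and max(d1, d2, d3) > 0)   # inside or on an edge

    anchors = [(0.0, 0.0), (10.0, 0.0), (5.0, 9.0), (2.0, 7.0)]    # hypothetical anchor nodes
    target = (4.0, 3.0)                                            # hypothetical target position
    triangles = list(combinations(anchors, 3))
    containing = [t for t in triangles if in_triangle(target, *t)]
    print(f"{len(containing)} of {len(triangles)} anchor triangles contain the target")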

  2. The use of linear programming in optimization of HDR implant dose distributions

    International Nuclear Information System (INIS)

    Jozsef, Gabor; Streeter, Oscar E.; Astrahan, Melvin A.

    2003-01-01

    The introduction of high dose rate brachytherapy enabled optimization of dose distributions to be used on a routine basis. The objective of optimization is to homogenize the dose distribution within the implant while simultaneously satisfying dose constraints on certain points. This is accomplished by varying the time the source dwells at different locations. As the dose at any point is a linear function of the dwell times, a linear programming approach seems to be a natural choice. The dose constraints are inherently linear inequalities. Homogeneity requirements are linearized by minimizing the maximum deviation of the doses at points inside the implant from a prescribed dose. The revised simplex method was applied for the solution of this linear programming problem. In the homogenization process the possible source locations were chosen as optimization points. To avoid the singularity of the dose at a source location caused by the source itself, we define the 'self-contribution' as the dose at a small distance from the source. The effect of varying this distance is discussed. Test cases were optimized for planar, biplanar and cylindrical implants. A semi-irregular, fan-like implant with diverging needles was also investigated. Mean central dose calculation based on 3D Delaunay-triangulation of the source locations was used to evaluate the dose distributions. The optimization method resulted in homogeneous distributions (for brachytherapy). Additional dose constraints--when applied--were satisfied. The method is flexible enough to include other linear constraints such as the inclusion of the centroids of the Delaunay-triangulation for homogenization, or limiting the maximum allowable dwell time.
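
    Because the dose at any point is a linear function of the dwell times, the homogenization step can be posed as a small linear program. The sketch below does this with scipy.optimize.linprog for an invented single-row implant, a simple inverse-square dose kernel and an arbitrary prescription value; it is a minimal stand-in for the revised-simplex formulation described above, not clinical code.

    import numpy as np
    from scipy.optimize import linprog

    dwell_pos = np.array([[float(i), 0.0] for i in range(5)])   # source dwell positions (cm), illustrative
    opt_pts = dwell_pos + np.array([0.0, 0.3])                  # "self-contribution" points just off each source
    prescribed = 100.0                                          # prescribed dose at the optimization points

    def kernel(points, sources):
        """Dose-rate matrix K: dose[i] = sum_j K[i, j] * t[j]; simple inverse-square kernel."""
        d2 = ((points[:, None, :] - sources[None, :, :]) ** 2).sum(-1)
        return 10.0 / np.maximum(d2, 1e-3)

    K = kernel(opt_pts, dwell_pos)
    m, n = K.shape
    # Variables: [t_1 .. t_n, s]; minimize s, the maximum deviation from the prescribed dose.
    c = np.r_[np.zeros(n), 1.0]
    A_ub = np.vstack([np.c_[K, -np.ones((m, 1))],      #  K t - s <= prescribed
                      np.c_[-K, -np.ones((m, 1))]])    # -K t - s <= -prescribed
    b_ub = np.r_[np.full(m, prescribed), np.full(m, -prescribed)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    print("dwell times:", res.x[:n].round(3), " max deviation:", round(res.x[-1], 4))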

  3. Interferometer predictions with triangulated images

    DEFF Research Database (Denmark)

    Brinch, Christian; Dullemond, C. P.

    2014-01-01

    We present a new general-purpose scheme for the computation of visibilities of radiative transfer images. ......the synthetic model images. To get the correct values of these integrals, the model images must have the right size and resolution. Insufficient care in these choices can lead to wrong results. Our method requires a model image that is a list of intensities at arbitrarily placed positions on the image-plane. It creates a triangulated grid from these vertices, and assumes that the intensity inside each triangle of the grid is a linear function. The Fourier integral over each triangle is then evaluated with an analytic expression and the complex visibility of the entire image is then the sum of all triangles. The result is a robust Fourier transform that does not suffer from aliasing effects due to grid regularities. The method automatically ensures that all structure contained in the model gets reflected...
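
    A minimal numerical stand-in for this scheme is sketched below: scattered intensity samples are triangulated with scipy.spatial.Delaunay and the complex visibility is accumulated triangle by triangle. For brevity each triangle contributes a flat (centroid) term instead of the analytic Fourier integral of a linearly varying intensity used by the method; the Gaussian test image and sample positions are invented.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(400, 2))            # arbitrarily placed image-plane positions
    intensity = np.exp(-(xy ** 2).sum(axis=1) / 0.1)      # a Gaussian blob as the toy model image

    tri = Delaunay(xy)

    def visibility(u, v):
        vis = 0.0 + 0.0j
        for simplex in tri.simplices:
            p = xy[simplex]
            d1, d2 = p[1] - p[0], p[2] - p[0]
            area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
            centroid = p.mean(axis=0)
            mean_intensity = intensity[simplex].mean()
            vis += mean_intensity * area * np.exp(-2j * np.pi * (u * centroid[0] + v * centroid[1]))
        return vis

    print(abs(visibility(0.0, 0.0)), abs(visibility(2.0, 0.0)))   # zero-spacing flux and one nonzero baseline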

  4. Interprofessional collaboration from nurses and physicians – A triangulation of quantitative and qualitative data

    Science.gov (United States)

    Schärli, Marianne; Müller, Rita; Martin, Jacqueline S; Spichiger, Elisabeth; Spirig, Rebecca

    2017-01-01

    Background: Interprofessional collaboration between nurses and physicians is a recurrent challenge in daily clinical practice. To ameliorate the situation, quantitative or qualitative studies are conducted. However, the results of these studies have often been limited by the methods chosen. Aim: To describe the synthesis of interprofessional collaboration from the nursing perspective by triangulating quantitative and qualitative data. Method: Data triangulation was performed as a sub-project of the interprofessional Sinergia DRG Research program. Initially, quantitative and qualitative data were analyzed separately in a mixed methods design. By means of triangulation, a "meta-matrix" was produced in a four-step process. Results: The "meta-matrix" displays all relevant quantitative and qualitative results as well as their interrelations on one page. Relevance, influencing factors as well as consequences of interprofessional collaboration for patients, relatives and systems become visible. Conclusion: For the first time, the interprofessional collaboration from the nursing perspective at five Swiss hospitals is shown in a "meta-matrix". The consequences of insufficient collaboration between nurses and physicians are considerable. This is why it is necessary to invest in interprofessional concepts. In the "meta-matrix" the factors which influence the interprofessional collaboration positively or negatively are visible.

  5. (2+1)-dimensional quantum gravity as the continuum limit of causal dynamical triangulations

    International Nuclear Information System (INIS)

    Benedetti, D.; Loll, R.; Zamponi, F.

    2007-01-01

    We perform a nonperturbative sum over geometries in a (2+1)-dimensional quantum gravity model given in terms of causal dynamical triangulations. Inspired by the concept of triangulations of product type introduced previously, we impose an additional notion of order on the discrete, causal geometries. This simplifies the combinatorial problem of counting geometries just enough to enable us to calculate the transfer matrix between boundary states labeled by the area of the spatial universe, as well as the corresponding quantum Hamiltonian of the continuum theory. This is the first time in dimension larger than 2 that a Hamiltonian has been derived from such a model by mainly analytical means, and it opens the way for a better understanding of scaling and renormalization issues

  6. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    OpenAIRE

    Carlos Jiménez de Parga; Sebastián Rubén Gómez Palomo

    2018-01-01

    This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which ...

  7. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  8. Three-Dimensional Reconstruction Optical System Using Shadows Triangulation

    Science.gov (United States)

    Barba, J. Leiner; Vargas, Q. Lorena; Torres, M. Cesar; Mattos, V. Lorenzo

    2008-04-01

    In this work, a three-dimensional reconstruction system is developed using the Shades3D tool of the Matlab® programming language and low-cost materials, such as a webcam, a stick, a weak structured-lighting system consisting of a desk lamp, and an observation plane on which the object is located. The reconstruction is obtained through a triangulation process executed after acquiring a sequence of images of the scene with a shadow projected on the object; additionally, an image filtering step retains only the part of the scene that will be reconstructed. Beforehand, a calibration process is required to determine the internal geometric and optical characteristics of the camera (intrinsic parameters) and the 3D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters). The lamp and the stick are used to produce a shadow which scans the object; in this technique, it is not necessary to know the position of the light source, since the triangulation is obtained using the shadow plane produced by the intersection between the stick and the illumination pattern. The webcam captures all images while the shadow scans the object, and the Shades3D tool processes all the information, taking into account the captured images and calibration parameters. The technique is evaluated on the reconstruction of parts of the human body and its application to the detection of external abnormalities and the design of prostheses or implants.
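
    Once the shadow plane is known, the triangulation of a single pixel reduces to intersecting the camera ray through that pixel with the plane. The sketch below shows this ray-plane intersection for an assumed pinhole camera; the intrinsic matrix and plane parameters are placeholders for the values that the calibration and shadow-detection steps would provide.

    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],         # assumed pinhole intrinsics (focal lengths, principal point)
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    plane_n = np.array([0.0, -0.7, 0.7])
    plane_n /= np.linalg.norm(plane_n)          # shadow-plane normal (camera frame), placeholder
    plane_d = 0.5                               # plane equation: n . X = d, placeholder

    def triangulate_pixel(u, v):
        """Intersect the camera ray through pixel (u, v) with the shadow plane; camera centre at the origin."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        t = plane_d / (plane_n @ ray)
        return t * ray

    print(triangulate_pixel(350, 260))          # 3D point (camera frame) for one shadow-edge pixel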

  9. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    Science.gov (United States)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest neighbours search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which can directly reflect the change trend of the curvature of the point cloud data in the damage region. The extracted damage region is divided into triangular prism elements by triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in a further research stage.
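
    The superposition step can be illustrated with a short sketch: each triangular prism element is described by a surface triangle and the damage depths at its vertices, its volume is the base area times the mean depth, and volumes and masses are summed. The two toy elements and the assumed density below are illustrative, not measured data.

    import numpy as np

    def prism_volume(tri_xy, depths):
        """Volume of one triangular prism element: base triangle area times mean vertex depth."""
        (x1, y1), (x2, y2), (x3, y3) = tri_xy
        base_area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        return base_area * float(np.mean(depths))

    elements = [   # (triangle vertices in mm, damage depth at each vertex in mm) -- toy values
        (((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)), (0.10, 0.12, 0.08)),
        (((1.0, 0.0), (1.0, 1.0), (0.0, 1.0)), (0.12, 0.15, 0.08)),
    ]
    density = 7.85e-3                                   # g/mm^3, assumed steel-like density
    total_volume = sum(prism_volume(t, d) for t, d in elements)
    total_mass_mg = total_volume * density * 1000.0     # superposition of element masses
    print(f"volume = {total_volume:.4f} mm^3, mass = {total_mass_mg:.3f} mg")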

  10. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Allardice, J.T.; Jacomb-Hood, J.; Abulafi, A.M.; Williams, N.S. (Royal London Hospital (United Kingdom)); Cookson, J.; Dykes, E.; Holman, J. (London Hospital Medical College (United Kingdom))

    1993-05-01

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately-constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation and this shows the surface area of the walls varies from 143 cm² to 392 cm². (Author).
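
    Once the CT contours have been joined into a triangle mesh, the surface area is simply the sum of the triangle areas. The sketch below computes this for a toy pyramid mesh using vertex cross products; the mesh is invented and stands in for the triangulation produced from the serial CT sections.

    import numpy as np

    def mesh_area(vertices, faces):
        """Total area of a triangle mesh: half the cross-product norm, summed over faces."""
        v = np.asarray(vertices, dtype=float)
        a, b, c = v[faces[:, 0]], v[faces[:, 1]], v[faces[:, 2]]
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

    vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 1)]       # toy pyramid shell
    faces = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])
    print(f"surface area = {mesh_area(vertices, faces):.3f} (same squared units as the vertices)")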

  11. The Application of a Multiphase Triangulation Approach to Mixed Methods: The Research of an Aspiring School Principal Development Program

    Science.gov (United States)

    Youngs, Howard; Piggot-Irvine, Eileen

    2012-01-01

    Mixed methods research has emerged as a credible alternative to unitary research approaches. The authors show how a combination of a triangulation convergence model with a triangulation multilevel model was used to research an aspiring school principal development pilot program. The multilevel model is used to show the national and regional levels…

  12. Triangulation of written assessments from patients, teachers and students: useful for students and teachers?

    Science.gov (United States)

    Gran, Sarah Frandsen; Braend, Anja Maria; Lindbaek, Morten

    2010-01-01

    Many medical students in general practice clerkships experience lack of observation-based feedback. The StudentPEP project combined written feedback from patients, observing teachers and students. This study analyzes the perceived usefulness of triangulated written feedback. A total of 71 general practitioners and 79 medical students at the University of Oslo completed project evaluation forms after a 6-week clerkship. A principal component analysis was performed to find structures within the questionnaire. Regression analysis was performed regarding students' answers to whether StudentPEP was worthwhile. Free-text answers were analyzed qualitatively. Student and teacher responses were mixed within six subscales, with highest agreement on 'Teachers oral and written feedback' and 'Attitude to patient evaluation'. Fifty-four per cent of the students agreed that the triangulation gave concrete feedback on their weaknesses, and 59% valued the teachers' feedback provided. Two statements regarding the teacher's attitudes towards StudentPEP were significantly associated with the student's perception of worthwhileness. Qualitative analysis showed that patient evaluations were encouraging or distrusted. Some students thought that StudentPEP ensured observation and feedback. The patient evaluations increased the students' awareness of the patient perspective. A majority of the students considered the triangulated written feedback beneficial, although time-consuming. The teacher's attitudes strongly influenced how the students perceived the usefulness of StudentPEP.

  13. Exploring Torus Universes in Causal Dynamical Triangulations

    DEFF Research Database (Denmark)

    Budd, Timothy George; Loll, R.

    2013-01-01

    Motivated by the search for new observables in nonperturbative quantum gravity, we consider Causal Dynamical Triangulations (CDT) in 2+1 dimensions with the spatial topology of a torus. This system is of particular interest, because one can study not only the global scale factor, but also global...... shape variables in the presence of arbitrary quantum fluctuations of the geometry. Our initial investigation focusses on the dynamics of the scale factor and uncovers a qualitatively new behaviour, which leads us to investigate a novel type of boundary conditions for the path integral. Comparing large....... Apart from setting the stage for the analysis of shape dynamics on the torus, the new set-up highlights the role of nontrivial boundaries and topology....

  14. Target-type probability combining algorithms for multisensor tracking

    Science.gov (United States)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), as well as the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESM's, IRST's as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations as well as missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting by a procedure for propagation of target type probability states between measurement time instances. Other important properties of the algorithms are their abilities to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESM's and IRST's as data sources.
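
    The recursive type estimation with data forgetting can be sketched as a discrete Bayes update per sensor report plus a fading step that relaxes the posterior toward the prior between reports, so that stale or erroneous evidence can be overturned. The target types, likelihoods and fading factor below are invented for illustration and are simpler than the operational algorithms described.

    import numpy as np

    types = ["fighter", "bomber", "airliner"]            # hypothetical target types
    prior = np.full(len(types), 1.0 / len(types))

    def fade(p, prior, lam=0.9):
        """Data forgetting: relax the posterior toward the prior between measurement times."""
        p = lam * p + (1.0 - lam) * prior
        return p / p.sum()

    def update(p, likelihood):
        """Discrete Bayes update with the likelihood of the report given each target type."""
        p = p * likelihood
        return p / p.sum()

    reports = [np.array([0.7, 0.2, 0.1]),                # e.g. an ESM report favouring "fighter"
               np.array([0.6, 0.3, 0.1]),
               np.array([0.1, 0.2, 0.7])]                # a conflicting (possibly erroneous) report
    p = prior.copy()
    for likelihood in reports:
        p = update(fade(p, prior), likelihood)
        print(dict(zip(types, p.round(3))))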

  15. Optimizing 3D Triangulations to Recapture Sharp Edges

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2006-01-01

    In this report, a technique for optimizing 3D triangulations is proposed. The method seeks to minimize an energy defined as a sum of energy terms for each edge in a triangle mesh. The main contribution is a novel per edge energy which strikes a balance between penalizing dihedral angle yet allowing...... sharp edges. The energy is minimized using edge swapping, and this can be done either in a greedy fashion or using simulated annealing. The latter is more costly, but effectively avoids local minima. The method has been used on a number of models. Particularly good results have been obtained on digital...
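
    One way to realize a per-edge energy of the kind described is sketched below: the dihedral deviation between the two triangles sharing an edge is penalized but clamped at a cap, so genuinely sharp edges are not punished without bound. The cap value and toy geometry are illustrative assumptions, not the energy actually proposed in the report.

    import numpy as np

    def unit_normal(a, b, c):
        n = np.cross(b - a, c - a)
        return n / np.linalg.norm(n)

    def edge_energy(tri1, tri2, cap_deg=60.0):
        """Energy of the shared edge: dihedral deviation (degrees) between face normals, clamped at cap_deg."""
        cos_angle = np.clip(unit_normal(*tri1) @ unit_normal(*tri2), -1.0, 1.0)
        return min(np.degrees(np.arccos(cos_angle)), cap_deg)

    # Two triangles sharing the edge (0,0,0)-(1,0,0); the second one is tilted out of the plane.
    a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    t1 = (a, b, np.array([0.5, 1.0, 0.0]))
    t2 = (b, a, np.array([0.5, -1.0, 0.4]))
    print(f"edge energy = {edge_energy(t1, t2):.1f}  (mesh energy = sum over all interior edges)")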

  16. A study on the effect of different image centres on stereo triangulation accuracy

    CSIR Research Space (South Africa)

    De Villiers, J

    2015-11-01

    Full Text Available This paper evaluates the effect of mixing the distortion centre, principal point and arithmetic image centre on the distortion correction, focal length determination and resulting real-world stereo vision triangulation. A robotic arm is used...

  17. Triangulation of the monophasic action potential causes flattening of the electrocardiographic T-wave

    DEFF Research Database (Denmark)

    Bhuiyan, Tanveer Ahmed; Graff, Claus; Thomsen, Morten Bækgaard

    2012-01-01

    It has been proposed that triangulation on the cardiac action potential manifests as a broadened, more flat and notched T-wave on the ECG, but to what extent such morphology characteristics are indicative of triangulation is more unclear. In this paper, we have analyzed the morphological changes of the action potential under the effect of the IKr blocker sertindole and associated these changes to concurrent changes in the morphology of electrocardiographic T-waves in dogs. We show that, under the effect of sertindole, the peak changes in the morphology of action potentials occur at time points similar to those observed for the peak changes in T-wave morphology on the ECG. We further show that the association between action potential shape and ECG shape is dose-dependent and most prominent at the time corresponding to phase 3 of the action potential....

  18. A structural framework for anomalous change detection and characterization

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Theiler, James P [Los Alamos National Laboratory

    2009-01-01

    We present a spatially adaptive scheme for automatically searching a pair of images of a scene for unusual and interesting changes. Our motivation is to bring into play structural aspects of image features alongside the spectral attributes used for anomalous change detection (ACD). We leverage a small but informative subset of pixels, namely edge pixels of the images, as anchor points of a Delaunay triangulation to jointly decompose the images into a set of triangular regions, called trixels, which are spectrally uniform. Such decomposition helps in image regularization by simple-function approximation on a feature-adaptive grid. Applying ACD to this trixel grid instead of pixels offers several advantages. It allows: (1) edge-preserving smoothing of images, (2) speed-up of spatial computations by significantly reducing the representation of the images, and (3) the easy recovery of structure of the detected anomalous changes by associating anomalous trixels with polygonal image features. The latter facility further enables the application of shape-theoretic criteria and algorithms to characterize the changes and recognize them as interesting or not. This incorporation of spatial information has the potential to filter out some spurious changes, such as due to parallax, shadows, and misregistration, by identifying and filtering out those that are structurally similar and spatially pervasive. Our framework supports the joint spatial and spectral analysis of images, potentially enabling the design of more robust ACD algorithms.
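
    The trixel construction can be approximated in a few lines: edge pixels are used as anchor points of a Delaunay triangulation, and each triangle is assigned the mean intensity of the pixels it covers, giving an edge-preserving, spectrally uniform decomposition on which per-trixel change detection could then operate. The toy two-region image, gradient edge detector and threshold below are invented stand-ins for the actual edge extraction used.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(1)
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0
    img += 0.05 * rng.standard_normal(img.shape)              # toy two-region image

    gy, gx = np.gradient(img)
    edges = np.argwhere(np.hypot(gx, gy) > 0.3)               # crude edge pixels as anchor points
    corners = np.array([[0, 0], [0, 63], [63, 0], [63, 63]])
    pts = np.vstack([edges, corners]).astype(float)

    tri = Delaunay(pts)                                       # trixel grid anchored on edge pixels
    yy, xx = np.mgrid[0:64, 0:64]
    labels = tri.find_simplex(np.c_[yy.ravel(), xx.ravel()].astype(float))
    trixel_img = np.zeros(img.size)
    for s in range(len(tri.simplices)):
        mask = labels == s
        if mask.any():
            trixel_img[mask] = img.ravel()[mask].mean()       # spectrally uniform value per trixel
    trixel_img = trixel_img.reshape(img.shape)
    print("number of trixels:", len(tri.simplices))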

  19. Scaling analyses of the spectral dimension in 3-dimensional causal dynamical triangulations

    Science.gov (United States)

    Cooperman, Joshua H.

    2018-05-01

    The spectral dimension measures the dimensionality of a space as witnessed by a diffusing random walker. Within the causal dynamical triangulations approach to the quantization of gravity (Ambjørn et al 2000 Phys. Rev. Lett. 85 347, 2001 Nucl. Phys. B 610 347, 1998 Nucl. Phys. B 536 407), the spectral dimension exhibits novel scale-dependent dynamics: reducing towards a value near 2 on sufficiently small scales, matching closely the topological dimension on intermediate scales, and decaying in the presence of positive curvature on sufficiently large scales (Ambjørn et al 2005 Phys. Rev. Lett. 95 171301, Ambjørn et al 2005 Phys. Rev. D 72 064014, Benedetti and Henson 2009 Phys. Rev. D 80 124036, Cooperman 2014 Phys. Rev. D 90 124053, Cooperman et al 2017 Class. Quantum Grav. 34 115008, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151, Kommu 2012 Class. Quantum Grav. 29 105003). I report the first comprehensive scaling analysis of the small-to-intermediate scale spectral dimension for the test case of the causal dynamical triangulations of 3-dimensional Einstein gravity. I find that the spectral dimension scales trivially with the diffusion constant. I find that the spectral dimension is completely finite in the infinite volume limit, and I argue that its maximal value is exactly consistent with the topological dimension of 3 in this limit. I find that the spectral dimension reduces further towards a value near 2 as this case’s bare coupling approaches its phase transition, and I present evidence against the conjecture that the bare coupling simply sets the overall scale of the quantum geometry (Ambjørn et al 2001 Phys. Rev. D 64 044011). On the basis of these findings, I advance a tentative physical explanation for the dynamical reduction of the spectral dimension observed within causal dynamical triangulations: branched polymeric quantum geometry on sufficiently small scales. My analyses should facilitate attempts to employ the spectral
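
    The observable itself can be illustrated on an ordinary graph: the sketch below runs a lazy random walk on a periodic 2D lattice, records the return probability P(σ), and estimates d_s = −2 dlnP/dlnσ, which comes out near the topological dimension of 2. The lattice, walk parameters and fit window are toy choices standing in for the ensemble average over causal dynamical triangulations.

    import numpy as np

    L = 30                                                    # periodic 2D lattice, N = L * L sites
    N = L * L
    def idx(x, y):
        return (x % L) * L + (y % L)

    T = np.zeros((N, N))                                      # lazy walk: stay with prob 1/2 (kills parity effects)
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            T[i, i] += 0.5
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                T[idx(x + dx, y + dy), i] += 0.125

    p = np.zeros(N)
    p[idx(0, 0)] = 1.0                                        # walker starts at the origin
    sigmas, returns = [], []
    for sigma in range(1, 121):
        p = T @ p
        if sigma >= 20:                                       # intermediate scales, before finite size bites
            sigmas.append(sigma)
            returns.append(p[idx(0, 0)])

    slope = np.polyfit(np.log(sigmas), np.log(returns), 1)[0]
    print(f"estimated spectral dimension d_s = {-2 * slope:.2f}")   # close to 2 for the 2D lattice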

  20. The Family System and Depressive Symptoms during the College Years: Triangulation, Parental Differential Treatment, and Sibling Warmth as Predictors.

    Science.gov (United States)

    Ponappa, Sujata; Bartle-Haring, Suzanne; Holowacz, Eugene; Ferriby, Megan

    2017-01-01

    Guided by Bowen theory, we investigated the relationships between parent-child triangulation, parental differential treatment (PDT), sibling warmth, and individual depressive symptoms in a sample of 77 sibling dyads, aged 18-25 years, recruited through undergraduate classes at a U.S. public University. Results of the actor-partner interdependence models suggested that being triangulated into parental conflict was positively related to both siblings' perception of PDT; however, as one sibling felt triangulated, the other perceived reduced levels of PDT. For both siblings, the perception of higher levels of PDT was related to decreased sibling warmth and higher sibling warmth was associated with fewer depressive symptoms. The implications of these findings for research and the treatment of depression in the college-aged population are discussed. © 2016 American Association for Marriage and Family Therapy.

  1. DCS-Neural-Network Program for Aircraft Control and Testing

    Science.gov (United States)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.

  2. Theoretical triangulation as an approach for revealing the complexity of a classroom discussion

    NARCIS (Netherlands)

    van Drie, J.; Dekker, R.

    2013-01-01

    In this paper we explore the value of theoretical triangulation as a methodological approach for the analysis of classroom interaction. We analyze an excerpt of a whole-class discussion in history from three theoretical perspectives: interactivity of the discourse, conceptual level raising and

  3. Quantum triangulations moduli space, quantum computing, non-linear sigma models and Ricci flow

    CERN Document Server

    Carfora, Mauro

    2017-01-01

    This book discusses key conceptual aspects and explores the connection between triangulated manifolds and quantum physics, using a set of case studies ranging from moduli space theory to quantum computing to provide an accessible introduction to this topic. Research on polyhedral manifolds often reveals unexpected connections between very distinct aspects of mathematics and physics. In particular, triangulated manifolds play an important role in settings such as Riemann moduli space theory, strings and quantum gravity, topological quantum field theory, condensed matter physics, critical phenomena and complex systems. Not only do they provide a natural discrete analogue to the smooth manifolds on which physical theories are typically formulated, but their appearance is also often a consequence of an underlying structure that naturally calls into play non-trivial aspects of representation theory, complex analysis and topology in a way that makes the basic geometric structures of the physical interactions involv...

  4. Shared decision-making in medical encounters regarding breast cancer treatment: the contribution of methodological triangulation.

    Science.gov (United States)

    Durif-Bruckert, C; Roux, P; Morelle, M; Mignotte, H; Faure, C; Moumjid-Ferdjaoui, N

    2015-07-01

    The aim of this study on shared decision-making in the doctor-patient encounter about surgical treatment for early-stage breast cancer, conducted in a regional cancer centre in France, was to further the understanding of patient perceptions on shared decision-making. The study used methodological triangulation to collect data (both quantitative and qualitative) about patient preferences in the context of a clinical consultation in which surgeons followed a shared decision-making protocol. Data were analysed from a multi-disciplinary research perspective (social psychology and health economics). The triangulated data collection methods were questionnaires (n = 132), longitudinal interviews (n = 47) and observations of consultations (n = 26). Methodological triangulation revealed levels of divergence and complementarity between qualitative and quantitative results that suggest new perspectives on the three inter-related notions of decision-making, participation and information. Patients' responses revealed important differences between shared decision-making and participation per se. The authors note that subjecting patients to a normative behavioural model of shared decision-making in an era when paradigms of medical authority are shifting may undermine the patient's quest for what he or she believes is a more important right: a guarantee of the best care available. © 2014 John Wiley & Sons Ltd.

  5. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    Science.gov (United States)

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like the B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, moreover with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for both univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
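
    The properties quoted for the univariate basis are easy to verify numerically: the Bernstein polynomials of any degree are nonnegative on [0, 1] and sum to one at every point, which is what allows them to be read as fuzzy membership functions. The degree and sample points in the sketch below are arbitrary.

    import numpy as np
    from math import comb

    def bernstein_basis(degree, x):
        """The (degree + 1) Bernstein polynomials of the given degree, evaluated at x in [0, 1]."""
        x = np.asarray(x, dtype=float)
        return np.array([comb(degree, k) * x ** k * (1 - x) ** (degree - k) for k in range(degree + 1)])

    x = np.linspace(0.0, 1.0, 101)
    B = bernstein_basis(3, x)                        # cubic basis, shape (4, len(x))
    print("nonnegative:", bool((B >= -1e-12).all()))
    print("sums to one:", bool(np.allclose(B.sum(axis=0), 1.0)))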

  6. Public health triangulation: approach and application to synthesizing data to understand national and local HIV epidemics

    Directory of Open Access Journals (Sweden)

    Aberle-Grasse John

    2010-07-01

    Full Text Available Abstract Background Public health triangulation is a process for reviewing, synthesising and interpreting secondary data from multiple sources that bear on the same question to make public health decisions. It can be used to understand the dynamics of HIV transmission and to measure the impact of public health programs. While traditional intervention research and meta-analysis would be ideal sources of information for public health decision making, they are infrequently available, and often decisions can be based only on surveillance and survey data. Methods The process involves examination of a wide variety of data sources, including biological, behavioral and program data, and seeks input from stakeholders to formulate meaningful public health questions. Finally and most importantly, it uses the results to inform public health decision-making. There are 12 discrete steps in the triangulation process, which include identification and assessment of key questions, identification of data sources, refining questions, gathering data and reports, assessing the quality of those data and reports, formulating hypotheses to explain trends in the data, corroborating or refining working hypotheses, drawing conclusions, communicating results and recommendations and taking public health action. Results Triangulation can be limited by the quality of the original data, the potential for ecological fallacy and "data dredging", and the reproducibility of results. Conclusions Nonetheless, we believe that public health triangulation allows for the interpretation of data sets that cannot be analyzed using meta-analysis and can be a helpful adjunct to surveillance, to formal public health intervention research and to monitoring and evaluation, which in turn lead to improved national strategic planning and resource allocation.

  7. Automated matching of corresponding seed images of three simulator radiographs to allow 3D triangulation of implanted seeds

    Science.gov (United States)

    Altschuler, Martin D.; Kassaee, Alireza

    1997-02-01

    To match corresponding seed images in different radiographs so that the 3D seed locations can be triangulated automatically and without ambiguity requires (at least) three radiographs taken from different perspectives, and an algorithm that finds the proper permutations of the seed-image indices. Matching corresponding images in only two radiographs introduces inherent ambiguities which can be resolved only with the use of non-positional information obtained with intensive human effort. Matching images in three or more radiographs is an 'NP (nondeterministic polynomial time)-complete' problem. Although the matching problem is fundamental, current methods for three-radiograph seed-image matching use 'local' (seed-by-seed) methods that may lead to incorrect matchings. We describe a permutation-sampling method which not only gives good 'global' (full permutation) matches for the NP-complete three-radiograph seed-matching problem, but also determines the reliability of the radiographic data themselves, namely, whether the patient moved in the interval between radiographic perspectives.

  8. Improved laser-based triangulation sensor with enhanced range and resolution through adaptive optics-based active beam control.

    Science.gov (United States)

    Reza, Syed Azer; Khwaja, Tariq Shamim; Mazhar, Mohsin Ali; Niazi, Haris Khan; Nawab, Rahma

    2017-07-20

    Various existing target ranging techniques are limited in terms of the dynamic range of operation and measurement resolution. These limitations arise as a result of a particular measurement methodology, the finite processing capability of the hardware components deployed within the sensor module, and the medium through which the target is viewed. Generally, improving the sensor range adversely affects its resolution and vice versa. Often, a distance sensor is designed for an optimal range/resolution setting depending on its intended application. Optical triangulation is broadly classified as a spatial-signal-processing-based ranging technique and measures target distance from the location of the reflected spot on a position sensitive detector (PSD). In most triangulation sensors that use lasers as a light source, beam divergence, which severely affects sensor measurement range, is often ignored in calculations. In this paper, we first discuss in detail the limitations to ranging imposed by beam divergence, which, in effect, sets the sensor dynamic range. Next, we show how the resolution of laser-based triangulation sensors is limited by the interpixel pitch of a finite-sized PSD. In this paper, through the use of tunable focus lenses (TFLs), we propose a novel design of a triangulation-based optical rangefinder that improves both the sensor resolution and its dynamic range through adaptive electronic control of beam propagation parameters. We present the theory and operation of the proposed sensor and clearly demonstrate a range and resolution improvement with the use of TFLs. Experimental results in support of our claims are shown to be in strong agreement with theory.
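
    The trade-off discussed above follows from the basic triangulation geometry: with baseline b, focal length f and spot position x on the PSD, the range is z = f*b/x, so a finite pixel pitch dx limits the range resolution to roughly dz = z^2 * dx / (f*b), which degrades quadratically with range. The sketch below evaluates this with invented values of f, b and pitch.

    f = 0.025        # receiving-lens focal length, m (illustrative)
    b = 0.10         # baseline between laser and lens, m (illustrative)
    pitch = 10e-6    # PSD / pixel pitch, m (illustrative)

    def range_resolution(z):
        """Range uncertainty caused by one pixel pitch on the PSD: dz ~ z^2 * dx / (f * b)."""
        return (z ** 2) * pitch / (f * b)

    for z in (0.5, 1.0, 2.0, 4.0):
        x = f * b / z                                # spot position on the PSD for range z
        print(f"z = {z:4.1f} m   spot at {x * 1e3:6.3f} mm   resolution ~ {range_resolution(z) * 1e3:6.1f} mm")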

  9. On-Line Metrology with Conoscopic Holography: Beyond Triangulation

    Directory of Open Access Journals (Sweden)

    Ignacio Álvarez

    2009-09-01

    Full Text Available On-line non-contact surface inspection with high precision is still an open problem. Laser triangulation techniques are the most common solution for this kind of system, but there exist fundamental limitations to their applicability when high precision, long standoffs or large apertures are needed, and when there are difficult operating conditions. Other methods are, in general, not applicable in hostile environments or inadequate for on-line measurement. In this paper we review the latest research in Conoscopic Holography, an interferometric technique that has been applied successfully in this kind of application, ranging from submicrometric roughness measurements to long-standoff sensors for surface defect detection in steel at high temperatures.

  10. An Array of Qualitative Data Analysis Tools: A Call for Data Analysis Triangulation

    Science.gov (United States)

    Leech, Nancy L.; Onwuegbuzie, Anthony J.

    2007-01-01

    One of the most important steps in the qualitative research process is analysis of data. The purpose of this article is to provide elements for understanding multiple types of qualitative data analysis techniques available and the importance of utilizing more than one type of analysis, thus utilizing data analysis triangulation, in order to…

  11. Ising model of a randomly triangulated random surface as a definition of fermionic string theory

    International Nuclear Information System (INIS)

    Bershadsky, M.A.; Migdal, A.A.

    1986-01-01

    Fermionic degrees of freedom are added to randomly triangulated planar random surfaces. It is shown that the Ising model on a fixed graph is equivalent to a certain Majorana fermion theory on the dual graph. (orig.)

  12. RECONSTRUCTION, QUANTIFICATION, AND VISUALIZATION OF FOREST CANOPY BASED ON 3D TRIANGULATIONS OF AIRBORNE LASER SCANNING POINT DATA

    Directory of Open Access Journals (Sweden)

    J. Vauhkonen

    2015-03-01

    Full Text Available Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6–0.8 points m⁻² and field measurements aggregated at resolutions of 400–900 m². The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by means of analyzing the persistent homology of the obtained triangulations, which is applied for the first time for vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R2) with the stem volume considered, both alone (R2 = 0.65) and together with other predictors (R2 = 0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R2 were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.
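
    A crude version of the filtration idea is sketched below: ALS-like 3D points are Delaunay-triangulated, tetrahedra whose longest edge exceeds a threshold are discarded (a simple stand-in for choosing the degree of filtration), and the volumes of the retained tetrahedra are summed as a canopy-volume proxy. The random point cloud and threshold are invented; they do not reproduce the optimization or persistent-homology analysis of the paper.

    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    rng = np.random.default_rng(2)
    pts = rng.uniform(0.0, 10.0, size=(500, 3))              # toy "canopy" point cloud (m)

    def tetra_volume(a, b, c, d):
        return abs(np.linalg.det(np.c_[b - a, c - a, d - a])) / 6.0

    tri = Delaunay(pts)
    kept_volume = 0.0
    for simplex in tri.simplices:
        a, b, c, d = pts[simplex]
        edge_pairs = ((a, b), (a, c), (a, d), (b, c), (b, d), (c, d))
        if max(np.linalg.norm(p - q) for p, q in edge_pairs) < 2.0:   # filtration threshold (assumed, m)
            kept_volume += tetra_volume(a, b, c, d)

    hull_volume = ConvexHull(pts).volume
    print(f"filtered volume ~ {kept_volume:.0f} m^3 of {hull_volume:.0f} m^3 convex-hull volume")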

  13. Laser triangulation method for measuring the size of parking claw

    Science.gov (United States)

    Liu, Bo; Zhang, Ming; Pang, Ying

    2017-10-01

    With advances in science and the growing maturity of measurement techniques, 3D profile measurement has developed rapidly. Three-dimensional measurement technology is widely used in mold manufacturing, industrial inspection, automatic processing and manufacturing, etc. In many situations in scientific research and industrial production it is necessary to convert physical mechanical parts into 3D data models on the computer quickly and accurately. Many methods have been developed to measure contour size; laser triangulation is one of the most widely used.

  14. Technique Triangulation for Validation in Directed Content Analysis

    Directory of Open Access Journals (Sweden)

    Áine M. Humble PhD

    2009-09-01

    Full Text Available Division of labor in wedding planning varies for first-time marriages, with three types of couples—traditional, transitional, and egalitarian—identified, but nothing is known about wedding planning for remarrying individuals. Using semistructured interviews, the author interviewed 14 couples in which at least one person had remarried and used directed content analysis to investigate the extent to which the aforementioned typology could be transferred to this different context. In this paper she describes how a triangulation of analytic techniques provided validation for couple classifications and also helped with moving beyond “blind spots” in data analysis. Analytic approaches were the constant comparative technique, rank order comparison, and visual representation of coding, using MAXQDA 2007's tool called TextPortraits.

  15. Qualitative to quantitative: linked trajectory of method triangulation in a study on HIV/AIDS in Goa, India.

    Science.gov (United States)

    Bailey, Ajay; Hutter, Inge

    2008-10-01

    With 3.1 million people estimated to be living with HIV/AIDS in India and 39.5 million people globally, the epidemic has posed academics the challenge of identifying behaviours and their underlying beliefs in the effort to reduce the risk of HIV transmission. The Health Belief Model (HBM) is frequently used to identify risk behaviours and adherence behaviour in the field of HIV/AIDS. Risk behaviour studies that apply HBM have been largely quantitative and use of qualitative methodology is rare. The marriage of qualitative and quantitative methods has never been easy. The challenge is in triangulating the methods. Method triangulation has been largely used to combine insights from the qualitative and quantitative methods but not to link both the methods. In this paper we suggest a linked trajectory of method triangulation (LTMT). The linked trajectory aims to first gather individual level information through in-depth interviews and then to present the information as vignettes in focus group discussions. We thus validate information obtained from in-depth interviews and gather emic concepts that arise from the interaction. We thus capture both the interpretation and the interaction angles of the qualitative method. Further, using the qualitative information gained, a survey is designed. In doing so, the survey questions are grounded and contextualized. We employed this linked trajectory of method triangulation in a study on the risk assessment of HIV/AIDS among migrant and mobile men. Fieldwork was carried out in Goa, India. Data come from two waves of studies, first an explorative qualitative study (2003), second a larger study (2004-2005), including in-depth interviews (25), focus group discussions (21) and a survey (n=1259). By employing the qualitative to quantitative LTMT we can not only contextualize the existing concepts of the HBM, but also validate new concepts and identify new risk groups.

  16. Research on the Perforating Algorithm Based on STL Files

    Science.gov (United States)

    Yuchuan, Han; Xianfeng, Zhu; Yunrui, Bai; Zhiwen, Wu

    2018-04-01

    In the process of making personalized medical external fixation braces, the 3D data file should be perforated to increase air permeability and reduce weight. In this paper, a perforating algorithm for 3D STL files is proposed, which can cut holes, hollow out characters and engrave decorative patterns on STL files. The perforating process consists of three steps. First, the imaginary space surface is intersected with the STL model and triangles are reconstructed at the intersection. Second, the triangular facets inside the space surface are deleted, making a hole in the STL model. Third, the inner surface of the hole is triangulated, completing the perforation. Choosing simple space-surface equations, such as those of a cylinder or a rectangular prism, as perforating equations produces round or rectangular holes, and by combining different holes, lettering, decorative patterns and other perforated results can be produced. Finally, an external fixation brace and a personalized pen container were perforated using the algorithm and the expected results were achieved, which shows that the algorithm is feasible.
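
    A much simplified sketch of the facet-deletion step (the second step) is given below: facets of a triangulated mesh whose centroids fall inside an assumed perforating cylinder are dropped. The re-triangulation of the intersection boundary and of the inner hole surface (the first and third steps) is omitted, and the flat toy mesh and cylinder parameters are invented.

    import numpy as np

    def perforate_cylinder(facets, centre_xy, radius):
        """facets: (N, 3, 3) triangle vertices; keep facets whose centroid lies outside the cylinder (axis parallel to z)."""
        centroids = facets.mean(axis=1)
        dist = np.linalg.norm(centroids[:, :2] - np.asarray(centre_xy), axis=1)
        return facets[dist >= radius]

    # Toy "STL mesh": a flat 10 x 10 square split into triangles.
    facets = []
    for i in range(10):
        for j in range(10):
            a, b, c, d = (i, j, 0), (i + 1, j, 0), (i + 1, j + 1, 0), (i, j + 1, 0)
            facets += [[a, b, c], [a, c, d]]
    facets = np.array(facets, dtype=float)
    kept = perforate_cylinder(facets, centre_xy=(5.0, 5.0), radius=1.5)
    print(f"{len(facets)} facets before perforation, {len(kept)} after")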

  17. Obtaining the Andersen's chart, triangulation algorithm

    DEFF Research Database (Denmark)

    Sabaliauskas, Tomas; Ibsen, Lars Bo

    Andersen’s chart (Andersen & Berre, 1999) is a graphical method of observing cyclic soil response. It allows observing soil response to various stress amplitudes that can lead to liquefaction, excess plastic deformation or stabilizing soil response. The process of obtaining the original chart has...

  18. Simultaneous hierarchical segmentation and vectorization of satellite images through combined data sampling and anisotropic triangulation

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory; Prasad, Lakshman [Los Alamos National Laboratory; Dillard, Scott [PNNL

    2010-10-21

    The automatic detection, recognition, and segmentation of object classes in remotely sensed images is of crucial importance for scene interpretation and understanding. However, it is a difficult task because of the high variability of satellite data. Indeed, the observed scenes usually exhibit a high degree of complexity, where complexity refers to the large variety of pictorial representations of objects with the same semantic meaning and also to the extensive amount of available details. Therefore, there is still a strong demand for robust techniques for automatic information extraction and interpretation of satellite images. In parallel, there is a growing interest in techniques that can extract vector features directly from such imagery. In this paper, we investigate the problem of automatic hierarchical segmentation and vectorization of multispectral satellite images. We propose a new algorithm composed of the following steps: (i) a non-uniform sampling scheme extracting the most salient pixels in the image, (ii) an anisotropic triangulation constrained by the sampled pixels taking into account both strength and directionality of local structures present in the image, (iii) a polygonal grouping scheme merging, through techniques based on perceptual information, the obtained segments into a smaller number of higher-level vector objects. Besides its computational efficiency, this approach provides a meaningful polygonal representation for subsequent image analysis and/or interpretation.

  19. The Marginalized "Model" Minority: An Empirical Examination of the Racial Triangulation of Asian Americans

    Science.gov (United States)

    Xu, Jun; Lee, Jennifer C.

    2013-01-01

    In this article, we propose a shift in race research from a one-dimensional hierarchical approach to a multidimensional system of racial stratification. Building upon Claire Kim's (1999) racial triangulation theory, we examine how the American public rates Asians relative to blacks and whites along two dimensions of racial stratification: racial…

  20. Detecting objects in radiographs for homeland security

    Science.gov (United States)

    Prasad, Lakshman; Snyder, Hans

    2005-05-01

    We present a general scheme for segmenting a radiographic image into polygons that correspond to visual features. This decomposition provides a vectorized representation that is a high-level description of the image. The polygons correspond to objects or object parts present in the image. This characterization of radiographs allows the direct application of several shape recognition algorithms to identify objects. In this paper we describe the use of constrained Delaunay triangulations as a uniform foundational tool to achieve multiple visual tasks, namely image segmentation, shape decomposition, and parts-based shape matching. Shape decomposition yields parts that serve as tokens representing local shape characteristics. Parts-based shape matching enables the recognition of objects in the presence of occlusions, which commonly occur in radiographs. The polygonal representation of image features affords the efficient design and application of sophisticated geometric filtering methods to detect large-scale structural properties of objects in images. Finally, the representation of radiographs via polygons results in significant reduction of image file sizes and permits the scalable graphical representation of images, along with annotations of detected objects, in the SVG (scalable vector graphics) format that is proposed by the world wide web consortium (W3C). This is a textual representation that can be compressed and encrypted for efficient and secure transmission of information over wireless channels and on the Internet. In particular, our methods described here provide an algorithmic framework for developing image analysis tools for screening cargo at ports of entry for homeland security.

  1. Lymphoscintigraphy and triangulated body marking for morbidity reduction during sentinel node biopsy in breast cancer.

    Science.gov (United States)

    Krynyckyi, Borys R; Shafir, Michail K; Kim, Suk Chul; Kim, Dong Wook; Travis, Arlene; Moadel, Renee M; Kim, Chun K

    2005-11-08

    Current trends in patient care include the desire for minimizing invasiveness of procedures and interventions. This aim is reflected in the increasing utilization of sentinel lymph node biopsy, which results in a lower level of morbidity in breast cancer staging, in comparison to extensive conventional axillary dissection. Optimized lymphoscintigraphy with triangulated body marking is a clinical option that can further reduce morbidity, more than when a hand held gamma probe alone is utilized. Unfortunately it is often either overlooked or not fully understood, and thus not utilized. This results in the unnecessary loss of an opportunity to further reduce morbidity. Optimized lymphoscintigraphy and triangulated body marking provides a detailed 3 dimensional map of the number and location of the sentinel nodes, available before the first incision is made. The number, location, relevance based on time/sequence of appearance of the nodes, all can influence 1) where the incision is made, 2) how extensive the dissection is, and 3) how many nodes are removed. In addition, complex patterns can arise from injections. These include prominent lymphatic channels, pseudo-sentinel nodes, echelon and reverse echelon nodes and even contamination, which are much more difficult to access with the probe only. With the detailed information provided by optimized lymphoscintigraphy and triangulated body marking, the surgeon can approach the axilla in a more enlightened fashion, in contrast to when the less informed probe only method is used. This allows for better planning, resulting in the best cosmetic effect and less trauma to the tissues, further reducing morbidity while maintaining adequate sampling of the sentinel node(s).

  2. First Instances of Generalized Expo-Rational Finite Elements on Triangulations

    Science.gov (United States)

    Dechevsky, Lubomir T.; Zanaty, Peter; Lakså, Arne; Bang, Børre

    2011-12-01

    In this communication we consider a construction of simplicial finite elements on triangulated two-dimensional polygonal domains. This construction is, in some sense, dual to the construction of generalized expo-rational B-splines (GERBS). The main result is the construction of new polynomial simplicial patches of the first several lowest possible total polynomial degrees that exhibit Hermite interpolatory properties. The derivation of these results is based on the theory of piecewise polynomial GERBS called Euler Beta-function B-splines. We also provide 3-dimensional visualization of the graphs of the new polynomial simplicial patches and their control polygons.

  3. Options for a health system researcher to choose in Meta Review (MR approaches-Meta Narrative (MN and Meta Triangulation (MT

    Directory of Open Access Journals (Sweden)

    Sanjeev Davey

    2015-01-01

    Full Text Available Two new approaches in systematic reviewing, the meta-narrative review (MNR), which a health researcher can use for topics that are conceptualized and studied differently by different types of researchers for policy decisions, and the meta-triangulation review (MTR), done to build theory for studying multifaceted phenomena characterized by expansive and contested research domains, are ready for adoption in health system research. A critical look at which meta-review approach is better, the meta-narrative review or the meta-triangulation review, can therefore give new insights to a health system researcher. A systematic search on two keywords, "meta-narrative review" and "meta-triangulation review", in health system research was carried out in key search engines such as PubMed, the Cochrane Library, BioMed Central and Google Scholar, covering the last 20 years up to 21 March 2014. Studies from both the developed and the developing world were included in any form and scope to draw final conclusions; however, unpublished thesis data were not included in the systematic review. The meta-narrative review is a type of systematic review which can be used for a wide range of topics and questions involving judgments and inferences in public health. The meta-triangulation review, on the other hand, is a three-phased, qualitative meta-analysis process which can be used to explore variations in the assumptions of alternative paradigms, gain insights into these multiple paradigms at one point in time, and address emerging themes and the resulting theories.

  4. Making the Most of Obesity Research: Developing Research and Policy Objectives through Evidence Triangulation

    Science.gov (United States)

    Oliver, Kathryn; Aicken, Catherine; Arai, Lisa

    2013-01-01

    Drawing lessons from research can help policy makers make better decisions. If a large and methodologically varied body of research exists, as with childhood obesity, this is challenging. We present new research and policy objectives for child obesity developed by triangulating user involvement data with a mapping study of interventions aimed at…

  5. Triangulation-based edge measurement using polyview optics

    Science.gov (United States)

    Li, Yinan; Kästner, Markus; Reithmeier, Eduard

    2018-04-01

    Laser triangulation sensors, as non-contact measurement devices, are widely used in industry and research for profile measurements and quantitative inspections. Some technical applications, e.g. edge measurements, usually require a configuration of a single sensor and a translation stage, or a configuration of multiple sensors, so that a measurement range beyond the scope of a single sensor can be covered. However, the cost of both configurations is high, due to the additional rotational axis or additional sensors. This paper presents a special measurement system for the measurement of large curved surfaces based on a single-sensor configuration. Utilizing self-designed polyview optics and a calibration process, the proposed measurement system offers a field of view (FOV) of over 180° with high measurement accuracy as well as low cost. The detailed capability of this measurement system is discussed on the basis of experimental data.

  6. On the effect of standard PFEM remeshing on volume conservation in free-surface fluid flow problems

    Science.gov (United States)

    Franci, Alessandro; Cremonesi, Massimiliano

    2017-07-01

    The aim of this work is to analyze the remeshing procedure used in the particle finite element method (PFEM) and to investigate how this operation may affect the numerical results. The PFEM remeshing algorithm combines the Delaunay triangulation and the Alpha Shape method to guarantee good quality of the Lagrangian mesh even in large-deformation processes. However, this strategy may lead to local variations of the topology that may cause an artificial change of the global volume. The issue of volume conservation is here studied in detail. An accurate description of all the situations that may induce a volume variation during the PFEM regeneration of the mesh is provided. Moreover, the crucial role of the parameter α used in the Alpha Shape method is highlighted, and a range of values of α for which the differences between the numerical results are negligible is found. Furthermore, it is shown that the variation of volume induced by the remeshing decreases as the mesh is refined. This check of convergence is of paramount importance for the reliability of the PFEM. The study is carried out for 2D free-surface fluid dynamics problems; however, the conclusions can be extended to 3D and to all those problems characterized by significant variations of internal and external boundaries.
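
    A minimal 2-D sketch of the Delaunay-plus-Alpha-Shape step described above: triangulate the particle cloud and keep only triangles whose circumradius is below α·h. The values of α, the mesh size h and the random point cloud are illustrative assumptions, not the settings used in the paper.

```python
# Delaunay + Alpha Shape sketch: keep triangles with circumradius <= alpha * h.
import numpy as np
from scipy.spatial import Delaunay

def circumradius(p0, p1, p2):
    a = np.linalg.norm(p1 - p2)
    b = np.linalg.norm(p0 - p2)
    c = np.linalg.norm(p0 - p1)
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0]))
    return a * b * c / (4.0 * area) if area > 1e-12 else np.inf

def alpha_shape_triangles(points, alpha, h):
    """Triangulate the cloud and discard triangles failing the alpha criterion."""
    tri = Delaunay(points)
    kept = [s for s in tri.simplices if circumradius(*points[s]) <= alpha * h]
    return np.array(kept)

rng = np.random.default_rng(0)
cloud = rng.random((200, 2))                  # stand-in for the Lagrangian particle cloud
mesh = alpha_shape_triangles(cloud, alpha=1.2, h=0.08)
print(mesh.shape)                             # (number of kept triangles, 3)
```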

  7. Feminist Approaches to Triangulation: Uncovering Subjugated Knowledge and Fostering Social Change in Mixed Methods Research

    Science.gov (United States)

    Hesse-Biber, Sharlene

    2012-01-01

    This article explores the deployment of triangulation in the service of uncovering subjugated knowledge and promoting social change for women and other oppressed groups. Feminist approaches to mixed methods praxis create a tight link between the research problem and the research design. An analysis of selected case studies of feminist praxis…

  8. Hand-held triangulation laser profilometer with audio output for blind people

    Science.gov (United States)

    Farcy, R.; Damaschini, R.

    1998-06-01

    We describe a device currently under industrial development which will give to the blind a means of three-dimensional space perception. It consists of a 350 g hand-held triangulating laser telemeter including electronic parts and batteries, with auditory feedback either inside the apparatus or close to the ear. The microprocessor unit converts in real time the distance measured by the telemeter into a musical note. Scanning the space with an adequate movement of the hand produces musical lines corresponding to the profiles of the environment. We discuss the optical configuration of the system relative to our first year of clinical experimentation.
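
    A hedged sketch of the distance-to-pitch conversion the abstract mentions. The logarithmic mapping, the 0.3-5 m range and the two reference frequencies are invented illustration values, not the device's actual calibration.

```python
# Map a telemeter distance reading to a musical pitch (nearer -> higher note).
import math

def distance_to_frequency(d_m, d_min=0.3, d_max=5.0, f_near=880.0, f_far=220.0):
    d = min(max(d_m, d_min), d_max)
    # fraction of the logarithmic distance range covered by this reading
    t = (math.log(d) - math.log(d_min)) / (math.log(d_max) - math.log(d_min))
    # quantise to semitones between the near and far reference pitches
    semitones = round(t * 12 * math.log2(f_far / f_near))
    return f_near * 2 ** (semitones / 12)

for d in (0.3, 1.0, 2.0, 5.0):
    print(d, round(distance_to_frequency(d), 1))
```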

  9. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Science.gov (United States)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still provides a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
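
    A small Monte Carlo check of the √N averaging argument, under the assumption that the per-profile speckle-induced range error behaves like independent zero-mean Gaussian noise (an assumption for illustration, not the sensor's actual error model).

```python
# Averaging N uncorrelated profiles reduces the range error roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
sigma_single = 0.05          # single-profile range error (arbitrary units, assumed)
trials = 20000

for n_profiles in (1, 4, 16, 64):
    errors = rng.normal(0.0, sigma_single, size=(trials, n_profiles)).mean(axis=1)
    print(f"N={n_profiles:3d}  std={errors.std():.4f}  "
          f"expected={sigma_single / np.sqrt(n_profiles):.4f}")
```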

  10. From causal dynamical triangulations to astronomical observations

    Science.gov (United States)

    Mielczarek, Jakub

    2017-09-01

    This letter discusses phenomenological aspects of dimensional reduction predicted by the Causal Dynamical Triangulations (CDT) approach to quantum gravity. The deformed form of the dispersion relation for the fields defined on the CDT space-time is reconstructed. Using the Fermi satellite observations of the GRB 090510 source we find that the energy scale of the dimensional reduction is $E_* > 0.7\sqrt{4-d_{\text{UV}}} \cdot 10^{10}\ \text{GeV}$ at 95% CL, where $d_{\text{UV}}$ is the value of the spectral dimension in the UV limit. By applying the deformed dispersion relation to the cosmological perturbations it is shown that, for a scenario in which the primordial perturbations are formed in the UV region, the scalar power spectrum $P_S \propto k^{n_S-1}$, where $n_S - 1 \approx \frac{3r(d_{\text{UV}}-2)}{(d_{\text{UV}}-1)r-48}$. Here, $r$ is the tensor-to-scalar ratio. We find that, within the considered model, the deviation from scale invariance ($n_S = 1$) predicted by CDT is in contradiction with up-to-date Planck and BICEP2 data.

  11. Fixed-topology Lorentzian triangulations: Quantum Regge Calculus in the Lorentzian domain

    Science.gov (United States)

    Tate, Kyle; Visser, Matt

    2011-11-01

    A key insight used in developing the theory of Causal Dynamical Triangulations (CDTs) is to use the causal (or light-cone) structure of Lorentzian manifolds to restrict the class of geometries appearing in the Quantum Gravity (QG) path integral. By exploiting this structure the models developed in CDTs differ from the analogous models developed in the Euclidean domain, models of (Euclidean) Dynamical Triangulations (DT), and the corresponding Lorentzian results are in many ways more "physical". In this paper we use this insight to formulate a Lorentzian signature model that is analogous to the Quantum Regge Calculus (QRC) approach to Euclidean Quantum Gravity. We exploit another crucial fact about the structure of Lorentzian manifolds, namely that certain simplices are not constrained by the triangle inequalities present in Euclidean signature. We show that this model is not related to QRC by a naive Wick rotation; this serves as another demonstration that the sum over Lorentzian geometries is not simply related to the sum over Euclidean geometries. By removing the triangle inequality constraints, there is more freedom to perform analytical calculations, and in addition numerical simulations are more computationally efficient. We first formulate the model in 1 + 1 dimensions, and derive scaling relations for the pure gravity path integral on the torus using two different measures. It appears relatively easy to generate "large" universes, both in spatial and temporal extent. In addition, loop-to-loop amplitudes are discussed, and a transfer matrix is derived. We then also discuss the model in higher dimensions.

  12. Depth measurements of drilled holes in bone by laser triangulation for the field of oral implantology

    Science.gov (United States)

    Quest, D.; Gayer, C.; Hering, P.

    2012-01-01

    Laser osteotomy is one possible method of preparing beds for dental implants in the human jaw. A major problem in using this contactless treatment modality is the lack of haptic feedback to control the depth while drilling the implant bed. A contactless measurement system called laser triangulation is presented as a new procedure to overcome this problem. Together with a tomographic picture the actual position of the laser ablation in the bone can be calculated. Furthermore, the laser response is sufficiently fast as to pose little risk to surrounding sensitive areas such as nerves and blood vessels. In the jaw two different bone structures exist, namely the cancellous bone and the compact bone. Samples of both bone structures were examined with test drillings performed either by laser osteotomy or by a conventional rotating drilling tool. The depth of these holes was measured using laser triangulation. The results and the setup are reported in this study.
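
    A minimal sketch of one common textbook laser-triangulation geometry (laser beam parallel to the camera's optical axis at a known baseline), showing how a lateral spot displacement on the detector converts to a depth reading. The baseline, focal length and pixel values are illustrative assumptions, not the parameters of the system described above.

```python
# Parallel-axis triangulation: a spot at depth Z images at pixel offset u = f*b/Z.
def triangulation_depth(u_px, focal_px=1400.0, baseline_mm=40.0):
    if u_px <= 0:
        raise ValueError("spot must be offset from the optical axis")
    return focal_px * baseline_mm / u_px      # depth in mm

# drilling progress = change in measured depth between two acquisitions
z_before = triangulation_depth(u_px=1120.0)  # ~50.0 mm
z_after = triangulation_depth(u_px=1100.0)   # ~50.9 mm
print(f"hole deepened by {z_after - z_before:.2f} mm")
```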

  13. Spectral triangulation molecular contrast optical coherence tomography with indocyanine green as the contrast agent

    OpenAIRE

    Yang, Changhuei; McGuckin, Laura E. L.; Simon, John D.; Choma, Michael A.; Applegate, Brian E.; Izatt, Joseph A.

    2004-01-01

    We report a new molecular contrast optical coherence tomography (MCOCT) implementation that profiles the contrast agent distribution in a sample by measuring the agent's spectral differential absorption. The method, spectral triangulation MCOCT, can effectively suppress contributions from spectrally dependent scattering from the sample without a priori knowledge of the scattering properties. We demonstrate molecular imaging with this new MCOCT modality by mapping the distribution of indocyani...

  14. Indirect measurement of molten steel level in tundish based on laser triangulation

    Science.gov (United States)

    Su, Zhiqi; He, Qing; Xie, Zhi

    2016-03-01

    For real-time and precise measurement of the molten steel level in the tundish during continuous casting, the slag level and slag thickness are needed. Of these, the problem of slag thickness measurement was solved in our previous work. In this paper, a systematic solution for slag level measurement based on laser triangulation is proposed. Differing from traditional laser triangulation, several measures are taken to ensure measuring precision and robustness. First, a laser line is adopted for multi-position measurement to overcome the deficiency of a single-point laser range finder caused by the uneven surface of the slag. Second, the key parameters, such as the installation angle and the minimum required laser power, are analyzed and determined based on gray-body radiation theory to fulfill the rigorous requirement of measurement accuracy. Third, two kinds of severe noise in the acquired images, caused respectively by heat radiation and electromagnetic interference (EMI), are removed via the morphological characteristics of the liquid slag and the color difference between the EMI and the laser signals, respectively. Fourth, as false targets created by stationary slag usually disorder the measurement, valid signals of the slag are distinguished from the false ones to calculate the slag level. Then, the molten steel level is obtained by subtracting the slag thickness from the slag level. The measuring error of this solution is verified by applications in steel plants: ±2.5 mm during steady casting and ±3.2 mm at the end of casting.

  15. Indirect measurement of molten steel level in tundish based on laser triangulation

    Energy Technology Data Exchange (ETDEWEB)

    Su, Zhiqi; He, Qing, E-mail: heqing@ise.neu.edu.cn; Xie, Zhi [State Key Laboratory of Synthetical Automation for Process Industries, School of Information Science and Engineering, Northeastern University, Shenyang 110819 (China)

    2016-03-15

    For real-time and precise measurement of the molten steel level in the tundish during continuous casting, the slag level and slag thickness are needed. Of these, the problem of slag thickness measurement was solved in our previous work. In this paper, a systematic solution for slag level measurement based on laser triangulation is proposed. Differing from traditional laser triangulation, several measures are taken to ensure measuring precision and robustness. First, a laser line is adopted for multi-position measurement to overcome the deficiency of a single-point laser range finder caused by the uneven surface of the slag. Second, the key parameters, such as the installation angle and the minimum required laser power, are analyzed and determined based on gray-body radiation theory to fulfill the rigorous requirement of measurement accuracy. Third, two kinds of severe noise in the acquired images, caused respectively by heat radiation and electromagnetic interference (EMI), are removed via the morphological characteristics of the liquid slag and the color difference between the EMI and the laser signals, respectively. Fourth, as false targets created by stationary slag usually disorder the measurement, valid signals of the slag are distinguished from the false ones to calculate the slag level. Then, the molten steel level is obtained by subtracting the slag thickness from the slag level. The measuring error of this solution is verified by applications in steel plants: ±2.5 mm during steady casting and ±3.2 mm at the end of casting.

  16. 1:500 Scale Aerial Triangulation Test with Unmanned Airship in Hubei Province

    International Nuclear Information System (INIS)

    Feifei, Xie; Zongjian, Lin; Dezhu, Gui

    2014-01-01

    A new UAVS (Unmanned Aerial Vehicle System) for low-altitude aerial photogrammetry is introduced for fine surveying and mapping, comprising the airship platform, a four-combined wide-angle camera sensor system and the photogrammetry software MAP-AT. Based on a test of this system in Hubei Province, covering the working condition of the airship, the quality of the image data and the data processing report, it is demonstrated that this low-altitude aerial photogrammetric system meets the precision requirements of 1:500 scale aerial triangulation. This work opens a possibility for fine surveying and mapping

  17. Triangulating laser profilometer as a navigational aid for the blind: optical aspects

    Science.gov (United States)

    Farcy, R.; Denise, B.; Damaschini, R.

    1996-03-01

    We propose a navigational aid approach for the blind that relies on active optical profilometry with real-time electrotactile interfacing on the skin. Here we are concerned with the optical parts of this system. We point out the particular requirements the profilometer must satisfy to meet the needs of blind people. We show experimentally that an adequate compromise is possible, consisting of a compact class I IR laser-diode triangulation profilometer with the following characteristics: the required angular resolution, a 20-ms acquisition time per distance measurement, and a 60 degree angular scanning field.

  18. UAV PHOTOGRAMMETRY: BLOCK TRIANGULATION COMPARISONS

    Directory of Open Access Journals (Sweden)

    R. Gini

    2013-08-01

    Full Text Available UAV systems represent a flexible technology able to collect a large amount of high-resolution information, both for metric and interpretation uses. In the frame of experimental tests carried out at Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess metric accuracies of images acquired by UAVs, a block of photos taken by a fixed-wing system is triangulated with several software packages. The test field is a rural area included in an Italian park ("Parco Adda Nord"), useful to study flight and imagery performances on buildings, roads, cultivated and uncultivated vegetation. The UAV SenseFly, equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided in 5 strips. Sixteen pre-signalized Ground Control Points, surveyed in the area through a GPS (NRTK) survey, allowed the referencing of the block and accuracy analyses. Approximate values for exterior orientation parameters (positions and attitudes) were recorded by the flight control system. The block was processed with several software packages: Erdas-LPS, EyeDEA (Univ. of Parma), Agisoft Photoscan and Pix4UAV, in assisted or automatic mode. Comparisons of the results are given in terms of differences among digital surface models, differences in orientation parameters and accuracies, when available. Moreover, image and ground point coordinates obtained by the various software packages were independently used as initial values in a comparative adjustment made by scientific in-house software, which can apply constraints to evaluate the effectiveness of different methods of point extraction and accuracies on ground check points.

  19. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
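
    A hedged sketch of the perimeter definition used above: the perimeter is taken as the convex hull of the mid-length cross-section points. The elliptical synthetic profile below stands in for the calibrated laser-line points of the real system; the half-axis values are invented.

```python
# Perimeter of a cross-section as the convex hull of its (x, z) points.
import numpy as np
from scipy.spatial import ConvexHull

theta = np.linspace(0, np.pi, 200)              # laser line sees the upper surface only
profile = np.column_stack((45 * np.cos(theta),  # half-axes in mm (assumed)
                           30 * np.sin(theta)))
# close the section along the tray so the hull approximates the full outline
section = np.vstack((profile, [[45.0, 0.0], [-45.0, 0.0]]))
hull = ConvexHull(section)
perimeter = hull.area                           # for a 2-D hull, .area is the perimeter
print(f"perimeter ≈ {perimeter:.1f} mm")
```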

  20. Triangulated Proxy Reporting: a technique for improving how communication partners come to know people with severe cognitive impairment.

    Science.gov (United States)

    Lyons, Gordon; De Bortoli, Tania; Arthur-Kelly, Michael

    2017-09-01

    This paper explains and demonstrates the pilot application of Triangulated Proxy Reporting (TPR), a practical technique for enhancing communication around people who have severe cognitive impairment (SCI). An introduction explains SCI and how this impacts on communication; and consequently on quality of care and quality of life. This is followed by an explanation of TPR and its origins in triangulation research techniques. An illustrative vignette explicates its utility and value in a group home for a resident with profound multiple disabilities. The Discussion and Conclusion sections propose the wider application of TPR for different cohorts of people with SCIs, their communication partners and service providers. TPR presents as a practical technique for enhancing communication interactions with people who have SCI. The paper demonstrates the potential of the technique for improving engagement amongst those with profound multiple disabilities, severe acquired brain injury and advanced dementia and their partners in and across different care settings. Implications for Rehabilitation Triangulated Proxy Reporting (TPR) shows potential to improve communications between people with severe cognitive impairments and their communication partners. TPR can lead to improved quality of care and quality of life for people with profound multiple disabilities, very advanced dementia and severe acquired brain injury, who otherwise are very difficult to support. TPR is a relatively simple and inexpensive technique that service providers can incorporate into practice to improve communications between clients with severe cognitive impairments, their carers and other support professionals.

  1. Indoor 3D Route Modeling Based On Estate Spatial Data

    Science.gov (United States)

    Zhang, H.; Wen, Y.; Jiang, J.; Huang, W.

    2014-04-01

    Indoor three-dimensional route models are essential for intelligent indoor navigation and emergency evacuation. This paper is motivated by the need to construct indoor route models as automatically as possible. By comparing existing building data sources, this paper first explains why estate spatial management data are chosen as the data source. Then, an applicable method for constructing a three-dimensional route model of a building is introduced by establishing the mapping relationship between geographic entities and their topological expression. This data model is a weighted graph consisting of "nodes" and "paths" that expresses the spatial relationships and topological structure of a building's components. The whole process of modelling the internal space of a building is addressed in two key steps: (1) the route model of each single floor is constructed, including extraction of corridor paths using a Delaunay triangulation algorithm with constrained edges and fusion of room nodes into the path network; (2) the single-floor route models are connected through stairs and elevators, and the multi-floor route model is eventually generated. To validate the method, a shopping mall called "Longjiang New City Plaza" in Nanjing is chosen as a case study, and the whole building space is modelled according to the method above. By integrating an existing path-finding algorithm, the usability of this modelling method is verified, which shows that the indoor three-dimensional route modelling method based on estate spatial data can support indoor route planning and evacuation route design very well.
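
    A minimal sketch of the node/path weighted-graph idea: rooms, corridor nodes and stairs become vertices, edges carry walking distances, and a shortest-path query yields the indoor or evacuation route. The tiny two-floor layout below is an invented example, not the case-study building.

```python
# Weighted indoor route graph with a Dijkstra shortest-path query.
import heapq

edges = {  # vertex: [(neighbour, distance_m), ...]
    "room_101": [("corridor_1F_a", 4.0)],
    "corridor_1F_a": [("room_101", 4.0), ("corridor_1F_b", 12.0), ("stairs_1F", 6.0)],
    "corridor_1F_b": [("corridor_1F_a", 12.0), ("exit", 3.0)],
    "stairs_1F": [("corridor_1F_a", 6.0), ("stairs_2F", 5.0)],
    "stairs_2F": [("stairs_1F", 5.0), ("room_201", 7.0)],
    "room_201": [("stairs_2F", 7.0)],
    "exit": [("corridor_1F_b", 3.0)],
}

def shortest_route(start, goal):
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (dist + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_route("room_201", "exit"))   # evacuation route from a 2nd-floor room
```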

  2. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    Science.gov (United States)

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    Directory of Open Access Journals (Sweden)

    Carlos Jiménez de Parga

    2018-04-01

    Full Text Available This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which volumetric rendering is performed. Novel techniques are used to reproduce the asymmetrical nature of clouds and the effects of light-scattering, with low computing costs. The work includes a new method to create randomized fractal clouds using a recursive grammar. The graphical results are comparable to those produced by state-of-the-art, hyper-realistic algorithms. These methods provide real-time performance, and are superior to particle-based systems. These outcomes suggest that our methods offer a good balance between realism and performance, and are suitable for use in the standard graphics industry.

  4. A multiscale fixed stress split iterative scheme for coupled flow and poromechanics in deep subsurface reservoirs

    Science.gov (United States)

    Dana, Saumik; Ganis, Benjamin; Wheeler, Mary F.

    2018-01-01

    In coupled flow and poromechanics phenomena representing hydrocarbon production or CO2 sequestration in deep subsurface reservoirs, the spatial domain in which fluid flow occurs is usually much smaller than the spatial domain over which significant deformation occurs. The typical approach is to either impose an overburden pressure directly on the reservoir thus treating it as a coupled problem domain or to model flow on a huge domain with zero permeability cells to mimic the no flow boundary condition on the interface of the reservoir and the surrounding rock. The former approach precludes a study of land subsidence or uplift and further does not mimic the true effect of the overburden on stress sensitive reservoirs whereas the latter approach has huge computational costs. In order to address these challenges, we augment the fixed-stress split iterative scheme with upscaling and downscaling operators to enable modeling flow and mechanics on overlapping nonmatching hexahedral grids. Flow is solved on a finer mesh using a multipoint flux mixed finite element method and mechanics is solved on a coarse mesh using a conforming Galerkin method. The multiscale operators are constructed using a procedure that involves singular value decompositions, a surface intersections algorithm and Delaunay triangulations. We numerically demonstrate the convergence of the augmented scheme using the classical Mandel's problem solution.

  5. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    Science.gov (United States)

    Gézero, L.; Antunes, C.

    2017-05-01

    Digital terrain models (DTM) assume an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTMs in remote areas, mainly due to the safety, precision, speed of acquisition and detail of the information gathered. However, filtering the point clouds and the algorithms to separate "terrain points" from "non-terrain points" quickly and consistently remain a challenge that has caught the interest of researchers. This work presents a method to create a DTM from point clouds collected by MLS. The method is based on two interactive steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
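
    A sketch of the second step described above: build the DTM as a TIN by Delaunay-triangulating the reduced terrain points in (x, y) and interpolating elevations with barycentric weights. The synthetic points and the elevation function are placeholders for the output of the first (point-reduction) step.

```python
# Delaunay-based TIN with barycentric elevation interpolation.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(500, 2))                 # reduced terrain points (m)
z = 0.05 * xy[:, 0] + 2.0 * np.sin(xy[:, 1] / 15.0)     # synthetic elevations (m)

tin = Delaunay(xy)

def dtm_height(px, py):
    """Interpolate the TIN elevation at (px, py) with barycentric weights."""
    simplex = tin.find_simplex([px, py])
    if simplex < 0:
        return None                                     # outside the triangulation
    verts = tin.simplices[simplex]
    T = tin.transform[simplex]
    bary = T[:2] @ ([px, py] - T[2])
    weights = np.append(bary, 1.0 - bary.sum())
    return float(weights @ z[verts])

print(dtm_height(50.0, 50.0))
```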

  6. Patch-based image segmentation of satellite imagery using minimum spanning tree construction

    Energy Technology Data Exchange (ETDEWEB)

    Skurikhin, Alexei N [Los Alamos National Laboratory

    2010-01-01

    We present a method for hierarchical image segmentation and feature extraction. This method builds upon the combination of the detection of image spectral discontinuities using Canny edge detection and the image Laplacian, followed by the construction of a hierarchy of segmented images of successively reduced levels of detail. These images are represented as sets of polygonized pixel patches (polygons) attributed with spectral and structural characteristics. This hierarchy forms the basis for object-oriented image analysis. To build the fine level-of-detail representation of the original image, seed partitions (polygons) are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of the detected spectral discontinuities, which form a network of constraints for the Delaunay triangulation. A polygonized image is represented as a spatial network in the form of a graph with vertices that correspond to the polygonal partitions and graph edges reflecting pairwise partition relations. Image graph partitioning is based on iterative graph contraction using Boruvka's Minimum Spanning Tree algorithm. An important characteristic of the approach is that the agglomeration of partitions is constrained by the detected spectral discontinuities; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects.
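
    A hedged sketch of merging polygonal patches along minimum-weight edges of a region-adjacency graph. The paper uses iterative graph contraction with Boruvka's MST algorithm; for brevity this sketch uses Kruskal-style union-find merging instead, and the "spectral distance" edge weights and merge threshold are invented.

```python
# Merge adjacent image patches along cheapest region-adjacency edges (Kruskal-style).
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# (patch_i, patch_j, spectral_distance) for adjacent patches -- illustrative values
adjacency = [(0, 1, 0.10), (1, 2, 0.80), (2, 3, 0.05), (0, 2, 0.90), (1, 3, 0.40)]
max_merge_cost = 0.5        # do not merge across strong spectral discontinuities

ds = DisjointSet(4)
for i, j, w in sorted(adjacency, key=lambda e: e[2]):
    if w <= max_merge_cost:
        ds.union(i, j)

segments = {}
for patch in range(4):
    segments.setdefault(ds.find(patch), []).append(patch)
print(list(segments.values()))   # groups of merged patches
```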

  7. Intelligent emission-sensitive routing for plugin hybrid electric vehicles.

    Science.gov (United States)

    Sun, Zhonghao; Zhou, Xingshe

    2016-01-01

    The existing transportation sector creates heavy environmental impacts and is a prime contributor to current climate change. The need to reduce emissions from this sector has stimulated efforts to speed up the application of electric vehicles (EVs). A subset of EVs, called plug-in hybrid electric vehicles (PHEVs), back up batteries with a combustion engine, which gives PHEVs a driving range comparable to conventional vehicles. However, this hybridization comes at the cost of higher emissions than all-electric vehicles. This paper studies the routing problem for PHEVs to minimize emissions. Existing shortest-path based algorithms cannot be applied to this problem because of several new challenges: (1) an optimal route may contain circles caused by detours for recharging; (2) emissions of PHEVs depend not only on the driving distance, but also on the terrain and the state of charge (SOC) of the batteries; (3) batteries can harvest energy by regenerative braking, which makes some road segments have negative energy consumption. To address these challenges, this paper proposes a green navigation algorithm (GNA) which finds the optimal strategies: where to go and where to recharge. GNA discretizes the SOC so that the PHEV routing problem satisfies the principle of optimality. Finally, GNA adopts dynamic programming to solve the problem. We evaluate GNA using synthetic maps generated by Delaunay triangulation. The results show that GNA can save more than 10% energy and reduce 10% emissions when compared to the shortest path algorithm. We also observe that PHEVs with a battery capacity of 10-15 kWh detour the most, and that nearly no detour occurs when the capacity is larger than 30 kWh. This observation gives some insights for developing PHEVs.
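
    A hedged sketch of the key idea above: discretise the SOC and search the expanded (node, SOC) state space so the routing problem obeys the principle of optimality. The toy road network, per-edge energy use (negative values for regenerative braking) and the engine emission penalty are invented illustration values, not the GNA model itself.

```python
# Emission-minimising search over discretised (node, SOC) states.
import heapq

SOC_LEVELS = 11                       # 0.0, 0.1, ..., 1.0 of battery capacity
# edge: (to, delta_soc_levels, emissions_g); positive delta drains the battery
graph = {
    "A": [("B", 3, 0.0), ("C", 1, 0.0)],
    "B": [("D", 2, 0.0)],
    "C": [("B", -1, 0.0), ("D", 5, 0.0)],   # C->B recovers energy downhill
    "D": [],
}
ENGINE_PENALTY_G_PER_LEVEL = 120.0    # emissions when the engine must step in

def min_emission_route(start, goal, start_soc_level):
    best = {}
    queue = [(0.0, start, start_soc_level, [start])]
    while queue:
        emis, node, soc, path = heapq.heappop(queue)
        if node == goal:
            return emis, path
        if best.get((node, soc), float("inf")) <= emis:
            continue
        best[(node, soc)] = emis
        for nxt, d_soc, e in graph[node]:
            new_soc, extra = soc - d_soc, 0.0
            if new_soc < 0:                       # battery depleted: engine takes over
                extra = -new_soc * ENGINE_PENALTY_G_PER_LEVEL
                new_soc = 0
            new_soc = min(new_soc, SOC_LEVELS - 1)
            heapq.heappush(queue, (emis + e + extra, nxt, new_soc, path + [nxt]))
    return float("inf"), []

print(min_emission_route("A", "D", start_soc_level=4))  # detours via C to exploit braking
```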

  8. THE DEEP2 GALAXY REDSHIFT SURVEY: THE VORONOI-DELAUNAY METHOD CATALOG OF GALAXY GROUPS

    Energy Technology Data Exchange (ETDEWEB)

    Gerke, Brian F. [KIPAC, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS 29, Menlo Park, CA 94725 (United States); Newman, Jeffrey A. [Department of Physics and Astronomy, 3941 O' Hara Street, Pittsburgh, PA 15260 (United States); Davis, Marc [Department of Physics and Department of Astronomy, Campbell Hall, University of California-Berkeley, Berkeley, CA 94720 (United States); Coil, Alison L. [Center for Astrophysics and Space Sciences, University of California, San Diego, 9500 Gilman Drive, MC 0424, La Jolla, CA 92093 (United States); Cooper, Michael C. [Center for Galaxy Evolution, Department of Physics and Astronomy, University of California-Irvine, Irvine, CA 92697 (United States); Dutton, Aaron A. [Department of Physics and Astronomy, University of Victoria, Victoria, BC V8P 5C2 (Canada); Faber, S. M.; Guhathakurta, Puragra; Koo, David C.; Phillips, Andrew C. [UCO/Lick Observatory, University of California-Santa Cruz, Santa Cruz, CA 95064 (United States); Konidaris, Nicholas; Lin, Lihwai [Astronomy Department, Caltech 249-17, Pasadena, CA 91125 (United States); Noeske, Kai [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Rosario, David J. [Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, 85748 Garching bei Muenchen (Germany); Weiner, Benjamin J.; Willmer, Christopher N. A. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Yan, Renbin [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4 (Canada)

    2012-05-20

    We present a public catalog of galaxy groups constructed from the spectroscopic sample of galaxies in the fourth data release from the Deep Extragalactic Evolutionary Probe 2 (DEEP2) Galaxy Redshift Survey, including the Extended Groth Strip (EGS). The catalog contains 1165 groups with two or more members in the EGS over the redshift range 0 < z < 1.5 and 1295 groups at z > 0.6 in the rest of DEEP2. Twenty-five percent of EGS galaxies and fourteen percent of high-z DEEP2 galaxies are assigned to galaxy groups. The groups were detected using the Voronoi-Delaunay method (VDM) after it has been optimized on mock DEEP2 catalogs following similar methods to those employed in Gerke et al. In the optimization effort, we have taken particular care to ensure that the mock catalogs resemble the data as closely as possible, and we have fine-tuned our methods separately on mocks constructed for the EGS and the rest of DEEP2. We have also probed the effect of the assumed cosmology on our inferred group-finding efficiency by performing our optimization on three different mock catalogs with different background cosmologies, finding large differences in the group-finding success we can achieve for these different mocks. Using the mock catalog whose background cosmology is most consistent with current data, we estimate that the DEEP2 group catalog is 72% complete and 61% pure (74% and 67% for the EGS) and that the group finder correctly classifies 70% of galaxies that truly belong to groups, with an additional 46% of interloper galaxies contaminating the catalog (66% and 43% for the EGS). We also confirm that the VDM catalog reconstructs the abundance of galaxy groups with velocity dispersions above ∼300 km s⁻¹ to an accuracy better than the sample variance, and this successful reconstruction is not strongly dependent on cosmology. This makes the DEEP2 group catalog a promising probe of the growth of cosmic structure that can potentially be used for cosmological tests.

  9. THE DEEP2 GALAXY REDSHIFT SURVEY: THE VORONOI-DELAUNAY METHOD CATALOG OF GALAXY GROUPS

    International Nuclear Information System (INIS)

    Gerke, Brian F.; Newman, Jeffrey A.; Davis, Marc; Coil, Alison L.; Cooper, Michael C.; Dutton, Aaron A.; Faber, S. M.; Guhathakurta, Puragra; Koo, David C.; Phillips, Andrew C.; Konidaris, Nicholas; Lin, Lihwai; Noeske, Kai; Rosario, David J.; Weiner, Benjamin J.; Willmer, Christopher N. A.; Yan, Renbin

    2012-01-01

    We present a public catalog of galaxy groups constructed from the spectroscopic sample of galaxies in the fourth data release from the Deep Extragalactic Evolutionary Probe 2 (DEEP2) Galaxy Redshift Survey, including the Extended Groth Strip (EGS). The catalog contains 1165 groups with two or more members in the EGS over the redshift range 0 < z < 1.5 and 1295 groups at z > 0.6 in the rest of DEEP2. Twenty-five percent of EGS galaxies and fourteen percent of high-z DEEP2 galaxies are assigned to galaxy groups. The groups were detected using the Voronoi-Delaunay method (VDM) after it has been optimized on mock DEEP2 catalogs following similar methods to those employed in Gerke et al. In the optimization effort, we have taken particular care to ensure that the mock catalogs resemble the data as closely as possible, and we have fine-tuned our methods separately on mocks constructed for the EGS and the rest of DEEP2. We have also probed the effect of the assumed cosmology on our inferred group-finding efficiency by performing our optimization on three different mock catalogs with different background cosmologies, finding large differences in the group-finding success we can achieve for these different mocks. Using the mock catalog whose background cosmology is most consistent with current data, we estimate that the DEEP2 group catalog is 72% complete and 61% pure (74% and 67% for the EGS) and that the group finder correctly classifies 70% of galaxies that truly belong to groups, with an additional 46% of interloper galaxies contaminating the catalog (66% and 43% for the EGS). We also confirm that the VDM catalog reconstructs the abundance of galaxy groups with velocity dispersions above ∼300 km s⁻¹ to an accuracy better than the sample variance, and this successful reconstruction is not strongly dependent on cosmology. This makes the DEEP2 group catalog a promising probe of the growth of cosmic structure that can potentially be used for cosmological tests.

  10. Virtual reality myringotomy simulation with real-time deformation: development and validity testing.

    Science.gov (United States)

    Ho, Andrew K; Alsaffar, Hussain; Doyle, Philip C; Ladak, Hanif M; Agrawal, Sumit K

    2012-08-01

    Surgical simulation is becoming an increasingly common training tool in residency programs. The first objective was to implement real-time soft-tissue deformation and cutting into a virtual reality myringotomy simulator. The second objective was to test the various implemented incision algorithms to determine which most accurately represents the tympanic membrane during myringotomy. Descriptive and face-validity testing. A deformable tympanic membrane was developed, and three soft-tissue cutting algorithms were successfully implemented into the virtual reality myringotomy simulator. The algorithms included element removal, direction prediction, and Delaunay cutting. The simulator was stable and capable of running in real time on inexpensive hardware. A face-validity study was then carried out using a validated questionnaire given to eight otolaryngologists and four senior otolaryngology residents. Each participant was given an adaptation period on the simulator, was blinded to the algorithm being used, and was presented the three algorithms in a randomized order. A virtual reality myringotomy simulator with real-time soft-tissue deformation and cutting was successfully developed. The simulator was stable, ran in real time on inexpensive hardware, and incorporated haptic feedback and stereoscopic vision. The Delaunay cutting algorithm was found to be the most realistic algorithm for representing the incision during myringotomy. A virtual reality myringotomy simulator is being developed and now integrates a real-time deformable tympanic membrane that appears to have face validity. Further development and validation studies are necessary before the simulator can be studied with respect to training efficacy and clinical impact. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  11. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping

    Directory of Open Access Journals (Sweden)

    Antero Kukko

    2008-09-01

    Full Text Available Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the scanning geometry and point density are different from those of airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying the road marking and kerbstone points and modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.

  12. Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences

    Science.gov (United States)

    Wierzbicki, Damian

    2017-12-01

    Currently, UAV photogrammetry is characterized by largely automated and efficient data processing. Imaging from low altitude is gaining importance in applications such as city mapping, corridor mapping, road and pipeline inspections, and mapping of large areas, e.g. forests. Additionally, high-resolution video imagery (HD and larger) is increasingly used for low-altitude acquisition: on the one hand it delivers many details and characteristics of ground surface features, and on the other hand it presents new challenges in data processing. Therefore, the determination of the elements of external orientation plays a substantial role in the detail of digital terrain models and in artefact-free orthophoto generation. In parallel, research on the quality of images acquired from UAVs and on the quality of products such as orthophotos is being conducted. Despite the fast development of UAV photogrammetry, it is still necessary to perform Automatic Aerial Triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of external orientation registered by the UAV are burdened with shift/bias errors. In this article, methods for determining the shift/bias error are presented. In the process of digital aerial triangulation two solutions are applied. In the first method the shift/bias error is determined together with the drift/bias error, the elements of external orientation and the coordinates of ground control points. In the second method the shift/bias error is determined together with the elements of external orientation and the coordinates of ground control points, with the drift/bias error set to 0. When the two methods are compared, the difference in the shift/bias error is more than ±0.01 m for all terrain coordinates XYZ.

  13. Tutorial: Asteroseismic Stellar Modelling with AIMS

    Science.gov (United States)

    Lund, Mikkel N.; Reese, Daniel R.

    The goal of aims (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm—interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. aims is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
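
    A sketch of the grid-interpolation scheme described above: tessellate the model grid with a Delaunay triangulation and interpolate linearly (barycentrically) within each simplex. scipy's LinearNDInterpolator implements exactly this combination; the two-parameter "stellar grid" and its values below are invented stand-ins, not a real model grid.

```python
# Delaunay tessellation + linear barycentric interpolation on a model grid.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# pre-computed "models": (mass [Msun], [Fe/H]) -> large frequency separation (illustrative)
mass, feh = np.meshgrid(np.linspace(0.8, 1.4, 7), np.linspace(-0.5, 0.3, 5))
grid_points = np.column_stack((mass.ravel(), feh.ravel()))
delta_nu = 135.0 * grid_points[:, 0] ** -1.1 + 5.0 * grid_points[:, 1]

interp = LinearNDInterpolator(grid_points, delta_nu)   # Delaunay + barycentric weights
print(interp(1.02, -0.08))                             # value for an off-grid model
```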

  14. Analysis of Regularly and Irregularly Sampled Spatial, Multivariate, and Multi-temporal Data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    1994-01-01

    This thesis describes different methods that are useful in the analysis of multivariate data. Some methods focus on spatial data (sampled regularly or irregularly), others focus on multitemporal data or data from multiple sources. The thesis covers selected and not all aspects of relevant data......-variograms are described. As a new way of setting up a well-balanced kriging support the Delaunay triangulation is suggested. Two case studies show the usefulness of 2-D semivariograms of geochemical data from areas in central Spain (with a geologist's comment) and South Greenland, and kriging/cokriging of an undersampled...... are considered as repetitions. Three case studies show the strength of the methods; one uses SPOT High Resolution Visible (HRV) multispectral (XS) data covering economically important pineapple and coffee plantations near Thika, Kiambu District, Kenya, the other two use Landsat Thematic Mapper (TM) data covering...

  15. A Hybrid DV-Hop Algorithm Using RSSI for Localization in Large-Scale Wireless Sensor Networks.

    Science.gov (United States)

    Cheikhrouhou, Omar; M Bhatti, Ghulam; Alroobaea, Roobaea

    2018-05-08

    With the increasing realization of the Internet-of-Things (IoT) and rapid proliferation of wireless sensor networks (WSN), estimating the location of wireless sensor nodes is emerging as an important issue. Traditional ranging based localization algorithms use triangulation for estimating the physical location of only those wireless nodes that are within one-hop distance from the anchor nodes. Multi-hop localization algorithms, on the other hand, aim at localizing the wireless nodes that can physically be residing at multiple hops away from anchor nodes. These latter algorithms have attracted a growing interest from research community due to the smaller number of required anchor nodes. One such algorithm, known as DV-Hop (Distance Vector Hop), has gained popularity due to its simplicity and lower cost. However, DV-Hop suffers from reduced accuracy due to the fact that it exploits only the network topology (i.e., number of hops to anchors) rather than the distances between pairs of nodes. In this paper, we propose an enhanced DV-Hop localization algorithm that also uses the RSSI values associated with links between one-hop neighbors. Moreover, we exploit already localized nodes by promoting them to become additional anchor nodes. Our simulations have shown that the proposed algorithm significantly outperforms the original DV-Hop localization algorithm and two of its recently published variants, namely RSSI Auxiliary Ranging and the Selective 3-Anchor DV-hop algorithm. More precisely, in some scenarios, the proposed algorithm improves the localization accuracy by almost 95%, 90% and 70% as compared to the basic DV-Hop, Selective 3-Anchor, and RSSI DV-Hop algorithms, respectively.
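
    A hedged sketch of the three classic DV-Hop stages the abstract builds on: (1) flood hop counts from each anchor, (2) each anchor converts known inter-anchor distances into an average hop size, (3) an unknown node multilaterates from the estimated ranges. The toy topology is invented, and this is the basic DV-Hop, not the authors' RSSI-enhanced variant.

```python
# Basic DV-Hop: hop-count flooding, average hop size, least-squares multilateration.
import numpy as np
from collections import deque

positions = {0: (0, 0), 1: (10, 0), 2: (0, 10)}          # anchor coordinates (m)
links = {0: [3], 1: [4], 2: [5], 3: [0, 4, 5, 6], 4: [1, 3, 6],
         5: [2, 3, 6], 6: [3, 4, 5]}                      # connectivity graph
unknown = 6

def hop_counts(src):
    hops, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in links[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

hops = {a: hop_counts(a) for a in positions}
# stage 2: average hop distance per anchor from true inter-anchor distances
hop_size = {}
for a, pa in positions.items():
    d = sum(np.hypot(pa[0] - pb[0], pa[1] - pb[1]) for b, pb in positions.items() if b != a)
    h = sum(hops[a][b] for b in positions if b != a)
    hop_size[a] = d / h
# stage 3: estimated ranges, then linear least-squares multilateration
anchors = list(positions)
est = {a: hop_size[a] * hops[a][unknown] for a in anchors}
x0, y0 = positions[anchors[0]]
A, b = [], []
for a in anchors[1:]:
    xa, ya = positions[a]
    A.append([2 * (xa - x0), 2 * (ya - y0)])
    b.append(est[anchors[0]] ** 2 - est[a] ** 2 + xa ** 2 - x0 ** 2 + ya ** 2 - y0 ** 2)
print(np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0])   # estimated (x, y)
```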

  16. Modification of the laser triangulation method for measuring the thickness of optical layers

    Science.gov (United States)

    Khramov, V. N.; Adamov, A. A.

    2018-04-01

    The problem of determining the thickness of thin films by the method of laser triangulation is considered. An expression is derived relating the film thickness to the distance between the focused beams on the photodetector. The chosen method is applicable to measuring thicknesses in the range [0.1, 1] mm. Two individual light marks could be resolved for a minimum film thickness of 0.23 mm, and down to 0.10 mm with the help of computer processing of the photographs. The obtained results can be used in ophthalmology for express diagnostics during surgical operations on the corneal layer.

  17. Development of the delayed-neutron triangulation technique for locating failed fuel in LMFBR

    International Nuclear Information System (INIS)

    Kryter, R.C.

    1975-01-01

    Two major accomplishments of the ORNL delayed neutron triangulation program are (1) an analysis of anticipated detector counting rates and sensitivities to unclad fuel and erosion types of pin failure, and (2) an experimental assessment of the accuracy with which the position of failed fuel can be determined in the FFTF (this was performed in a quarter-scale water mockup of realistic outlet plenum geometry using electrolyte injections and conductivity cells to simulate delayed-neutron precursor releases and detections, respectively). The major results and conclusions from these studies are presented, along with plans for further DNT development work at ORNL for the FFTF and CRBR. (author)

  18. The structure of chromatic polynomials of planar triangulations and implications for chromatic zeros and asymptotic limiting quantities

    International Nuclear Information System (INIS)

    Shrock, Robert; Xu Yan

    2012-01-01

    We present an analysis of the structure and properties of chromatic polynomials $P(G_{pt,\vec{m}}, q)$ of one-parameter and multi-parameter families of planar triangulation graphs $G_{pt,\vec{m}}$, where $\vec{m} = (m_1,\ldots,m_p)$ is a vector of integer parameters. We use these to study the ratio of $|P(G_{pt,\vec{m}}, \tau+1)|$ to the Tutte upper bound $(\tau-1)^{n-5}$, where $\tau=(1+\sqrt{5})/2$ and $n$ is the number of vertices in $G_{pt,\vec{m}}$. In particular, we calculate limiting values of this ratio as $n \to \infty$ for various families of planar triangulations. We also use our calculations to analyze zeros of these chromatic polynomials. We study a large class of families $G_{pt,\vec{m}}$ with $p = 1$ and $p = 2$ and show that these have a structure of the form $P(G_{pt,m},q) = c_{G_{pt},1}\lambda_1^m + c_{G_{pt},2}\lambda_2^m + c_{G_{pt},3}\lambda_3^m$ for $p = 1$, where $\lambda_1 = q-2$, $\lambda_2 = q-3$, and $\lambda_3 = -1$, and $P(G_{pt,\vec{m}},q) = \sum_{i_1=1}^{3}\sum_{i_2=1}^{3} c_{G_{pt},i_1 i_2}\,\lambda_{i_1}^{m_1}\lambda_{i_2}^{m_2}$ for $p = 2$. We derive properties of the coefficients $c_{G_{pt},\vec{i}}$ and show that $P(G_{pt,\vec{m}},q)$ has a real chromatic zero that approaches $(1/2)(3+\sqrt{5})$ as one or more of the $m_i \to \infty$. The generalization to $p \geqslant 3$ is given. Further, we present a one-parameter family of planar triangulations with real zeros that approach 3 from below as $m \to \infty$. Implications for the ground-state entropy of the Potts antiferromagnet are discussed. (paper)

  19. Phase Center Interpolation Algorithm for Airborne GPS through the Kalman Filter

    Directory of Open Access Journals (Sweden)

    Edson A. Mitishita

    2005-12-01

    Full Text Available Aerial triangulation is a fundamental step in any photogrammetric project. The surveying of traditional control points, depending on the region to be mapped, still has a high cost. The distribution of control points in the block, and their positional quality, directly influence the resulting precision of the aerial triangulation processing. The airborne GPS technique has as key objectives cost reduction and quality improvement of the ground control in modern photogrammetric projects. Nowadays, in Brazil, the largest photogrammetric companies are acquiring airborne GPS systems, but those systems usually present operational difficulties due to the need for skilled human resources, because of the high technology involved. Within the airborne GPS technique, one of the fundamental steps is the interpolation of the position of the phase center of the GPS antenna at the photo shot instant. Traditionally, low-degree polynomials are used, but recent studies show that the accuracy of those polynomials is reduced in turbulent flights, which are quite common, particularly in large-scale flights. This paper presents a solution to that problem through an algorithm based on the Kalman filter, which takes into account the dynamic aspect of the problem. At the end of the paper, the results of a comparison between experiments done with the proposed methodology and a common linear interpolator are shown. These results show a significant accuracy gain over the linear interpolation procedure when the Kalman filter is used.
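
    A hedged one-dimensional sketch of the idea: a constant-velocity Kalman filter smooths the GPS antenna positions and the time-update step predicts the phase-centre coordinate at the exposure instant, which falls between two GPS epochs. The noise levels, observation sequence and exposure time are invented; this is not the paper's exact formulation.

```python
# Constant-velocity Kalman filter with prediction at the photo exposure instant.
import numpy as np

gps_epochs = np.arange(0.0, 5.0, 1.0)                 # GPS fix times (s), 1 Hz receiver
gps_pos = np.array([0.0, 31.2, 59.4, 92.1, 120.8])    # along-track antenna positions (m)
exposure_t = 2.35                                     # photo shot instant (s)

q, r = 0.5, 1.0                                       # process / measurement noise (assumed)
x = np.array([gps_pos[0], 30.0])                      # state: [position, velocity]
P = np.eye(2) * 10.0
H = np.array([[1.0, 0.0]])

def predict(x, P, dt):
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
    return F @ x, F @ P @ F.T + Q

for k in range(1, len(gps_epochs)):
    t_prev, t_curr = gps_epochs[k - 1], gps_epochs[k]
    if t_prev < exposure_t <= t_curr:
        # run the time update only up to the exposure instant and report the position
        x_exp, _ = predict(x, P, exposure_t - t_prev)
        print(f"phase centre at t={exposure_t:.2f} s: {x_exp[0]:.2f} m")
    x, P = predict(x, P, t_curr - t_prev)             # time update to the next GPS epoch
    y = gps_pos[k] - (H @ x)[0]                       # innovation
    S = (H @ P @ H.T)[0, 0] + r
    K = (P @ H.T / S).ravel()                         # Kalman gain
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H[0])) @ P
```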

  20. Assessment of behavioral changes associated with oral meloxicam administration at time of dehorning in calves using a remote triangulation device and accelerometers

    Directory of Open Access Journals (Sweden)

    Theurer Miles E

    2012-04-01

    Full Text Available Abstract Background Dehorning is common in the cattle industry, and there is a need for research evaluating pain mitigation techniques. The objective of this study was to determine the effects of oral meloxicam, a non-steroidal anti-inflammatory, on cattle behavior post-dehorning by monitoring the percent of time spent standing, walking, and lying in specific locations within the pen using accelerometers and a remote triangulation device. Twelve calves approximately ten weeks of age were randomized into 2 treatment groups (meloxicam or control in a complete block design by body weight. Six calves were orally administered 0.5 mg/kg meloxicam at the time of dehorning and six calves served as negative controls. All calves were dehorned using thermocautery and behavior of each calf was continuously monitored for 7 days after dehorning using accelerometers and a remote triangulation device. Accelerometers monitored lying behavior and the remote triangulation device was used to monitor each calf’s movement within the pen. Results Analysis of behavioral data revealed significant interactions between treatment (meloxicam vs. control and the number of days post dehorning. Calves that received meloxicam spent more time at the grain bunk on trial days 2 and 6 post-dehorning; spent more time lying down on days 1, 2, 3, and 4; and less time at the hay feeder on days 0 and 1 compared to the control group. Meloxicam calves tended to walk more at the beginning and end of the trial compared to the control group. By day 5, the meloxicam and control group exhibited similar behaviors. Conclusions The noted behavioral changes provide evidence of differences associated with meloxicam administration. More studies need to be performed to evaluate the relationship of behavior monitoring and post-operative pain. To our knowledge this is the first published report demonstrating behavioral changes following dehorning using a remote triangulation device in conjunction

  1. TRIANGULATION OF THE INTERSTELLAR MAGNETIC FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Schwadron, N. A.; Moebius, E. [University of New Hampshire, Durham, NH 03824 (United States); Richardson, J. D. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Burlaga, L. F. [Goddard Space Flight Center, Greenbelt, MD 20771 (United States); McComas, D. J. [Southwest Research Institute, San Antonio, TX 78228 (United States)

    2015-11-01

    Determining the direction of the local interstellar magnetic field (LISMF) is important for understanding the heliosphere’s global structure, the properties of the interstellar medium, and the propagation of cosmic rays in the local galactic medium. Measurements of interstellar neutral atoms by Ulysses for He and by SOHO/SWAN for H provided some of the first observational insights into the LISMF direction. Because secondary neutral H is partially deflected by the interstellar flow in the outer heliosheath and this deflection is influenced by the LISMF, the relative deflection of H versus He provides a plane—the so-called B–V plane in which the LISMF direction should lie. Interstellar Boundary Explorer (IBEX) subsequently discovered a ribbon, the center of which is conjectured to be the LISMF direction. The most recent He velocity measurements from IBEX and those from Ulysses yield a B–V plane with uncertainty limits that contain the centers of the IBEX ribbon at 0.7–2.7 keV. The possibility that Voyager 1 has moved into the outer heliosheath now suggests that Voyager 1's direct observations provide another independent determination of the LISMF. We show that LISMF direction measured by Voyager 1 is >40° off from the IBEX ribbon center and the B–V plane. Taking into account the temporal gradient of the field direction measured by Voyager 1, we extrapolate to a field direction that passes directly through the IBEX ribbon center (0.7–2.7 keV) and the B–V plane, allowing us to triangulate the LISMF direction and estimate the gradient scale size of the magnetic field.

  2. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    Science.gov (United States)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.

  3. Summing Feynman graphs by Monte Carlo: Planar φ³-theory and dynamically triangulated random surfaces

    International Nuclear Information System (INIS)

    Boulatov, D.V.

    1988-01-01

    New combinatorial identities are suggested relating the ratio of (n-1)th and nth orders of (planar) perturbation expansion for any quantity to some average over the ensemble of all planar graphs of the nth order. These identities are used for Monte Carlo calculation of critical exponents γ_str (string susceptibility) in planar φ³-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D=1 the exact critical properties of the theory are reproduced numerically. (orig.)

  4. Two Strategies for Qualitative Content Analysis: An Intramethod Approach to Triangulation.

    Science.gov (United States)

    Renz, Susan M; Carrington, Jane M; Badger, Terry A

    2018-04-01

    The overarching aim of qualitative research is to gain an understanding of certain social phenomena. Qualitative research involves the studied use and collection of empirical materials, all to describe moments and meanings in individuals' lives. Data derived from these various materials require a form of analysis of the content, focusing on written or spoken language as communication, to provide context and understanding of the message. Qualitative research often involves the collection of data through extensive interviews, note taking, and tape recording. These methods are time- and labor-intensive. With the advances in computerized text analysis software, the practice of combining methods to analyze qualitative data can assist the researcher in making large data sets more manageable and enhance the trustworthiness of the results. This article will describe a novel process of combining two methods of qualitative data analysis, or Intramethod triangulation, as a means to provide a deeper analysis of text.

  5. Tetrahedral meshing via maximal Poisson-disk sampling

    KAUST Repository

    Guo, Jianwei

    2016-02-15

    In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
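
    The two-stage pipeline described in this record (well-spaced sampling followed by Delaunay-based mesh extraction) can be sketched in a few lines. The sketch below is not the authors' implementation: maximal Poisson-disk sampling is approximated by naive dart throwing, the unit cube stands in for the input domain, and the radius r is an arbitrary assumption; only the final step, extracting tetrahedra with a 3D Delaunay triangulation from SciPy, mirrors the record directly.

```python
# Minimal sketch (not the authors' code): approximate maximal Poisson-disk
# sampling by naive dart throwing, then extract a tetrahedral mesh with a
# 3D Delaunay triangulation. The unit-cube domain and radius r are assumptions.
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def dart_throwing(n_target, r, dim=3, max_tries=200000, seed=0):
    """Accept points that keep a minimum distance r (crude Poisson-disk sampling)."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(max_tries):
        p = rng.random(dim)
        if not pts or cKDTree(pts).query(p)[0] >= r:
            pts.append(p)
        if len(pts) == n_target:
            break
    return np.asarray(pts)

points = dart_throwing(n_target=500, r=0.08)
tets = Delaunay(points)                      # tetrahedra as index quadruples
print("tetrahedra:", len(tets.simplices))
```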

  6. Branches of Triangulated Origami Near the Unfolded State

    Directory of Open Access Journals (Sweden)

    Bryan Gin-ge Chen

    2018-02-01

    Full Text Available Origami structures are characterized by a network of folds and vertices joining unbendable plates. For applications to mechanical design and self-folding structures, it is essential to understand the interplay between the set of folds in the unfolded origami and the possible 3D folded configurations. When deforming a structure that has been folded, one can often linearize the geometric constraints, but the degeneracy of the unfolded state makes a linear approach impossible there. We derive a theory for the second-order infinitesimal rigidity of an initially unfolded triangulated origami structure and use it to study the set of nearly unfolded configurations of origami with four boundary vertices. We find that locally, this set consists of a number of distinct “branches” which intersect at the unfolded state, and that the number of these branches is exponential in the number of vertices. We find numerical and analytical evidence that suggests that the branches are characterized by choosing each internal vertex to either “pop up” or “pop down.” The large number of pathways along which one can fold an initially unfolded origami structure strongly indicates that a generic structure is likely to become trapped in a “misfolded” state. Thus, new techniques for creating self-folding origami are likely necessary; controlling the popping state of the vertices may be one possibility.

  7. Branches of Triangulated Origami Near the Unfolded State

    Science.gov (United States)

    Chen, Bryan Gin-ge; Santangelo, Christian D.

    2018-01-01

    Origami structures are characterized by a network of folds and vertices joining unbendable plates. For applications to mechanical design and self-folding structures, it is essential to understand the interplay between the set of folds in the unfolded origami and the possible 3D folded configurations. When deforming a structure that has been folded, one can often linearize the geometric constraints, but the degeneracy of the unfolded state makes a linear approach impossible there. We derive a theory for the second-order infinitesimal rigidity of an initially unfolded triangulated origami structure and use it to study the set of nearly unfolded configurations of origami with four boundary vertices. We find that locally, this set consists of a number of distinct "branches" which intersect at the unfolded state, and that the number of these branches is exponential in the number of vertices. We find numerical and analytical evidence that suggests that the branches are characterized by choosing each internal vertex to either "pop up" or "pop down." The large number of pathways along which one can fold an initially unfolded origami structure strongly indicates that a generic structure is likely to become trapped in a "misfolded" state. Thus, new techniques for creating self-folding origami are likely necessary; controlling the popping state of the vertices may be one possibility.

  8. Discrepancies between qualitative and quantitative evaluation of randomised controlled trial results: achieving clarity through mixed methods triangulation.

    Science.gov (United States)

    Tonkin-Crine, Sarah; Anthierens, Sibyl; Hood, Kerenza; Yardley, Lucy; Cals, Jochen W L; Francis, Nick A; Coenen, Samuel; van der Velden, Alike W; Godycki-Cwirko, Maciek; Llor, Carl; Butler, Chris C; Verheij, Theo J M; Goossens, Herman; Little, Paul

    2016-05-12

    Mixed methods are commonly used in health services research; however, data are not often integrated to explore complementarity of findings. A triangulation protocol is one approach to integrating such data. A retrospective triangulation protocol was carried out on mixed methods data collected as part of a process evaluation of a trial. The multi-country randomised controlled trial found that a web-based training in communication skills (including use of a patient booklet) and the use of a C-reactive protein (CRP) point-of-care test decreased antibiotic prescribing by general practitioners (GPs) for acute cough. The process evaluation investigated GPs' and patients' experiences of taking part in the trial. Three analysts independently compared findings across four data sets: qualitative data collected via semi-structured interviews with (1) 62 patients and (2) 66 GPs and quantitative data collected via questionnaires with (3) 2886 patients and (4) 346 GPs. Pairwise comparisons were made between data sets and were categorised as agreement, partial agreement, dissonance or silence. Three instances of dissonance occurred in 39 independent findings. GPs and patients reported different views on the use of a CRP test. GPs felt that the test was useful in convincing patients to accept a no-antibiotic decision, but patient data suggested that this was unnecessary if a full explanation was given. Whilst qualitative data indicated all patients were generally satisfied with their consultation, quantitative data indicated highest levels of satisfaction for those receiving a detailed explanation from their GP with a booklet giving advice on self-care. Both qualitative and quantitative data sets indicated higher patient enablement for those in the communication groups who had received a booklet. Use of CRP tests does not appear to engage patients or influence illness perceptions and its effect is more centred on changing clinician behaviour. Communication skills and the patient

  9. Zur Rekonstruktion einer Typologie jugendlichen Medienhandelns gemäß dem Leitbild der Triangulation

    Directory of Open Access Journals (Sweden)

    Klaus Peter Treumann

    2017-09-01

    Full Text Available The results presented below were produced within the DFG-funded research project "Eine Untersuchung zum Mediennutzungsverhalten 12- bis 20-Jähriger und zur Entwicklung von Medienkompetenz im Jugendalter" (a study of the media usage behaviour of 12- to 20-year-olds and the development of media competence in adolescence), which is jointly led by Klaus Peter Treumann, Uwe Sander and Dorothee Meister. The research project investigates the media practices of adolescents with regard to both new and old media. On the one hand, we ask about the manifestations of media competence in various dimensions; on the other, we concentrate on developing an empirically grounded typology of adolescent media practices. Methodologically, the study follows the guiding model of triangulation and combines qualitative and quantitative approaches to the research field in the form of group discussions, guideline-based individual interviews and a representative survey.

  10. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. Copyright © 2012 Wiley Periodicals, Inc.
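
    A rough, hedged sketch of the contact definition used in this record follows. SciPy offers neither weighted (regular) Delaunay triangulations nor the dual-complex filtering of alpha shape theory, so the sketch approximates a contact as a plain Delaunay edge whose endpoint spheres overlap; the coordinates and radii are synthetic, and the result only illustrates the idea, not the BallContact method itself.

```python
# Rough sketch only: scipy has no weighted Delaunay / dual-complex filtering,
# so contacts are approximated here as Delaunay edges whose endpoint spheres
# overlap (centres closer than the sum of their radii). Coordinates and radii
# are made up for illustration.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
centers = rng.random((200, 3)) * 30.0          # fake atom centres (angstroms)
radii = rng.uniform(1.5, 2.0, size=200)        # fake van der Waals radii

tri = Delaunay(centers)
edges = set()
for s in tri.simplices:                         # tetrahedra -> unique edges
    for i in range(4):
        for j in range(i + 1, 4):
            a, b = sorted((s[i], s[j]))
            edges.add((a, b))

contacts = [(a, b) for a, b in edges
            if np.linalg.norm(centers[a] - centers[b]) < radii[a] + radii[b]]
print("candidate atomic contacts:", len(contacts))
```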

  11. Tle Triangulation Campaign by Japanese High School Students as a Space Educational Project of the Ssh Consortium Kochi

    Science.gov (United States)

    Yamamoto, Masa-Yuki; Okamoto, Sumito; Miyoshi, Terunori; Takamura, Yuzaburo; Aoshima, Akira; Hinokuchi, Jin

    As one of the space educational projects in Japan, a triangulation observation project of TLE (Transient Luminous Events: sprites, elves, blue-jets, etc.) has been carried out since 2006 in collaboration between 29 Super Science High-schools (SSH) and Kochi University of Technology (KUT). Following the previous success of sprite observations by "Astro High-school" since 2004, the SSH consortium Kochi was established as a national space educational project supported by the Japan Science and Technology Agency (JST). A high-sensitivity CCD camera (Watec, Neptune-100) with a 6 mm F/1.4 C-mount lens (Fujinon) and motion-detection software (UFO-Capture, SonotaCo) were given to each participating team in order to monitor the northern night sky of Japan with almost full coverage. During each school year (from April to March in Japan) since 2006, thousands of TLE images were taken by many student teams, with considerably large numbers of successful triangulations, i.e., (school year, number of TLE observations, number of triangulations) are (2006, 43, 3), (2007, 441, 95), (2008, 734, 115), and (2009, 337, 78). Note that the school year in Japan begins on April 1 and ends on March 31; the observation campaign began in December 2006, and the numbers are as of Feb. 28, 2010. Recently, some high schools have started wide-field observations using multiple cameras, and others have started VLF observations using handmade loop antennae and amplifiers. Information exchange and scientific discussion within the SSH consortium Kochi take place frequently via KUT's mailing lists. Interactions with amateur observers in Japan are also made through the internet forum "SonotaCo Network Japan" (http://sonotaco.jp). The project has been a success not only as an educational project but also as a scientific one. In February 2008, simultaneous observations of Elves were obtained, and in November 2009 a giant "Graft-shaped" Sprite driven by Jets was clearly imaged with VLF signals. Most recently, observations of Elves

  12. Source parameters for the 1952 Kern County earthquake, California: A joint inversion of leveling and triangulation observations

    OpenAIRE

    Bawden, Gerald W.

    2001-01-01

    Coseismic leveling and triangulation observations are used to determine the faulting geometry and slip distribution of the July 21, 1952, Mw 7.3 Kern County earthquake on the White Wolf fault. A singular value decomposition inversion is used to assess the ability of the geodetic network to resolve slip along a multisegment fault and shows that the network is sufficient to resolve slip along the surface rupture to a depth of 10 km. Below 10 km, the network can only resolve dip slip near the fa...

  13. Triangulating case-finding tools for patient safety surveillance: a cross-sectional case study of puncture/laceration.

    Science.gov (United States)

    Taylor, Jennifer A; Gerwin, Daniel; Morlock, Laura; Miller, Marlene R

    2011-12-01

    To evaluate the need for triangulating case-finding tools in patient safety surveillance. This study applied four case-finding tools to error-associated patient safety events to identify and characterise the spectrum of events captured by these tools, using puncture or laceration as an example for in-depth analysis. Retrospective hospital discharge data were collected for calendar year 2005 (n=48,418) from a large, urban medical centre in the USA. The study design was cross-sectional and used data linkage to identify the cases captured by each of four case-finding tools. Three case-finding tools (International Classification of Diseases external (E) and nature (N) of injury codes, Patient Safety Indicators (PSI)) were applied to the administrative discharge data to identify potential patient safety events. The fourth tool was Patient Safety Net, a web-based voluntary patient safety event reporting system. The degree of mutual exclusion among detection methods was substantial. For example, when linking puncture or laceration on unique identifiers, out of 447 potential events, 118 were identical between PSI and E-codes, 152 were identical between N-codes and E-codes and 188 were identical between PSI and N-codes. Only 100 events that were identified by PSI, E-codes and N-codes were identical. Triangulation of multiple tools through data linkage captures potential patient safety events most comprehensively. Existing detection tools target patient safety domains differently, and consequently capture different occurrences, necessitating the integration of data from a combination of tools to fully estimate the total burden.

  14. A Fast Multi-layer Subnetwork Connection Method for Time Series InSAR Technique

    Directory of Open Access Journals (Sweden)

    WU Hong'an

    2016-10-01

    Full Text Available Nowadays, the time series interferometric synthetic aperture radar (InSAR) technique has been widely used in ground deformation monitoring, especially in urban areas where lots of stable point targets can be detected. However, in the standard time series InSAR technique, affected by the atmospheric correlation distance and the coherence threshold of the linear model, the Delaunay triangulation connecting the point targets can easily separate into many discontinuous subnetworks. Thus it is difficult to retrieve ground deformation in non-urban areas. In order to monitor ground deformation in large areas efficiently, a novel multi-layer subnetwork connection (MLSC) method is proposed for connecting all subnetworks. The advantage of the method is that it can quickly reduce the number of subnetworks with valid edges layer by layer. This method is compared with the existing complex network connection method. The experimental results demonstrate that the data processing time of the proposed method is only 32.56% of that of the latter.
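
    The fragmentation problem that the MLSC method addresses can be illustrated with a short sketch: build the Delaunay network of point targets, drop arcs longer than an assumed correlation distance, and count the disconnected subnetworks that remain. The point coordinates and the 1 km threshold below are hypothetical, and the sketch does not implement the layer-by-layer reconnection itself.

```python
# Sketch (assumptions: synthetic 2D point-target coordinates in metres, a
# hypothetical 1 km correlation-distance threshold). Builds the Delaunay
# network, removes long arcs, and counts the disconnected subnetworks that
# a method like MLSC would then have to reconnect.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(1)
pts = rng.random((2000, 2)) * 20000.0         # point targets in a 20 km x 20 km scene

tri = Delaunay(pts)
edges = set()
for s in tri.simplices:                        # collect unique triangle edges
    for a, b in ((s[0], s[1]), (s[1], s[2]), (s[2], s[0])):
        edges.add((min(a, b), max(a, b)))
edges = np.array(sorted(edges))

lengths = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]], axis=1)
keep = edges[lengths < 1000.0]                 # drop arcs longer than ~1 km

n = len(pts)
adj = coo_matrix((np.ones(len(keep)), (keep[:, 0], keep[:, 1])), shape=(n, n))
n_sub, labels = connected_components(adj, directed=False)
print("subnetworks after pruning:", n_sub)
```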

  15. Generation of segmental chips in metal cutting modeled with the PFEM

    Science.gov (United States)

    Rodriguez Prieto, J. M.; Carbonell, J. M.; Cante, J. C.; Oliver, J.; Jonsén, P.

    2017-09-01

    The Particle Finite Element Method, a Lagrangian finite element method based on a continuous Delaunay re-triangulation of the domain, is used to study machining of Ti6Al4V. In this work the method is revised and applied to study the influence of the cutting speed on the cutting force and the chip formation process. A parametric methodology for the detection and treatment of the rigid tool contact is presented. The adaptive insertion and removal of particles are developed and employed in order to sidestep the difficulties associated with mesh distortion and shear localization, as well as for resolving the fine-scale features of the solution. The performance of PFEM is studied with a set of different two-dimensional orthogonal cutting tests. It is shown that, despite its Lagrangian nature, the proposed combined finite element-particle method is well suited for large deformation metal cutting problems with continuous chip and serrated chip formation.

  16. Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.

    Science.gov (United States)

    Mutimbu, Lawrence; Robles-Kelly, Antonio

    2016-08-31

    This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate that requires no libraries or user input. The use of a factor graph also allows for the illuminant estimates to be recovered by making use of a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.

  17. Numerical electromagnetic frequency domain analysis with discrete exterior calculus

    Science.gov (United States)

    Chen, Shu C.; Chew, Weng Cho

    2017-12-01

    In this paper, we perform a numerical analysis in frequency domain for various electromagnetic problems based on discrete exterior calculus (DEC) with an arbitrary 2-D triangular or 3-D tetrahedral mesh. We formulate the governing equations in terms of DEC for 3-D and 2-D inhomogeneous structures, and also show that the charge continuity relation is naturally satisfied. Then we introduce a general construction for signed dual volume to incorporate material information and take into account the case when circumcenters fall outside triangles or tetrahedrons, which may lead to negative dual volume without Delaunay triangulation. Then we examine the boundary terms induced by the dual mesh and provide a systematical treatment of various boundary conditions, including perfect magnetic conductor (PMC), perfect electric conductor (PEC), Dirichlet, periodic, and absorbing boundary conditions (ABC) within this method. An excellent agreement is achieved through the numerical calculation of several problems, including homogeneous waveguides, microstructured fibers, photonic crystals, scattering by a 2-D PEC, and resonant cavities.
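
    The signed-dual-volume issue mentioned in this record arises when circumcenters fall outside their simplices. A small 2D sketch (not the authors' DEC code) is given below: it computes the circumcenter of each triangle of a SciPy Delaunay mesh and counts how many lie outside their triangle, which is the situation a signed dual construction must handle.

```python
# Sketch related to the signed dual-volume issue described above: for each
# triangle of a 2D mesh, compute the circumcenter and flag triangles whose
# circumcenter falls outside them. Mesh is a synthetic scipy Delaunay
# triangulation of random points.
import numpy as np
from scipy.spatial import Delaunay

def circumcenter(a, b, c):
    """Circumcenter of triangle abc in 2D (standard closed-form expression)."""
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    return np.array([ux, uy])

def inside(p, a, b, c):
    """Barycentric point-in-triangle test."""
    T = np.column_stack([b - a, c - a])
    l1, l2 = np.linalg.solve(T, p - a)
    return l1 >= 0 and l2 >= 0 and l1 + l2 <= 1

rng = np.random.default_rng(7)
pts = rng.random((400, 2))
tri = Delaunay(pts)
outside = sum(not inside(circumcenter(*pts[s]), *pts[s]) for s in tri.simplices)
print(f"{outside} of {len(tri.simplices)} triangles have external circumcenters")
```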

  18. Triangulation and Gender Perspectives in ‘Falling Man’ by Don DeLillo

    Directory of Open Access Journals (Sweden)

    Noemi Abe

    2011-09-01

    Susannah Radstone argues that the rhetorical response to 9/11 by the Bush administration is based on the opposition of two father figures: “the 'chastened' but powerful 'good' patriarchal father” vs. “the 'bad' archaic father”. She explains: “In this Manichean fantasy can be glimpsed the continuing battle between competing versions of masculinity” (2002: 459), a battle that leaves women on the margins. The battle of the fathers of Bush’s rhetoric is counterposed in Falling Man by a battle between two men that stands for an unaccomplished fatherhood. Furthermore, the dualistic vision engendered by post-9/11 rhetoric and reflected in the novel should be evaluated in a trilateral dimension, given that at its core lies a triangulation built upon three stereotypical representations: the white middle-class man; the Arab terrorist; and a composite character in the middle, the woman, who shifts from ally, to victim, to a plausible supporter of the enemy.

  19. Adaptive mixed-hybrid and penalty discontinuous Galerkin method for two-phase flow in heterogeneous media

    KAUST Repository

    Hou, Jiangyong

    2016-02-05

    In this paper, we present a hybrid method, which consists of a mixed-hybrid finite element method and a penalty discontinuous Galerkin method, for the approximation of a fractional flow formulation of a two-phase flow problem in heterogeneous media with discontinuous capillary pressure. The fractional flow formulation is comprised of a wetting phase pressure equation and a wetting phase saturation equation which are coupled through a total velocity and the saturation affected coefficients. For the wetting phase pressure equation, the continuous mixed-hybrid finite element method space can be utilized due to a fundamental property that the wetting phase pressure is continuous. While it can reduce the computational cost by using fewer degrees of freedom and avoiding the post-processing of velocity reconstruction, this method can also keep several good properties of the discontinuous Galerkin method, which are important to the fractional flow formulation, such as the local mass balance, continuous normal flux and capability of handling the discontinuous capillary pressure. For the wetting phase saturation equation, the penalty discontinuous Galerkin method is utilized due to its capability of handling the discontinuous jump of the wetting phase saturation. Furthermore, an adaptive algorithm for the hybrid method together with the centroidal Voronoi Delaunay triangulation technique is proposed. Five numerical examples are presented to illustrate the features of the proposed numerical method, such as the optimal convergence order, the accurate and efficient velocity approximation, and the applicability to the simulation of water flooding in oil field and the oil-trapping or barrier effect phenomena.

  20. Adaptive mixed-hybrid and penalty discontinuous Galerkin method for two-phase flow in heterogeneous media

    KAUST Repository

    Hou, Jiangyong; Chen, Jie; Sun, Shuyu; Chen, Zhangxin

    2016-01-01

    In this paper, we present a hybrid method, which consists of a mixed-hybrid finite element method and a penalty discontinuous Galerkin method, for the approximation of a fractional flow formulation of a two-phase flow problem in heterogeneous media with discontinuous capillary pressure. The fractional flow formulation is comprised of a wetting phase pressure equation and a wetting phase saturation equation which are coupled through a total velocity and the saturation affected coefficients. For the wetting phase pressure equation, the continuous mixed-hybrid finite element method space can be utilized due to a fundamental property that the wetting phase pressure is continuous. While it can reduce the computational cost by using fewer degrees of freedom and avoiding the post-processing of velocity reconstruction, this method can also keep several good properties of the discontinuous Galerkin method, which are important to the fractional flow formulation, such as the local mass balance, continuous normal flux and capability of handling the discontinuous capillary pressure. For the wetting phase saturation equation, the penalty discontinuous Galerkin method is utilized due to its capability of handling the discontinuous jump of the wetting phase saturation. Furthermore, an adaptive algorithm for the hybrid method together with the centroidal Voronoi Delaunay triangulation technique is proposed. Five numerical examples are presented to illustrate the features of the proposed numerical method, such as the optimal convergence order, the accurate and efficient velocity approximation, and the applicability to the simulation of water flooding in oil field and the oil-trapping or barrier effect phenomena.

  1. Geometry Processing of Conventionally Produced Mouse Brain Slice Images.

    Science.gov (United States)

    Agarwal, Nitin; Xu, Xiangmin; Gopi, M

    2018-04-21

    Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as one application we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. To the best of our knowledge the presented work is the first that automatically registers both clean and highly damaged high-resolution histological slices of mouse brain to a 3D annotated reference atlas space. This work represents a significant contribution to this subfield of neuroscience as it provides neuroanatomists with tools for analyzing and processing histological data. Copyright © 2018 Elsevier B.V. All rights reserved.
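
    The constrained Delaunay triangulation step mentioned in this record can be sketched with the third-party `triangle` package (a Python wrapper around Shewchuk's Triangle); the toy contour below, a square with a notch standing in for a tissue tear, is an assumption made purely for illustration and is unrelated to the authors' data.

```python
# Sketch only: constrained Delaunay triangulation of a tissue contour using the
# third-party `triangle` package (pip install triangle). The square-with-a-notch
# contour below is a toy stand-in for a slice boundary with a tear artifact.
import numpy as np
import triangle  # wrapper around Shewchuk's Triangle

contour = np.array([
    [0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.55, 1.0],
    [0.5, 0.6],  # the "tear" notch
    [0.45, 1.0], [0.0, 1.0],
])
segments = [(i, (i + 1) % len(contour)) for i in range(len(contour))]

# 'p' = triangulate the planar straight-line graph (keeps the boundary edges),
# 'q' = impose a minimum-angle quality constraint.
cdt = triangle.triangulate({"vertices": contour, "segments": segments}, "pq")
print("triangles:", len(cdt["triangles"]))
```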

  2. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    Science.gov (United States)

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, which limits the accessibility of such approaches. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.

  3. Simulations of four-dimensional simplicial quantum gravity as dynamical triangulation

    International Nuclear Information System (INIS)

    Agishtein, M.E.; Migdal, A.A.

    1992-01-01

    In this paper, Four-Dimensional Simplicial Quantum Gravity is simulated using the dynamical triangulation approach. The authors studied simplicial manifolds of spherical topology and found the critical line for the cosmological constant as a function of the gravitational one, separating the phases of an open and a closed Universe. When the bare cosmological constant approaches this line from above, the four-volume grows: the authors reached about 5 × 10⁴ simplexes, which proved to be sufficient for the statistical limit of infinite volume. However, for the genuine continuum theory of gravity, the parameters of the lattice model should be further adjusted to reach the second order phase transition point, where the correlation length grows to infinity. The authors varied the gravitational constant, and they found the first order phase transition, similar to the one found in the three-dimensional model, except that in 4D the fluctuations are rather large at the transition point, so that this is close to the second order phase transition. The average curvature in cutoff units is large and positive in one phase (gravity), and small and negative in the other (antigravity). The authors studied the fractal geometry of both phases, using the heavy particle propagator to define the geodesic map, as well as with the old approach using the shortest lattice paths

  4. Spectral triangulation: a 3D method for locating single-walled carbon nanotubes in vivo

    Science.gov (United States)

    Lin, Ching-Wei; Bachilo, Sergei M.; Vu, Michael; Beckingham, Kathleen M.; Bruce Weisman, R.

    2016-05-01

    Nanomaterials with luminescence in the short-wave infrared (SWIR) region are of special interest for biological research and medical diagnostics because of favorable tissue transparency and low autofluorescence backgrounds in that region. Single-walled carbon nanotubes (SWCNTs) show well-known sharp SWIR spectral signatures and therefore have potential for noninvasive detection and imaging of cancer tumours, when linked to selective targeting agents such as antibodies. However, such applications face the challenge of sensitively detecting and localizing the source of SWIR emission from inside tissues. A new method, called spectral triangulation, is presented for three dimensional (3D) localization using sparse optical measurements made at the specimen surface. Structurally unsorted SWCNT samples emitting over a range of wavelengths are excited inside tissue phantoms by an LED matrix. The resulting SWIR emission is sampled at points on the surface by a scanning fibre optic probe leading to an InGaAs spectrometer or a spectrally filtered InGaAs avalanche photodiode detector. Because of water absorption, attenuation of the SWCNT fluorescence in tissues is strongly wavelength-dependent. We therefore gauge the SWCNT-probe distance by analysing differential changes in the measured SWCNT emission spectra. SWCNT fluorescence can be clearly detected through at least 20 mm of tissue phantom, and the 3D locations of embedded SWCNT test samples are found with sub-millimeter accuracy at depths up to 10 mm. Our method can also distinguish and locate two embedded SWCNT sources at distinct positions.

  5. Potentiation of E-4031-induced torsade de pointes by HMR1556 or ATX-II is not predicted by action potential short-term variability or triangulation.

    Science.gov (United States)

    Michael, G; Dempster, J; Kane, K A; Coker, S J

    2007-12-01

    Torsade de pointes (TdP) can be induced by a reduction in cardiac repolarizing capacity. The aim of this study was to assess whether IKs blockade or enhancement of INa could potentiate TdP induced by IKr blockade and to investigate whether short-term variability (STV) or triangulation of action potentials preceded TdP. Experiments were performed in open-chest, pentobarbital-anaesthetized, alpha 1-adrenoceptor-stimulated, male New Zealand White rabbits, which received three consecutive i.v. infusions of either the IKr blocker E-4031 (1, 3 and 10 nmol kg(-1) min(-1)), the IKs blocker HMR1556 (25, 75 and 250 nmol kg(-1) min(-1)) or E-4031 and HMR1556 combined. In a second study rabbits received either the same doses of E-4031, the INa enhancer, ATX-II (0.4, 1.2 and 4.0 nmol kg(-1)) or both of these drugs. ECGs and epicardial monophasic action potentials were recorded. HMR1556 alone did not cause TdP but increased E-4031-induced TdP from 25 to 80%. ATX-II alone caused TdP in 38% of rabbits, as did E-4031; 75% of rabbits receiving both drugs had TdP. QT intervals were prolonged by all drugs but the extent of QT prolongation was not related to the occurrence of TdP. No changes in STV were detected and triangulation was only increased after TdP occurred. Giving modulators of ion channels in combination substantially increased TdP but, in this model, neither STV nor triangulation of action potentials could predict TdP.

  6. Triangulation of Qualitative Methods for the Exploration of Activity Systems in Ergonomics

    Directory of Open Access Journals (Sweden)

    Monika Hackel

    2008-08-01

    Full Text Available Research concerning ergonomic issues in interdisciplinary projects often raises several very specific questions depending on project objectives. To answer these questions the application of research methods should be thoroughly considered, regarding both the expenditure and the options within the scope of the given resources. The project AQUIMO develops an adaptable modelling tool for mechatronical engineering and creates a related qualification program. The task of social scientific research within this project is to identify requirements viewed from the perspective of the subsequent users. This formative evaluation is based on the approach of "developmental work research" as set forth by ENGESTRÖM and, thus, is a form of "action research". This paper discusses the triangulation of several qualitative methods addressing the examination of difficulties in interdisciplinary collaboration in mechatronical engineering. After a description of both background and analytic approach within the project AQUIMO, the methods are briefly described concerning their advantages and critical points. Their application within the research project AQUIMO is explained from an activity theoretical perspective. URN: urn:nbn:de:0114-fqs0803158

  7. A FAST APPROACH FOR STITCHING OF AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    A. Moussa

    2016-06-01

    Full Text Available The last few years have witnessed an increasing volume of aerial image data because of the extensive improvements of Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have led to a wide variety of applications. A fast assessment of the achieved coverage and overlap of the images acquired during a UAV flight mission is of great help in saving the time and cost of the further steps. A fast automatic stitching of the acquired images can help to visually assess the achieved coverage and overlap during the flight mission. This paper proposes an automatic image stitching approach that creates a single overview stitched image from the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images that are typically involved in such scenarios. A short flight mission with an image acquisition frequency of one second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge about the image positions provided by the navigation sensors. The proposed approach also avoids solving for all the transformation parameters of all the photos together, to save the long computation time expected if all the parameters were considered simultaneously. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation helps to match only the neighboring images and therefore reduces the time-consuming feature matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process. The pre
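
    The neighbourhood-limited matching idea in this record can be sketched directly: triangulate the approximate image positions reported by the navigation sensors and match only the image pairs joined by a Delaunay edge. The coordinates below are made up, and SciPy's plain (unconstrained) Delaunay triangulation is used instead of the incremental constrained triangulation described by the authors.

```python
# Sketch (not the authors' implementation): use the approximate image centres
# from the navigation sensors to build a Delaunay neighbourhood graph, so that
# costly SIFT matching is only attempted between neighbouring images.
import numpy as np
from scipy.spatial import Delaunay

# hypothetical per-image ground positions (easting, northing) in metres
image_xy = np.array([[0, 0], [50, 5], [100, 3], [48, 60], [98, 62], [2, 58]], float)

tri = Delaunay(image_xy)
pairs = set()
for s in tri.simplices:
    for a, b in ((s[0], s[1]), (s[1], s[2]), (s[2], s[0])):
        pairs.add((min(a, b), max(a, b)))

print("image pairs to match:", sorted(pairs))
# each pair (i, j) would then go through SIFT feature matching and relative
# orientation estimation; non-neighbouring pairs are skipped entirely.
```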

  8. Thermal Entanglement and Critical Behavior of Magnetic Properties on a Triangulated Kagomé Lattice

    Directory of Open Access Journals (Sweden)

    N. Ananikian

    2011-01-01

    Full Text Available The equilibrium magnetic and entanglement properties of a spin-1/2 Ising-Heisenberg model on a triangulated Kagomé lattice are analyzed by means of the effective field for the Gibbs-Bogoliubov inequality. The calculation is reduced to decoupled individual clusters (trimers) due to the separable character of the Ising-type exchange interactions between the Heisenberg trimers. The concurrence in terms of the three-qubit isotropic Heisenberg model in the effective Ising field in the absence of a magnetic field is non-zero. The magnetic and entanglement properties exhibit common (plateau, peak) features driven by a magnetic field and (anti)ferromagnetic exchange interaction. The (quantum) entangled and non-entangled phases can be exploited as a useful tool for signalling the quantum phase transitions and crossovers at finite temperatures. The critical temperature of the order-disorder transition coincides with the threshold temperature of thermal entanglement.

  9. Health, utilisation of health services, 'core' information, and reasons for non-participation: a triangulation study amongst non-respondents.

    Science.gov (United States)

    Näslindh-Ylispangar, Anita; Sihvonen, Marja; Kekki, Pertti

    2008-11-01

    To explore health, use of health services, 'core' information and reasons for non-participation amongst males. Gender may provide an explanation for non-participation in the healthcare system. A growing body of research suggests that males are less likely than females to seek help from health professionals for their problems. The current research had its beginnings with the low response rate in a prior voluntary survey and health examination for Finnish males born in 1961. Data triangulation among 28 non-respondent middle-aged males in Helsinki was used. The methods involved structured and in-depth interviews and health measurements to explore the views of these males concerning their health-related behaviours and use of health services. Non-respondent males seldom used healthcare services. Despite clinical risk factors (e.g. obesity and blood pressure) and various symptoms, males perceived their health status as good. Work was widely experienced as excessively demanding, causing insomnia and other stress symptoms. Males expressed sensitive messages when a session was ending and when the participant was close to the door and leaving the room. This 'core' information included major causes of concern, anxiety, fears and loneliness. This triangulation study showed that by using an in-depth interview as one research strategy, more sensitive 'feminist' expressions of health and ill-health were obtained from men. The results emphasise a male's self-perception of his masculinity that may have relevance to the health experience of the male population. Nurses and physicians need to pay special attention to the requirements of gender-specific healthcare to be most effective in the delivery of healthcare to males.

  10. Exploring Forms of Triangulation to Facilitate Collaborative Research Practice: Reflections From a Multidisciplinary Research Group

    Directory of Open Access Journals (Sweden)

    Tarja Tiainen

    2006-10-01

    Full Text Available This article contains critical reflections of a multidisciplinary research group studying the human and technological dynamics around some newly offered electronic services in a specific rural area of Finland. For their research, the group adopted ethnography. On facing the challenges of doing ethnographic research in a multidisciplinary setting, the group evolved its own breed of research practice based on multiple forms of triangulation. This implied the use of multiple data sources, methods, theories, and researchers, in different combinations. One of the outcomes of the work is a model for collaborative research. It highlights, among others, the importance of creating a climate for collaboration within the research group and following a process of individual and collaborative writing to achieve the potential benefits of such research. The article also identifies a set of remaining challenges relevant to collaborative research.

  11. Multiomics Data Triangulation for Asthma Candidate Biomarkers and Precision Medicine.

    Science.gov (United States)

    Pecak, Matija; Korošec, Peter; Kunej, Tanja

    2018-06-01

    Asthma is a common complex disorder and has been subject to intensive omics research for disease susceptibility and therapeutic innovation. Candidate biomarkers of asthma and its precision treatment demand that they stand the test of multiomics data triangulation before they can be prioritized for clinical applications. We classified the biomarkers of asthma after a search of the literature, based on whether or not a given biomarker candidate is reported in multiple omics platforms and methodologies. Using PubMed and Web of Science, we identified omics studies of asthma conducted on diverse platforms, using keywords such as asthma, genomics, metabolomics, and epigenomics. We extracted data about asthma candidate biomarkers from 73 articles and developed a catalog of 190 potential asthma biomarkers (167 human, 23 animal data), comprising DNA loci, transcripts, proteins, metabolites, epimutations, and noncoding RNAs. The data were sorted according to 13 omics types: genomics, epigenomics, transcriptomics, proteomics, interactomics, metabolomics, ncRNAomics, glycomics, lipidomics, environmental omics, pharmacogenomics, phenomics, and integrative omics. Importantly, we found that 10 candidate biomarkers were apparent in at least two or more omics levels, thus promising potential for further biomarker research and development and precision medicine applications. This multiomics catalog reported herein for the first time contributes to future decision-making on prioritization of biomarkers and validation efforts for precision medicine in asthma. The findings may also facilitate meta-analyses and integrative omics studies in the future.

  12. Data governance requirements for distributed clinical research networks: triangulating perspectives of diverse stakeholders.

    Science.gov (United States)

    Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila

    2014-01-01

    There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  13. Mesh-morphing algorithms for specimen-specific finite element modeling.

    Science.gov (United States)

    Sigal, Ian A; Hardisty, Michael R; Whyne, Cari M

    2008-01-01

    Despite recent advances in software for meshing specimen-specific geometries, considerable effort is still often required to produce and analyze specimen-specific models suitable for biomechanical analysis through finite element modeling. We hypothesize that it is possible to obtain accurate models by adapting a pre-existing geometry to represent a target specimen using morphing techniques. Here we present two algorithms for morphing, automated wrapping (AW) and manual landmarks (ML), and demonstrate their use to prepare specimen-specific models of caudal rat vertebrae. We evaluate the algorithms by measuring the distance between target and morphed geometries and by comparing response to axial loading simulated with finite element (FE) methods. First a traditional reconstruction process based on microCT was used to obtain two natural specimen-specific FE models. Next, the two morphing algorithms were used to compute mappings from the surface of one model, the source, to the other, the target, and to use this mapping to morph the source mesh to produce a target mesh. The microCT images were then used to assign element-specific material properties. In AW the mappings were obtained by wrapping the source and target surfaces with an auxiliary triangulated surface. In ML, landmarks were manually placed on corresponding locations on the surfaces of both source and target. Both morphing algorithms were successful in reproducing the shape of the target vertebra with a median distance between natural and morphed models of 18.8 and 32.2 μm, respectively, for AW and ML. Whereas AW-morphing produced a surface more closely resembling that of the target, ML guaranteed correspondence of the landmark locations between source and target. Morphing preserved the quality of the mesh producing models suitable for FE simulation. Moreover, there were only minor differences between natural and morphed models in predictions of deformation, strain and stress. We therefore conclude that

  14. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    International Nuclear Information System (INIS)

    Felfer, P.; Ceguerra, A.V.; Ringer, S.P.; Cairney, J.M.

    2015-01-01

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms
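
    A minimal sketch of the volume-based idea in this record: compute the Voronoi cell volume of each solute atom as a local-density proxy and flag atoms whose cells are unusually small. The positions are synthetic, unbounded boundary cells are simply skipped, and the 10% volume cut is a hypothetical threshold rather than the automatically determined parameter described by the authors.

```python
# Sketch: per-atom Voronoi cell volumes as a local-density proxy, in the spirit
# of the method described above (not the authors' code). Unbounded cells on the
# dataset boundary are skipped; positions are synthetic.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(3)
solute = rng.random((1000, 3)) * 50.0           # synthetic solute-atom positions (nm)

vor = Voronoi(solute)
volumes = np.full(len(solute), np.nan)
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if -1 in region or len(region) == 0:        # unbounded boundary cell
        continue
    volumes[i] = ConvexHull(vor.vertices[region]).volume

threshold = np.nanpercentile(volumes, 10)       # hypothetical cut: smallest 10% of cells
clustered = np.flatnonzero(volumes < threshold) # atoms in locally dense regions
print("atoms flagged as clustered:", len(clustered))
```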

  15. Analysis of nanopore arrangement of porous alumina layers formed by anodizing in oxalic acid at relatively high temperatures

    Science.gov (United States)

    Zaraska, Leszek; Stępniowski, Wojciech J.; Jaskuła, Marian; Sulka, Grzegorz D.

    2014-06-01

    Anodic aluminum oxide (AAO) layers were formed by a simple two-step anodization in 0.3 M oxalic acid at relatively high temperatures (20-30 °C) and various anodizing potentials (30-65 V). The effect of anodizing conditions on structural features of the as-obtained oxides was carefully investigated. Linear and exponential relationships of the cell diameter and the pore density, respectively, with the anodizing potential were confirmed. On the other hand, no effect of temperature and duration of anodization on pore spacing and pore density was found. Detailed quantitative and qualitative analyses of the hexagonal arrangement of nanopore arrays were performed for all studied samples. The nanopore arrangement was evaluated using various methods based on the fast Fourier transform (FFT) images, Delaunay triangulations (defect maps), pair distribution functions (PDF), and angular distribution functions (ADF). It was found that for short anodizations performed at relatively high temperatures, the optimal anodizing potential that results in the formation of nanostructures with the highest degree of pore order is 45 V. No direct effect of temperature and time of anodization on the nanopore arrangement was observed.
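
    The Delaunay-based defect maps mentioned in this record amount to checking the coordination number of each pore: in an ideal hexagonal arrangement every interior pore has exactly six Delaunay neighbours. A sketch with synthetic pore centres follows; the interior margin used to exclude border pores is an arbitrary assumption.

```python
# Sketch of a Delaunay "defect map": pores whose Delaunay coordination number
# differs from the ideal 6 of a hexagonal arrangement are flagged as defects.
# Pore centres are synthetic; border pores are excluded with a crude margin.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)
pores = rng.random((3000, 2)) * 1000.0          # pore centres from image analysis (nm)

tri = Delaunay(pores)
neighbors = [set() for _ in range(len(pores))]
for s in tri.simplices:
    for a, b in ((s[0], s[1]), (s[1], s[2]), (s[2], s[0])):
        neighbors[a].add(b)
        neighbors[b].add(a)

coordination = np.array([len(n) for n in neighbors])
interior = ((pores[:, 0] > 50) & (pores[:, 0] < 950) &
            (pores[:, 1] > 50) & (pores[:, 1] < 950))
defect_fraction = np.mean(coordination[interior] != 6)
print(f"defective pores (interior): {defect_fraction:.1%}")
```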

  16. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects within complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using a Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building, based on visual assessment.
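
    The Delaunay 2.5D triangulation step in this record can be sketched as follows: triangulate only the planimetric (x, y) coordinates of the dense point cloud and carry the heights along as the third vertex coordinate. The synthetic point cloud below is an assumption for illustration.

```python
# Sketch of the 2.5D Delaunay meshing step: triangulate the planimetric (x, y)
# coordinates of the dense point cloud and carry the heights along, giving a
# surface mesh. The point cloud here is synthetic.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(5)
xy = rng.random((5000, 2)) * 40.0               # planimetric coordinates (m)
z = 10.0 + 2.0 * np.sin(xy[:, 0] / 5.0)         # fake facade/roof heights (m)
cloud = np.column_stack([xy, z])

tri = Delaunay(cloud[:, :2])                    # 2D triangulation of (x, y) only
faces = tri.simplices                           # each row indexes 3 cloud points
vertices = cloud                                # 3D vertices of the mesh
print("mesh:", len(vertices), "vertices,", len(faces), "triangles")
```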

  17. Determination of subcellular compartment sizes for estimating dose variations in radiotherapy

    International Nuclear Information System (INIS)

    Poole, Christopher M.; Ahnesjo, Anders; Enger, Shirin A.

    2015-01-01

    The variation in specific energy absorbed to different cell compartments caused by variations in size and chemical composition is poorly investigated in radiotherapy. The aim of this study was to develop an algorithm to derive cell and cell nuclei size distributions from 2D histology samples, and build 3D cellular geometries to provide Monte Carlo (MC)-based dose calculation engines with a morphologically relevant input geometry. Stained and unstained regions of the histology samples are segmented using a Gaussian mixture model, and individual cell nuclei are identified via thresholding. Delaunay triangulation is applied to determine the distribution of distances between the centroids of nearest neighbour cells. A pouring simulation is used to build a 3D virtual tissue sample, with cell radii randomised according to the cell size distribution determined from the histology samples. A slice with the same thickness as the histology sample is cut through the 3D data and characterised in the same way as the measured histology. The comparison between this virtual slice and the measured histology is used to adjust the initial cell size distribution into the pouring simulation. This iterative approach of a pouring simulation with adjustments guided by comparison is continued until an input cell size distribution is found that yields a distribution in the sliced geometry that agrees with the measured histology samples. The thus obtained morphologically realistic 3D cellular geometry can be used as input to MC-based dose calculation programs for studies of dose response due to variations in morphology and size of tumour/healthy tissue cells/nuclei, and extracellular material. (authors)
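
    The centroid-spacing step described in this record, Delaunay triangulation of nucleus centroids followed by a nearest-neighbour distance distribution, can be sketched briefly. The centroids below are synthetic, and the nearest neighbour of each nucleus is taken over its Delaunay edges, which is a reasonable shortcut because the true nearest neighbour of a point is always one of its Delaunay neighbours.

```python
# Sketch of the centroid-spacing step: Delaunay-triangulate the segmented
# nucleus centroids and take, for each nucleus, the shortest edge to a
# Delaunay neighbour as its nearest-neighbour distance. Centroids are synthetic.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(6)
centroids = rng.random((800, 2)) * 500.0        # nucleus centroids in a 500 um field

tri = Delaunay(centroids)
nearest = np.full(len(centroids), np.inf)
for s in tri.simplices:
    for a, b in ((s[0], s[1]), (s[1], s[2]), (s[2], s[0])):
        d = np.linalg.norm(centroids[a] - centroids[b])
        nearest[a] = min(nearest[a], d)
        nearest[b] = min(nearest[b], d)

print(f"median nearest-neighbour spacing: {np.median(nearest):.1f} um")
```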

  18. Solving the Einstein constraint equations on multi-block triangulations using finite element methods

    Energy Technology Data Exchange (ETDEWEB)

    Korobkin, Oleg; Pazos, Enrique [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803 (United States); Aksoylu, Burak [Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803 (United States); Holst, Michael [Department of Mathematics, University of California at San Diego 9500 Gilman Drive La Jolla, CA 92093-0112 (United States); Tiglio, Manuel [Department of Physics, University of Maryland, College Park, MD 20742 (United States)

    2009-07-21

    In order to generate initial data for nonlinear relativistic simulations, one needs to solve the Einstein constraints, which can be cast into a coupled set of nonlinear elliptic equations. Here we present an approach for solving these equations on three-dimensional multi-block domains using finite element methods. We illustrate our approach on a simple example of Brill wave initial data, with the constraints reducing to a single linear elliptic equation for the conformal factor psi. We use quadratic Lagrange elements on semi-structured simplicial meshes, obtained by triangulation of multi-block grids. In the case of uniform refinement the scheme is superconvergent at most mesh vertices, due to local symmetry of the finite element basis with respect to local spatial inversions. We show that in the superconvergent case subsequent unstructured mesh refinements do not improve the quality of our initial data. As proof of concept that this approach is feasible for generating multi-block initial data in three dimensions, after constructing the initial data we evolve them in time using a high-order finite-differencing multi-block approach and extract the gravitational waves from the numerical solution.

  19. Solving the Einstein constraint equations on multi-block triangulations using finite element methods

    International Nuclear Information System (INIS)

    Korobkin, Oleg; Pazos, Enrique; Aksoylu, Burak; Holst, Michael; Tiglio, Manuel

    2009-01-01

    In order to generate initial data for nonlinear relativistic simulations, one needs to solve the Einstein constraints, which can be cast into a coupled set of nonlinear elliptic equations. Here we present an approach for solving these equations on three-dimensional multi-block domains using finite element methods. We illustrate our approach on a simple example of Brill wave initial data, with the constraints reducing to a single linear elliptic equation for the conformal factor ψ. We use quadratic Lagrange elements on semi-structured simplicial meshes, obtained by triangulation of multi-block grids. In the case of uniform refinement the scheme is superconvergent at most mesh vertices, due to local symmetry of the finite element basis with respect to local spatial inversions. We show that in the superconvergent case subsequent unstructured mesh refinements do not improve the quality of our initial data. As proof of concept that this approach is feasible for generating multi-block initial data in three dimensions, after constructing the initial data we evolve them in time using a high-order finite-differencing multi-block approach and extract the gravitational waves from the numerical solution.

  20. Restoration of an object from its complex cross sections and surface smoothing of the object

    International Nuclear Information System (INIS)

    Agui, Takeshi; Arai, Kiyoshi; Nakajima, Masayuki

    1990-01-01

    In clinical medicine, restoring the surface of a three-dimensional object from its set of parallel cross sections obtained by CT or MRI is useful in diagnosis. A method of connecting a pair of contours on neighboring cross sections to each other by triangular patches is generally used for this restoration. This method, however, suffers from the complexity of the triangulation algorithm and requires a large number of calculations when surface smoothing is executed. In our new method, the positions of sampling points are expressed in cylindrical coordinates. Sampling points, including auxiliary points, are extracted and connected using a simple algorithm. Surface smoothing is executed by moving sampling points. This method extends the application scope of restoring objects by triangulation. (author)
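
    A minimal sketch of the contour-stitching idea follows, assuming for illustration that both contours have been resampled at the same M angular positions in the cylindrical coordinate system (this is not the authors' code).

```python
import numpy as np

def stitch_contours(lower, upper):
    """lower, upper: (M, 3) points sampled at identical angles on two adjacent slices.
    Returns the stacked vertex array and a list of triangles as index triples."""
    m = len(lower)
    tris = []
    for i in range(m):
        j = (i + 1) % m
        tris.append((i, j, m + i))        # lower_i, lower_j, upper_i
        tris.append((j, m + j, m + i))    # lower_j, upper_j, upper_i
    return np.vstack([lower, upper]), tris
```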

  1. Orbit Determination with Very Short Arcs: Admissible Regions

    Science.gov (United States)

    Gronchi, G. F.; Milani, A.; de'Michieli Vitturi, M.; Knezevic, Z.

    2004-05-01

    Contemporary observational surveys provide a huge number of detections of small solar system bodies, in particular of asteroids. These have to be reduced in real time in order to optimize the observational strategy and to select the targets for the follow-up and for the subsequent determination of an orbit. Typically, reported astrometry consists of a few positions over a short time span, and this information is often not enough to compute a preliminary orbit and perform an identification. Classical methods for preliminary orbit determination based on three observations fail in such cases, and a new approach is necessary to cope with the problem. We introduce the concept of attributable, which is a vector composed of two angles and two angular velocities at a given time. It is then shown that the missing values (geocentric range and range rate), necessary for the computation of an orbit, can be constrained to a compact set that we call the admissible region (AR). The latter is defined on the basis of requirements that the body belongs to the solar system, that it is not a satellite of the Earth, and that it is not a "shooting star" (very close and very small). A mathematical description of the AR is given, together with the proof of its topological properties: it turns out that the AR cannot have more than two connected components. A sampling of the AR can be performed by means of a Delaunay triangulation. A finite number of six-parameter sets of initial conditions are thus defined, with each node of the triangulation representing a Virtual Asteroid for which it is possible to propagate the corresponding orbit and to predict ephemerides.
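
    An illustrative sketch of the sampling step (not the authors' implementation; the polygonal AR boundary and all names are assumptions): triangulating samples of the (range, range-rate) plane yields one Virtual Asteroid per node.

```python
import numpy as np
from matplotlib.path import Path
from scipy.spatial import Delaunay

def sample_admissible_region(ar_boundary, n_interior=200, seed=1):
    """ar_boundary: (M, 2) polygon approximating the AR in the (rho, rho_dot) plane."""
    rng = np.random.default_rng(seed)
    poly = Path(ar_boundary)
    lo, hi = ar_boundary.min(axis=0), ar_boundary.max(axis=0)
    interior = []
    while len(interior) < n_interior:          # rejection sampling inside the polygon
        p = rng.uniform(lo, hi)
        if poly.contains_point(p):
            interior.append(p)
    nodes = np.vstack([ar_boundary, interior])
    # Note: scipy triangulates the convex hull; for a non-convex AR the triangles
    # falling outside the region would still have to be discarded.
    return nodes, Delaunay(nodes).simplices    # one Virtual Asteroid per node
```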

  2. Improving Completeness of Geometric Models from Terrestrial Laser Scanning Data

    Directory of Open Access Journals (Sweden)

    Clemens Nothegger

    2011-12-01

    Full Text Available The application of terrestrial laser scanning for the documentation of cultural heritage assets is becoming increasingly common. While the point cloud by itself is sufficient for satisfying many documentation needs, it is often desirable to use this data for applications other than documentation. For these purposes a triangulated model is usually required. The generation of topologically correct triangulated models from terrestrial laser scans, however, still requires much interactive editing. This is especially true when reconstructing models from medium range panoramic scanners and many scan positions. Because of residual errors in the instrument calibration and the limited spatial resolution due to the laser footprint, the point clouds from different scan positions never match perfectly. Under these circumstances many of the software packages commonly used for generating triangulated models produce models which have topological errors such as surface intersecting triangles, holes or triangles which violate the manifold property. We present an algorithm which significantly reduces the number of topological errors in the models from such data. The algorithm is a modification of the Poisson surface reconstruction algorithm. Poisson surfaces are resilient to noise in the data and the algorithm always produces a closed manifold surface. Our modified algorithm partitions the data into tiles and can thus be easily parallelized. Furthermore, it avoids introducing topological errors in occluded areas, albeit at the cost of producing models which are no longer guaranteed to be closed. The algorithm is applied to scan data of sculptures of the UNESCO World Heritage Site Schönbrunn Palace and data of a petrified oyster reef in Stetten, Austria. The results of the method’s application are discussed and compared with those of alternative methods.
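
    For orientation, a hedged example of the unmodified baseline, a standard (screened) Poisson surface reconstruction as exposed by the Open3D library, is given below; the tiling, parallelization and occlusion handling described in the paper are not part of this call, and the parameter values are illustrative assumptions.

```python
import open3d as o3d

def poisson_mesh(point_cloud_file, depth=10):
    """Baseline Poisson reconstruction of a laser-scan point cloud (illustrative only)."""
    pcd = o3d.io.read_point_cloud(point_cloud_file)
    pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=depth)
    return mesh
```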

  3. An evaluation of orthopaedic nurses’ participation in an educational intervention promoting research utilization – A triangulation convergence model

    DEFF Research Database (Denmark)

    Berthelsen, Connie Bøttcher; Hølge-Hazelton, Bibi

    2016-01-01

    Aims and objectives To describe the orthopaedic nurses' experiences regarding the relevance of an educational intervention and their personal and contextual barriers to participation in the intervention. Background One of the largest barriers against nurses' research usage in clinical practice...... is the lack of participation. A previous survey identified 32 orthopaedic nurses as interested in participating in nursing research. An educational intervention was conducted to increase the orthopaedic nurses' research knowledge and competencies. However, only an average of six nurses participated. Design...... A triangulation convergence model was applied through a mixed methods design to combine quantitative results and qualitative findings for evaluation. Methods Data were collected from 2013–2014 from 32 orthopaedic nurses in a Danish regional hospital through a newly developed 21-item questionnaire and two focus...

  4. Binary Labelings for Plane Quadrangulations and their Relatives

    OpenAIRE

    Felsner, Stefan; Huemer, Clemens; Kappes, Sarah; Orden, David

    2006-01-01

    Motivated by the bijection between Schnyder labelings of a plane triangulation and partitions of its inner edges into three trees, we look for binary labelings for quadrangulations (whose edges can be partitioned into two trees). Our labeling resembles many of the properties of Schnyder's one for triangulations: Apart from being in bijection with tree decompositions, paths in these trees allow to define the regions of a vertex such that counting faces in them yields an algorithm for embedding...

  5. Voronoi Tessellations and Their Application to Climate and Global Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Lili [University of South Carolina; Ringler, Todd [Los Alamos National Laboratory; Gunzburger, Max [Florida State University

    2011-01-01

    We review the use of Voronoi tessellations for grid generation, especially on the whole sphere or in regions on the sphere. Voronoi tessellations and the corresponding Delaunay tessellations in regions and surfaces on Euclidean space are defined and properties they possess that make them well-suited for grid generation purposes are discussed, as are algorithms for their construction. This is followed by a more detailed look at one very special type of Voronoi tessellation, the centroidal Voronoi tessellation (CVT). After defining them, discussing some of their properties, and presenting algorithms for their construction, we illustrate the use of CVTs for producing both quasi-uniform and variable resolution meshes in the plane and on the sphere. Finally, we briefly discuss the computational solution of model equations based on CVTs on the sphere.
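
    As an illustration of one of the construction algorithms mentioned above, a minimal sketch of Lloyd's iteration for a centroidal Voronoi tessellation in the unit square follows; it uses a dense Monte Carlo sample to approximate the region centroids, and all parameters and names are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(n_generators=50, n_samples=200_000, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    gen = rng.uniform(0, 1, size=(n_generators, 2))       # initial generators
    samples = rng.uniform(0, 1, size=(n_samples, 2))      # dense sample of the domain
    for _ in range(n_iter):
        owner = cKDTree(gen).query(samples)[1]            # nearest generator per sample
        for k in range(n_generators):
            pts = samples[owner == k]
            if len(pts):
                gen[k] = pts.mean(axis=0)                 # move generator to cell centroid
    return gen
```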

  6. Super-Resolution for Synthetic Zooming

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Optical zooming is an important feature of imaging systems. In this paper, we investigate a low-cost signal processing alternative to optical zooming: synthetic zooming by super-resolution (SR) techniques. Synthetic zooming is achieved by registering a sequence of low-resolution (LR) images acquired at varying focal lengths and reconstructing the SR image at a larger focal length or increased spatial resolution. Under the assumptions of constant scene depth and zooming speed, we argue that the motion trajectories of all physical points are related to each other by a unique vanishing point and present a robust technique for estimating its D coordinate. Such a line-geometry-based registration is the foundation of SR for synthetic zooming. We address the issue of data inconsistency arising from the varying focal length of the optical lens during the zooming process. To overcome the difficulty of data inconsistency, we propose a two-stage Delaunay-triangulation-based interpolation for fusing the LR image data. We also present a PDE-based nonlinear deblurring to accommodate the blindness and variation of sensor point spread functions. Simulation results with real-world images have verified the effectiveness of the proposed SR techniques for synthetic zooming.
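
    The core fusion step can be sketched as follows (a hedged illustration, not the paper's two-stage scheme): scattered low-resolution samples registered into the high-resolution frame are interpolated on a regular grid through a Delaunay triangulation, which scipy's LinearNDInterpolator builds internally.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def fuse_scattered_samples(xy, values, out_shape):
    """xy: (N, 2) registered sample positions in HR pixel units (x, y); values: (N,) intensities."""
    interp = LinearNDInterpolator(xy, values)      # Delaunay-based piecewise-linear interpolant
    h, w = out_shape
    gy, gx = np.mgrid[0:h, 0:w]
    sr = interp(np.column_stack([gx.ravel(), gy.ravel()])).reshape(h, w)
    return sr   # NaNs outside the convex hull would need a nearest-neighbour fallback
```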

  7. The boundary value problem for discrete analytic functions

    KAUST Repository

    Skopenkov, Mikhail

    2013-06-01

    This paper is on further development of discrete complex analysis introduced by R. Isaacs, J. Ferrand, R. Duffin, and C. Mercat. We consider a graph lying in the complex plane and having quadrilateral faces. A function on the vertices is called discrete analytic if for each face the difference quotients along the two diagonals are equal. We prove that the Dirichlet boundary value problem for the real part of a discrete analytic function has a unique solution. In the case when each face has orthogonal diagonals we prove that this solution uniformly converges to a harmonic function in the scaling limit. This solves a problem of S. Smirnov from 2010. This was proved earlier by R. Courant, K. Friedrichs and H. Lewy, and by L. Lusternik for square lattices, by D. Chelkak and S. Smirnov and implicitly by P. G. Ciarlet and P.-A. Raviart for rhombic lattices. In particular, our result implies uniform convergence of the finite element method on Delaunay triangulations. This solves a problem of A. Bobenko from 2011. The methodology is based on energy estimates inspired by alternating-current network theory. © 2013 Elsevier Ltd.
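
    Written out explicitly (with generic vertex labels, not the paper's notation), the defining condition quoted above for a quadrilateral face with vertices z_1, z_2, z_3, z_4 and diagonals z_1z_3 and z_2z_4 reads:

```latex
\frac{f(z_3) - f(z_1)}{z_3 - z_1} \;=\; \frac{f(z_4) - f(z_2)}{z_4 - z_2}
```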

  8. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
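
    A generic illustration of the Nth-order Taylor-series stepping idea for a scalar autonomous ODE dx/dt = f(x) is sketched below; this is the textbook Taylor method under that reading, not the authors' code, and the higher time derivatives are generated symbolically via the chain rule.

```python
import sympy as sp

def taylor_stepper(f_expr, x, order):
    """Return a function advancing x(t) by h with an order-N truncated Taylor series."""
    derivs = [f_expr]                      # derivs[k] is the (k+1)-th time derivative of x
    for _ in range(order - 1):
        derivs.append(sp.diff(derivs[-1], x) * f_expr)   # d/dt g(x) = g'(x) * f(x)
    funcs = [sp.lambdify(x, d) for d in derivs]
    def step(x0, h):
        out, hk = float(x0), 1.0
        for k, fk in enumerate(funcs, start=1):
            hk *= h / k                    # accumulates h**k / k!
            out += hk * fk(x0)
        return out
    return step

# Example: dx/dt = -x, whose exact solution decays like exp(-h) per step
x = sp.symbols('x')
step = taylor_stepper(-x, x, order=4)
```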

  9. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  10. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of the algorithms share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete definition of the question is “can a computer construct an algorithm which will generate algorithms according to the requirement of a problem?” In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that the algorithm designed automatically by computers can compete with the algorithms designed by human beings.

  11. Mesh Generation via Local Bisection Refinement of Triangulated Grids

    Science.gov (United States)

    2015-06-01

    DSTO–TR–3095: This report provides a comprehensive implementation of an unstructured mesh generation method... their behaviour is critically linked to Maubach's method and the data structures N and T. The top-level mesh refinement algorithm is also presented.

  12. Outcomes and impact of HIV prevention, ART and TB programs in Swaziland--early evidence from public health triangulation.

    Science.gov (United States)

    van Schalkwyk, Cari; Mndzebele, Sibongile; Hlophe, Thabo; Garcia Calleja, Jesus Maria; Korenromp, Eline L; Stoneburner, Rand; Pervilhac, Cyril

    2013-01-01

    Swaziland's severe HIV epidemic inspired an early national response since the late 1980s, and regular reporting of program outcomes since the onset of a national antiretroviral treatment (ART) program in 2004. We assessed effectiveness outcomes and mortality trends in relation to ART, HIV testing and counseling (HTC), tuberculosis (TB) and prevention of mother to child transmission (PMTCT). Data triangulated include intervention coverage and outcomes according to program registries (2001-2010), hospital admissions and deaths disaggregated by age and sex (2001-2010) and population mortality estimates from the 1997 and 2007 censuses and the 2007 demographic and health survey. By 2010, ART reached 70% of the estimated number of people living with HIV/AIDS with CD4impact to specific interventions (versus natural epidemic dynamics) will require additional data from future household surveys, and improved routine (program, surveillance, and hospital) data at district level.

  13. Construction of the Calibration Set through Multivariate Analysis in Visible and Near-Infrared Prediction Model for Estimating Soil Organic Matter

    Directory of Open Access Journals (Sweden)

    Xiaomi Wang

    2017-02-01

    Full Text Available The visible and near-infrared (VNIR) spectroscopy prediction model is an effective tool for the prediction of soil organic matter (SOM) content. The predictive accuracy of the VNIR model is highly dependent on the selection of the calibration set. However, conventional methods for selecting the calibration set for constructing the VNIR prediction model merely consider either the gradients of SOM or the soil VNIR spectra and neglect the influence of environmental variables. Moreover, soil samples generally present a strong spatial variability, and, thus, the relationship between the SOM content and VNIR spectra may vary with respect to locations and surrounding environments. Hence, VNIR prediction models based on conventional calibration set selection methods would be biased, especially for estimating highly spatially variable soil content (e.g., SOM). To equip the calibration set selection method with the ability to consider SOM spatial variation and environmental influence, this paper proposes an improved method for selecting the calibration set. The proposed method combines the improved multi-variable association relationship clustering mining (MVARC) method and the Rank–Kennard–Stone (Rank-KS) method in order to synthetically consider the SOM gradient, spectral information, and environmental variables. In the proposed MVARC-R-KS method, MVARC integrates the Apriori algorithm, a density-based clustering algorithm, and the Delaunay triangulation. The MVARC method is first utilized to adaptively mine clustering distribution zones in which environmental variables exert a similar influence on soil samples. The feasibility of the MVARC method is proven by conducting an experiment on a simulated dataset. The calibration set is evenly selected from the clustering zones and the remaining zone by using the Rank-KS algorithm in order to avoid a single property in the selected calibration set. The proposed MVARC-R-KS approach is applied to select a
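
    As one building block of the Rank-KS step mentioned above, the classic Kennard-Stone selection can be sketched as follows; the SOM-gradient ranking and the MVARC zoning are not reproduced, and all names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_select):
    """X: (N, d) feature matrix (e.g. VNIR spectra); returns indices of the calibration set."""
    D = cdist(X, X)
    i, j = np.unravel_index(np.argmax(D), D.shape)   # start with the two most distant samples
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_select and remaining:
        # Add the sample farthest from its nearest already-selected neighbour.
        d_min = D[np.ix_(remaining, selected)].min(axis=1)
        k = remaining[int(np.argmax(d_min))]
        selected.append(k)
        remaining.remove(k)
    return selected
```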

  14. The Study Related to the Execution of a Triangulation Network in the Dump of Rovinari Pit, in Order to be Restored to the Economic Circuit

    Directory of Open Access Journals (Sweden)

    George Popescu

    2016-11-01

    Full Text Available The lignite extraction within the mining perimeter in Rovinari is carried out through open-pit mining works, using large equipment for the excavation, transport and storage of the mined material. These surfaces are currently being set up in the area of level two of the dump, in the west and north-west part of the Rovinari pit. In order to carry out the set-up works and to follow up the stability of the pit levels, it is necessary to maintain the triangulation network.

  15. Restrictions on Measurement of Roughness of Textile Fabrics by Laser Triangulation: A Phenomenological Approach

    International Nuclear Information System (INIS)

    Berberi, Pellumb; Tabaku, Burhan

    2010-01-01

    Laser triangulation is one of the methods used for contactless measurement of the roughness of textile fabrics. The method is based on measurement of the distance between the sensor and the object by imaging the light scattered from the surface. However, experimental results, especially for high values of roughness, show a strong dependence on the duration of exposure to the laser pulses. Use of very short and very long exposure times causes the appearance, on the surface of the scanned textile, of pixels with Active peak heights. The number of Active peaks increases with decrease of the exposure time down to 0.1 ms, and increases with increase of the exposure time up to 100 ms. The appearance of Active peaks leads to a non-realistic increase of the surface roughness for both short and long exposure times, reaching a minimum somewhere in the region of medium exposure times, 1 to 2 ms. The above effect calls for a careful analysis of experimental data and also becomes an important restriction of the method. In this paper we attempt a phenomenological approach to the mechanisms leading to these effects. We suppose that the effect is related both to the scattering properties of the scanned surface and to physical parameters of the CCD sensors. The first factor becomes more important in the region of long exposure times, while the second factor becomes more important in the region of short exposure times.

  16. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  17. Effect of DEM resolution on rainfall-triggered landslide modeling within a triangulated network-based model. A case study in the Luquillo Forest, Puerto Rico

    Science.gov (United States)

    Arnone, E.; Dialynas, Y. G.; Noto, L. V.; Bras, R. L.

    2013-12-01

    Catchment slope distribution is one of the topographic characteristics that significantly control rainfall-triggered landslide modeling, in both direct and indirect ways. Slope directly determines the soil volume associated with instability. Indirectly, slope also affects the subsurface lateral redistribution of soil moisture across the basin, which in turn determines the water pore pressure conditions that impact slope stability. In this study, we investigate the influence of DEM resolution on the slope stability analysis by using a distributed eco-hydrological and landslide model, the tRIBS-VEGGIE (Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator - VEGetation Generator for Interactive Evolution). The model implements a triangulated irregular network to describe the topography, and it is capable of evaluating vegetation dynamics and predicting shallow landslides triggered by rainfall. The impact of DEM resolution on the landslide prediction was studied using five TINs derived from five grid DEMs at different resolutions, i.e. 10, 20, 30, 50 and 70 m, respectively. The analysis was carried out on the Mameyes Basin, located in the Luquillo Experimental Forest in Puerto Rico, where previous landslide analyses have been carried out. Results showed that the use of the irregular mesh reduced the loss of accuracy in the derived slope distribution when coarser resolutions were used. The impact of the different resolutions on soil moisture patterns was important only when the lateral redistribution was considerable, depending on hydrological properties and rainfall forcing. In some cases, the use of different DEM resolutions did not significantly affect the tRIBS-VEGGIE landslide output, in terms of landslide locations and of the values of slope and soil moisture at failure.

  18. I/O-Efficient Algorithms for Computing Contour Lines on a Terrain

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Sadri, Bardia

    2008-01-01

    A terrain M is the graph of a bivariate function. We assume that M is represented as a triangulated surface with N vertices. A contour (or isoline) of M is a connected component of a level set of M. Generically, each contour is a closed polygonal curve; at "critical" levels these curves may touch...

  19. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  20. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the Selfish Gene Algorithm (SFGA) as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  1. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  2. Metric Accuracy Evaluation of Dense Matching Algorithms in Archeological Applications

    Directory of Open Access Journals (Sweden)

    C. Re

    2011-12-01

    Full Text Available In the cultural heritage field the recording and documentation of small and medium size objects with very detailed Digital Surface Models (DSM) is readily possible through the use of high resolution and high precision triangulation laser scanners. 3D surface recording of archaeological objects can be easily achieved in museums; however, this type of record can be quite expensive. In many cases photogrammetry can provide a viable alternative for the generation of DSMs. The photogrammetric procedure has some benefits with respect to laser survey. The research described in this paper sets out to verify the reconstruction accuracy of DSMs of some archaeological artifacts obtained by photogrammetric survey. The experimentation has been carried out on some objects preserved in the Petrie Museum of Egyptian Archaeology at University College London (UCL). DSMs produced by two photogrammetric software packages are compared with the digital 3D model obtained by a state of the art triangulation color laser scanner. Intercomparison between the generated DSMs has allowed an evaluation of the metric accuracy of the photogrammetric approach applied to archaeological documentation and of the precision performances of the two software packages.
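
    A hedged sketch of the kind of intercomparison described above (not the authors' workflow): per-point distances from a photogrammetric DSM to the laser-scanner reference, from which accuracy statistics can be summarised.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference_distances(test_points, reference_points):
    """Nearest-neighbour distances from each test DSM point to the laser reference cloud."""
    d, _ = cKDTree(reference_points).query(test_points)
    return d   # summarise e.g. with mean, RMS or percentiles
```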

  3. Stakeholder management in the local government decision-making area: evidences from a triangulation study with the English local government

    Directory of Open Access Journals (Sweden)

    Ricardo Corrêa Gomes

    2006-01-01

    Full Text Available The stakeholder theory has been on the management agenda for about thirty years and reservations about its acceptance as a comprehensive theory still remain. It was introduced as a managerial issue by the Labour Party in 1997 aiming to make public management more inclusive. This article aims to contribute to the stakeholder theory by adding descriptive issues to its theoretical basis. The findings are derived from an inductive investigation carried out with English Local Authorities, which will most likely be reproduced in other contexts. Data collection and analysis are based on a data triangulation method that involves case studies, validation interviews and analysis of documents. The investigation proposes a model for representing the nature of the relationships between stakeholders and the decision-making process of such organizations. The decision-making of local government organizations is in fact a stakeholder-based process in which stakeholders are empowered to exert influence due to their power over, and interest in, the organization's operations and outcomes.

  4. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  5. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    Science.gov (United States)

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourcing vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and interpolate the optimized segments adaptively to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the area of the Voronoi cell and the length of the triangle edge. Then, the road boundary detection model is established by integrating the boundary descriptors and trajectory movement features (e.g., direction) via the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was proved to be of higher quality.
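
    One of the boundary descriptors named above, the Voronoi cell area, can be computed with a short sketch (illustrative only; projection, filtering and the full detection model are omitted). Sparse coverage near the carriageway edge shows up as large cells.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_cell_areas(points):
    """points: (N, 2) projected tracking positions; returns the area of each bounded cell."""
    vor = Voronoi(points)
    areas = np.full(len(points), np.inf)          # unbounded cells keep infinite area
    for p, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue
        # Voronoi cells are convex, so the hull area equals the cell area.
        areas[p] = ConvexHull(vor.vertices[region]).volume
    return areas
```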

  6. Neural networks for the generation of sea bed models using airborne lidar bathymetry data

    Science.gov (United States)

    Kogut, Tomasz; Niemeyer, Joachim; Bujakiewicz, Aleksandra

    2016-06-01

    Various sectors of the economy such as transport and renewable energy have shown great interest in sea bed models. The required measurements are usually carried out by ship-based echo sounding, but this method is quite expensive. A relatively new alternative is data obtained by airborne lidar bathymetry. This study investigates the accuracy of these data, which were obtained in the context of the project `Investigation on the use of airborne laser bathymetry in hydrographic surveying'. A comparison to multi-beam echo sounding data shows only small differences in the depth values of the data sets. The IHO requirements on the total horizontal and vertical uncertainty for laser data are met. The second goal of this paper is to compare three spatial interpolation methods, namely Inverse Distance Weighting (IDW), Delaunay Triangulation (TIN), and supervised Artificial Neural Networks (ANN), for the generation of sea bed models. The focus of our investigation is on the number of required sampling points. This is analyzed by manually reducing the data sets. We found that the three techniques have a similar performance almost independently of the amount of sampling data in our test area. However, ANNs are more stable when using a very small subset of points.
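
    The three interpolators compared in the study can be sketched in a few lines each; these are hedged, generic implementations with illustrative parameters, and the TIN variant relies on scipy's Delaunay-based linear interpolation.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from sklearn.neural_network import MLPRegressor

def idw(xy, depth, query, power=2):
    """Inverse Distance Weighting: xy (N, 2), depth (N,), query (M, 2)."""
    d = np.linalg.norm(xy[None, :, :] - query[:, None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * depth).sum(axis=1) / w.sum(axis=1)

def tin(xy, depth, query):
    return LinearNDInterpolator(xy, depth)(query)      # Delaunay triangulation (TIN) based

def ann(xy, depth, query):
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    return model.fit(xy, depth).predict(query)
```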

  7. Neural networks for the generation of sea bed models using airborne lidar bathymetry data

    Directory of Open Access Journals (Sweden)

    Kogut Tomasz

    2016-06-01

    Full Text Available Various sectors of the economy such as transport and renewable energy have shown great interest in sea bed models. The required measurements are usually carried out by ship-based echo sounding, but this method is quite expensive. A relatively new alternative is data obtained by airborne lidar bathymetry. This study investigates the accuracy of these data, which were obtained in the context of the project ‘Investigation on the use of airborne laser bathymetry in hydrographic surveying’. A comparison to multi-beam echo sounding data shows only small differences in the depth values of the data sets. The IHO requirements on the total horizontal and vertical uncertainty for laser data are met. The second goal of this paper is to compare three spatial interpolation methods, namely Inverse Distance Weighting (IDW), Delaunay Triangulation (TIN), and supervised Artificial Neural Networks (ANN), for the generation of sea bed models. The focus of our investigation is on the number of required sampling points. This is analyzed by manually reducing the data sets. We found that the three techniques have a similar performance almost independently of the amount of sampling data in our test area. However, ANNs are more stable when using a very small subset of points.

  8. Hybrid mesh generation for the new generation of oil reservoir simulators: 3D extension; Generation de maillage hybride pour les simulateurs de reservoir petrolier de nouvelle generation: extension 3D

    Energy Technology Data Exchange (ETDEWEB)

    Flandrin, N.

    2005-09-15

    During the exploitation of an oil reservoir, it is important to predict the recovery of hydrocarbons and to optimize its production. A better comprehension of the physical phenomena requires simulating 3D multiphase flows in increasingly complex geological structures. In this thesis, we are interested in the spatial discretization and we propose to extend to 3D the 2D hybrid model proposed by IFP in 1998, which directly takes into account the radial characteristics of the flows in the geometry. In these hybrid meshes, the wells and their drainage areas are described by structured radial circular meshes and the reservoirs are represented by structured meshes that can be non-uniform Cartesian grids or Corner Point Geometry grids. In order to generate a global conforming mesh, unstructured transition meshes based on power diagrams and satisfying finite volume properties are used to connect the structured meshes together. Two methods have been implemented to generate these transition meshes: the first one is based on a Delaunay triangulation, the other one uses a frontal approach. Finally, some criteria are introduced to measure the quality of the transition meshes and optimization procedures are proposed to increase this quality under finite volume property constraints. (author)

  9. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    Science.gov (United States)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
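
    The triad-forming and plane-wave association steps described above can be sketched as follows; this is illustrative only, coordinates are treated as planar, and the azimuth sign convention is glossed over.

```python
import numpy as np
from scipy.spatial import Delaunay

def station_triads(xy):
    """xy: (N, 2) station coordinates in km (planar approximation)."""
    return Delaunay(xy).simplices                 # each row: indices of one station triad

def plane_wave_fit(xy3, t3):
    """Slowness for one triad from arrival times (s); returns azimuth (deg) and velocity (km/s)."""
    A = xy3[1:] - xy3[0]                          # relative station positions (2 x 2)
    b = t3[1:] - t3[0]                            # relative arrival times
    s = np.linalg.solve(A, b)                     # slowness vector (s/km)
    azimuth = np.degrees(np.arctan2(s[0], s[1])) % 360.0
    velocity = 1.0 / np.linalg.norm(s)            # apparent phase velocity
    return azimuth, velocity
```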

  10. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2018-04-01

    Full Text Available Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourcing vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and interpolate the optimized segments adaptively to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the area of the Voronoi cell and the length of the triangle edge. Then, the road boundary detection model is established by integrating the boundary descriptors and trajectory movement features (e.g., direction) via the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was proved to be of higher quality.

  11. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  12. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  13. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithm researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of the efficiency of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the process of sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is considered an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results were promising.

  14. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  15. Bloch Modes and Evanescent Modes of Photonic Crystals: Weak Form Solutions Based on Accurate Interface Triangulation

    Directory of Open Access Journals (Sweden)

    Matthias Saba

    2015-01-01

    Full Text Available We propose a new approach to calculate the complex photonic band structure, both purely dispersive and evanescent Bloch modes of a finite range, of arbitrary three-dimensional photonic crystals. Our method, based on a well-established plane wave expansion and the weak form solution of Maxwell’s equations, computes the Fourier components of periodic structures composed of distinct homogeneous material domains from a triangulated mesh representation of the inter-material interfaces; this allows substantially more accurate representations of the geometry of complex photonic crystals than the conventional representation by a cubic voxel grid. Our method works for general two-phase composite materials, consisting of bi-anisotropic materials with tensor-valued dielectric and magnetic permittivities ε and μ and coupling matrices ς. We demonstrate for the Bragg mirror and a simple cubic crystal closely related to the Kelvin foam that relatively small numbers of Fourier components are sufficient to yield good convergence of the eigenvalues, making this method viable, despite its computational complexity. As an application, we use the single gyroid crystal to demonstrate that the consideration of both conventional and evanescent Bloch modes is necessary to predict the key features of the reflectance spectrum by analysis of the band structure, in particular for light incident along the cubic [111] direction.

  16. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission and there are many methods to make files more secure. One of these methods is cryptography. Cryptography is a method of securing a file by writing hidden code that covers the original file. Therefore, people who are not involved in the cryptography cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem. A hybrid cryptosystem is a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm is used as the asymmetric algorithm. The system is tested by encrypting and decrypting the file using the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, when the TEA algorithm is used to encrypt the file, the cipher text consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the cipher text size increases by sixteen bytes as the plaintext length is increased by eight characters.
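
    For reference, the symmetric half of such a scheme, the classic TEA block cipher (64-bit blocks, 128-bit key, 32 rounds), can be sketched as below. The LUC half and the file and key handling of the paper are not shown; this is a generic textbook implementation rather than the authors' code.

```python
def tea_encrypt_block(v, key):
    """v: (v0, v1) two 32-bit words; key: four 32-bit words."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(32):
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

def tea_decrypt_block(v, key):
    """Inverse of tea_encrypt_block for the same 128-bit key."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = (delta * 32) & mask
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        s = (s - delta) & mask
    return v0, v1
```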

  17. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  18. El uso de la triangulación en un estudio de detección de necesidades de formación permanente en profesorado no universitario de la Comunidad de Madrid. Using triangulation to assess continuing education teacher needs in Madrid (Spain

    Directory of Open Access Journals (Sweden)

    Coral González

    2009-01-01

    Full Text Available This article aims to highlight the importance of triangulation as a tool for comparing and validating information obtained from different sources and methods. To do so, it draws on the results of a study carried out in the Community of Madrid to determine the needs expressed by teachers with respect to the continuing education currently offered to them. These results come from the use of different data-collection modes as well as different data-analysis techniques, which makes them richer and more complex. Starting with a short introduction to the triangulation technique, we present the methods, sources and data analyses carried out, together with the main results and conclusions of the study.

  19. The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2017-07-01

    Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a-priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes a geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.
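
    The similarity measure at the heart of the matching strategy described above is normalized cross-correlation; a plain (unweighted) version is sketched below for orientation. The weighting, image pyramid and TIN-based search-space machinery of SETSM are not reproduced here.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized image patches, in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```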

  20. Proposals for the Operationalisation of the Discourse Theory of Laclau and Mouffe Using a Triangulation of Lexicometrical and Interpretative Methods

    Directory of Open Access Journals (Sweden)

    Georg Glasze

    2007-05-01

    Full Text Available The discourse theory of Ernesto LACLAU and Chantal MOUFFE brings together three elements: the FOUCAULTian notion of discourse, the (post- MARXist notion of hegemony, and the poststructuralist writings of Jacques DERRIDA and Roland BARTHES. Discourses are regarded as temporary fixations of differential relations. Meaning, i.e. any social "objectivity", is conceptualised as an effect of such a fixation. The discussion on an appropriate operationalisation of such a discourse theory is just beginning. In this paper, it is argued that a triangulation of two linguistic methods is appropriate to reveal temporary fixations: by means of corpus-driven lexicometric procedures as well as by the analysis of narrative patterns, the regularities of the linkage of elements can be analysed (for example, in diachronic comparisons. The example of a geographic research project shows how, in so doing, the historically contingent constitution of an international community and "world region" can be analysed. URN: urn:nbn:de:0114-fqs0702143

  1. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  2. Angles-Only Navigation: Position and Velocity Solution from Absolute Triangulation

    Science.gov (United States)

    2011-01-01

    contrast to the Kalman filter approach, the algorithm presented here does not require any previous estimate of position or motion, and is of closed... geocentric position vectors. Using two vectors derived from each such observation (see next section), a solution for a portion of the boat's track was... t)x0 describes the curvature of the path in the direction x0, which, for a geocentric coordinate system and /(t) < 0, will be toward the center of

  3. Computational Complexity of Combinatorial Surfaces

    NARCIS (Netherlands)

    Vegter, Gert; Yap, Chee K.

    1990-01-01

    We investigate the computational problems associated with combinatorial surfaces. Specifically, we present an algorithm (based on the Brahana-Dehn-Heegaard approach) for transforming the polygonal schema of a closed triangulated surface into its canonical form in O(n log n) time, where n is the

  4. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  5. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  6. 3D CENTRAL LINE EXTRACTION OF FOSSIL OYSTER SHELLS

    Directory of Open Access Journals (Sweden)

    A. Djuricic

    2016-06-01

    Full Text Available Photogrammetry provides a powerful tool to digitally document protected, inaccessible, and rare fossils. This saves manpower in relation to current documentation practice and makes the fragile specimens more available for paleontological analysis and public education. In this study, high resolution orthophoto (0.5 mm) and digital surface models (1 mm) are used to define fossil boundaries that are then used as an input to automatically extract fossil length information via central lines. In general, central lines are widely used in geosciences as they ease observation, monitoring and evaluation of object dimensions. Here, the 3D central lines are used in a novel paleontological context to study fossilized oyster shells with photogrammetric and LiDAR-obtained 3D point cloud data. 3D central lines of 1121 Crassostrea gryphoides oysters of various shapes and sizes were computed in the study. Central line calculation included: i) Delaunay triangulation between the fossil shell boundary points and formation of the Voronoi diagram; ii) extraction of Voronoi vertices and construction of a connected graph tree from them; iii) reduction of the graph to the longest possible central line via Dijkstra’s algorithm; iv) extension of longest central line to the shell boundary and smoothing by an adjustment of cubic spline curve; and v) integration of the central line into the corresponding 3D point cloud. The resulting longest path estimate for the 3D central line is a size parameter that can be applied in oyster shell age determination both in paleontological and biological applications. Our investigation evaluates ability and performance of the central line method to measure shell sizes accurately by comparing automatically extracted central lines with manually collected reference data used in paleontological analysis. Our results show that the automatically obtained central line length overestimated the manually collected reference by 1.5% in the test set, which
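
    For readers who want to see steps i)-iii) in miniature, the following is a hedged 2D sketch using SciPy: the Voronoi diagram (dual of the Delaunay triangulation) of a synthetic elliptical outline is computed, interior Voronoi vertices are linked into a graph, and the longest shortest path (graph diameter) found with Dijkstra's algorithm serves as the central line estimate. The elliptical stand-in boundary, the 2D simplification, and all names are assumptions; the published method additionally extends and spline-smooths the line in 3D.

```python
# Voronoi-skeleton sketch of the central line idea (steps i-iii), reduced to 2D.
import numpy as np
from scipy.spatial import Voronoi
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# i) synthetic boundary points of an elongated ellipse (stand-in for a shell outline)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
a, b = 10.0, 2.0
boundary = np.c_[a * np.cos(t), b * np.sin(t)]
vor = Voronoi(boundary)

# ii) keep Voronoi vertices that lie inside the outline and connect them into a graph
inside = (vor.vertices[:, 0] / a) ** 2 + (vor.vertices[:, 1] / b) ** 2 < 1.0
n = len(vor.vertices)
graph = lil_matrix((n, n))
for v0, v1 in vor.ridge_vertices:
    if v0 >= 0 and v1 >= 0 and inside[v0] and inside[v1]:
        w = np.linalg.norm(vor.vertices[v0] - vor.vertices[v1])
        graph[v0, v1] = graph[v1, v0] = w

# iii) longest shortest path (graph diameter) as the central line estimate
dist = dijkstra(graph.tocsr(), directed=False)
dist[~np.isfinite(dist)] = -1.0                    # ignore unreachable vertex pairs
i, j = np.unravel_index(np.argmax(dist), dist.shape)
print("central line length estimate:", dist[i, j], "(major-axis length:", 2 * a, ")")
```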

  7. Cart'Eaux: an automatic mapping procedure for wastewater networks using machine learning and data mining

    Science.gov (United States)

    Bailly, J. S.; Delenne, C.; Chahinian, N.; Bringay, S.; Commandré, B.; Chaumont, M.; Derras, M.; Deruelle, L.; Roche, M.; Rodriguez, F.; Subsol, G.; Teisseire, M.

    2017-12-01

    In France, local government institutions must establish a detailed description of wastewater networks. The information should be available, but it remains fragmented (different formats held by different stakeholders) and incomplete. In the "Cart'Eaux" project, a multidisciplinary team, including an industrial partner, develops a global methodology using Machine Learning and Data Mining approaches applied to various types of large data to recover information with the aim of mapping urban sewage systems for hydraulic modelling. Deep learning is first applied using a Convolutional Neural Network to localize manhole covers on 5 cm resolution aerial RGB images. The detected manhole covers are then automatically connected using a tree-shaped graph constrained by industry rules. Based on a Delaunay triangulation, connections are chosen to minimize a cost function depending on pipe length, slope and possible intersection with roads or buildings. A stochastic version of this algorithm is currently being developed to account for positional uncertainty and detection errors, and generate sets of probable networks. As more information is required for hydraulic modeling (slopes, diameters, materials, etc.), text data mining is used to extract network characteristics from data posted on the Web or available through governmental or specific databases. Using an appropriate list of keywords, the web is scoured for documents which are saved in text format. The thematic entities are identified and linked to the surrounding spatial and temporal entities. The methodology is developed and tested on two towns in southern France. The primary results are encouraging: 54% of manhole covers are detected with few false detections, enabling the reconstruction of probable networks. The data mining results are still being investigated. It is clear at this stage that getting numerical values on specific pipes will be challenging. Thus, when no information is found, decision rules will be used to
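
    To make the connection step concrete, here is a hedged sketch of the underlying idea: candidate pipes are the edges of a Delaunay triangulation of the detected manhole positions, and a tree is selected by minimizing a cost over those edges. The cost here is plain edge length; the project's cost function also involves slope and intersections with roads or buildings, and the stochastic variant is not shown.

```python
# Sketch of the network-reconstruction step: a minimum-cost tree over the
# edges of a Delaunay triangulation of synthetic manhole-cover positions.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)
manholes = rng.random((40, 2)) * 1000.0        # synthetic cover positions (m)

tri = Delaunay(manholes)
n = len(manholes)
cost = lil_matrix((n, n))
for simplex in tri.simplices:                  # collect Delaunay edges and their costs
    for i in range(3):
        u, v = simplex[i], simplex[(i + 1) % 3]
        cost[u, v] = cost[v, u] = np.linalg.norm(manholes[u] - manholes[v])

tree = minimum_spanning_tree(cost.tocsr())     # minimum-cost connected network
print("pipes in reconstructed network:", tree.nnz, "- total length (m):", tree.sum())
```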

  8. 3D Central Line Extraction of Fossil Oyster Shells

    Science.gov (United States)

    Djuricic, A.; Puttonen, E.; Harzhauser, M.; Mandic, O.; Székely, B.; Pfeifer, N.

    2016-06-01

    Photogrammetry provides a powerful tool to digitally document protected, inaccessible, and rare fossils. This saves manpower in relation to current documentation practice and makes the fragile specimens more available for paleontological analysis and public education. In this study, high resolution orthophoto (0.5 mm) and digital surface models (1 mm) are used to define fossil boundaries that are then used as an input to automatically extract fossil length information via central lines. In general, central lines are widely used in geosciences as they ease observation, monitoring and evaluation of object dimensions. Here, the 3D central lines are used in a novel paleontological context to study fossilized oyster shells with photogrammetric and LiDAR-obtained 3D point cloud data. 3D central lines of 1121 Crassostrea gryphoides oysters of various shapes and sizes were computed in the study. Central line calculation included: i) Delaunay triangulation between the fossil shell boundary points and formation of the Voronoi diagram; ii) extraction of Voronoi vertices and construction of a connected graph tree from them; iii) reduction of the graph to the longest possible central line via Dijkstra's algorithm; iv) extension of longest central line to the shell boundary and smoothing by an adjustment of cubic spline curve; and v) integration of the central line into the corresponding 3D point cloud. The resulting longest path estimate for the 3D central line is a size parameter that can be applied in oyster shell age determination both in paleontological and biological applications. Our investigation evaluates ability and performance of the central line method to measure shell sizes accurately by comparing automatically extracted central lines with manually collected reference data used in paleontological analysis. Our results show that the automatically obtained central line length overestimated the manually collected reference by 1.5% in the test set, which is deemed

  9. Barriers to energy efficiency in shipping: A triangulated approach to investigate the principal agent problem

    International Nuclear Information System (INIS)

    Rehmatulla, Nishatabbas; Smith, Tristan

    2015-01-01

    Energy efficiency is a key policy strategy to meet some of the challenges being faced today and to plan for a sustainable future. Numerous empirical studies in various sectors suggest that there are cost-effective measures that are available but not always implemented due to existence of barriers to energy efficiency. Several cost-effective energy efficient options (technologies for new and existing ships and operations) have also been identified for improving energy efficiency of ships. This paper is one of the first to empirically investigate barriers to energy efficiency in the shipping industry using a novel framework and multidisciplinary methods to gauge implementation of cost-effective measures, perception on barriers and observations of barriers. It draws on findings of a survey conducted of shipping companies, content analysis of shipping contracts and analysis of energy efficiency data. Initial results from these methods suggest the existence of the principal agent problem and other market failures and barriers that have also been suggested in other sectors and industries. Given this finding, policies to improve implementation of energy efficiency in shipping need to be carefully considered to improve their efficacy and avoid unintended consequences. -- Highlights: •We provide the first analysis of the principal agent problem in shipping. •We develop a framework that incorporates methodological triangulation. •Our results show the extent to which this barrier is observed and perceived. •The presence of the barrier has implications on the policy most suited to shipping

  10. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  11. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  12. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  13. An advanced analysis method of initial orbit determination with too short arc data

    Science.gov (United States)

    Li, Binzhe; Fang, Li

    2018-02-01

    This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations. As a result, classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method of initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method. A genetic algorithm is also used to add constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.

  14. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back propagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we use GA-BPN for image noise filtering. Firstly, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is utilized to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  15. Comparative investigation of micro-flaw models for the simulation of brittle fracture in rock

    CSIR Research Space (South Africa)

    Sellers, E

    1997-07-01

    Full Text Available can be covered by a set of Voronoi polygons or Delaunay tri- angles (Napier and Peirce 1995). A subset of the edges of these polygons is selected and designated as pre-existing ?aws with assigned strength an friction sliding properties. A speci?ed load... of incre- mental displacements were applied to the surface of a rectangular block to simulate compression tests have been performed to study the fracture mechanisms induced in random Voronoi and Delaunay tessellation patterns (Napier and Peirce 1995; Napier...

  16. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  17. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  18. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  19. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  20. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  1. L1 Use in EFL Classes with English-only Policy: Insights from Triangulated Data

    Directory of Open Access Journals (Sweden)

    Seyyed Hatam Tamimi Sa’d

    2015-06-01

    Full Text Available This study examines the role of the use of the L1 in EFL classes from the perspective of EFL learners. The triangulated data were collected using class observations, focus group semi-structured interviews and the learners’ written reports of their perceptions and attitudes in a purpose-designed questionnaire. The participants consisted of sixty male Iranian EFL learners who constituted three classes. The results indicated a strong tendency among the participants toward L1 and its positive effects on language learning; while only a minority of the learners favoured an English-only policy, the majority supported the judicious, limited and occasional use of the L1, particularly on the part of the teacher. The participants mentioned the advantages as well as the disadvantages of the use/non-use of the L1. While the major advantage and the main purpose of L1 use was said to be the clarification and intelligibility of instructions, grammatical and lexical items, the main advantages of avoiding it were stated as being the improvement of speaking and listening skills, maximizing learners’ exposure to English and their becoming accustomed to it. The study concludes that, overall and in line with the majority of the previous research studies, a judicious, occasional and limited use of the L1 is a better approach to take in EFL classes than to include or exclude it totally. In conclusion, a re-examination of the English-only policy and a reconsideration of the role of the L1 are recommended. Finally, the commonly held assumption that L1 is a hindrance and an impediment to the learners’ language learning is challenged.

  2. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergent speed and searching precision. Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  3. All roads lead to Rome

    DEFF Research Database (Denmark)

    Ottosen, Thorsten Jørgen; Vomlel, Jiri

    2012-01-01

    size triangulations. The search methods are made faster by efficient dynamic maintenance of the cliques of a graph. This problem was investigated by Stix, and in this paper we derive a new simple method based on the Bron-Kerbosch algorithm that compares favourably to Stix’ approach. The new approach...

  4. A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.

  5. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, which is unreadable and meaningless so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make it more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logic operation XOR. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so the data integrity is still ensured.
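
    As a minimal sketch of the described super-encryption, the following combines a monoalphabetic substitution with a byte-wise XOR against a repeating key. The keys, the uppercase-only alphabet handling and the helper names are illustrative assumptions, not the authors' exact scheme.

```python
# Minimal super-encryption sketch: monoalphabetic substitution followed by XOR.
import string

ALPHABET = string.ascii_uppercase
SUB_KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"        # a permutation of the alphabet
XOR_KEY = b"K3Y"                              # repeating XOR key

def mono_encrypt(text: str) -> str:
    return text.upper().translate(str.maketrans(ALPHABET, SUB_KEY))

def mono_decrypt(text: str) -> str:
    return text.translate(str.maketrans(SUB_KEY, ALPHABET))

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def super_encrypt(plaintext: str) -> bytes:
    return xor_bytes(mono_encrypt(plaintext).encode(), XOR_KEY)

def super_decrypt(ciphertext: bytes) -> str:
    return mono_decrypt(xor_bytes(ciphertext, XOR_KEY).decode())

msg = "DELAUNAY TRIANGULATION"
ct = super_encrypt(msg)
assert super_decrypt(ct) == msg               # round trip restores the plaintext
print(ct)
```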

  6. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.

  7. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine, a trigonometric function. In the algorithm, random individuals are created, as many as the number of search agents, with a uniform distribution for each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current situation closer to the target value. The solution space is narrowed by the golden section so that only the areas that are expected to give good results are scanned, instead of the whole solution space. In the tests performed, it is seen that Gold-SA gives better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, increasing the importance of this method by providing faster convergence.
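
    The following is a loose, heavily simplified sketch of a sine-driven, golden-section-narrowed population update in the spirit of the description above; the coefficients and the exact update rule are assumptions made for illustration, and the published Gold-SA rule should be taken from the paper.

```python
# Gold-SA-like sketch: sine-driven moves with fixed golden-section coefficients.
import numpy as np

def sphere(x):                                  # toy objective to minimize
    return float(np.sum(x ** 2))

def gold_sa_like(obj, dim=10, agents=30, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    tau = (np.sqrt(5.0) - 1.0) / 2.0            # golden ratio conjugate
    # golden-section points of [-pi, pi], used as fixed coefficients (assumption)
    x1 = -np.pi + 2.0 * np.pi * (1.0 - tau)
    x2 = -np.pi + 2.0 * np.pi * tau
    pop = rng.uniform(lb, ub, (agents, dim))
    best = min(pop, key=obj).copy()
    for _ in range(iters):
        r1 = rng.uniform(0.0, 2.0 * np.pi, (agents, dim))
        r2 = rng.uniform(0.0, np.pi, (agents, dim))
        pop = pop * np.abs(np.sin(r1)) - r2 * np.sin(r1) * np.abs(x1 * best - x2 * pop)
        pop = np.clip(pop, lb, ub)
        cand = min(pop, key=obj)
        if obj(cand) < obj(best):
            best = cand.copy()
    return best, obj(best)

best, value = gold_sa_like(sphere)
print(value)                                    # small value on this toy problem
```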

  8. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  9. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    KAUST Repository

    Onesto, Valentina; Cosentino, Carlo; Di Fabrizio, Enzo M.; Cesarelli, Mario; Amato, Francesco; Gentile, Francesco

    2016-01-01

    Neurons are specialized, electrically excitable cells which use electrical to chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many of neurons in a grid may modify and perhaps improve the information quality, in contrast to few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chips. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of not-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.
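
    A small sketch of the network construction described here: neurons scattered in the plane are connected along the edges of their Delaunay triangulation, which keeps the average number of synapses per neuron roughly constant as density grows. The integrate-and-fire dynamics and the information-theoretic analysis are not reproduced; all names are illustrative.

```python
# Build the Delaunay connectivity of randomly placed "neurons" and show that
# the mean degree (synapses per neuron) stays roughly constant with density.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_adjacency(points: np.ndarray):
    """Return the set of undirected edges (i, j) of the Delaunay triangulation."""
    tri = Delaunay(points)
    edges = set()
    for a, b, c in tri.simplices:
        edges |= {tuple(sorted(e)) for e in ((a, b), (b, c), (c, a))}
    return edges

for n_cells in (50, 200, 800):                     # increasing cell density
    rng = np.random.default_rng(42)
    pts = rng.random((n_cells, 2))
    edges = delaunay_adjacency(pts)
    print(n_cells, "cells ->", round(2 * len(edges) / n_cells, 2),
          "synapses per neuron on average")
```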

  10. A generation-attraction model for renewable energy flows in Italy: A complex network approach

    Science.gov (United States)

    Valori, Luca; Giannuzzi, Giovanni Luca; Facchini, Angelo; Squartini, Tiziano; Garlaschelli, Diego; Basosi, Riccardo

    2016-10-01

    In recent years in Italy, the trend of electricity demand and the need to connect a large number of renewable energy power generators to the power grid have driven the development of a novel type of energy transmission/distribution infrastructure. The Italian Transmission System Operator (TSO) and the Distribution System Operator (DSO) worked on a new infrastructural model, based on electronic meters and information technology. In pursuing this objective it is of crucial importance to understand how ever larger shares of renewable energy can be fully integrated, providing a constant and reliable energy background over space and time. This is particularly true for intermittent sources such as photovoltaic installations, due to their fine-grained distribution across the Country. In this work we use an over-simplified model to characterize the Italian power grid as a graph whose nodes are Italian municipalities and whose edges cross the administrative boundaries between a selected municipality and its first neighbours, following a Delaunay triangulation. Our aim is to describe the power flow as a diffusion process over a network, and using open data on the solar irradiation at ground level, we estimate the production of photovoltaic energy in each node. An attraction index was also defined using demographic data, in accordance with average per capita energy consumption data. The available energy on each node was calculated by finding the stationary state of a generation-attraction model.

  11. Three dimensional modelling for the target asteroid of HAYABUSA

    Science.gov (United States)

    Demura, H.; Kobayashi, S.; Asada, N.; Hashimoto, T.; Saito, J.

    The Hayabusa program is the first sample return mission of Japan. It was launched on May 9, 2003, and will arrive at the target asteroid 25143 Itokawa in June 2005. The spacecraft has three optical navigation cameras: two wide-angle ones and a telescopic one. The telescope with a filter wheel was named AMICA (Asteroid Multiband Imaging CAmera). We are going to model the shape of the target asteroid with this telescope; expected resolution: 1 m/pixel at 10 km distance, field of view: 5.7 squared degrees, MPP-type CCD with 1024 x 1000 pixels. Because the size of Hayabusa is about 1 x 1 x 1 m, our goal is shape modeling with a precision of about 1 m on the basis of a camera system that scans via the rotation of the asteroid. This image-based modeling requires sequential images via AMICA and a history of the distance between the asteroid and Hayabusa provided by a Laser Range Finder. We established a system of hierarchically recursive search with sub-pixel matching of Ground Control Points, which are picked up with the SUSAN operator. The matched dataset is restored with a restriction of epipolar geometry, and the obtained group of three-dimensional points is converted to a polygon model with Delaunay triangulation. The current status of our development of the shape modeling is presented.

  12. Non Machinable Volume Calculation Method for 5-Axis Roughing Based on Faceted Models through Closed Bounded Area Evaluation

    Directory of Open Access Journals (Sweden)

    Kiswanto Gandjar

    2017-01-01

    Full Text Available The increase in the volume of rough machining in the CBV area is one of the indicators of increased efficiency of the machining process. Normally, this area is not subject to the rough machining process, so that the volume of the remaining material is still large. With the addition of CC points and tool orientation to the CBV area on a complex surface, the finishing will be faster because the volume of the excess material in this process will be reduced. This paper presents a method for calculating the volume of the parts on which no further machining is possible, particularly for rough machining of a complex object. By comparing the total volume of raw material and the volume of the machining area, the volume of residual material, on which the machining process cannot be done, can be determined. The volume of the total machining area accounts for machining of both the CBV and non-CBV areas, using a Delaunay triangulation of the triangles that cover the machining and CBV areas. The volume is calculated using the Divergence (Gaussian) theorem, focusing on the direction of the normal vector of each triangle. This method can be used as an alternative for selecting the rough machining method with the minimum non-machinable volume, so that effectiveness can be achieved in the machining process.
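
    As an illustration of the volume computation via the divergence (Gauss) theorem, the sketch below sums, over a closed and consistently outward-oriented triangle mesh, the signed volumes of the tetrahedra formed by each triangle and the origin; the unit-cube mesh is a stand-in for the triangulated machining and CBV surfaces.

```python
# Volume of a closed triangle mesh by the divergence theorem: each outward-
# oriented triangle contributes the signed volume of its tetrahedron with the origin.
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Signed volume of a closed triangle mesh with outward-oriented faces."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return float(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum() / 6.0)

# Unit cube triangulated into 12 outward-oriented triangles.
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
F = np.array([[0, 2, 1], [0, 3, 2],      # bottom (z=0), normal -z
              [4, 5, 6], [4, 6, 7],      # top (z=1), normal +z
              [0, 1, 5], [0, 5, 4],      # front (y=0), normal -y
              [2, 3, 7], [2, 7, 6],      # back (y=1), normal +y
              [1, 2, 6], [1, 6, 5],      # right (x=1), normal +x
              [0, 4, 7], [0, 7, 3]])     # left (x=0), normal -x
print(mesh_volume(V, F))                 # 1.0 for the unit cube
```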

  13. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    KAUST Repository

    Onesto, Valentina

    2016-05-10

    Neurons are specialized, electrically excitable cells which use electrical to chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many of neurons in a grid may modify and perhaps improve the information quality, in contrast to few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chips. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of not-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.

  14. Virtual hydrology observatory: an immersive visualization of hydrology modeling

    Science.gov (United States)

    Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas

    2009-02-01

    The Virtual Hydrology Observatory will provide students with the ability to observe the integrated hydrology simulation with an instructional interface by using a desktop-based or immersive virtual reality setup. It is the goal of the virtual hydrology observatory application to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model: Weather Research and Forecasting (WRF), and the hydrology model: Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). Both the output from WRF and GSSHA models are then used to generate the final visualization components of the Virtual Hydrology Observatory. The various visualization data processing techniques provided by VTK are 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated into the simulation data using VRFlowVis and VR Juggler software toolkit. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive and real-time 3D interaction experience; while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE™-like system is used to run the Virtual Hydrology Observatory to provide the students with a fully immersive experience.

  15. Optimization of deformation monitoring networks using finite element strain analysis

    Science.gov (United States)

    Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.

    2018-04-01

    An optimal design of a geodetic network can fulfill the requested precision and reliability of the network, and decrease the expenses of its execution by removing unnecessary observations. The role of an optimal design is highlighted in deformation monitoring networks due to the repeatability of these networks. The core design problem is how to define precision and reliability criteria. This paper proposes a solution, where the precision criterion is defined based on the precision of deformation parameters, i.e. the precision of strain and differential rotations. A strain analysis can be performed to obtain some information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of deformation parameters in each element, the precision criterion of displacement detection at each network point is then determined. The developed criterion is implemented to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddled the Tornquist zone, which is one of the most active faults in southern Sweden. The numerical results show that 17 out of all 21 possible GPS baseline observations are sufficient to detect a minimum 3 mm displacement at each network point.
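
    To illustrate the per-element computation the criterion is built on, the sketch below derives the constant displacement gradient of a single tetrahedral (3D Delaunay) element from displacements at its four vertices and splits it into a symmetric strain tensor and an antisymmetric differential rotation. The vertex values are synthetic and the subsequent network optimization is not reproduced.

```python
# Strain and rotation of one tetrahedral finite element from vertex displacements.
import numpy as np

def element_strain(vertices: np.ndarray, displacements: np.ndarray):
    """Strain and rotation tensors of one tetrahedral element (4x3 inputs)."""
    dX = vertices[1:] - vertices[0]              # 3x3 matrix of edge vectors
    dU = displacements[1:] - displacements[0]    # corresponding displacement differences
    grad = np.linalg.solve(dX, dU).T             # displacement gradient G, with dU = dX @ G.T
    strain = 0.5 * (grad + grad.T)               # symmetric part: strain tensor
    rotation = 0.5 * (grad - grad.T)             # antisymmetric part: differential rotations
    return strain, rotation

# Synthetic element: unit tetrahedron with small prescribed vertex displacements.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
disp = np.array([[0.0, 0.0, 0.0], [2e-3, 0.0, 0.0], [1e-3, 1e-3, 0.0], [0.0, 0.0, -5e-4]])
strain, rotation = element_strain(verts, disp)
print("strain tensor:\n", strain)
print("rotation tensor:\n", rotation)
```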

  16. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    Directory of Open Access Journals (Sweden)

    Valentina Onesto

    2016-01-01

    Full Text Available Neurons are specialized, electrically excitable cells which use electrical to chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many of neurons in a grid may modify and perhaps improve the information quality, in contrast to few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chips. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of not-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.

  17. Interactive reconstructions of cranial 3D implants under MeVisLab as an alternative to commercial planning software.

    Directory of Open Access Journals (Sweden)

    Jan Egger

    Full Text Available In this publication, the interactive planning and reconstruction of cranial 3D implants under the medical prototyping platform MeVisLab is introduced as an alternative to commercial planning software. In doing so, a MeVisLab prototype consisting of a customized data-flow network and a custom C++ module was set up. As a result, the Computer-Aided Design (CAD) software prototype guides a user through the whole workflow to generate an implant. The workflow begins with loading and mirroring the patient's head for an initial curvature of the implant. Then, the user can perform an additional Laplacian smoothing, followed by a Delaunay triangulation. The result is an aesthetic-looking and well-fitting 3D implant, which can be stored in a CAD file format, e.g. STereoLithography (STL), for 3D printing. The 3D printed implant can finally be used for an in-depth pre-surgical evaluation or even as a real implant for the patient. In a nutshell, our research and development shows that a customized MeVisLab software prototype can be used as an alternative to complex commercial planning software, which may also not be available in every clinic. Finally, it encourages us not to confine ourselves to the available commercial software, but to look for other options that might improve the workflow.
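
    As a generic illustration of the Laplacian smoothing step in this workflow, the sketch below repeatedly moves each vertex towards the mean of its mesh neighbours; it is not the MeVisLab data-flow network or the C++ module itself, and the toy polyline and parameter values are assumptions.

```python
# Simple iterative Laplacian smoothing: relax each vertex towards its neighbours' mean.
import numpy as np

def laplacian_smooth(vertices, neighbors, iterations=10, lam=0.5):
    """neighbors[i] is the list of vertex indices adjacent to vertex i."""
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        means = np.array([v[nb].mean(axis=0) if nb else v[i]
                          for i, nb in enumerate(neighbors)])
        v += lam * (means - v)                 # move each vertex towards its neighbourhood mean
    return v

# Toy example: a noisy open polyline; interior points are smoothed, endpoints stay fixed.
pts = [[0, 0, 0], [1, 0.8, 0], [2, -0.6, 0], [3, 0.9, 0], [4, 0, 0]]
nbrs = [[], [0, 2], [1, 3], [2, 4], []]        # endpoints have no neighbours -> fixed
print(laplacian_smooth(pts, nbrs))
```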

  18. Interactive reconstructions of cranial 3D implants under MeVisLab as an alternative to commercial planning software

    Science.gov (United States)

    Egger, Jan; Gall, Markus; Tax, Alois; Ücal, Muammer; Zefferer, Ulrike; Li, Xing; von Campe, Gord; Schäfer, Ute; Schmalstieg, Dieter; Chen, Xiaojun

    2017-01-01

    In this publication, the interactive planning and reconstruction of cranial 3D implants under the medical prototyping platform MeVisLab is introduced as an alternative to commercial planning software. In doing so, a MeVisLab prototype consisting of a customized data-flow network and a custom C++ module was set up. As a result, the Computer-Aided Design (CAD) software prototype guides a user through the whole workflow to generate an implant. The workflow begins with loading and mirroring the patient's head for an initial curvature of the implant. Then, the user can perform an additional Laplacian smoothing, followed by a Delaunay triangulation. The result is an aesthetic-looking and well-fitting 3D implant, which can be stored in a CAD file format, e.g. STereoLithography (STL), for 3D printing. The 3D printed implant can finally be used for an in-depth pre-surgical evaluation or even as a real implant for the patient. In a nutshell, our research and development shows that a customized MeVisLab software prototype can be used as an alternative to complex commercial planning software, which may also not be available in every clinic. Finally, it encourages us not to confine ourselves to the available commercial software, but to look for other options that might improve the workflow. PMID:28264062

  19. Signatures of the Primordial Universe from Its Emptiness: Measurement of Baryon Acoustic Oscillations from Minima of the Density Field.

    Science.gov (United States)

    Kitaura, Francisco-Shu; Chuang, Chia-Hsun; Liang, Yu; Zhao, Cheng; Tao, Charling; Rodríguez-Torres, Sergio; Eisenstein, Daniel J; Gil-Marín, Héctor; Kneib, Jean-Paul; McBride, Cameron; Percival, Will J; Ross, Ashley J; Sánchez, Ariel G; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magana, Mariana; Zhao, Gong-Bo

    2016-04-29

    Sound waves from the primordial fluctuations of the Universe imprinted in the large-scale structure, called baryon acoustic oscillations (BAOs), can be used as standard rulers to measure the scale of the Universe. These oscillations have already been detected in the distribution of galaxies. Here we propose to measure BAOs from the troughs (minima) of the density field. Based on two sets of accurate mock halo catalogues with and without BAOs in the seed initial conditions, we demonstrate that the BAO signal cannot be obtained from the clustering of classical disjoint voids, but it is clearly detected from overlapping voids. The latter represent an estimate of all troughs of the density field. We compute them from the empty circumsphere centers constrained by tetrahedra of galaxies using Delaunay triangulation. Our theoretical models based on an unprecedented large set of detailed simulated void catalogues are remarkably well confirmed by observational data. We use the largest recently publicly available sample of luminous red galaxies from SDSS-III BOSS DR11 to unveil for the first time a >3σ BAO detection from voids in observations. Since voids are nearly isotropically expanding regions, their centers represent the most quiet places in the Universe, keeping in mind the cosmos origin and providing a new promising window in the analysis of the cosmological large-scale structure from galaxy surveys.
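
    A hedged sketch of how such trough centres can be obtained in practice: tracer points are Delaunay-triangulated and the centre of each tetrahedron's empty circumsphere is taken as a candidate void centre. The synthetic tracer points stand in for the galaxy catalogue, and the clustering and BAO measurement themselves are not reproduced.

```python
# Candidate void centres as circumsphere centres of Delaunay tetrahedra.
import numpy as np
from scipy.spatial import Delaunay

def circumcenters(points: np.ndarray, simplices: np.ndarray) -> np.ndarray:
    """Circumsphere centres of the tetrahedra given by `simplices`."""
    centres = []
    for tet in simplices:
        p = points[tet]
        A = 2.0 * (p[1:] - p[0])                           # 3x3 linear system
        b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
        centres.append(np.linalg.solve(A, b))              # equidistant point
    return np.array(centres)

rng = np.random.default_rng(3)
tracers = rng.random((500, 3)) * 100.0          # toy "galaxy" positions in a 100-unit box
tri = Delaunay(tracers)
voids = circumcenters(tracers, tri.simplices)
print(voids.shape[0], "candidate void centres from", len(tracers), "tracers")
```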

  20. Conflict Detection and Resolution for Future Air Transportation Management

    Science.gov (United States)

    Krozel, Jimmy; Peters, Mark E.; Hunter, George

    1997-01-01

    With a Free Flight policy, the emphasis for air traffic control is shifting from active control to passive air traffic management with a policy of intervention by exception. Aircraft will be allowed to fly user preferred routes, as long as safety Alert Zones are not violated. If there is a potential conflict, two (or more) aircraft must be able to arrive at a solution for conflict resolution without controller intervention. Thus, decision aid tools are needed in Free Flight to detect and resolve conflicts, and several problems must be solved to develop such tools. In this report, we analyze and solve problems of proximity management, conflict detection, and conflict resolution under a Free Flight policy. For proximity management, we establish a system based on Delaunay Triangulations of aircraft at constant flight levels. Such a system provides a means for analyzing the neighbor relationships between aircraft and the nearby free space around air traffic which can be utilized later in conflict resolution. For conflict detection, we perform both 2-dimensional and 3-dimensional analyses based on the penetration of the Protected Airspace Zone. Both deterministic and non-deterministic analyses are performed. We investigate several types of conflict warnings including tactical warnings prior to penetrating the Protected Airspace Zone, methods based on the reachability overlap of both aircraft, and conflict probability maps to establish strategic Alert Zones around aircraft.
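
    To illustrate the proximity-management idea, the sketch below Delaunay-triangulates synthetic aircraft positions at one flight level and reads off, for each aircraft, the neighbours it shares a triangle edge with, which are the natural candidates for conflict screening. The alert-zone geometry and resolution logic of the report are not reproduced.

```python
# Delaunay-based proximity management: neighbours of each aircraft at one flight level.
import numpy as np
from scipy.spatial import Delaunay
from collections import defaultdict

rng = np.random.default_rng(7)
aircraft = rng.random((15, 2)) * 200.0          # synthetic x, y positions (nautical miles)

tri = Delaunay(aircraft)
neighbours = defaultdict(set)
for a, b, c in tri.simplices:                   # share-an-edge relation from the triangulation
    for u, v in ((a, b), (b, c), (c, a)):
        neighbours[u].add(v)
        neighbours[v].add(u)

for i in sorted(neighbours):
    dists = {int(j): round(float(np.linalg.norm(aircraft[i] - aircraft[j])), 1)
             for j in neighbours[i]}
    print(f"aircraft {int(i)}: screen against {dists}")
```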

  1. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where the algorithms can be applied is shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the

  2. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  3. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  4. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of a data association algorithm for simultaneous localization and mapping in determining the route of unmanned aerial vehicles (UAVs). Currently, these vehicles are already widely used, but they are mainly controlled by a remote operator. An urgent task is to develop a control system that allows for autonomous flight. The SLAM (simultaneous localization and mapping) algorithm, which allows one to predict the location, speed, the ratio of flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies to achieve truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. The data association for the SLAM algorithm is meant to establish a matching set between observed landmarks and landmarks in the state vector. The ant algorithm is one of the widely used optimization algorithms with positive feedback and the ability to search in parallel, so the algorithm is suitable for solving the data association problem for SLAM. But the traditional ant algorithm easily falls into a local optimum in the process of finding routes. Random perturbations are added in the process of updating the global pheromone to avoid local optima. Setting limits on the pheromone along the route can increase the search space with a reasonable amount of calculation for finding the optimal route. The paper proposes an algorithm of local data association for the SLAM algorithm based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks with the possibility of association by the criterion of individual compatibility (IC). The second stage defines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and

  5. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  6. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  7. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and the application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], and an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].

  8. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in Mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The challenge in such a multimodal optimization problem is to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm, etc. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares well with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the artificial bee colony algorithm and the BFGS algorithm to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point. In the second step, the point obtained in the first step is used as the initial point of the BFGS algorithm. The results show that the hybrid method can overcome the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method cannot work well.
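
    The two-stage idea can be sketched as follows: a coarse population search (a much simplified stand-in for the full ABC algorithm) supplies a starting point, which SciPy's BFGS implementation then refines. Function names and parameter values are illustrative assumptions.

```python
# Two-stage hybrid sketch: coarse population search, then BFGS refinement.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):                               # multimodal test function
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def coarse_population_search(obj, dim=2, bees=40, iters=200, lb=-5.12, ub=5.12, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (bees, dim))
    best = min(pop, key=obj).copy()
    for _ in range(iters):
        # "employed bee" style move: perturb each solution towards a random partner
        partners = pop[rng.integers(bees, size=bees)]
        trial = np.clip(pop + rng.uniform(-1, 1, pop.shape) * (pop - partners), lb, ub)
        improved = np.array([obj(t) < obj(p) for t, p in zip(trial, pop)])
        pop[improved] = trial[improved]
        cand = min(pop, key=obj)
        if obj(cand) < obj(best):
            best = cand.copy()
    return best

x0 = coarse_population_search(rastrigin)        # step 1: global exploration
result = minimize(rastrigin, x0, method="BFGS") # step 2: local refinement
print("start:", x0, "-> refined:", result.x, "value:", result.fun)
```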

  9. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases. If such a direction is not generated, it will remain in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. The simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
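
    A small sketch of the modified move described above: the brightest firefly samples several random unit directions and moves only along one that improves the objective, otherwise staying put. The step size and the number of trial directions are illustrative assumptions.

```python
# Modified move for the brightest firefly: try random directions, keep only improvements.
import numpy as np

def best_firefly_move(x_best, objective, n_directions=8, step=0.1, seed=None):
    rng = np.random.default_rng(seed)
    best_val = objective(x_best)
    best_move = x_best
    for _ in range(n_directions):
        d = rng.normal(size=x_best.shape)
        d /= np.linalg.norm(d)                  # unit random direction
        trial = x_best + step * d
        if objective(trial) < best_val:         # minimization: "brighter" = lower value
            best_val, best_move = objective(trial), trial
    return best_move                            # unchanged if no direction improves

sphere = lambda x: float(np.sum(x ** 2))
x = np.array([0.8, -0.5])
for _ in range(50):
    x = best_firefly_move(x, sphere)
print(x, sphere(x))                             # moves towards the origin on this toy problem
```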

  10. Insights from triangulation of two purchase choice elicitation methods to predict social decision making in healthcare.

    Science.gov (United States)

    Whitty, Jennifer A; Rundle-Thiele, Sharyn R; Scuffham, Paul A

    2012-03-01

    Discrete choice experiments (DCEs) and the Juster scale are accepted methods for the prediction of individual purchase probabilities. Nevertheless, these methods have seldom been applied to a social decision-making context. To gain an overview of social decisions for a decision-making population through data triangulation, these two methods were used to understand purchase probability in a social decision-making context. We report an exploratory social decision-making study of pharmaceutical subsidy in Australia. A DCE and selected Juster scale profiles were presented to current and past members of the Australian Pharmaceutical Benefits Advisory Committee and its Economic Subcommittee. Across 66 observations derived from 11 respondents for 6 different pharmaceutical profiles, there was a small overall median difference of 0.024 in the predicted probability of public subsidy (p = 0.003), with the Juster scale predicting the higher likelihood. While consistency was observed at the extremes of the probability scale, the funding probability differed over the mid-range of profiles. There was larger variability in the DCE than Juster predictions within each individual respondent, suggesting the DCE is better able to discriminate between profiles. However, large variation was observed between individuals in the Juster scale but not DCE predictions. It is important to use multiple methods to obtain a complete picture of the probability of purchase or public subsidy in a social decision-making context until further research can elaborate on our findings. This exploratory analysis supports the suggestion that the mixed logit model, which was used for the DCE analysis, may fail to adequately account for preference heterogeneity in some contexts.

  11. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality ... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.

  12. A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization

    Directory of Open Access Journals (Sweden)

    Soroor Sarafrazi

    2015-07-01

    Full Text Available It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called “Kepler”, inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to provide much stronger specialization in intensification and/or diversification. The performance of GSA–Kepler is evaluated by applying it to 14 benchmark functions with 20–1000 dimensions and the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.

  13. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  14. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a block cipher: a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or for applications that require changing the secret key frequently. In this paper a new key and S-box generation process is developed based on the Self-Synchronizing Stream Cipher (SSS) algorithm, whose key generation process is modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and a reasonably low amount of memory, which enhances the algorithm and makes it usable in a wider range of applications.

  15. Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics

    International Nuclear Information System (INIS)

    Novotny, M.A.

    1995-01-01

    A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms

  16. Long-term versus short-term deformation of the meizoseismal area of the 2008 Achaia-Elia (MW 6.4) earthquake in NW Peloponnese, Greece: Evidence from historical triangulation and morphotectonic data

    Science.gov (United States)

    Stiros, Stathis; Moschas, Fanis; Feng, Lujia; Newman, Andrew

    2013-04-01

    The deformation of the meizoseismal area of the 2008 Achaia-Elia (MW 6.4) earthquake in NW Peloponnese, the first significant strike-slip earthquake in continental Greece, was examined on two time scales: 10² years, based on the analysis of high-accuracy historical triangulation data describing shear, and 10⁵-10⁶ years, based on the analysis of the hydrographic network of the area for signs of streams offset by faulting. Our study revealed pre-seismic accumulation of shear strain of the order of 0.2 μrad/year in the study area, consistent with recent GPS evidence, but no signs of significant strike-slip-induced offsets in the hydrographic network. These results confirm the hypothesis that the 2008 fault, which did not reach the surface and was not associated with significant seismic ground deformation, probably because a surface flysch layer filters high-strain events, was an immature or a dormant, recently activated fault. This fault, about 150 km long and discordant to the morphotectonic trends of the area, seems first to contain segments which have progressively reactivated in a specific direction in the last 20 years, reminiscent of the North Anatolian Fault, and second to limit a 150 km wide (recent?) shear zone in the internal part of the arc, in a region mostly dominated by thrust faulting and strong destructive earthquakes. Highlights: deformation of the first main strike-slip fault in continental Greece is analyzed; triangulation data show preseismic shear, while the hydrographic network shows no previous faulting; surface shear deformation occurs only at low strain rates; the fault is an immature or reactivated dormant strike-slip fault, with gradual oriented rupturing; there is interplay between shear and thrusting along the arc.

  17. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of the K-shortest paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to the urban traffic network model established by the node-expanding method, can conveniently realize K-shortest paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repeats, which clearly indicates the superiority of the algorithm over conventional ones. Not only does it offer better parallelism, the algorithm also prevents the premature convergence phenomenon that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the aforementioned algorithm.

  18. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    The Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  19. Optical profilometer using laser based conical triangulation for inspection of inner geometry of corroded pipes in cylindrical coordinates

    Science.gov (United States)

    Buschinelli, Pedro D. V.; Melo, João. Ricardo C.; Albertazzi, Armando; Santos, João. M. C.; Camerini, Claudio S.

    2013-04-01

    An axis-symmetrical optical laser triangulation system was developed by the authors to measure the inner geometry of long pipes used in the oil industry. It has a special optical configuration able to acquire shape information of the inner geometry of a section of a pipe from a single image frame. A collimated laser beam is pointed at the tip of a 45° conical mirror. The laser light is reflected in such a way that a radial light sheet is formed, intercepts the inner geometry, and forms a bright laser line on a section of the inspected pipe. A camera acquires the image of the laser line through a wide-angle lens. An odometer-based triggering system is used to trigger the camera to acquire a set of equally spaced images at high speed while the device is moved along the pipe's axis. Image processing is done in real time (between image acquisitions) thanks to the use of parallel computing technology. The measured geometry is analyzed to identify corrosion damage. The measured geometry and results are graphically presented using virtual reality techniques and devices such as 3D glasses and head-mounted displays. The paper describes the measurement principles, calibration strategies, and laboratory evaluation of the developed device, as well as a practical example of a corroded pipe used in an industrial gas production plant.

  20. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
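
    A minimal sketch, assuming a standard adaptive FIR system-identification setup, of the three-level (clipped) update described above: the regressor entering the weight update is quantized to {-1, 0, +1} by a threshold. The filter length, step size, threshold and toy system below are illustrative, not taken from the paper.

```python
# Hedged sketch of an LMS-type filter whose weight update uses a
# three-level quantisation of the input regressor (clipped update).
import numpy as np

def three_level(x, threshold):
    return np.where(np.abs(x) > threshold, np.sign(x), 0.0)

def mclms(x, d, n_taps=8, mu=0.01, threshold=0.5):
    w = np.zeros(n_taps)
    y = np.zeros_like(d)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]           # current regressor
        y[n] = w @ u
        e = d[n] - y[n]                             # a-priori error
        w = w + mu * e * three_level(u, threshold)  # clipped weight update
    return w, y

# Toy identification of an unknown 8-tap system driven by white noise.
rng = np.random.default_rng(0)
x = rng.normal(size=20000)
h = rng.normal(size=8)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
w, _ = mclms(x, d)
print(np.round(w, 2))
print(np.round(h, 2))   # estimated weights should approach the true taps
```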

  1. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Full Text Available Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  2. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP, resp. the FBP algorithm are defined and proved for: (1) single output neural networks in case of training patterns with different targets; and (2) multiple output neural networks in case of training patterns with equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning) establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP rather than the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas of adaptive and adaptable interactive systems, data mining, etc. applications.

  3. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one of the new swarm-based metaheuristic algorithms, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and since then has been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  4. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
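
    For concreteness, the recursion mentioned in the abstract can be written out for the simplest textbook case of a single marked element in an unsorted database of size $N$ (a standard result, not a quotation from the talk). If $a_k$ denotes the amplitude of the marked element and $b_k$ the common amplitude of each unmarked element after $k$ Grover iterations, then

    \[
    a_{k+1} = \frac{N-2}{N}\,a_k + \frac{2(N-1)}{N}\,b_k, \qquad
    b_{k+1} = -\frac{2}{N}\,a_k + \frac{N-2}{N}\,b_k,
    \]

    with $a_0 = b_0 = 1/\sqrt{N}$. The closed-form solution is $a_k = \sin\bigl((2k+1)\theta\bigr)$ and $b_k = \cos\bigl((2k+1)\theta\bigr)/\sqrt{N-1}$, where $\sin\theta = 1/\sqrt{N}$, so the amplitude of the marked element peaks after roughly $(\pi/4)\sqrt{N}$ iterations.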

  5. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  6. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced into the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to the infinite tree data structure, and higher efficiency can be expected. The paper focuses on the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.

  7. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From the data the algorithm calculates the PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
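
    A minimal sketch of the computation the thesis visualizes: iterate the PageRank update over a small link graph until the values stop changing. The damping factor 0.85, the tolerance, and the toy graph are arbitrary choices, not taken from the thesis.

```python
# Hedged sketch of iterative PageRank over an explicit link list.
import numpy as np

def pagerank(links, d=0.85, tol=1e-8, max_iter=100):
    """links[i] = list of pages that page i links to (0-based indices)."""
    n = len(links)
    pr = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = np.full(n, (1.0 - d) / n)
        for i, outs in enumerate(links):
            if outs:                         # distribute rank over out-links
                new[np.array(outs)] += d * pr[i] / len(outs)
            else:                            # dangling page: spread evenly
                new += d * pr[i] / n
        if np.abs(new - pr).sum() < tol:     # stop when values stabilise
            return new
        pr = new
    return pr

# Example graph: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0
print(pagerank([[1, 2], [2], [0]]))
```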

  8. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and the further improvement of people's living standards, there is an urgent need for a positioning technology able to adapt to complex situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning technology offers system stability, small error and low cost, and its location algorithm is the focus of this study. This article analyzes RFID targeting methods and algorithms layer by layer. First, several common basic RFID methods are introduced; second, a higher-accuracy network location method is discussed; finally, the LANDMARC algorithm is described. Through this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, the algorithms of RFID location technology are summarized, pointing out their deficiencies, putting forward requirements for follow-up study, and envisioning a better future for RFID positioning technology.

  9. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  10. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  11. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. The goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user, e.g., in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm, and sketch how it can be optimized to meet required response time. Performance will be compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g. optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
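
    A minimal sketch of the matching rule described above, under the assumption that each typed token must be a prefix of a distinct word of the candidate term; the tiny vocabulary stands in for SNOMED CT, and none of the response-time optimizations discussed in the paper are attempted.

```python
# Hedged sketch of multi-prefix matching: a term is suggested when every
# word typed by the user is a prefix of some (distinct) word of the term.
def multi_prefix_match(query, term):
    words = term.lower().split()
    for q in query.lower().split():
        hit = next((i for i, w in enumerate(words) if w.startswith(q)), None)
        if hit is None:
            return False
        words.pop(hit)          # each term word may satisfy one query word
    return True

vocabulary = ["optic nerve meningioma", "optic neuritis", "nerve sheath tumour"]
print([t for t in vocabulary if multi_prefix_match("opt ner me", t)])
# -> ['optic nerve meningioma']
```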

  12. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives, with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not importantly increase with the increase of the size of the maze. These findings suggest that a systematic effort of harvesting the natural space searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  13. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms performances.

  14. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry.

  15. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS)

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    Full Text Available The FXLMS algorithm, used extensively in active noise control (ANC, exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from finite resolution of sampled systems. Experimental control results using the original secondary path model, and a modified secondary path model for both the previous implementation of EE-FXLMS and the genetic algorithm implementation are compared.

  16. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler’s algorithm, Kraitchik’s, and variants of Pollard’s algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard’s rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard’s rho algorithm.
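
    A minimal sketch of the deterministic baseline mentioned above, Pollard's rho with Floyd cycle detection and the iteration f(x) = x² + c mod n; the toy semiprime below is illustrative only.

```python
# Hedged sketch of Pollard's rho factoring (Floyd cycle detection).
from math import gcd

def pollards_rho(n, c=1):
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n            # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else None       # on failure, retry with another c

n = 10403                              # 101 * 103, a tiny "RSA-like" modulus
p = pollards_rho(n)
print(p, n // p)
```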

  17. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
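
    A minimal sketch of the opposition-based learning step that OAFWA adds, shown in isolation from the fireworks machinery: for each candidate x in [lo, hi]^d the opposite point lo + hi - x is also evaluated, and the better half of the combined set is kept. The population size, bounds and sphere objective are arbitrary choices.

```python
# Hedged sketch of one opposition-based learning (OBL) step.
import numpy as np

def obl_step(population, objective, lo, hi):
    opposites = lo + hi - population               # opposite of every candidate
    both = np.vstack([population, opposites])
    scores = np.apply_along_axis(objective, 1, both)
    best = np.argsort(scores)[: len(population)]   # keep the better half
    return both[best]

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(6, 2))
sphere = lambda v: float(np.sum(v**2))
print(obl_step(pop, sphere, -5.0, 5.0))
```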

  18. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

    Full Text Available The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with design ideas from neural networks, genetic algorithms and clustering analysis algorithms. The OD algorithm is divided into two sub-algorithms, namely the opposite degree numerical computation (OD-NC) algorithm and the opposite degree classification computation (OD-CC) algorithm.

  19. A triangulation approach to the identification of acute sector nurses' training needs for formal nurse practitioner status.

    Science.gov (United States)

    Hicks, C; Hennessy, D

    1998-01-01

    The current confusion surrounding the definition and role function of the nurse practitioner (NP) has created a situation in which advanced clinical practice is delivered in a variety of ways and at many levels. Not surprisingly, this has led to difficulties in regulating educational provision for NPs. This study reports a survey of the perceptions of the role definitions and training needs of all nurses working at advanced clinical levels within an acute sector Trust. Although this concept is not a novel one in advanced nursing practice, the procedure adopted differed from previous studies in two fundamental ways: firstly, a unique training needs assessment instrument was used, which because of its validity and opacity, was capable of yielding a highly reliable data-base, comprising a prioritized profile of real training needs as opposed to the standard wish-list typically elicited. Secondly, it did not rely simply on the self-reported needs of the nurse sample, but also included the perceptions of the sample's immediate medical and managerial colleagues. In this way, a triangulation paradigm was adopted. The results indicated that overall, there was high agreement between the nurses and their managers, regarding both the definition of the NP role and the essential training requirements, with somewhat different opinions being offered by the medical staff. When the raw scores were standardized to correct for response bias, the data provided an operational definition of the role of the NP and a prioritized profile of training needs for nurses who wished to train to this level.

  20. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  1. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.

  2. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms has been established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration and it is indicated by this trajectory. 

  3. A Discrete Model for Color Naming

    Science.gov (United States)

    Menegaz, G.; Le Troter, A.; Sequeira, J.; Boi, J. M.

    2006-12-01

    The ability to associate labels to colors is very natural for human beings. Though, this apparently simple task hides very complex and still unsolved problems, spreading over many different disciplines ranging from neurophysiology to psychology and imaging. In this paper, we propose a discrete model for computational color categorization and naming. Starting from the 424 color specimens of the OSA-UCS set, we propose a fuzzy partitioning of the color space. Each of the 11 basic color categories identified by Berlin and Kay is modeled as a fuzzy set whose membership function is implicitly defined by fitting the model to the results of an ad hoc psychophysical experiment (Experiment 1). Each OSA-UCS sample is represented by a feature vector whose components are the memberships to the different categories. The discrete model consists of a three-dimensional Delaunay triangulation of the CIELAB color space which associates each OSA-UCS sample to a vertex of a 3D tetrahedron. Linear interpolation is used to estimate the membership values of any other point in the color space. Model validation is performed both directly, through the comparison of the predicted membership values to the subjective counterparts, as evaluated via another psychophysical test (Experiment 2), and indirectly, through the investigation of its exploitability for image segmentation. The model has proved to be successful in both cases, providing an estimation of the membership values in good agreement with the subjective measures as well as a semantically meaningful color-based segmentation map.
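
    A minimal sketch of the interpolation step described above, using SciPy's Delaunay triangulation and barycentric coordinates; the sample points and membership vectors are random stand-ins for the OSA-UCS specimens and the fitted category memberships.

```python
# Hedged sketch: linear interpolation of fuzzy category memberships
# inside a 3D Delaunay tetrahedralisation of a colour space.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
samples = rng.uniform(0, 100, size=(50, 3))         # stand-in CIELAB points
memberships = rng.dirichlet(np.ones(11), size=50)   # 11 fuzzy category values

tri = Delaunay(samples)

def interpolate_membership(p):
    s = tri.find_simplex(p)
    if s == -1:                                      # outside the convex hull
        return None
    T = tri.transform[s]                             # barycentric transform
    b = T[:3].dot(np.asarray(p) - T[3])
    bary = np.append(b, 1.0 - b.sum())               # full barycentric coords
    return bary @ memberships[tri.simplices[s]]      # interpolated memberships

print(interpolate_membership(samples.mean(axis=0)))
```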

  4. Image segmentation by hierarchial agglomeration of polygons using ecological statistics

    Science.gov (United States)

    Prasad, Lakshman; Swaminarayan, Sriram

    2013-04-23

    A method for rapid hierarchical image segmentation based on perceptually driven contour completion and scene statistics is disclosed. The method begins with an initial fine-scale segmentation of an image, such as obtained by perceptual completion of partial contours into polygonal regions using region-contour correspondences established by Delaunay triangulation of edge pixels as implemented in VISTA. The resulting polygons are analyzed with respect to their size and color/intensity distributions and the structural properties of their boundaries. Statistical estimates of granularity of size, similarity of color, texture, and saliency of intervening boundaries are computed and formulated into logical (Boolean) predicates. The combined satisfiability of these Boolean predicates by a pair of adjacent polygons at a given segmentation level qualifies them for merging into a larger polygon representing a coarser, larger-scale feature of the pixel image and collectively obtains the next level of polygonal segments in a hierarchy of fine-to-coarse segmentations. The iterative application of this process precipitates textured regions as polygons with highly convolved boundaries and helps distinguish them from objects which typically have more regular boundaries. The method yields a multiscale decomposition of an image into constituent features that enjoy a hierarchical relationship with features at finer and coarser scales. This provides a traversable graph structure from which feature content and context in terms of other features can be derived, aiding in automated image understanding tasks. The method disclosed is highly efficient and can be used to decompose and analyze large images.

  5. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method in which the real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method makes the algorithm easily fall into a local optimum during the learning process. The quantum genetic algorithm (QGA) has good directional global optimization ability, but the conventional QGA is based on binary coding, so the speed of calculation is reduced by the coding and decoding processes. Therefore, RQGA is introduced to explore the search space, and an improved varied learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly converges to a solution that conforms to the constraint conditions.

  6. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
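
    A minimal sketch of the MCL process referred to above: alternate expansion (matrix squaring) and inflation (entry-wise powering followed by column normalization) of a column-stochastic matrix; the nonzero rows of the limit matrix indicate the clusters. The toy graph, the self-loop weight and the inflation parameter are arbitrary choices.

```python
# Hedged sketch of the Markov Cluster (MCL) process on a small graph.
import numpy as np

def mcl(adjacency, inflation=2.0, iterations=50, self_loops=1.0):
    A = adjacency + self_loops * np.eye(len(adjacency))
    M = A / A.sum(axis=0)                  # column-stochastic start
    for _ in range(iterations):
        M = M @ M                          # expansion
        M = M ** inflation                 # inflation (entry-wise power)
        M = M / M.sum(axis=0)              # re-normalise columns
    return M

# Two obvious clusters: {0, 1, 2} and {3, 4}.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

M = mcl(A)
clusters = {tuple(map(int, np.flatnonzero(row > 1e-6)))
            for row in M if row.max() > 1e-6}
print(clusters)                            # nonzero rows give the clusters
```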

  7. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  8. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)

  9. Hierarchical layered and semantic-based image segmentation using ergodicity map

    Science.gov (United States)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano- Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects

  10. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at a competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed, has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  11. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  12. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
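
    A minimal sketch, under the usual formulation, of the distance-to-average-point measure and the phase switch it drives in DGEA: below a low threshold the algorithm explores (mutation), above a high threshold it exploits (recombination and selection). The threshold values here are illustrative, not necessarily those of the DGEA paper.

```python
# Hedged sketch of the diversity measure and the explore/exploit switch.
import numpy as np

def diversity(population, lower, upper):
    diag = np.linalg.norm(np.asarray(upper) - np.asarray(lower))   # |L|
    centre = population.mean(axis=0)                               # average point
    return np.linalg.norm(population - centre, axis=1).mean() / diag

def choose_phase(population, lower, upper, d_low=5e-6, d_high=0.25):
    d = diversity(population, lower, upper)
    if d < d_low:
        return "explore"        # mutation only
    if d > d_high:
        return "exploit"        # recombination + selection
    return "keep current phase"

pop = np.random.default_rng(0).uniform(-10, 10, size=(20, 5))
print(diversity(pop, [-10] * 5, [10] * 5),
      choose_phase(pop, [-10] * 5, [10] * 5))
```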

  13. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))

  14. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. This is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it comes as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.

  15. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)

  16. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparable to, the original algorithm when considering the quality of the solution obtained. However, these schemes cannot still achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS is proposed in this paper. This new algorithm combines three new proposed search schemes including “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1” with three control parameters using a random method to generate the offspring. Experiment results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.

  17. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has been recently proved as a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role to efficiently manage emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms of the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifacts removal is an essential step of any UWB radar imaging system and currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First of all, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective to achieve good localization accuracy and lower false positives. However, the main contribution is the proposal of an artifact removal algorithm based on statistical methods, which allows to achieve even better performance but with much lower computational complexity.

  18. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  19. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  20. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  1. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms, analogous to those of finance, can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  2. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
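    As context for the 1.58 bits/base figure quoted above, the sketch below shows only the naive fixed-length baseline of 2 bits per base that any DNA-specific scheme tries to beat; it is not the DNABIT Compress algorithm itself, and the helper names are illustrative.

        # Naive 2-bits-per-base packing (baseline only, not DNABIT Compress).
        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

        def pack_2bit(seq):
            bits = 0
            for base in seq:
                bits = (bits << 2) | CODE[base]
            return bits, 2 * len(seq)          # packed integer and its bit length

        def unpack_2bit(bits, bit_len):
            inverse = {v: k for k, v in CODE.items()}
            return "".join(inverse[(bits >> s) & 0b11]
                           for s in range(bit_len - 2, -1, -2))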

  3. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  4. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm describing the operation of the facility. An algorithmic model is a formalized description, produced by a subject-matter specialist, of a scenario for the simulated process; its structure matches the structure of the causal and temporal relationships between events of the process being modeled, and it carries all the information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as labeled finite directed graphs whose vertices are mapped to operators and whose arcs are the variables bound by those operators. The language of algorithmic networks is highly expressive: the class of algorithms it can represent is essentially unrestricted. Existing modeling-automation systems based on algorithmic networks mainly use operators that work with real numbers. Although this limits their expressive power, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, monitoring based on the analysis of gaps and deadlines in such graphs provides no prediction of how a schedule will be executed. The library described here is designed to build such predictive models: given the source data, it produces a set of projections from which one is chosen and adopted as the new plan.

  5. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  6. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  7. Emerging quasi-0D states at vanishing total entropy of the 1D hard sphere system: A coarse-grained similarity to the car parking problem

    Science.gov (United States)

    Frusawa, Hiroshi

    2014-05-01

    A coarse-grained system of one-dimensional (1D) hard spheres (HSs) is created using the Delaunay tessellation, which enables one to define the quasi-0D state. It is found from comparing the quasi-0D and 1D free energy densities that a frozen state due to the emergence of quasi-0D HSs is thermodynamically more favorable than fluidity with a large-scale heterogeneity above a crossover volume fraction of ϕ_c = e/(1+e) = 0.731…, at which the total entropy of the 1D state vanishes. The Delaunay-based lattice mapping further provides a similarity between the dense HS system above ϕ_c and the jamming limit in the car parking problem.

  8. Emerging quasi-0D states at vanishing total entropy of the 1D hard sphere system: A coarse-grained similarity to the car parking problem

    International Nuclear Information System (INIS)

    Frusawa, Hiroshi

    2014-01-01

    A coarse-grained system of one-dimensional (1D) hard spheres (HSs) is created using the Delaunay tessellation, which enables one to define the quasi-0D state. It is found from comparing the quasi-0D and 1D free energy densities that a frozen state due to the emergence of quasi-0D HSs is thermodynamically more favorable than fluidity with a large-scale heterogeneity above a crossover volume fraction of ϕ_c = e/(1+e) = 0.731…, at which the total entropy of the 1D state vanishes. The Delaunay-based lattice mapping further provides a similarity between the dense HS system above ϕ_c and the jamming limit in the car parking problem.

  9. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, the k-means algorithm can easily end up in a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate the singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global Minmax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
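    To make the incremental "global" search described above concrete, here is a minimal sketch of the plain global k-means idea (adding one center at a time and trying each data point as its initial position). It is illustrative only, omits the singleton-cluster and MinMax modifications proposed in the paper, and assumes NumPy and scikit-learn.

        import numpy as np
        from sklearn.cluster import KMeans

        def global_kmeans(X, k_max):
            centers = X.mean(axis=0, keepdims=True)          # optimal 1-means center
            for k in range(2, k_max + 1):
                best_inertia, best_centers = np.inf, None
                for x in X:                                   # try each point as the new center
                    init = np.vstack([centers, x])
                    km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
                    if km.inertia_ < best_inertia:
                        best_inertia, best_centers = km.inertia_, km.cluster_centers_
                centers = best_centers
            return centers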

  10. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  11. Learning algorithms and automatic processing of languages; Algorithmes a apprentissage et traitement automatique des langues

    Energy Technology Data Exchange (ETDEWEB)

    Fluhr, Christian Yves Andre

    1977-06-15

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to describe how these mechanisms are simulated on a computer. He outlines the specific role of learning in various manifestations of intelligence. Then, based on the Markov's algorithm theory, the author discusses the notion of learning algorithm. Two main types of learning algorithms are then addressed: firstly, an 'algorithm-teacher dialogue' type sanction-based algorithm which aims at learning how to solve grammatical ambiguities in submitted texts; secondly, an algorithm related to a document system which structures semantic data automatically obtained from a set of texts in order to be able to understand by references to any question on the content of these texts.

  12. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...

  13. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  14. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc. 1989.

  15. The Triangulation Algorithmic: A Transformative Function for Designing and Deploying Effective Educational Technology Assessment Instruments

    Science.gov (United States)

    Osler, James Edward

    2013-01-01

    This paper discusses the implementation of the Tri-Squared Test as an advanced statistical measure used to verify and validate the research outcomes of Educational Technology software. A mathematical and epistemological rational is provided for the transformative process of qualitative data into quantitative outcomes through the Tri-Squared Test…

  16. Wavelet Radiosity on Arbitrary Planar Surfaces

    OpenAIRE

    Holzschuch , Nicolas; Cuny , François; Alonso , Laurent

    2000-01-01

    Conference with proceedings and peer-review committee. International.; International audience; Wavelet radiosity is, by its nature, restricted to parallelograms or triangles. This paper presents an innovative technique enabling wavelet radiosity computations on planar surfaces of arbitrary shape, including concave contours or contours with holes. This technique removes the need for triangulating such complicated shapes, greatly reducing the complexity of the wavelet radiosity algorithm and the computati...

  17. Quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Shenvi, Neil; Whaley, K. Birgitta; Kempe, Julia

    2003-01-01

    Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with O(√(N)) calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms

  18. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  19. An Algorithm Computing the Local $b$ Function by an Approximate Division Algorithm in $\\hat{\\mathcal{D}}$

    OpenAIRE

    Nakayama, Hiromasa

    2006-01-01

    We give an algorithm to compute the local $b$ function. In this algorithm, we use the Mora division algorithm in the ring of differential operators and an approximate division algorithm in the ring of differential operators with power series coefficient.

  20. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  1. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with review of the patient history to identify predictors for heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time below the target value despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is anti-thrombin III supplement. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  2. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  3. ADORE-GA: Genetic algorithm variant of the ADORE algorithm for ROP detector layout optimization in CANDU reactors

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an algorithm for CANDU ROP Detector Layout Optimization. ► ADORE-GA is a Genetic Algorithm variant of the ADORE algorithm. ► Robustness test of ADORE-GA algorithm is presented in this paper. - Abstract: The regional overpower protection (ROP) systems protect CANDU® reactors against overpower in the fuel that could reduce the safety margin-to-dryout. The overpower could originate from a localized power peaking within the core or a general increase in the global core power level. The design of the detector layout for ROP systems is a challenging discrete optimization problem. In recent years, two algorithms have been developed to find a quasi-optimal solution to this detector layout optimization problem. Both of these algorithms utilize the simulated annealing (SA) algorithm as their optimization engine. In the present paper, an alternative optimization algorithm, namely the genetic algorithm (GA), has been implemented as the optimization engine. The implementation is done within the ADORE algorithm. Results from evaluating the effects of using various mutation rates and crossover parameters are presented in this paper. It has been demonstrated that the algorithm is sufficiently robust in producing solutions of similar quality.

  4. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  5. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  6. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  7. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log–polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)

  8. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D

  9. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  10. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based in quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow

  11. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  12. Opposition-Based Adaptive Fireworks Algorithm

    OpenAIRE

    Chibing Gong

    2016-01-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based a...

  13. Chinese handwriting recognition an algorithmic perspective

    CERN Document Server

    Su, Tonghua

    2013-01-01

    This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and integrated segmentation-recognition strategy, are investigated and algorithms that have worked well in practice are primarily focused on. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel structure and serial structure. All the while, the book moves from basic to advanced algorithms, helping ...

  14. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  15. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on the method of detecting weak signals in a noisy environment. The final expression of the information statistic is adjusted. We present the results of algorithm research and t...

  16. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  17. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exists an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum-energy states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
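    As a small illustration of one ingredient mentioned above, the standard Metropolis acceptance test for swapping two replicas in a replica-exchange simulation can be written as follows (a generic sketch, not code from the article):

        import math
        import random

        def accept_swap(beta_i, beta_j, energy_i, energy_j):
            """Accept an exchange between replicas at inverse temperatures
            beta_i and beta_j with probability min(1, exp(delta))."""
            delta = (beta_i - beta_j) * (energy_i - energy_j)
            return delta >= 0 or random.random() < math.exp(delta)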

  18. Economic dispatch using chaotic bat algorithm

    International Nuclear Information System (INIS)

    Adarsh, B.R.; Raghunathan, T.; Jayabarathi, T.; Yang, Xin-She

    2016-01-01

    This paper presents the application of a new metaheuristic optimization algorithm, the chaotic bat algorithm for solving the economic dispatch problem involving a number of equality and inequality constraints such as power balance, prohibited operating zones and ramp rate limits. Transmission losses and multiple fuel options are also considered for some problems. The chaotic bat algorithm, a variant of the basic bat algorithm, is obtained by incorporating chaotic sequences to enhance its performance. Five different example problems comprising 6, 13, 20, 40 and 160 generating units are solved to demonstrate the effectiveness of the algorithm. The algorithm requires little tuning by the user, and the results obtained show that it either outperforms or compares favorably with several existing techniques reported in literature. - Highlights: • The chaotic bat algorithm, a new metaheuristic optimization algorithm has been used. • The problem solved – the economic dispatch problem – is nonlinear, discontinuous. • It has number of equality and inequality constraints. • The algorithm has been demonstrated to be applicable on high dimensional problems.
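    The abstract does not specify which chaotic map is used, so the snippet below is only an illustrative sketch of how a chaotic sequence is commonly generated (here with the logistic map) and then used in place of uniform random draws inside a metaheuristic such as the bat algorithm:

        def logistic_sequence(x0=0.7, n=100, mu=4.0):
            """Generate n values of the logistic map x <- mu*x*(1-x),
            which behaves chaotically for mu = 4 and generic x0 in (0, 1)."""
            xs, x = [], x0
            for _ in range(n):
                x = mu * x * (1.0 - x)
                xs.append(x)
            return xs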

  19. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)

  20. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    Science.gov (United States)

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as that given by the centralized clustering algorithms.
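    The consensus step referred to above can be illustrated with a minimal average-consensus iteration, in which every node repeatedly moves its local value toward those of its neighbours. This is a generic sketch (not the paper's algorithm) and assumes an undirected, connected topology and a step size eps below the inverse of the maximum node degree.

        import numpy as np

        def average_consensus(values, neighbours, iters=200, eps=0.1):
            """values: list of local measurements; neighbours: dict node -> list of nodes."""
            x = np.array(values, dtype=float)
            for _ in range(iters):
                x_next = x.copy()
                for i, nbrs in neighbours.items():
                    x_next[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
                x = x_next
            return x        # each entry converges to the network-wide average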

  1. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  2. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  3. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.

  4. Quantum algorithm for support matrix machines

    Science.gov (United States)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and pq is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.

  5. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  6. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
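    The "piecewise linear disparity map" mentioned above can be illustrated by interpolating the three vertex disparities of a triangle with barycentric coordinates; this is a generic sketch under the assumption of a non-degenerate triangle, not the authors' implementation.

        import numpy as np

        def disparity_in_triangle(p, tri_xy, tri_disp):
            """Interpolate disparity at 2D point p inside a triangle.

            tri_xy: (3, 2) array of vertex coordinates; tri_disp: 3 vertex disparities.
            """
            a, b, c = np.asarray(tri_xy, dtype=float)
            v0, v1, v2 = b - a, c - a, np.asarray(p, dtype=float) - a
            d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
            d20, d21 = v2 @ v0, v2 @ v1
            denom = d00 * d11 - d01 * d01        # nonzero for a non-degenerate triangle
            w1 = (d11 * d20 - d01 * d21) / denom
            w2 = (d00 * d21 - d01 * d20) / denom
            w0 = 1.0 - w1 - w2
            return w0 * tri_disp[0] + w1 * tri_disp[1] + w2 * tri_disp[2]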

  7. Effects of visualization on algorithm comprehension

    Science.gov (United States)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  8. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  9. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
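    A common single-level version of the wavelet fusion idea described above takes the average of the approximation coefficients and, for each detail subband, the coefficient with the larger magnitude. The sketch below assumes the PyWavelets package and two co-registered images of the same size, and is not the report's own implementation.

        import numpy as np
        import pywt

        def wavelet_fuse(img_a, img_b, wavelet="db2"):
            cA_a, details_a = pywt.dwt2(img_a, wavelet)
            cA_b, details_b = pywt.dwt2(img_b, wavelet)
            cA = 0.5 * (cA_a + cA_b)                        # average approximation band
            fused_details = tuple(                           # keep larger-magnitude details
                np.where(np.abs(da) >= np.abs(db), da, db)
                for da, db in zip(details_a, details_b)
            )
            return pywt.idwt2((cA, fused_details), wavelet)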

  10. A new cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    1998-01-01

    A new cluster algorithm for graphs called the Markov Cluster algorithm ($MCL$ algorithm) is introduced. The graphs may be both weighted (with nonnegative weights) and directed. Let $G$ be such a graph. The $MCL$ algorithm simulates flow in $G$ by first identifying $G$ in a

  11. Computation of Hyperbolic Structures in Knot Theory

    OpenAIRE

    Weeks, Jeffrey R.

    2003-01-01

    This chapter from the upcoming Handbook of Knot Theory (eds. Menasco and Thistlethwaite) shows how to construct hyperbolic structures on link complements and perform hyperbolic Dehn filling. Along with a new elementary exposition of the standard ideas from Thurston's work, the article includes never-before-published explanations of SnapPea's algorithms for triangulating a link complement efficiently and for converging quickly to the hyperbolic structure while avoiding singularities in the par...

  12. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. The analysis mainly covers two kinds of routing algorithms, namely clustering routing algorithms and other typical routing algorithms, and discusses the advantages, disadvantages, and applicability of each kind.

  13. A Parametric k-Means Algorithm

    Science.gov (United States)

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
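    A rough sketch of the procedure described above, for the simple case of a univariate normal model, might look as follows (illustrative assumptions: NumPy, scikit-learn, and maximum-likelihood estimates given by the sample mean and standard deviation):

        import numpy as np
        from sklearn.cluster import KMeans

        def parametric_kmeans_normal(data, k, n_sim=100_000, seed=0):
            rng = np.random.default_rng(seed)
            mu, sigma = data.mean(), data.std()              # MLE for a normal model
            sim = rng.normal(mu, sigma, size=(n_sim, 1))     # large simulated sample
            km = KMeans(n_clusters=k, n_init=10).fit(sim)
            return np.sort(km.cluster_centers_.ravel())      # estimated principal points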

  14. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress” for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non repetitive DNA sequence. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genome. Significantly better compression results show that “DNABIT Compress” algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (Genomes),our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (Unique BIT CODE) for (Exact Repeats, Reverse Repeats) fragments of DNA sequence is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm could achieve the best compression ratio as much as 1.58 bits/bases where the existing best methods could not achieve a ratio less than 1.72 bits/bases. PMID:21383923

  15. Some software algorithms for microprocessor ratemeters

    International Nuclear Information System (INIS)

    Savic, Z.

    1991-01-01

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: two older ones, the quasi-exponential and the floating-mean algorithms, and a new weighted moving-average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.)
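    For orientation only (the paper's own equations are not reproduced here), a software ratemeter of the quasi-exponential type mentioned above can be sketched as a first-order recursive smoother of the instantaneous count rate:

        def quasi_exponential_rate(counts, dt, tau):
            """counts: counts per sampling interval, dt: interval length in s,
            tau: smoothing time constant in s (tau > dt)."""
            alpha = dt / tau
            rate = counts[0] / dt
            rates = [rate]
            for c in counts[1:]:
                rate += alpha * (c / dt - rate)     # exponential moving average
                rates.append(rate)
            return rates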

  16. Some software algorithms for microprocessor ratemeters

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Z. (Military Technical Inst., Belgrade (Yugoslavia))

    1991-03-15

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: two older ones, the quasi-exponential and the floating-mean algorithms, and a new weighted moving-average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.).

  17. Higher-order force gradient symplectic algorithms

    Science.gov (United States)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of force gradient in addition to three evaluations of the force, when iterated to higher order, yielded algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.
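    For context, the building block that such schemes extend is the second-order velocity-Verlet (leapfrog) step; the sketch below applies it to a Kepler orbit (GM = 1) and reports the conserved energy, and is purely illustrative rather than the fourth-order force-gradient integrator discussed above.

        import numpy as np

        def kepler_accel(r):
            return -r / np.linalg.norm(r) ** 3               # acceleration for GM = 1

        def verlet_orbit(r0, v0, dt, steps):
            r, v = np.array(r0, float), np.array(v0, float)
            a = kepler_accel(r)
            for _ in range(steps):
                v_half = v + 0.5 * dt * a                    # half kick
                r = r + dt * v_half                          # drift
                a = kepler_accel(r)
                v = v_half + 0.5 * dt * a                    # half kick
            energy = 0.5 * v @ v - 1.0 / np.linalg.norm(r)   # should stay nearly constant
            return r, v, energy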

  18. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  19. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  20. The development of three dimensional inspection and tracking system for the maintenance of pipes in the nuclear power plants

    International Nuclear Information System (INIS)

    Hwang, Suk Young; Kim, Chul Jung; Baik, Sung Hoon; Cho, Jai Wan; Park, Seung Kyu

    1999-12-01

    We developed 3D laser camera sensors for weld seam tracking and inspection of radioactive NPP pipes. The developed sensor's optical system adopts the optical triangulation method with line-beam generation and imaging optics. A laser line extraction algorithm, with noise-reduction preprocessing, has been developed for images captured from the sensor. Experimental results validate the physical accuracy of the sensor hardware and the robustness of the image processing algorithms. A 3D shape reconstruction algorithm from multiple laser lines was proposed, and the resulting 3D shape was visualized in the developed 3D graphics program environment using OpenGL graphics libraries. In addition, a two-D.O.F. precision servo-controlled mechanism was developed. The experimental results on weld seam tracking and inspection tasks show the practical feasibility of the developed sensors and the image processing algorithms. (author)

  1. The development of three dimensional inspection and tracking system for the maintenance of pipes in the nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Suk Young; Kim, Chul Jung; Baik, Sung Hoon; Cho, Jai Wan; Park, Seung Kyu

    1999-12-01

    We developed 3D laser camera sensors for weld seam tracking and inspection of radioactive NPP pipes. The developed sensor's optical system adopts the optical triangulation method with line-beam generation and imaging optics. A laser line extraction algorithm, with noise-reduction preprocessing, has been developed for images captured from the sensor. Experimental results validate the physical accuracy of the sensor hardware and the robustness of the image processing algorithms. A 3D shape reconstruction algorithm from multiple laser lines was proposed, and the resulting 3D shape was visualized in the developed 3D graphics program environment using OpenGL graphics libraries. In addition, a two-D.O.F. precision servo-controlled mechanism was developed. The experimental results on weld seam tracking and inspection tasks show the practical feasibility of the developed sensors and the image processing algorithms. (author)

  2. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  3. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ɛ), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ɛ is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit and can be used to check whether the given data set qualifies for linear regression in the first place.
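
    For reference, the classical least-squares fit that such quantum algorithms aim to accelerate is itself a few lines of code. The sketch below is only this classical baseline (pseudoinverse solve of the normal equations); it has nothing to do with the quantum procedure, and the example data are made up.

```python
import numpy as np

def least_squares_fit(X, y):
    """Classical least-squares baseline: find beta minimizing ||X @ beta - y||_2.
    Its cost grows polynomially with both N (rows of X) and d (columns of X),
    which is the regime the quantum algorithm above targets."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Example: recover the line y = 2*x + 1 from noisy samples
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100)
X = np.column_stack([x, np.ones_like(x)])          # design matrix with intercept column
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(100)
print(least_squares_fit(X, y))                     # approximately [2.0, 1.0]
```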

  4. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  5. Synthesis of Greedy Algorithms Using Dominance Relations

    Science.gov (United States)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends on translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
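
    To make the activity-selection example mentioned above concrete, here is the textbook greedy solution (sort by finish time, keep every activity compatible with the last one chosen). This is only the standard algorithm, not the authors' synthesized version; the underlying dominance relation is, informally, "an earlier finish time dominates a later one".

```python
def activity_selection(activities):
    """Greedy activity selection: return a maximum-size set of pairwise
    non-overlapping activities, each given as a (start, finish) pair."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):  # sort by finish time
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Example: 4 activities can be scheduled out of these 8
print(activity_selection([(1, 4), (3, 5), (0, 6), (5, 7),
                          (3, 9), (5, 9), (6, 10), (8, 11)]))
```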

  6. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
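
    As an illustration of the block-wise Chebyshev fitting described above, the sketch below compresses one fitting interval to a short vector of Chebyshev coefficients and reconstructs it. The block length and polynomial degree are arbitrary illustrative choices, not parameters of the original flight algorithm.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(samples, degree=8):
    """Fit a Chebyshev series to one block (fitting interval) of a
    time series; only the coefficients need to be transmitted."""
    x = np.linspace(-1.0, 1.0, len(samples))   # map the block onto [-1, 1]
    return C.chebfit(x, samples, degree)

def decompress_block(coeffs, length):
    """Evaluate the stored Chebyshev series to approximate the block."""
    x = np.linspace(-1.0, 1.0, length)
    return C.chebval(x, coeffs)

t = np.linspace(0.0, 1.0, 256)
block = np.sin(2 * np.pi * 3 * t) + 0.1 * t            # 256 original samples
coeffs = compress_block(block, degree=12)              # 13 stored values
error = np.max(np.abs(decompress_block(coeffs, 256) - block))
print(f"compressed 256 samples to {len(coeffs)} coefficients, max error {error:.2e}")
```

    Because Chebyshev fits spread the residual nearly uniformly over the interval (the "equal error" behaviour noted above), the maximum error stays small even at aggressive compression ratios for smooth blocks.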

  7. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  8. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
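
    The alpha-trimmed mean filter named as the backbone of the first algorithm can be sketched as below. The window size, trim fraction, and the unsharp-style sharpening step are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def alpha_trimmed_mean(image, window=5, alpha=0.25):
    """Alpha-trimmed mean filter: within each window, sort the pixels,
    discard the lowest and highest fractions, and average the rest."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    trim = int(alpha * window * window / 2)            # pixels dropped from each end
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            block = np.sort(padded[i:i + window, j:j + window].ravel())
            kept = block[trim:len(block) - trim] if trim else block
            out[i, j] = kept.mean()
    return out

def sharpen(image, strength=1.0):
    """Hypothetical unsharp-style sharpening built on the filter above:
    boost the difference between the image and its trimmed-mean smooth."""
    smooth = alpha_trimmed_mean(image)
    return np.clip(image + strength * (image - smooth), 0, 255)
```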

  9. Wavelet-LMS algorithm-based echo cancellers

    Science.gov (United States)

    Seetharaman, Lalith K.; Rao, Sathyanarayana S.

    2002-12-01

    This paper presents echo cancellers based on the Wavelet-LMS algorithm. The performance of the Least Mean Square algorithm in the wavelet transform domain is observed and its application in echo cancellation is analyzed. The Widrow-Hoff Least Mean Square algorithm is the most widely used algorithm for adaptive filters that function as echo cancellers. Present-day communication signals are widely non-stationary in nature, and errors crop up when the Least Mean Square algorithm is used for echo cancellers handling such signals. The analysis of non-stationary signals often involves a compromise between how well transitions or discontinuities can be located in time and how finely frequency content can be resolved. The multi-scale, multi-resolution signal analysis that is the essence of the wavelet transform makes wavelets popular for non-stationary signal analysis. In this paper, we present a Wavelet-LMS algorithm wherein the wavelet coefficients of a signal are modified adaptively using the Least Mean Square algorithm and then reconstructed to give an echo-free signal. The echo canceller based on this algorithm is found to have better convergence and a comparatively lower MSE (mean square error).
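
    The Widrow-Hoff LMS update at the core of such echo cancellers is short; the sketch below is the plain time-domain version (the paper applies the same update to wavelet coefficients, which this sketch does not do). Filter length and step size are illustrative assumptions.

```python
import numpy as np

def lms_echo_canceller(far_end, mic, taps=64, mu=0.01):
    """Basic time-domain LMS adaptive filter: estimate the echo of the
    far-end signal present in the microphone signal and subtract it."""
    w = np.zeros(taps)                   # adaptive filter weights
    error = np.zeros(len(mic))           # echo-cancelled output signal
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]    # most recent far-end samples, newest first
        echo_estimate = w @ x
        error[n] = mic[n] - echo_estimate
        w += 2 * mu * error[n] * x       # Widrow-Hoff LMS weight update
    return error, w
```

    In a Wavelet-LMS variant, the far-end and microphone signals would first be decomposed into wavelet subbands, the same update would run per subband (often with per-band step sizes), and the cleaned coefficients would be reconstructed back into the time domain.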

  10. Analysis and Improvement of Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Xi-Guang Li

    2017-02-01

    The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm aims to further boost performance and achieve global optimization, mainly through the following strategies. First, the population is initialized using opposition-based learning. Second, a new explosion-amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation is used for non-optimal individuals and elite opposition-based learning for the optimal individual. Finally, a new selection strategy, Disruptive Selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulations, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.
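
    One of the strategies listed above, opposition-based learning initialization, is generic enough to sketch. The snippet below is a generic OBL initializer, not the authors' IFWA code; the objective function and bounds are placeholders.

```python
import numpy as np

def obl_initialize(objective, n_fireworks, lower, upper, rng=None):
    """Opposition-based learning initialization: draw a random population,
    form its 'opposite' population (lower + upper - x), and keep the best
    n_fireworks individuals from the union of both."""
    rng = rng or np.random.default_rng()
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    pop = rng.uniform(lower, upper, size=(n_fireworks, len(lower)))
    opposite = lower + upper - pop                      # opposite points in the box
    union = np.vstack([pop, opposite])
    fitness = np.apply_along_axis(objective, 1, union)  # minimization assumed
    return union[np.argsort(fitness)[:n_fireworks]]

# Example with the sphere function on [-5, 5]^2
best = obl_initialize(lambda x: float(np.sum(x ** 2)), 5, [-5, -5], [5, 5])
print(best)
```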

  11. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least Recently Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The other two are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The other two SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
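
    For context, the quantity these algorithms compute, the LRU stack distance of every reference, has a simple serial definition. The sketch below is the naive serial version (not one of the five algorithms studied): it is what the parallel algorithms accelerate.

```python
def lru_stack_distances(trace):
    """For each reference, return its LRU stack distance: the depth of the
    address in the LRU stack at the time it is referenced (1 = re-reference
    of the most recent address, infinity = first-time reference). A reference
    hits in a fully associative LRU cache of size C exactly when distance <= C."""
    stack = []                       # most recently used address at the end
    distances = []
    for addr in trace:
        if addr in stack:
            pos = stack.index(addr)
            distances.append(len(stack) - pos)   # depth measured from the top
            stack.pop(pos)
        else:
            distances.append(float("inf"))
        stack.append(addr)                       # addr becomes most recently used
    return distances

print(lru_stack_distances(["a", "b", "c", "a", "b", "b"]))  # [inf, inf, inf, 3, 3, 1]
```

    Because every distance in the output can be checked against a single cache size C afterwards, one pass over the trace yields hit ratios for all cache sizes at once, which is why stack-distance computation is the workhorse of LRU cache simulation.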

  12. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce amount of computation, speed up convergence and restrain premature phenomena of quantum evolutionary algorithm. The proposed algorithm adopts the chaotic initialization method to generate initial population which will form a pe...... tests. The presented algorithm is applied to urban traffic signal timing optimization and the effect is satisfied....

  13. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among the wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from the wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of wavefront-sensor sub-apertures and deformable-mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity of the direct gradient wavefront control algorithm is about O(n^2) ∼ O(n^3), while that of the iterative wavefront control algorithm is about O(n) ∼ O(n^(3/2)), where n is the number of actuators of the AO system. And the larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. (paper)
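
    To make the cost comparison concrete: the direct method is one dense matrix-vector product per frame (a pre-measured reconstruction matrix times the slope vector), while an iterative method solves the corresponding linear system with a few cheap iterations. The conjugate-gradient loop below stands in generically for "iterative wavefront control"; it is not the specific algorithm of the paper, and the matrix names are illustrative.

```python
import numpy as np

def direct_control(R, slopes):
    """Direct gradient control: voltages = R @ slopes with a pre-measured
    reconstruction matrix R; cost is O(n^2) per frame for n actuators."""
    return R @ slopes

def iterative_control(A, b, iters=20, tol=1e-8):
    """Generic conjugate-gradient solve of A v = b, assuming a symmetric
    positive-definite system matrix A. When A is sparse (each actuator
    couples to few sub-apertures) each iteration costs roughly O(n)."""
    v = np.zeros_like(b)
    r = b - A @ v                     # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        v += alpha * p                # update the voltage estimate
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # new conjugate search direction
        rs = rs_new
    return v
```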

  14. The Parallel Algorithm Based on Genetic Algorithm for Improving the Performance of Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Liu Miao

    2018-01-01

    The intercarrier interference (ICI) problem of cognitive radio (CR) is severe. In this paper, a machine learning algorithm is used to obtain the optimal interference subcarriers of an unlicensed user (un-LU). Masking the optimal interference subcarriers can suppress the ICI of CR. Moreover, a parallel ICI suppression algorithm is designed to improve the calculation speed and meet the practical requirements of CR. Simulation results show that the data transmission rate threshold of the un-LU can be set, the data transmission quality of the un-LU can be ensured, the ICI of a licensed user (LU) is suppressed, and the bit error rate (BER) performance of the LU is improved by implementing the parallel suppression algorithm. The ICI problem of CR is solved well by the new machine learning algorithm. The computing performance of the algorithm is improved by designing a new parallel structure, and the communication performance of CR is enhanced.

  15. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  16. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  17. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  18. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc.), genetic and evolutionary strategies, artificial immune systems etc. Well-known examples of applications include: aircraft wing design, wind turbine design, bionic car, bullet train, optimal decisions related to traffic, appropriate strategies to survive under a well-adapted immune system etc. Based on the collective social behaviour of organisms, researchers have developed optimization strategies taking into account not only the individuals, but also groups and environment. However, learning from nature, new classes of approaches can be identified, tested and compared against already available algorithms...

  19. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.
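
    A standard illustration of the idea (a textbook example, not taken from this survey) is the cache-oblivious matrix transpose: recursing on halves of the larger dimension until the blocks are tiny gives good locality at every level of the memory hierarchy without any cache or block size appearing in the code. The cutoff below is only a constant to limit Python recursion overhead.

```python
import numpy as np

def co_transpose(src, dst, r0=0, c0=0, rows=None, cols=None, cutoff=16):
    """Cache-oblivious transpose: write dst[j, i] = src[i, j] by recursively
    splitting the larger dimension; sub-blocks eventually fit in whatever
    cache exists, at every level of the hierarchy."""
    rows = src.shape[0] if rows is None else rows
    cols = src.shape[1] if cols is None else cols
    if rows <= cutoff and cols <= cutoff:          # base case: copy a tiny block
        for i in range(r0, r0 + rows):
            for j in range(c0, c0 + cols):
                dst[j, i] = src[i, j]
    elif rows >= cols:                             # split the row range in half
        half = rows // 2
        co_transpose(src, dst, r0, c0, half, cols, cutoff)
        co_transpose(src, dst, r0 + half, c0, rows - half, cols, cutoff)
    else:                                          # split the column range in half
        half = cols // 2
        co_transpose(src, dst, r0, c0, rows, half, cutoff)
        co_transpose(src, dst, r0, c0 + half, rows, cols - half, cutoff)

a = np.arange(12).reshape(3, 4)
b = np.empty((4, 3), dtype=a.dtype)
co_transpose(a, b)
assert (b == a.T).all()
```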

  20. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data, with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.